Information processing in the human brain can occur fully consciously or in the total absence of consciousness. Although we are far from understanding consciousness as a subjective phenomenon grounded in neural activity, we can at least imagine what it means to be consciously aware of a sensory perception, of knowledge, or of ourselves. At the very moment we know that we know, and what we know, the respective knowledge is consciously processed and can be verbally expressed. But what about information processing in the absence of consciousness? Can non-conscious information processing do the same, just without consciousness? It is difficult to imagine what kind of information processing happens below the level of consciousness and what it actually means. What does non-conscious information look like? What does it represent and what can it do? These are important questions that must be answered in order to better understand consciousness itself. Among others, a recent review reports on unconscious high-level processing in the human brain (van Gaal et al., 2011). In this review, the authors summarise scientific evidence supporting the idea that decision making, an apparently conscious process, as well as other parts of highly sophisticated human behaviour, can happen automatically, without conscious control. This is exactly in line with the spirit of this book chapter, which is written to support this notion with neuroimaging data collected via magnetoencephalography (MEG).
For those who trust the well-known iceberg analogy associated with Sigmund Freud's work on the human psyche, the above questions must be very exciting, because according to this analogy non-conscious (in Freud's terminology, pre- and unconscious) information processing accounts for more than 80% of all information processing. This highlights the importance and the dominance of brain functions that happen in the absence of consciousness.
In my own view, the function of the brain, besides managing basic body functions, is to produce controlled behaviour. The brain produces behaviour by processing information in three major steps. Step 1 is to process sensory input from outside and inside the body. In step 2, cognition- and emotion-related aspects of a stimulus are processed to make decisions. Finally, in step 3, the output of cognition and emotion is translated into motor programs that are then executed to elicit motor action, which equals behaviour, at least in the view of a neurobiologist. Step 2, and to some extent perhaps step 1, contains information that can be processed either in the absence of or with full consciousness, whereas motor-related information always stays unconscious.
After all, it seems obvious that a better understanding of non-conscious information processing, especially related to cognition and emotion (step 2), leads to a better understanding and a more accurate prediction of human behaviour. This is in contrast to focusing only on explicit measures, which look just at the tip of the iceberg.
The major goal of this chapter is to provide an overview of MEG work on non-conscious brain processes. Traditional approaches to investigating human behaviour relied on questionnaires to acquire qualitative data, complemented by behavioural experiments. Both strategies are limited in measuring non-conscious information processing, although some behavioural measures can tackle certain aspects of it. With the advent of neuroimaging techniques, however, it turned out that neural activity measures are best suited to demonstrate non-conscious brain processes, some of them even in the absence of any conscious behavioural consequence at the very moment of testing. One of the best examples was published by Rugg et al. (1998). That study showed that repeated words incorrectly judged as new (so-called misses) elicit different brain activity than new words correctly judged as new (so-called correct rejections). In both cases the explicit responses were exactly the same ("I haven't seen this word"), but the objective measures (brain activities) told another story. At first, this discrepancy raised a number of specific questions. Now we have a general answer: our brain knows more than it admits. MEG has proved to provide access to some of that hidden knowledge. We are only beginning to understand to what extent, and how, information processing outside consciousness actually guides human behaviour. With its excellent temporal resolution in the range of milliseconds and its localisation capacities, MEG is a highly appreciated tool (see Hari et al., 2010) for accessing and describing non-conscious functions. Sensory-related, motor-related as well as emotion- and cognition-related information processing have all been described. Although a certain gap between non-conscious information processing and actual behaviour has yet to be bridged, we are learning to accept how unaware we are of most decisions that are made in our brains to guide our behaviour.
We do not know everything that our brain knows.
This chapter has a further goal. Various terms are currently being used to describe the non-conscious character of some brain processes. Unfortunately, these terms (e.g. preconscious, subconscious, unconscious, non-conscious) are used interchangeably and thus produce considerable confusion. As a consequence, this chapter also proposes a simple model of non-conscious information processing in the human brain. This model is not an attempt to modify existing views of other fields (especially psychoanalysis), but it shall help to provide clarification within cognitive and affective neuroscience. The model defines anything not conscious as non-conscious, and it distinguishes only between unconscious and subconscious (both are non-conscious). Unconscious refers to information that can never become conscious, whereas subconscious refers to information that can be processed in the absence or presence of consciousness. The overall working hypothesis of this chapter states that all information processing that occurs prior to semantic processing is referred to as unconscious. Again, information that is unconsciously processed can never become conscious. On the other hand, from the very moment sensory-related information processing leads to semantic processing, we shall label the processed information subconscious or conscious, depending on the absence or presence of consciousness.
2. Chronology of processes from unconscious to subconscious and conscious
In terms of serial processing in the brain, it is interesting to know how long it takes until information related to a physical or chemical stimulus, once translated into neural signals, actually reaches cortical areas. Before reaching the cortex, sensory information is already being processed, but as a matter of fact MEG can hardly measure it. Under the assumption that subcortical structures cannot process semantic information, and according to the above-mentioned model, we can conclude that any processing before cortical involvement is unconscious. From various MEG studies we know that cortical involvement (at least in the visual domain) starts at about 100 ms post-stimulus (for words: e.g. Hari, 1990, Tarkiainen et al., 1999, Tarkiainen et al., 2002), which according to the model means that roughly the first 100 ms after visual stimulation onset are unconscious. The exact time differs between sensory systems and also depends on top-down processing, but let's use these 100 ms just to have a number to work with. Afterwards, the earliest cortical processing is still sensory-related (primary and secondary areas). It then takes roughly another 50 to 100 ms until semantic information is being processed, resulting in subconscious or even conscious information processing (Walla et al., 2001).
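The chronology above can be summarised as a simple sketch. The cut-off values are only the rough working numbers quoted in the text (they differ between sensory systems and with top-down processing), and the function name is purely illustrative:

```python
# Working values taken from the text: cortical involvement at roughly
# 100 ms post-stimulus (visual domain), semantic processing some
# 50-100 ms later. These are approximations, not fixed constants.
CORTICAL_ONSET_MS = 100
SEMANTIC_ONSET_MS = 150

def classify_latency(ms: float) -> str:
    """Map a post-stimulus latency onto the chapter's working model."""
    if ms < CORTICAL_ONSET_MS:
        return "unconscious (pre-cortical/subcortical processing)"
    if ms < SEMANTIC_ONSET_MS:
        return "unconscious (early sensory cortical processing)"
    return "subconscious or conscious (semantic processing possible)"

for t in (50, 120, 300):
    print(f"{t} ms: {classify_latency(t)}")
```

The point of the sketch is only that, under the model, everything before semantic processing is by definition unconscious, regardless of the exact millisecond values.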
Besides semantic (cognitive) information processing, any sensory input is also processed with respect to its emotional aspects. Like cognitive information processing, emotional information processing can be, but is not necessarily, associated with consciousness.
3. Emotion-related information processing
For some reason there is not much literature on MEG and emotion. However, because emotion has meanwhile also become an interesting candidate for being divided into non-conscious and conscious, it shall nevertheless be dealt with briefly. According to Peyk et al. (2008), the earliest emotion processing in terms of MEG recordings occurs between 120 ms and 310 ms post-stimulus in the visual system (pleasant and unpleasant pictures compared to neutral). A two-stage model has been proposed, with an early and a late activity component within this time window (120 ms to 170 ms and 220 ms to 310 ms). Investigations using faces as emotion stimuli reveal similar time windows within which different brain activity was found to reflect emotion-related information processing (e.g. Streit et al., 1999, Lewis et al., 2003). Only recently, Bröckelmann et al. (2011) described the effect of emotion-associated tones that attract enhanced attention at very early stages of auditory processing (20 ms to 50 ms after stimulus onset). In another study, influences of olfaction on subjective valence intensity ratings of visual presentations were investigated using MEG (Walla and Deecke, 2010). Five different emotion categories (pictures: baby, flower, erotic, fear and disgust) were simultaneously associated with different odours. First, a significant interaction was found between odour condition and emotion-picture category in terms of emotion rating performance. Second, around 300 ms after stimulus onset, odour-related brain activity effects were found for all emotion picture categories, as revealed by MEG. Later, around 700 ms after stimulus onset, brain activity effects occurred only in the neutral (flower) and both negative (fear and disgust) emotion categories. It was concluded that the earlier time window shows pronounced interaction between olfactory and visually induced emotion, whereas the later brain activity effect shows olfactory influences only for specific emotion categories.
Again, it has to be emphasised that MEG mainly picks up signals from cortical structures, missing most subcortical information processing.
Although, in terms of MEG recordings, nothing can be said about conscious versus non-conscious emotion at this stage, I want to emphasise that this concept makes sense and that future MEG studies will provide further insight.
Because there is more to emotion than just subjective feeling, terms such as unconscious emotion have already been introduced to the scientific community (Winkielman and Berridge, 2004). Given the distinction between implicit and explicit memory (see Rugg et al., 1998) it might even be helpful to think of a similar concept for the function of emotion (implicit versus explicit emotion). In this regard, implicit emotion would be understood as the non-conscious processing of emotion-related information influencing behaviour without leading to conscious awareness of it. In contrast, explicit emotion would be understood as conscious emotion, in other words, subjective feeling.
Moving on to cognition as the second major function to control behaviour we can look at non-conscious processes in more detail.
4. Cognition-related information processing
Cognition is a widely used term, but most often it is not clearly defined. Within the frame of this chapter, cognition is simply understood as a function that processes semantic information (meaning) in order to guide behaviour based on accumulated knowledge. Strikingly, it is suggested that semantic processing can occur both in the absence and in the presence of conscious awareness. The phenomenon of intuition might be a good example of subconscious cognition. This idea, too, cannot be sufficiently supported by the existing literature at this stage, but it is suggested as an interesting working hypothesis. However cognition is understood, in the following you will find a selection of MEG work on memory-related and olfaction-related phenomena as well as on self awareness, all of which leads to the assumption that much of even highly sophisticated brain function happens without what we commonly understand as consciousness.
4.1. False recognition depends on prior word encoding
The human brain is a highly sophisticated information processing organ, but as a matter of fact it does make mistakes. Although we usually tend to believe that we know what we know, there is convincing evidence that this is definitely not the case. It is quite impressive how much we trust our conscious evaluations and decisions, but only until some objective evidence proves us wrong.
One such mistake is the well-known phenomenon called false recognition (e.g. Walla et al., 2000; for a review see Schacter and Slotnick, 2004). It occurs when new items are wrongly classified as already seen. Such wrong classifications are called false alarms. In the laboratory, false recognition happens in the test phases of memory experiments, in which previously presented items from a study list are shown again together with new items. In such test situations, some of the new items elicit feelings of familiarity, leading to the wrong impression of being repetitions from the prior study list. Such familiarity arises due to similarity at either the sensory or the semantic processing level (perceptual versus conceptual; see Garoff-Eaton et al., 2007). For example, the new word
On the behavioural level, a false alarm cannot be distinguished from a hit (a correctly recognised repeated item) in terms of response accuracy, but it can be in terms of response time. False alarms are associated with longer response times than hits (Walla et al., 2000). From this it can already be inferred that the brain knows more than it admits, because a longer response time is the result of longer information processing, which obviously is not known to consciousness. Further, if brain activities are recorded simultaneously with recognition performance, one always finds higher frontal brain activity elicited by false alarms compared to hits (Walla et al., 2000). Thus, the longer information processing related to false alarms seems to be due to increased frontal brain activity.
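The behavioural dissociation can be illustrated with a minimal sketch. The trial data below are entirely made up for illustration (they are not the published values); the point is only that two response categories can share the same overt answer while differing in reaction time:

```python
# Hypothetical trials: (response category, reaction time in ms).
# Both "hit" and "false_alarm" correspond to the same overt answer
# ("I have seen this word before"), yet their latencies differ.
from statistics import mean

trials = [
    ("hit", 620), ("hit", 590), ("hit", 640), ("hit", 605),
    ("false_alarm", 710), ("false_alarm", 690), ("false_alarm", 730),
]

def mean_rt(trials, label):
    """Mean reaction time for one response category."""
    return mean(rt for kind, rt in trials if kind == label)

rt_hit = mean_rt(trials, "hit")
rt_fa = mean_rt(trials, "false_alarm")
print(f"hits: {rt_hit:.0f} ms, false alarms: {rt_fa:.0f} ms")

# The pattern reported in the text: false alarms take longer than hits,
# revealing extra processing that never reaches consciousness.
assert rt_fa > rt_hit
```

In a real analysis one would of course test this difference statistically across subjects rather than compare raw means.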
Interestingly, brain activities elicited by false alarms differ not only from brain activities elicited by hits. They can also differ between false alarms themselves, depending only on the level of word processing during the encoding of the prior study list. In particular, false alarms after deep semantic word encoding elicit significantly higher brain activities than false alarms after alphabetical word encoding (figure 2). The respective t-maps are presented in figures 3 and 4. Figure 3 shows t-maps related to raw MEG data; any differences here can be interpreted in terms of amplitude differences. Figure 4, on the other hand, shows t-maps related to normalised MEG data; any differences here can be interpreted in terms of functional differences. (At this point it is important to emphasise a very specific feature of MEG data: the neural structures eliciting different brain activity are not located directly underneath the sensors that demonstrate significant differences in these t-maps. This is because a neural generator produces a rotating magnetic field with a maximum ingoing and a maximum outgoing field flux. In a MEG map, one neural generator thus results in a red-coloured maximum and a blue-coloured maximum, and sometimes only one of these two maxima shows significant differences between the conditions of interest.) With respect to the neural generators involved in false recognition-related effects, figure 5 provides genuine source localisation results. Keeping in mind that false alarms are actually new words without any prior history within the frame of a recognition experiment, it seems odd that they elicit different brain activities depending only on how the repeated words they are presented with were previously processed. The only possible reason is that something connects false alarms with hits. The idea is that these connections are the above-mentioned similarities.
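The distinction between amplitude and functional (topographic) differences can be made concrete with a small sketch. The numbers are synthetic, and the specific normalisation shown (scaling each sensor map to unit root-mean-square, in the spirit of the vector-scaling approach of McCarthy and Wood) is only one common choice, not necessarily the one used in the cited studies:

```python
# Two conditions whose field maps have the same shape but different
# overall strength. Raw maps differ (an amplitude effect); after
# normalisation the maps are identical (no functional effect).
from math import sqrt

def rms_normalise(sensor_map):
    """Scale a per-sensor field map to unit RMS so only its shape remains."""
    rms = sqrt(sum(v * v for v in sensor_map) / len(sensor_map))
    return [v / rms for v in sensor_map]

raw_a = [1.0, 2.0, -1.0, -2.0]
raw_b = [2.0, 4.0, -2.0, -4.0]   # twice the amplitude of raw_a

norm_a = rms_normalise(raw_a)
norm_b = rms_normalise(raw_b)

amplitude_effect = raw_a != raw_b
functional_effect = any(abs(x - y) > 1e-9 for x, y in zip(norm_a, norm_b))
print("amplitude effect:", amplitude_effect)    # True
print("functional effect:", functional_effect)  # False
```

This is why t-maps on raw data (figure 3) and on normalised data (figure 4) answer different questions: the former can reflect mere strength differences, the latter only differences in the configuration of active sources.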
If similarities really cause false alarms, the following seems plausible: if the repeated words presented together with new words were semantically encoded during the prior study phase, then the similarities to the new words are semantic rather than sensory. As a consequence, the recognition of semantically encoded words influences the processing of any new words intermixed with them. Strikingly, all this happens outside consciousness.
Mean number of false alarms for both conditions. Note that the number of false alarms from the perceptual encoding condition is higher. The difference between these two conditions is highly significant (from Walla et al., 2001).
Grand MEG maps of both conditions of false alarms (after perceptual encoding and after conceptual encoding) for the latency region from 300 to 500 ms after stimulus onset averaged over 26 subjects. Note the higher magnetic field flux related to false alarms from the conceptual encoding condition (significant difference) (from Walla et al., 2000).
Distributions of significant differences (grand average) between the two false alarm conditions (conceptual minus perceptual) as calculated by t-tests for every single sensor location for consecutive time intervals. Red circles represent P values from 0.001 to 0.020, orange circles represent P values from 0.021 to 0.040 and yellow circles represent P values from 0.041 to 0.050 (from Walla et al., 2000).
Distributions of significant differences (grand average) between the two false alarm conditions (perceptual and conceptual) as calculated by t-tests for every single sensor location for consecutive time intervals. Red circles represent P values from 0.001 to 0.020, orange circles represent P values from 0.021 to 0.040 and yellow circles represent P values from 0.041 to 0.050 (from Walla et al., 2000).
Source localisation results for a single subject, including a three-dimensional reconstruction of the brain. The upper row shows the localised dipoles related to false alarms from the conceptual encoding condition and the lower row shows the localised dipoles related to false alarms from the perceptual encoding condition. Note that false alarms from the conceptual encoding condition elicited higher brain activity than false alarms from the perceptual encoding condition in terms of dipole strength (from Walla et al., 2000). The slight difference in right-hemisphere dipole localisation between the two conditions has not been further analysed.
4.2. Two stages of olfactory information processing
Early-stage olfactory information processing is not associated with consciousness. The human sense of olfaction has long been neglected in terms of scientific investigation. Partly, this may be due to the subjective feeling that the encoding and processing of odours is not important to us. Only people who have lost this evolutionarily old sense know that not only is the appreciation of food reduced to a minimum, but so is a certain abstract sense of evaluation (see Walla, 2008).
Dynamic influences of olfaction on both word and face processing have been described (Lorig, 1999, Walla et al., 2003a, Walla et al., 2003b, Walla et al., 2005). Most often, it turned out that at least two stages (time windows) of odour-related information processing take place. Within this book chapter we do not go into further detail with respect to the actual influences of olfaction on word and face processing; the recent review by Walla (2008) summarises them. What is important in relation to this book chapter, however, is the fact that these two stages seem to process different aspects of an odour. A further study provides helpful support for understanding these two stages and their functions, and strikingly, it highlights non-conscious aspects of olfactory information processing. Using MEG and a sophisticated computer-controlled device to deliver olfactory stimuli accurately, it was shown that one of these two stages does not lead to conscious awareness, although odour-related brain activity occurred. This finding was revealed by comparing a group of participants who reported conscious odour perception with a group who reported no conscious odour perception during the course of the experiment (Walla et al., 2002). Crucially, both groups confirmed conscious perception of a first test stimulus, but as a matter of fact some participants were not aware of any odour stimulation during the course of the experiment. This might have been caused by the fact that all attention had to be paid to visually presented words, with the instruction to make a semantic decision for each presentation. As a consequence, olfaction was rather incidental.
The main finding was that although no conscious olfaction was reported in one group, the MEG data revealed odour-related brain activity (Walla et al., 2002). This brain activity occurred between 200 ms and 500 ms after stimulus onset, and it was also found in the conscious perception group. On the other hand, it was only the conscious perception group that also demonstrated later odour-related brain activity, between 600 ms and 900 ms after stimulus onset. Obviously, odour-related brain activity occurred regardless of whether odour perception was conscious. It remains unclear what aspects of an olfactory stimulus are processed at the early stage and what exactly their effects are, but it seems obvious that consciousness is not essential for them to be processed. Figure 6 demonstrates the different magnetic fields (MEG maps) of the group reporting conscious olfaction and the group without conscious olfaction.
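The two-window logic of this comparison can be sketched as follows. The signal values are synthetic (not the published data); only the window boundaries (200-500 ms and 600-900 ms) come from the text:

```python
# Average an evoked signal within an early and a late time window and
# compare the two groups described above. Numbers are illustrative only.
from statistics import mean

def window_mean(signal, times, t_start, t_end):
    """Mean amplitude of `signal` for samples with t_start <= t < t_end."""
    return mean(s for s, t in zip(signal, times) if t_start <= t < t_end)

times = list(range(0, 1000, 100))           # 0, 100, ..., 900 ms
unaware = [0, 0, 3, 3, 3, 0, 0, 0, 0, 0]    # early odour effect only
aware   = [0, 0, 3, 3, 3, 0, 3, 3, 3, 0]    # early and late odour effect

for name, sig in (("unaware", unaware), ("aware", aware)):
    early = window_mean(sig, times, 200, 500)
    late = window_mean(sig, times, 600, 900)
    print(f"{name}: early={early:.1f}, late={late:.1f}")
```

The pattern mirrors the finding: the early window shows an odour-related effect in both groups, whereas the late window shows an effect only in the group with conscious perception.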
Brain activity differences between the conditions ‘study words with odor’ and ‘study words without odor’ for both the ‘unconscious perception’ group and the ‘conscious perception’ group. Note that the activity differences between 200 and 500 ms after stimulus onset are well defined in both groups whereas the activity differences between 600 and 900 ms after stimulus onset are much more conspicuous in the ‘conscious perception’ group (from Walla et al., 2002).
4.3. The subliminal effect of possessive pronouns
The human self has long been a focus of special interest. It is often treated as one of the most sophisticated and highest forms of neural information processing. How can an organ such as the brain become aware of itself?
Breaking things down, we begin to realise that our self is in fact a combination of our own body, including all its anatomical structures, plus our own memories, which are stored in our brains. Anatomical structures are processed through the somatosensory system (body representation), and our memories range from simple sensory experiences up to episodes of our own life. The fact that the loss of our most personal material possessions (belongings) almost hurts like true physical pain may be linked to the idea that they are processed in our brain like parts of our body. While this seems nothing more than an interesting idea, recent neuroimaging research now provides evidence that self awareness at its deepest roots starts with processing the self as a living organism just like any other. On that level, the self in our brain is no different from any other human self, perhaps not even different from that of any other animal. On a higher processing level, though, our brain distinguishes between the self and somebody else. Interestingly, all this can happen regardless of whether consciousness is present or absent.
The respective neuroimaging experiment was conducted with MEG and EEG. Both methods revealed similar findings about brain activities being able to process self- versus other-related information on a non-conscious level. The multiple-aspects theory of self awareness was developed (Walla et al., 2007, 2008) and has meanwhile been confirmed by another research group (Herbert et al., 2010).
The MEG study by Walla et al. (2007) made use of (German) language processing to elicit self- versus other-related processing in the brain. In particular, combinations of possessive pronouns and nouns were used. For example, "my garden" and "his garden" versus the neutral condition "a garden" were thought to elicit self- versus other-related brain activities that can be distinguished from those of the neutral condition, which involves no personal engagement. Strikingly, such word combinations (visually presented) had to be processed under three different conditions in terms of level of consciousness. In one condition, many of these word combinations were encoded following the instruction to decide only whether each noun contained a specific letter or not. This level of encoding is referred to as alphabetical; it includes neither conscious semantic information processing nor any active pronoun processing. In a further condition, many new such combinations were encoded following the instruction to decide whether each noun's meaning was living or non-living. This level of processing is referred to as semantic, and again it does not include any active pronoun encoding. Finally, in a third condition, another set of pronoun-noun pairs was visually presented, but this time the instruction was to generate (in their minds) a short meaningful sentence containing the noun together with its pronoun, for example "my garden is nice". In addition, the living/non-living distinction had to be made. Statistical analysis revealed that the level of processing had no influence on the significant pronoun effects that were found. What did these pronoun effects look like?
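The design described above can be laid out compactly. The condition labels are the ones quoted in the text; the enumeration itself is just an illustrative sketch:

```python
# Three encoding instructions crossed with three pronoun conditions
# give nine cells; the pronoun is task-irrelevant in every one of them,
# which is what makes the pronoun effects non-conscious in nature.
from itertools import product

encoding_levels = ("alphabetical", "semantic", "sentence-generation")
pronouns = ("ein (a)", "mein (my)", "sein (his)")

design = list(product(encoding_levels, pronouns))
for level, pronoun in design:
    print(f"{level:20s} x {pronoun}")
print(len(design), "cells")
```

Since the level of processing had no influence on the pronoun effects, the analyses reported below average across the encoding factor.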
Between 200 ms and 300 ms after stimulus onset, both the "my" and "his" pronoun conditions elicited similar brain activity that differed from the neutral condition (figure 7). In contrast, between 500 ms and 800 ms after stimulus onset, the "my" pronoun condition differed from both the "his" pronoun condition and the neutral condition (figure 8). It has been suggested that the left insular cortex is involved in processing self-related aspects (Walla et al., 2007).
Early neurophysiological effect: MEG maps (magnetic field distributions) averaged across depth of word processing and across all study participants for the time interval from 200 to 300 ms after stimulus onset. First line: one map for each of the three pronoun conditions ("ein" ("a"), "mein" ("my"), "sein" ("his")). Second line: difference magnetic field distributions related to comparisons (subtractions) between each possible pair of pronoun conditions ("mein" vs. "ein", "sein" vs. "ein", "sein" vs. "mein"). Sensor areas where t-tests resulted in significant differences are marked with a white dotted circle. Third line: t-maps showing the distribution of significant differences for each of the above-mentioned comparisons (raw data). Note that "mein" vs. "ein" and "sein" vs. "ein" both resulted in significant differences, whereas no differences occurred for the comparison "sein" vs. "mein". Fourth line: t-maps showing the distribution of significant differences for each of the above-mentioned comparisons (amplitude-normalised data). Note that hardly any differences occurred (from Walla et al., 2007).
Later neurophysiological effect: MEG maps (magnetic field distributions) averaged across depth of word processing and across all study participants for the time interval from 500 to 800 ms after stimulus onset. First line: one map for each of the three pronoun conditions ("ein" ("a"), "mein" ("my"), "sein" ("his")). Second line: difference magnetic field distributions related to comparisons (subtractions) between each possible pair of pronoun conditions ("mein" vs. "ein", "sein" vs. "ein", "sein" vs. "mein"). Sensor areas where t-tests resulted in significant differences are marked with a white dotted circle. Third line: t-maps showing the distribution of significant differences for each of the above-mentioned comparisons (raw data). Note that "mein" vs. "ein" and "sein" vs. "ein" both resulted in significant differences. In addition, the comparison between "sein" and "mein" also resulted in significant differences at some of the sensor sites (no such differences were found during the early period of time). Fourth line: t-maps showing the distribution of significant differences for each of the above-mentioned comparisons (normalised data) (from Walla et al., 2007).
The localized dipole in this respective region shows stronger brain activity as reflected by dipole strength (nA m) related to both personal pronouns compared to the neutral pronoun. The location of this dipole is interpreted as left occipital (from Walla et al., 2007).
The localized dipole in this respective region shows stronger brain activity as reflected by dipole strength (nA m) related to both personal pronouns compared to the neutral pronoun (strongest activity in the “mein”(“my”) condition). The location of this dipole is interpreted as left temporal. Most likely it is the left insular cortex that is able to discriminate between self and other. (from Walla et al., 2007).
Findings revealed by MEG and other methods clearly demonstrate that only a small fraction of the brain processes related to even high cognitive functions, such as our self, are associated with consciousness. In other words, much of even our highest cognitive functions happens non-consciously. It almost seems as if we mainly run non-consciously, with only bits and pieces entering the stream of consciousness. It may be reasonable to believe that around 80 to 90% of our daily activities are controlled outside our own awareness. However, we should not make the mistake of underestimating consciousness as it arises right now in your brain while reading these lines. Consciousness still seems indispensable for appreciating much of what we are so used to: the appreciation of music and art, and the ability to love and to feel happy, are just some examples. In fact, the more we learn about how dominantly non-conscious processes guide our behaviour, the more we learn to appreciate what our individual consciousness actually means to us.
The author wants to thank Lüder Deecke for providing the essential equipment and general support, and Wilfried Lang for mentorship and general support. Thanks also go to Dennis Balson for helpful financial support.