
Neurophenomenology, Enaction, and Autopoïesis

Written By

John Stewart

Submitted: 09 December 2018 Reviewed: 18 February 2019 Published: 04 April 2019

DOI: 10.5772/intechopen.85262

From the Edited Volume

Behavioral Neuroscience

Edited by Sara Palermo and Rosalba Morese


Abstract

The project of neurophenomenology, initiated by Francisco Varela, aims at establishing correlations between descriptions of lived experience (as elicited by the explicitation interview technique) and brain states (as measured with increasing precision and detail by the new brain imaging techniques). However, on their own, such correlations aggravate rather than solve Chalmers’ “hard problem”: how can a neuronal state be a state of consciousness? The question that arises is thus how to interpret such correlations. I will argue that this requires putting the brain in the body of an animal living in the world. Epistemologically, this amounts to putting neuroscience in the context of cognitive science (Varela’s concept of enaction) and cognitive science in the context of biology (Maturana and Varela’s concept of autopoïesis).

Keywords

  • neurophenomenology
  • enaction
  • autopoïesis
  • conscious experience
  • cognition
  • life
  • Varela

1. Introduction

In 1995, Chalmers [1] identified what he called the “hard problem” of consciousness. The question is: how can a purely physicochemical event, such as a set of action potentials in a set of brain cells (and why brains, rather than the heart or the gut?!), actually be a subjective, conscious experience? Chalmers wrote: “Why is it that when our cognitive systems engage in visual and auditory information processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? … Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.” Over 20 years later, we are hardly any nearer a solution; the problem remains entire.

In Section 2, I will present the “neurophenomenology” proposed by Varela [2], which provides empirical evidence on the neural correlates of consciousness. While entirely valid and indeed intriguing, this work does not solve the “hard problem”; I will argue that in a way it makes matters even worse, because the correlations it reveals are not explanatory; rather, they themselves require explanation. In the conclusion, however, I will suggest that there is actually more to it than this.

In Section 3, I will present the concept of “enaction” [3]. In order for conscious experience to be possible, there must be something to be conscious of: “what is it like? …”, the question of qualia. Enaction is the process whereby a living organism brings about, or enacts, its own characteristic lived world of experience. The classic example is the “world of the tick” as evoked by von Uexküll [4], originally in 1902, who provided a compelling view of “what it is like to be a tick.” An additional feature is that, whereas the original, prototypical example of enaction is the lived world of an animal, the concept also applies to humans. More specifically, this means that each and every one of us enacts our own lived world, every minute of every day of our lives. This introduces a note of first-person subjectivity, which contrasts with the third-person objectivity more usual in scientific discourse.

In the previous paragraph, I said that enaction is the process whereby a living organism brings forth its lived world of experience. This raises the question as to whether being alive may be a prerequisite for experiencing consciousness. There are two aspects to this: on the one hand, a “brain in a vat” would arguably not be properly conscious; on the other, it is only in science fiction that mechanical robots with computerized “brains” can be conscious. In order to deepen our understanding of what it takes to be fully alive, in Section 4 we will look at autopoïesis, the process whereby living organisms continually fabricate themselves. This is something that every living organism does, even a lowly bacterium; but a brain in a vat, just by itself without being part of an organism, does not; neither does any man-made machine, not even the most powerful computer imaginable, with or without a sophisticated connectionist architecture.

In the box below, I provide definitions of some major concepts in this field.

Definitions of major concepts

Enaction. Enaction is the process whereby a living organism brings forth its own specific “lived world.” The prototype example is the “world of the tick,” as described by von Uexküll and detailed in the text. It is to be noted that “enaction” also applies to human beings, including each of us ourselves, which introduces a dimension of first-person subjectivity unusual in scientific discourse.

Autopoïesis. Autopoïesis is the process whereby living organisms continually produce themselves. They are thus processes rather than things; they only become “things” again when they are dead. This feature differentiates the humblest living organism, even a bacterium, from the most sophisticated robot with a computer “brain,” since such robots do not produce themselves: they are made and repaired from the outside by human engineers; in other words, they are heteropoïetic.

Dissipative structure. In thermodynamically open systems (qv), there can be spontaneous generation of dynamic dissipative structures: dynamic forms which are par excellence processes rather than things. Mundane examples are a tornado or a whirlpool in a river. Living organisms belong to the class of dissipative structures. However, whereas common examples such as whirlpools are essentially ephemeral, lasting only as long as particular external conditions, over which they have no control, are maintained, living organisms are potentially immortal, having the capacity to perpetuate themselves indefinitely (see autopoïesis).

Thermodynamic systems: open and closed. In classical thermodynamics, there are two sorts of system. In a closed system, there are no exchanges with anything external to the system; it is in such systems that “entropy” (i.e., disorder) can be properly calculated, and it demonstrably increases monotonically. In open systems, by contrast, there is a continual flow of energy and matter through the system. In this case, the “entropy” of the system can no longer be so straightforwardly defined, and it is no longer constrained to increase. In such systems, dynamic dissipative structures (qv) can arise spontaneously and maintain themselves indefinitely.
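For readers who like a formula, the contrast drawn in the last two definitions can be stated compactly in the standard entropy-balance notation of non-equilibrium thermodynamics. The notation (d_eS for entropy exchanged across the boundary, d_iS for entropy produced internally) follows Prigogine’s convention and is added here purely for illustration; it does not appear in the original text.

```latex
% Entropy balance for a system that may exchange energy and matter with its surroundings:
% dS = d_e S + d_i S, with d_i S >= 0 (second law: internal production is never negative).
\[
  dS \;=\; d_{e}S + d_{i}S, \qquad d_{i}S \,\ge\, 0 .
\]
% Closed system (no exchanges): d_e S = 0, hence dS >= 0 and disorder can only grow.
% Open system: d_e S can be negative, so dS < 0 is possible whenever d_e S < -d_i S;
% this is what allows a dissipative structure to build up and maintain its order,
% at the price of exporting entropy to its surroundings.
\[
  \text{closed:}\ dS \ge 0; \qquad
  \text{open:}\ dS < 0 \ \text{possible whenever}\ d_{e}S < -\,d_{i}S .
\]
```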

Finally, in conclusion, I will attempt to draw these various threads together.


2. Neurophenomenology

A notable attempt to solve the “hard problem”—or rather, to dissolve it away—is the neurophenomenology proposed by Varela [2]. There are two strands to neurophenomenology. On the one hand, there are the modern techniques of brain imaging, which provide rich empirical data—fine-grained, precisely localized, in real time—on cerebral activity. On the other hand, there is a serious attempt to obtain valid descriptions of first-person conscious experience. It is notoriously difficult to obtain such descriptions; spontaneous introspection is disconcertingly unreliable. In a well-known experiment, repeated by various authors, subjects are shown two similar but slightly different portraits and asked to designate the one they prefer. They are then shown a slightly enlarged version of the other portrait and asked to explain why they chose it. Blithely unaware of the subterfuge, the subjects dutifully spin a yarn explaining why they preferred the portrait they had not chosen. At a more basic level, when asked to describe their experience of a visual perception, naïve subjects base themselves on what they know, or think they know, about the object in question (an apple, a house, a dog…). It was in order to counter this tendency that Husserl [5], the founding father of phenomenology, developed the practice of “eidetic reduction,” that is, putting aside or bracketing out one’s knowledge of the object in order to focus on the actual experience itself. This is not easy, but it is possible if due care is taken. In his own work, Varela employed an “interview method” to obtain reliable descriptions of subjective experience [6].

With these methods and also drawing on the literature, Varela obtained some convincing results. The major results are detailed in Table 1.

Attention. Three attentional networks contribute to distinguishing conscious from nonconscious cognitive events: orienting to sensory stimulation, activating patterns from memory, and maintaining an alert state.
Present-time consciousness. Phenomenological studies reveal a basic three-part structure of the present, with threads into past and future horizons. These structural invariants link naturally to observations in cognitive neuroscience, which show that there is a minimal time required for the emergence of neural events that correlate to a cognitive event. This noncompressible time framework can be analyzed as a manifestation of the long-range neuronal integration in the brain linked to a widespread synchrony.
Body image and voluntary motion. The nature of the will as expressed in the initiation of a voluntary action is inseparable from consciousness and its examination. Recent studies give an important role to neural correlates, which precede and prepare voluntary action, and the role of imagination in the constitution of a voluntary act.
Perceptual filling-in. This phenomenon involves the spontaneous completion of a percept, so that the appearance (e.g., a visual contour) is distinct from its physical correlate (incomplete borders, as in illusory contours). The neuronal data on filling-in correlate well with the phenomenological observation: there is an important difference between “seeing as” (visual appearance) and “seeing what” (a visual judgment).
Fringe and center. In traditional phenomenology, there is a two-part structure of the visual field, between a “center” and a “fringe.” This correlates well with the structure of the retina, divided between a “foveal” area with a high density of receptor cells and a “peripheral” area with a much lower density.
Emotion. In recent years, there have been significant advances in identifying the brain correlates of emotions. Evidence points to the importance of specific neuronal structures such as the amygdala, the lateralization of the processes involved, and the role of arousal in emotional memory.

Table 1.

Basic brain mechanisms for consciousness.

So where does this leave us? These neural correlates of consciousness are, in their own way, impressive indeed. However, with respect to the “hard problem,” there is a sense in which we are worse off than ever. The point is that these correlates are just that: they do not actually explain anything, and indeed they themselves require explanation. There is actually something more to Varela’s neurophenomenology than this, and I will return to this point in the conclusion. For the moment, however, let us take a look at what might lie behind these correlations.


3. Enaction

To start with, it will be salutary to go back to basics and look at what it is that neurons actually do. Physiologically, their primary role is to connect sensory inputs to motor outputs. They are well suited to this task, since their basic mode of action is the action potential; and the cells of both sensory organs (eyes, ears, noses, tactile receptors, etc.) and effector organs (principally muscles) also function with action potentials. It is thus quite straightforward to connect receptor organs to neurons, and neurons to effector organs, by means of synapses. The point here is that, by this means, the actual connections between sensory organs and effector organs are quite flexible: any sensory organ can be connected to any effector organ, and the connection can be excitatory or inhibitory, depending on what is appropriate in terms of the sensorimotor dynamics thus set up. In order to appreciate the ecological significance of this, the time has come to introduce the notion of enaction, the process whereby a living organism enacts, or brings about, its own characteristic “lived world.”

The prototype example is the “world of the tick” as described by von Uexküll [4]. This lived world is enacted by three simple sensation-action cycles. (i) The female tick climbs to the end of a branch of a bush, and … waits, maybe for weeks. If she senses an odor of butyric acid, she lets herself fall. (ii) If she falls on a hairy surface, she crawls until she finds a smooth area (if not, she climbs back up onto the bush and starts over). (iii) She then sticks her proboscis through the surface, and if she finds a liquid at 37°C underneath, she sucks to satiety. This makes sense when we know that (in context) butyric acid is secreted by the sweat glands of mammals; the hairy surface will then be the fur of the mammal; and the liquid at 37°C will be the blood of the mammal (which the tick needs to feed her eggs; she then fertilizes the eggs from a store of sperm, after which she lays them and her life cycle is complete). Of course, the tick does not know all that, as such; indeed, she is barely conscious. But the important point for us here is that these simple sensorimotor dynamics are set up by the wiring of the tick’s simple peripheral nervous system. Note that this would not work if activation of the butyric acid sensors did anything other than trigger the motor action of letting herself fall; if the tactile sensors distinguishing a hairy from a smooth surface triggered anything other than crawling to a smooth surface and sticking the proboscis through it; or if the temperature sensors detecting a liquid at 37°C did not trigger the action of sucking. The example is magnificent: in this way the tick, a tiny animal which can only crawl slowly, manages the feat of catching a mammal far bigger and faster than herself, and not only that but of getting to suck its blood. Thus, although the tick is barely conscious, the wiring of her nervous system is instrumental in setting up the meaningful lived world that she is minimally conscious of. This is indeed a basic point; Husserl noted that “consciousness” is always consciousness of something: consciousness is always “intentional,” aiming at something meaningful. So what we see here is that, well before we get to the higher-level consciousness experienced by humans, the nervous system is fundamental to setting up the whole situation.
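To make the point about wiring concrete, the three sensation-action cycles can be sketched as a minimal rule-based loop. The sketch below is purely illustrative: the class names, sensor fields, and the discrete event sequence are simplifications of my own, not a model taken from von Uexküll or from Varela.

```python
# Illustrative sketch only: the "wiring" of the tick's three sensation-action
# cycles as a minimal state machine. All names and the event structure are
# invented for this example.

from dataclasses import dataclass

@dataclass
class Sensed:
    butyric_acid: bool = False   # odor from mammalian sweat glands
    hairy_surface: bool = False  # fur under the tactile sensors
    warm_liquid: bool = False    # liquid at ~37 degrees C under the proboscis

def tick_step(state: str, sensed: Sensed) -> tuple:
    """Map (internal state, sensation) to (next state, action).

    Each sensor is hard-wired to exactly one motor action; rewiring any of
    these links would break the enacted "world of the tick".
    """
    if state == "waiting_on_branch":
        if sensed.butyric_acid:
            return "fallen", "let_go_and_fall"
        return state, "keep_waiting"
    if state == "fallen":
        if sensed.hairy_surface:
            return "on_host", "crawl_to_smooth_patch"
        return "waiting_on_branch", "climb_back_up"
    if state == "on_host":
        if sensed.warm_liquid:
            return "feeding", "suck_to_satiety"
        return state, "probe_with_proboscis"
    return state, "do_nothing"

# One successful pass through the cycle:
state = "waiting_on_branch"
for sensed in (Sensed(butyric_acid=True),
               Sensed(hairy_surface=True),
               Sensed(warm_liquid=True)):
    state, action = tick_step(state, sensed)
    print(state, action)
```

The point of the sketch is simply that the “meaning” of butyric acid for the tick resides entirely in the action it is wired to; that is the sense in which the nervous system sets up her lived world.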

Before moving on, there is another aspect to enaction: as conscious living beings, each and every one of us enacts our own lived world, every minute of every day. This introduces a note of first-person subjectivity that is unusual in scientific discourse; I will return to this point also in the conclusion.


4. Autopoïesis

I now address the point that consciousness would seem to be a feature reserved to living organisms; it is only in science fiction that computers and computerized robots are conscious (the film “2001: A Space Odyssey” with the computer HAL, or the film “Her,” in which the hero falls in love with his “operating system”). This raises the question as to what it is about living organisms that makes them candidates for consciousness. After all, computerized robots can perfectly well have artificial nervous systems that set up their sensorimotor dynamics. One lead is that all living organisms, even the simplest, have the property that they are autopoïetic, that is, they are pure processes that are continually “making themselves”; and this is something that even the most powerful computer, or the most sophisticated robot with a “brain” having a connectionist architecture, does not even attempt to do. Living organisms are thermodynamically open systems, with a continuous flow of energy and matter which is essential to their existence (a living organism only becomes a “thing” again when it is a dead corpse). In this respect, living organisms are similar to other “dissipative structures” such as cyclones, or eddy currents in a river; with the difference that they actively promote the conditions that enable them to perdure, which cyclones and eddies do not. It is in order to do this that they engage actively with their environment, in the course of which they enact their lived worlds. So maybe this is why a “brain in a vat” would arguably not be conscious; it is a brain in a living animal, in the world, which appears as the seat of consciousness. Exactly why consciousness is reserved to living organisms is not entirely clear, and the question is worthy of deeper study.
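The contrast between a dissipative structure that merely depends on external conditions and one that actively maintains itself can be caricatured in a few lines of code. The following toy is entirely my own construction, not a model due to Maturana or Varela: a quantity of “structure” decays at a constant rate, and persists only if it catalyzes its own production out of an external flow of nutrient.

```python
# Toy illustration (not Maturana & Varela's model): a structure that decays
# unless it catalyzes its own regeneration out of an external nutrient flow.

def run(self_producing: bool, steps: int = 500) -> float:
    structure = 1.0          # amount of organized "structure" (arbitrary units)
    nutrient_inflow = 0.5    # external flow through the open system, per step
    decay_rate = 0.1         # fraction of structure that falls apart each step
    production_rate = 0.3    # conversion of nutrient into new structure

    for _ in range(steps):
        decay = decay_rate * structure
        if self_producing:
            # Production is catalyzed by the existing structure itself,
            # saturating as the nutrient throughput is limited.
            production = production_rate * nutrient_inflow * structure / (1.0 + structure)
        else:
            # No self-production: nothing inside the system repairs it.
            production = 0.0
        structure = max(0.0, structure + production - decay)
    return structure

print("self-producing:", round(run(True), 3))       # settles at a steady non-zero level
print("not self-producing:", round(run(False), 3))  # dwindles towards zero
```

Run as written, the self-producing version maintains itself at a steady level sustained by the flow, while the other simply dissipates; this is a toy analogue of the difference between an organism and a whirlpool deprived of its current.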

Ironically, it is not the most highly evolved forms of human consciousness that are the most mysterious; Jaynes [7] has provided a highly suggestive account of how sophisticated reflexive consciousness might have arisen through the “breakdown of the bicameral mind.” The “bicameral mind” in question is the sort of trance in which prophets (in the Old Testament) and heroes (in the Odyssey) “heard voices” telling them what to do. Nowadays we call this “auditory hallucination” and confine those concerned to a psychiatric asylum, whereas at that time they were valued members of their communities. However that may be, “hallucinations” are clearly the result of cerebral activity, as are dreams, to which they are closely related. No; what is most mysterious is the basic form of “animal consciousness,” which seems not to be all-or-nothing, but rather to increase gradually along the evolutionary scale from fish to reptiles to mammals. This correlates with brain size, but once again, correlation is not cause; a correlation is not an explanation, but rather a feature which itself remains to be explained. Discussions of a scientific approach to consciousness are sometimes compared to the question of vitalism; living organisms were long considered mysterious. Optimistic objectivists now consider that the advances in biology have rendered vitalism a problem of the past, with the suggestion that if we just continue with enough good natural science, the same will happen for the question of consciousness. I would like to suggest that things may not be so entirely straightforward.


5. Conclusion

In conclusion, I would like to return to some issues that I raised earlier. As we have seen, Varela’s “neurophenomenology” highlights some of the neural correlates of consciousness. But as I said, if this were all, it would aggravate rather than solve Chalmers’ “hard problem.” In fact, however, it is not all. Varela’s main point—which may well be missed by those reading it as one more academic text—is that if we really wish to address the phenomenon of consciousness, we must be prepared to take into account the fact that it is a phenomenon that is only properly instantiated in the form of first-person subjectivity. And this means that if we wish to engage with it seriously, we cannot escape the necessity of ourselves bringing into play our own individual consciousness. Varela writes:

One must take seriously the double challenge my proposal represents. First, it demands a re-learning and a mastery of the skill of phenomenological description. Second: a call for transforming the style and values of the research community itself.

To the long-standing tradition of objectivist science this sounds anathema, and it is. But this is not a betrayal of science: it is a necessary extension and complement. Science and experience constrain and modify each other as in a dance. This is where the potential for transformation lies. It is also the key for the difficulties this position has found within the scientific community. It requires us to leave behind a certain image of how science is done, and to question a style of training in science which is part of the very fabric of our cultural identity.

The consequence is that with respect to the “hard problem,” the nature of “hard” becomes reframed in two senses:

  1. It is hard work to train and stabilize new methods to explore experience.

  2. It is hard to change the habits of science in order for it to accept that new tools are needed for the transformation of what it means to conduct research on mind and for the training of succeeding generations.

Interestingly enough, a similar theme comes up with respect to the concept of enaction. As I have already indicated, there is potentially an existential dimension to enaction; scientists are, after all, human beings, and thus, like all living organisms, they enact their lived world every minute of every day. Taking this personally, it means that I am actually responsible for the quality of what my enacted world leads me to experience. We are on dangerous ground here for “normal science”: science is supposed to aim at objectivity, and it is very widely supposed that attaining objectivity requires the elimination of subjectivity. But subjectivity, if it is assumed as such, is neither more nor less than first-person experience as lived from the inside; and we have just seen that this is precisely what is at the core of enaction. In other words, enaction, if it is taken seriously in what I personally see as its core, poses a manifest threat to our normal functioning as scientists.

Of course, what happens is that any “normal” scientist will seek to defuse this threat, both individually and as a community. I hope I am not being pretentious or disdainful here; I think I understand only too well what is going on, because I know the cost. But I still want to stand my ground, to stand up and be counted. I maintain that if enaction is defused, it is betrayed. I propose that we take a closer look at this.

One of the main ways—certainly not the only one—in which enaction is defused is by converting it into the much safer research program of what has been called “4E cognition” [8]. The “4Es” are: embodied, embedded, extended, and enacted cognition. This is a smart move (if one is indeed trying to defuse enaction so as to get back into the comfort zone of “normal” science), for the following reason. Varela himself envisaged enaction as the framework for a possible paradigm in Cognitive Science, and others have attempted to follow up on this [3]. Now, in any such attempt, the notions that cognition is embodied, embedded (it is more usual to say “situated”), and extended undeniably play key roles. So, as a proponent of “existential enaction,” I cannot protest against the association of enaction with the other three “Es.” What I can and do protest about, however, is adding in “enacted” as one ancillary element at the end of the list. In my view, these three Es are subservient to the overriding theme of enaction. Mixing them up indiscriminately, in the way that is done by proponents of the “4Es,” leads to missing the wood for the trees.

To sum up, then, I would like to conclude with an invitation. As I have tried to explain, the neuroscience of consciousness potentially opens up a breach in our normal functioning as scientists; it offers the opportunity for those so inclined to introduce a subjective, existential dimension into their work. Of course, one can lead a horse to water, but one cannot make it drink. This is particularly so here, where intimate personal attitudes are at stake. But I hope that I have done enough to make the invitation appealing.

References

  1. Chalmers D. Facing up to the problem of consciousness. Journal of Consciousness Studies. 1995;2:200-219
  2. Varela F. Neurophenomenology: A methodological remedy for the hard problem. Journal of Consciousness Studies. 1996;3:330-335
  3. Stewart J, Gapenne O, Di Paolo E, editors. Enaction: Toward a New Paradigm for Cognitive Science. Boston: MIT Press; 2010
  4. von Uexküll J. A stroll through the worlds of animals and men: A picture book of invisible worlds. Semiotica. 1992;89(4):319-391
  5. Husserl E. Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy—First Book: General Introduction to a Pure Phenomenology. The Hague: Nijhoff; 1913. Translated by F. Kersten, 1982
  6. Petitmengin C. Describing one’s subjective experience in the second person: An interview method for the science of consciousness. Phenomenology and the Cognitive Sciences. 2006;5:229-269
  7. Jaynes J. The Origin of Consciousness in the Breakdown of the Bicameral Mind. Boston: Houghton Mifflin; 1976
  8. Menary R. Introduction to the special issue on 4E cognition. Phenomenology and the Cognitive Sciences. 2010;9:459-463
