Open access peer-reviewed chapter

Attention and Working Memory in Human Auditory Cortex

Written By

Brian Barton and Alyssa A. Brewer

Submitted: 18 September 2018 Reviewed: 28 February 2019 Published: 12 June 2019

DOI: 10.5772/intechopen.85537

From the Edited Volume

The Human Auditory System - Basic Features and Updates on Audiological Diagnosis and Therapy

Edited by Stavros Hatzopoulos, Andrea Ciorba and Piotr H. Skarzynski



Human sensory systems are organized into processing hierarchies within cortex, such that incoming sensory information is analyzed and compiled into our vivid sensory experiences. Computations that are common to these sensory systems include the abilities to maintain enhanced focus on particular aspects of incoming sensory information (i.e., attention) and to retain sensory information in a short-term memory store after such sensory information is no longer available (i.e., working memory). In at least the auditory and visual systems, the necessary computational steps to create these experiences take place in cloverleaf clusters of cortical field maps (CFMs). The human auditory CFMs represent the spectral (i.e., tones) and temporal (i.e., period) aspects of sound, which are represented along the cortical surface as two orderly gradients that are physically orthogonal to one another: tonotopy and periodotopy, respectively. Knowledge of the properties of such CFMs is the foundation for understanding the specific sensory computations carried out in particular cortical regions. This chapter reviews current research into auditory nonverbal attention, auditory working memory, and auditory CFMs, and introduces the next steps to measure the effects of attention and working memory across the known auditory CFMs in human cortex using functional MRI.


Keywords

  • human auditory cortex
  • fMRI
  • tonotopy
  • periodotopy
  • cloverleaf cluster
  • cortical field maps
  • attention
  • working memory

1. Introduction

Mammalian sensory systems are composed in cortex of many functionally specialized areas organized into hierarchical networks [1, 2, 3, 4, 5, 6]. The most fundamental sensory information is embodied by the organization of the sensory receptors, which is maintained throughout most of the cortical hierarchy of sensory regions with repeating representations of this topography in cortical field maps (CFMs) [5, 7, 8, 9, 10, 11, 12, 13]. Accordingly, neurons with receptive fields situated next to one another in sensory feature space are positioned next to one another in cortex within a CFM.

In auditory cortex, auditory field maps (AFMs) are identified by two orthogonal sensory representations: tonotopic gradients from the spectral aspects of sound (i.e., tones), and periodotopic gradients from the temporal aspects of sound (i.e., period or temporal envelope) [5, 10, 14]. On a larger scale across cortex, AFMs are grouped into cloverleaf clusters, another fundamental organizational structure also common to visual cortex [8, 10, 15, 16, 17, 18, 19, 20]. CFMs within clusters tend to share properties such as receptive field distribution, cortical magnification, and processing specialization (e.g., [18, 19, 21]).

Across the cortical hierarchy, there is generally a progressive increase in the complexity of sensory computations from simple sensory stimulus features (e.g., frequency content) to higher levels of cognition (e.g., attention and working memory) [6, 13, 22]. CFM organization likely serves as a framework for integrating bottom-up inputs from sensory receptors with top-down attentional processing [12, 17]. With the recent ability to measure AFMs in the core and belt regions of human auditory cortex along Heschl’s gyrus (HG) using high-resolution functional magnetic resonance imaging (fMRI), the stage is now set for investigation into this integration of basic auditory processing with higher-order auditory attention and working memory within human AFMs (Figure 1) [5, 12, 15, 23].

Figure 1.

Primary auditory cortex. (A) The lateral view of the left hemisphere is shown in the schematic. Major sulci are marked by black lines. The approximate position of primary auditory cortex (PAC) is shown with the red overlay inside the black dotted line. The white dotted line within the red region indicates the extension of PAC into the lateral sulcus (LS) along Heschl’s gyrus (HG; hidden within the sulcus in this view). Inset refers to anatomical directions as A: anterior; P: posterior; S: superior; I: inferior. PAC: primary auditory cortex (red); LS: lateral sulcus (green; also known as the lateral fissure or Sylvian fissure); CS: central sulcus (purple); STG: superior temporal gyrus (blue); STS: superior temporal sulcus (orange). (B) The cortical surface of the left hemisphere of one subject (S2) is displayed as a typical inflated 3-D rendering created from high-resolution, anatomical MRI measurements. Light gray regions denote gyri; dark gray regions denote sulci. The exact location of this subject’s hA1 auditory field map is shown in red within the black dotted lines. Note that HG in S2 is composed of a double peak, seen here as two light gray stripes, rather than the more common single gyrus. The locations of the three cloverleaf clusters composed of the core and belt AFMs are shown along HG by three colored overlays as yellow: hCM/hCL cluster; red: HG cluster including hA1, hR, hRM, hMM, hML, hAL; and magenta: hRTM/hRT/hRTL cluster. Additional cloverleaf clusters are under investigation along PP, PT, STG, and the STS. Green-labeled anatomical regions are sections within the lateral sulcus—CG: Circular gyrus (green); PP: planum polare (green); PT: planum temporale (green). (C) This single T1 image shows a coronal view of hA1 on HG (red within dotted white line). Adapted from Refs. [5, 12].

This chapter first provides a brief history of research into models of auditory nonverbal attention and working memory, with comparisons to their visual counterparts. Next, we discuss the current state of research into AFMs within human auditory cortex. Finally, we propose directions of future research investigating auditory attention and working memory within these AFMs to illuminate how these higher-order cognitive processes interact with low-level auditory processing.


2. Attention and working memory in human audition

2.1 Models of attention and working memory

Attention, the ability to select and attend to aspects of the sensory environment while simultaneously ignoring or inhibiting others, is a fundamental aspect of human sensory systems (for reviews, see [24, 25, 26, 27]). Given the limited resources of the human brain, attention allows greater resources to be allocated to the processing of important incoming sensory stimuli by diverting precious resources from currently unimportant stimuli. Such allocation can be controlled cognitively, in what is generally referred to as ‘top-down’ attentional control in models of attention, in reference to the higher-order cognitive processes controlling attention from the ‘top’ of the sensory-processing hierarchy and acting ‘down’ on the lower levels (Figure 2) [24, 28, 29, 30, 31]. Because the sensory environment changes constantly, however, the diversion of resources toward attended stimuli is not absolute; currently unimportant stimuli continue to be monitored, just at lower priority. If, instead, processing resources were evenly distributed throughout the sensory field without regard to salience, more resources would be wasted on unimportant aspects of the field. If something in the unattended sensory field becomes important, the system requires a mechanism to reorient attention to that aspect of the field. Such stimulus-driven attentional control is referred to as ‘bottom-up’, referring to the ability of incoming sensory input at the bottom of the hierarchy to orient the higher-order attention system. This broad framework of attentional models is common at least to the senses most commonly studied, vision and audition [25, 27, 31, 32].

Figure 2.

Attention and working-memory model. A model of the interactions between perception, trace memory, attention, working memory, and long-term memory in the visual and auditory systems, as well as the central executive. Ovals represent neural systems. Arrows represent actions of one system on another. Attention is the term for the action of perception and trace memory on working memory and vice versa. Rehearsal is the term for maintaining information in working memory. This model is not intended to indicate that these systems are discrete or independent; within each sense, they are in fact highly integrated.

In the effort to elucidate the parameters of auditory attention, researchers have taken myriad approaches in numerous contexts. Researchers have attempted to decipher at what level of the sensory-processing hierarchy stimulus-driven attention occurs (i.e., after which sensory-processing steps attention acts) [24, 30, 31, 33, 34, 35], how attention can be deployed (to locations in space or to particular sensory features) [36, 37, 38, 39, 40], and how attention can be distributed (to how many ‘objects’ or ‘streams’ attention can be simultaneously deployed) [41, 42, 43, 44]. Many studies have narrowed the range of possibilities without precisely answering these questions, so they remain active areas of research. Modern models of attention generally agree that stimuli are processed to some degree before attention acts, accounting for stimulus-driven ‘bottom-up’ attentional shifts, though it is unclear precisely to what degree [24, 30, 33]. Neuroscientific evidence suggests that attention acts throughout sensory-processing hierarchies, so the idea of attention being located at a particular ‘height’ in the hierarchy may not be particularly useful for identifying the cortical locus of attentional control [45, 46]. Modern attentional models also generally agree that attention can be deployed to locations in or features of sensory space, both of which are fundamental aspects of the sensory-processing hierarchy [24, 35]. Finally, modern models of attention agree that attention is very limited, but not about precisely how it is limited. Some models are still fundamentally ‘spotlight’ models [25, 44], in which attention is limited to a single location or feature set, while others posit that attention can be divided among a small number of locations or features [41, 47]. Based on related working-memory research, the latter view is gaining prominence as likely correct.

Working memory, a more precise term for ‘short-term memory’, is the ability to maintain and manipulate information within the focus of attention over a short period of time after the stimulus is no longer perceptible (for reviews, see [48, 49, 50, 51]). Without explicit maintenance, this retention period is approximately 1–2 s, but it is theoretically indefinite with explicit maintenance. Working memory should not be confused with ‘sensory memory’, also known as ‘iconic memory’ in vision and ‘echoic memory’ in audition [52]. Sensory memory is a fundamental aspect of sensory systems in which a sensory trace available to attention and working-memory systems persists for less than ~100 ms after stimuli are no longer perceptible. Models of working memory are nearly indistinguishable from models of attention; the key difference is that working memory is a ‘memory’ of previously perceptible stimuli, whereas attention is thought to act on perceptible stimuli or sensory traces thereof. Working-memory models posit, by definition, that working memory acts after perceptual processing has occurred (Figure 2; for review, see [53]). However, it has been difficult to isolate exactly where working-memory control resides along the cortical hierarchy of sensory processing, likely because low-level perceptual cortex is recruited at least for visual working memory and attention [40, 46, 54, 55].

Like attention, working memory is posited to be a highly limited resource, in which only a small set of locations or objects (e.g., 3–4 items on average) can be simultaneously maintained [42, 49]. In fact, some modern measures of attention and working memory are nearly identical. The change-detection task is a ubiquitous one in which subjects are asked to view a sensory array, then compare it to a second array in which some aspect may have changed, and indicate whether a change has occurred (Figure 3) [56, 57, 58, 59, 60]. A short delay period (i.e., retention interval) is included between the arrays, which may consist of a neutral presentation or, if desired, a mask of the sensory stimuli to prevent the use of ‘sensory memory’. The length of the delay period can then be altered to measure either attention or working memory. If the delay period is on the order of ~0–200 ms, it is considered an attentional task; if it is longer, on the order of 1–2 s, it is considered a working-memory task [53]. Therefore, attention and working-memory systems are at a minimum heavily intertwined and very likely the same system studied in slightly different contexts, with attention being a component of a larger working-memory framework.
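Performance on such change-detection tasks is commonly converted into a capacity estimate using Cowan’s K formula, K = N × (hit rate − false-alarm rate). A minimal sketch in Python (the function name is our own):

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Estimate working-memory capacity K from a single-probe
    change-detection task using Cowan's formula:
    K = N * (H - FA), where N is the set size, H the hit rate on
    change trials, and FA the false-alarm rate on no-change trials.
    """
    return set_size * (hit_rate - false_alarm_rate)

# Example: with 6-item arrays, a subject who detects 80% of changes
# and false-alarms on 20% of no-change trials has an estimated
# capacity of 6 * (0.80 - 0.20) = 3.6 items, in line with the
# ~3-4-item limits reported in the literature.
print(round(cowan_k(6, 0.80, 0.20), 2))  # 3.6
```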

Figure 3.

Visual change-detection task. This task can be used to probe visual attention or working memory and is very similar to its auditory counterpart. Such tasks have three phases: first is encoding, when subjects are given ~100–500 ms to view the sample array; next is maintenance, which is short (~0–200 ms) for measuring attention and longer (~1000 ms) for working memory; last is the probe (lasting until the subject responds, or with a time limit, often ~2000 ms). In this example, a set size of four is presented for the sample array and a probe array of one is used, though different set sizes are commonplace, and often the probe array will be the same set size as the encoding array with the possibility of one object being changed. Typically there is an equal chance (50%) of the probe array containing a change or not. Generally subjects are required to fixate centrally, particularly if fMRI, EEG, or PET recordings are being made. (A) Simple colored square stimuli are depicted here, often drawn from a small set of easily distinguished hues (in this case, 6). As a result, changes are always low in similarity, requiring low resolution to make accurate comparisons between encoding and test arrays, which is important at least for visual working-memory measurements. More complex stimuli can also be used, as in (B) and (C). These stimuli are shaded cubes with the same hue set as in (A), but they also have 6 possible shading patterns formed by the dark, medium, and light shaded sides on each cube. Changes between hues, as in (B), are of equivalently low similarity to those in (A) and result in similar performance under visual working-memory conditions. Changes in shading patterns, as in (C), result in worse performance than in (B) despite there being the same number of possible pattern changes as hue changes in (A) or (B), because such changes require higher-resolution representations in visual working memory. Adapted from Barton and Brewer [50].

With the relatively recent invention of fMRI, researchers have begun to localize these models of attention and working memory to their cortical underpinnings (e.g., [6, 37, 40, 50, 55, 61, 62]). fMRI, through its exquisite ability to localize blood oxygenation-level dependent (BOLD) signals (and thus the underlying neural activity) to within just a couple of millimeters, is the best technology available for such research [63, 64]. Two broad approaches have been employed for studying these higher-order cognitive processes: model-based and perception-based. Model-based investigations tend to take tasks from behavioral investigations into attention and working memory, adapt them to the strict parameters required of fMRI, and compare activity across conditions in which attention or working memory is differentially deployed [61, 62]. Perception-based investigations tend to measure low-level perceptual cortex that has already been mapped in detail and then measure the effects of attention or working memory within those regions [50, 55, 65]. Both approaches are important and should be fully integrated to garner a more complete and accurate localization of these attentional and working-memory systems.

2.2 Overview of auditory and visual attention research

Research into attention began in earnest in the auditory system after World War II, with a very practical motivation. It had been noted that fighter pilots sometimes failed to perceive auditory messages presented to them over headphones despite the fact that the messages were completely audible. To address this problem, Donald Broadbent began studying subjects in an auditory environment similar to that of the pilots, with multiple speech messages presented over headphones [34]. Based on his findings, he proposed a selective theory of attention, which was popular and persuasive but ultimately required modification. Environments such as the one Broadbent studied are more commonly encountered at cocktail parties, in which multiple audible conversations take place and people are able to attend to one or a small set of speech streams while attenuating the others. To study this ‘cocktail party phenomenon,’ the dichotic listening task was developed in the 1950s by Colin Cherry [66, 67]. Subjects were asked to shadow the speech stream presented to one ear of a set of headphones while another stream was presented to the other ear, and they demonstrated little knowledge of the nonshadowed (unattended) stream (Figure 4).

Figure 4.

Auditory spatial attention. Schematic of an example auditory spatial attention task (e.g., see [35, 40, 66, 67]). Each block typically starts with a cue (auditory or visual) for the subject to attend left or right on the upcoming trial. Two simultaneous auditory streams of digits are presented as binaural, spatially lateralized signals. Behavioral studies in an anechoic chamber often use speakers physically located to the left and right of the subject; fMRI measurements do not have the option of such a setup, but can instead use differences in the interaural time difference (ITD) to produce a similarly effective lateralization of the two digit streams. The subjects attend to the cued digit stream and perform a 1-back task.
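The ITD manipulation mentioned in the caption can be sketched as follows; the sample rate, the ~600 µs delay, and the function name are illustrative choices of ours, not parameters from the cited studies:

```python
import numpy as np

FS = 44100  # audio sample rate in Hz (illustrative)

def lateralize_itd(mono, itd_s=0.0006, side="left"):
    """Return a 2-column stereo array in which the mono signal is
    lateralized purely by an interaural time difference (ITD): the
    far ear's copy is delayed by itd_s seconds (~600 us is near the
    maximum naturally occurring human ITD)."""
    delay = int(round(itd_s * FS))                     # delay in samples
    leading = np.concatenate([mono, np.zeros(delay)])  # near ear
    lagging = np.concatenate([np.zeros(delay), mono])  # far ear
    if side == "left":
        return np.column_stack([leading, lagging])     # columns: [L, R]
    return np.column_stack([lagging, leading])

# Lateralize a 1 s, 440 Hz tone to the listener's left:
t = np.arange(FS) / FS
tone = np.sin(2 * np.pi * 440 * t)
stereo = lateralize_itd(tone, side="left")
print(stereo.shape)  # (44126, 2): the right channel lags by 26 samples
```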

A host of studies followed up on the basic finding, revealing several attentional parameters within the context of that type of task (e.g., [30, 35, 40, 68, 69, 70, 71]). Importantly, preferential processing of the attended stream relative to the unattended streams is not absolute; for example, particularly salient information, such as the name of the subject, could sometimes be recalled from an unattended stream, presumably by reorienting attention [39, 66, 67, 69]. The streams were typically differentiated spatially (e.g., to each ear through a headset), indicating a spatial aspect to attentional selection and therefore the attentional system. Similarly, the streams were also typically differentiated by the voice of the person speaking, indicating attentional selection based on the spectrotemporal characteristics of the speaker’s voice such as the average and variance of pitch and speech rate (often reflecting additional information about the speaker, such as gender) [66, 67, 68, 72].

These findings are very similar to findings in the visual domain, indicating that attentional systems across senses are similarly organized. Visual attention can similarly be deployed to a small set of locations or to visual features with very little recall of nonattended visual stimuli [41]. Roughly analogous to speech shadowing are multiple-object-tracking tasks, which require subjects to visually track a small set of moving objects out of a group [47, 73]. Visual change-detection tasks are also very common, and they demonstrate very similar results as their auditory counterparts [50, 74, 75]. In sum, the evidence suggests that attentional systems are organized very similarly, perhaps identically, between at least vision and audition.

Despite these broad contributions, these types of tasks are of limited utility when tying behavior to cortical activity because the types of stimuli used are rather high-order (e.g., speech) with relatively uncontrolled low-level parameters. For example, the spectrotemporal profile of a stream of speech is complex, likely activating broad swaths of low-level sensory cortex in addition to higher-order regions dedicated to speech comprehension, including working and long-term memory [68, 72, 76, 77]. If one were to compare fMRI activity across auditory cortex in traditional dichotic listening tasks, the differences would have far too many variables for which to account before meaningful conclusions can be made about attentional systems. It may seem intuitive to compare cortical activity between conditions where identical speech stimuli have been presented and the subject either attended to the stimuli or did not. However, areas that have increased activity when the stimuli were attended could simply reflect higher-order processing that only occurs when attention is directed to the stimuli rather than directly revealing areas involved in attentional control. For example, recognition of particular words requires comparison of the speech stimulus to an internal representation, which requires activation of long-term memories of words [77]. Long-term memory retrieval does not happen if the subject never perceived the word due to attention being maintained on a separate speech stream, so such memory-retrieval activity would be confounded with attentional activity in the analysis [70].

Thus, simpler stimuli that are closer in nature to the initial spectrotemporal analyses performed by primary auditory cortex (PAC) are better suited for experiments intended to demonstrate attentional effects in cortex [24]. Reducing the speech comprehension element is a good first step, and research approached this by using a change-detection task and arrays of recognizable animal sounds (cow, owl, frog, etc.; Figure 5) [59]. These tests revealed what the researchers termed ‘change deafness,’ in which subjects often failed to identify changes in the sound arrays. Such inability to detect changes is entirely consistent with very limited attentional resources, and very similar to results of working-memory change-detection tasks [30, 53, 60, 78].

Figure 5.

Auditory feature attention. Schematic outlines a simple proposed attention task utilizing spectral (narrowband noise) and temporal (broadband noise) stimuli taken from the stimuli used by [10] to define auditory field maps. Subjects are asked to attend to one of two simultaneously presented stimuli, which are either (A) narrowband noise, in this case with central frequencies of 6400 and 1600 Hz and the same amplitude modulation (AM) rate of 8 Hz, or (B) broadband noise, in this case with AM rates of 2 and 8 Hz. (C) A proposed task that varies auditory feature attention, in which subjects are instructed to attend to each of the stimuli in an alternating pattern, cued by a short sound at the beginning of each block.
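Stimuli of this kind can be generated in a few lines; the sketch below is our own construction, and the bandwidth, sample rate, and function name are illustrative choices rather than the exact parameters used in [10]:

```python
import numpy as np

FS = 44100  # sample rate in Hz (illustrative)

def am_noise(duration_s, am_rate_hz, cf_hz=None, bandwidth_hz=None):
    """Amplitude-modulated noise of the kind used for field mapping.
    With cf_hz and bandwidth_hz set, the carrier is narrowband noise
    (a tonotopic stimulus: frequency varies, periodicity held fixed);
    with both left as None, the carrier is broadband white noise
    (a periodotopic stimulus: periodicity varies, spectrum flat)."""
    n = int(duration_s * FS)
    carrier = np.random.randn(n)
    if cf_hz is not None:
        # crude narrowband filter: zero all FFT bins outside the band
        spec = np.fft.rfft(carrier)
        freqs = np.fft.rfftfreq(n, 1 / FS)
        spec[np.abs(freqs - cf_hz) > bandwidth_hz / 2] = 0
        carrier = np.fft.irfft(spec, n)
    t = np.arange(n) / FS
    envelope = 0.5 * (1 + np.sin(2 * np.pi * am_rate_hz * t))
    return envelope * carrier

# Narrowband carrier at 1600 Hz with 8 Hz AM (as in panel A):
nb = am_noise(1.0, am_rate_hz=8, cf_hz=1600, bandwidth_hz=400)
# Broadband carrier with 2 Hz AM (as in panel B):
bb = am_noise(1.0, am_rate_hz=2)
```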

However, even these types of stimuli are not best suited to fMRI investigation at this stage of understanding due to their relative complexity compared to the basic spectrotemporal features of sounds initially processed in auditory cortex [12, 50]. As discussed in detail below, the auditory system represents sounds in spectral and temporal dimensions, and stimuli similar to those used to define those perceptual areas would be best suited now to evaluating the effects of attention in the auditory system (Figure 6) [5, 10].

Figure 6.

Auditory object attention and working memory. Schematic of one trial in an auditory change-detection task (e.g., see the change-deafness experiments in [59]). Subjects are first presented with an array of four distinct auditory objects (e.g., four different recordings of real animal sounds, randomized each trial from a larger set of iconic animal calls). In the initial memory array, the four animal sounds are presented binaurally and temporally overlapped for a short time (e.g., 2 s). Within the anechoic chamber setup often used in psychoacoustic studies, these speakers may be physically positioned at the corners of a square; fMRI measurements do not have the option of such a setup, but can instead use differences in the interaural time difference (ITD) and interaural level difference (ILD) to produce a similarly effective virtual space. The subject’s goal is typically to identify and remember all four animal sounds. The interstimulus interval is commonly filled with silence or white noise and can be varied in length to create shorter or longer retention intervals for attention or working-memory tasks, respectively. During the subsequent test array, subjects attempt to identify which one of the four auditory objects is now missing from the simultaneous animal sound presentations. In such auditory change-deafness paradigms, subjects fail to notice a large proportion of the changes introduced between the initial and test arrays.

2.3 Overview of auditory and visual working-memory research

Visual and auditory working memory were characterized in quick succession and discussed together in the highly popular and influential model by Baddeley and Hitch linking sensory perception, working memory, and executive control [79, 80, 81]. The generally accepted modern model of working memory has changed somewhat from the original depiction, but the vast majority of research has continued to work within this framework (for reviews, see [30, 51, 53, 79, 81]). Each sense is equipped with its own perceptual system and three memory systems: sensory memory, working memory, and long-term memory. Direct sensory input, gated by attentional selection, is one of the two primary inputs into working memory. Sensory memory is a vivid trace of sensory information that persists for a short time after the information has vanished and is essentially equivalent to direct sensory input into working memory, again gated by attentional selection; one can reorient attention to aspects of the sensory trace as if it were direct sensation. Long-term memory is the second primary input into working memory, gated by an attention-like selection generally referred to as selective memory retrieval. Working memory itself is a short-term memory workspace, lasting a couple of seconds without rehearsal, in which sensory information is maintained and manipulated by a central executive [82]. The central executive is a deliberately vague term with nebulous properties; as a colleague often quips, “All we know of the central executive is that it’s an oval,” after its oval-shaped depiction in the Baddeley and Hitch model. There is ongoing debate as to the level of the hierarchy at which each sense’s systems are integrated with those of the other senses, with no definitive resolution.

Visual working memory and visual sensory memory (i.e., ‘iconic memory’) were first systematically measured by George Sperling in 1960 [52]. He presented arrays of simple visual stimuli for short periods of time and asked subjects to report what they had seen after a number of short delays. He discovered that subjects could recall only a small subset of stimuli in a large array, reflecting the limited capacity of visual working memory. Furthermore, they could recall a particular subset of the stimuli when cued after the presentation but before the sensory trace had faded (≤100 ms), indicating that visual sensory memory exists and that visual attention can be deployed to stimuli either during sensation or during sensory memory. Over the next decade, Sperling went on to perform similar measurements in the auditory system, delineating very similar properties for auditory perception, sensory memory, and working memory [83].

Even without directly measuring brain activity, researchers concluded that the sensory working-memory systems must operate independently, based on dual-task paradigms in which subjects were asked to maintain visual, auditory, or both types of information in working memory. It was shown that subjects could recall ~3–4 ‘chunks’ of information (which may not precisely reflect individual sensory locations or features) of each type, regardless of whether they were asked to maintain visual, auditory, or both types of information [49, 78]. If the systems were integrated, one would be able to allocate multisensory working-memory ‘slots’ to either sense, with a maximum number (e.g., 6–8) that could be divided between the senses as desired. Instead, subjects can maintain on average ~3–4 visual chunks and ~3–4 auditory chunks, without any ability to reallocate ‘slots’ from one sense to the other.

While electroencephalography (EEG) and positron emission tomography (PET) recordings could broadly confirm the contralateral organization of the visual system and coarsely implicate the parietal and frontal lobes in attention and working memory, it was not until the advent of high-resolution fMRI that researchers could begin localizing attention and working memory in human cortex with any detail [6, 17, 37, 50, 84, 85, 86, 87, 88, 89, 90]. Model-based fMRI investigations have attempted to localize visual working memory by comparing BOLD activity across conditions in which subjects are required to hold different numbers of objects in working memory [50, 62, 91, 92]. The logic goes that, because visual-working-memory models posit that a maximum of ~3–4 objects can be held in visual working memory on average, areas whose activity increases with arrays of 1, 2, and 3 objects and remains constant with arrays of 4 or more objects should be areas controlling visual working memory. Such areas were found bilaterally in parietal cortex by multiple laboratories [57, 62, 91, 93], but activity related to visual working memory has also been measured in early visual cortex (e.g., V1 and hV4) [55, 65, 94], prefrontal cortex [95], and possibly in object-processing regions in lateral occipital cortex [62], indicating that working-memory tasks recruit areas throughout the visual-processing hierarchy. (We note that the report of object-processing regions is controversial, as the cortical coordinates reported in that study are more closely consistent with the human motion-processing complex, hMT+, than with the lateral occipital complex [15, 17, 96, 97].) However, little has been done to measure visual-working-memory activity within visual field maps, so these studies should be considered preliminary rather than definitive. Measurements within CFMs would, in fact, help to clear up such controversies.
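The set-size logic of these studies can be made concrete: under a fixed-capacity model, the number of items actually held (and hence the load-dependent component of the BOLD response) rises with set size and then plateaus at the capacity limit. A minimal sketch (the function name and capacity value of 4 are our illustrative choices):

```python
def items_held(set_size, capacity=4):
    """Items actually retained under a fixed-capacity model: the
    load-dependent signal predicted by these studies rises with set
    size and then plateaus at the capacity limit K."""
    return min(set_size, capacity)

print([items_held(n) for n in range(1, 9)])
# [1, 2, 3, 4, 4, 4, 4, 4]: a region tracking working-memory storage
# should show this rise-then-plateau profile across set sizes
```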

Auditory-working-memory localization with fMRI has been quite limited compared to its visual counterpart, and has largely concentrated on speech stimuli rather than fundamental auditory stimuli [30, 68]. As noted above regarding the localization of attention with fMRI, highly complex stimuli introduce too many variables, so a different approach is necessary. Furthermore, even low-level auditory sensory areas have only very recently been properly identified [5, 10].


3. Auditory processing in human cortex

3.1 Inputs to auditory cortex

Auditory processing is essential for a wide range of our sensory experiences, including the identification of and attention to environmental sounds, verbal communication, and the enjoyment of music. The intricate sounds in our daily environments are encoded by our auditory system as the intensity of their individual component frequencies, comparable to a Fourier analysis [98]. This spectral sound information is thus one fundamental aspect of the auditory feature space (Figure 7A,C). The basilar membrane of the inner ear responds topographically to incoming sound waves with higher frequencies transduced to neural signals near the entrance to the cochlea and progressively lower frequencies transduced further along the membrane. This organized gradient of frequencies (i.e., tones) is referred to as tonotopy (i.e., a map of tones); this topography may also be termed cochleotopy, referring to a map of the cochlea. Tonotopic organization is maintained as auditory information is processed and passed on from the inner ear through the brainstem, to the thalamus, and into PAC along Heschl’s gyrus (HG; Figure 1; for additional discussion, see [2, 5, 6, 12, 99, 100]). The preservation of such topographical organization from the basilar membrane of the inner ear to auditory cortex allows for a common reference frame across this hierarchically organized sensory system [6, 7, 12, 13, 22, 23].
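The Fourier-style decomposition described above can be illustrated computationally; the toy two-tone “sound” below is our own construction, not a stimulus from the cited studies:

```python
import numpy as np

FS = 8000  # sample rate in Hz (illustrative)

# A toy 'environmental sound': the sum of a 440 Hz tone and a quieter
# 1000 Hz tone, one second long.
t = np.arange(FS) / FS
sound = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# Decompose the sound into the intensities of its component
# frequencies, roughly the analysis the basilar membrane performs.
spectrum = np.abs(np.fft.rfft(sound))
freqs = np.fft.rfftfreq(len(sound), 1 / FS)
components = sorted(int(round(freqs[i])) for i in np.argsort(spectrum)[-2:])
print(components)  # [440, 1000]: the two component frequencies recovered
```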

Figure 7.

Example tonotopic and periodotopic stimuli for auditory field mapping. (A) Three stimulus values for one dimension of auditory feature space (e.g., tonotopy) are depicted in the graph: 1—low (L, red); 2—medium (M, green); 3—high (H, blue). (B) Three stimulus values for a second dimension of auditory feature space (e.g., periodotopy) are depicted in the second graph: 1—low (L, orange); 2—medium (M, aqua); 3—high (H, purple). (C) Tonotopic representations can be measured using narrowband noise stimuli, which hold periodicity constant and vary frequency. (i) Sound amplitude (arbitrary units) for this stimulus set as a function of time in seconds. (ii) Sound spectrograms for two example narrowband noise stimuli with center frequencies (CF) of 1600 Hz (top) and 6400 Hz (bottom). Higher amplitudes in decibels (dB) are represented as ‘warmer’ colors (see dB legend below). (D) Periodotopic representations can be measured using broadband noise stimuli, which maintain constant frequency information and vary periodicity. (i) Sound amplitude (arbitrary units) for this stimulus set as a function of time in seconds. (ii) Sound spectrograms for two example broadband noise stimuli with amplitude modulation (AM) rates of 2 Hz (top) and 8 Hz (bottom). Higher amplitudes are again depicted as ‘warmer’ colors (see dB legend on bottom).
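As a rough illustration of the two stimulus classes in Figure 7, the sketch below synthesizes a narrowband-noise tonotopic stimulus and an amplitude-modulated broadband periodotopic stimulus. The sample rate, duration, bandwidth, and component counts are assumptions chosen for illustration, not the parameters of the cited experiments.

```python
import numpy as np

# Assumed parameters: 44.1 kHz sample rate, 2 s stimulus duration.
fs = 44_100
t = np.arange(0, 2.0, 1 / fs)

def narrowband_noise(cf: float, bw_octaves: float = 1 / 3) -> np.ndarray:
    """Tonotopic stimulus: band-limited noise around a center frequency,
    built by summing random-phase sinusoids within the band."""
    lo, hi = cf * 2 ** (-bw_octaves / 2), cf * 2 ** (bw_octaves / 2)
    rng = np.random.default_rng(0)
    freqs = rng.uniform(lo, hi, size=50)
    phases = rng.uniform(0, 2 * np.pi, size=50)
    sig = sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
    return sig / np.max(np.abs(sig))

def am_broadband_noise(am_rate: float) -> np.ndarray:
    """Periodotopic stimulus: white noise whose envelope is sinusoidally
    amplitude-modulated at the given rate (Hz)."""
    rng = np.random.default_rng(1)
    carrier = rng.standard_normal(t.size)
    envelope = 0.5 * (1 + np.sin(2 * np.pi * am_rate * t))
    return carrier * envelope

tone_stim = narrowband_noise(1600)   # cf. Figure 7C, CF = 1600 Hz
period_stim = am_broadband_noise(2)  # cf. Figure 7D, AM rate = 2 Hz
```

The key design point is that each stimulus set varies only one dimension: the narrowband stimuli change center frequency while keeping the temporal envelope constant, and the broadband stimuli change envelope rate while keeping the spectrum constant.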

A second fundamental aspect of the auditory feature space is temporal sound information, termed periodicity (Figure 7B,D) [10, 101, 102]. Human psychoacoustic studies indicate that there are separable filter banks (i.e., neurons with distinct receptive fields) not only for frequency spectra, as expected given tonotopy, but also for temporal information [103, 104, 105]. The auditory nerve likely encodes such temporal information through activity time-locked to the periodicity of the amplitude modulation (i.e., the length of time from peak-to-peak of the temporal envelope) [101, 106]. Temporally varying aspects of sound are thought to preferentially activate neurons selective for the onset and offset of sounds and for sounds of certain durations. Organized representations of periodicity in primates have been measured to date in the thalamus of macaque and the PAC of human, and are termed periodotopy: a map of neurons that respond differentially to sounds of different temporal envelope modulation rates [5, 10, 107]. Repeating periodotopic gradients exist in the same cortical locations as, but are orthogonal to, tonotopic gradients, which allows researchers to use measurements of these two acoustic dimensions to identify complete AFMs.

3.2 fMRI measurements of auditory field maps

Measurements of the structure and function of human PAC and lower-level auditory cortex have been relatively few to date, with many studies hampered by methodological issues (for reviews, see [5, 23]). Precise measurements of AFMs across primary and lower-level auditory cortex are vital, however, for studying the neural underpinnings of such prominent auditory behaviors as attention and working memory. Recent research has now successfully applied fMRI methods commonly used to measure visual field maps to the study of AFMs in human auditory cortex.

3.2.1 Phase-encoded fMRI

The phase-encoded fMRI paradigm provides highly detailed in vivo measurements of CFMs in individual subjects [9, 10, 15, 108, 109, 110, 111]. This technique measures topographical representations using stimuli that periodically repeat a set of values in an orderly sequence (Figure 7). The phase-encoded methods are specialized for AFM measurements by combining this periodic stimulus with a sparse-sampling paradigm (Figure 8) [10, 112, 113, 114, 115]. Sparse-sampling separates the auditory stimulus presentation from the noise of the MR scanner during data acquisition to avoid contamination of the data by nonstimulus sounds [116, 117, 118].

Figure 8.

Schematic of phase-encoded fMRI paradigm for auditory field mapping experiments. (A) Diagram of a single stimulus phase shows the components of a single block of one auditory stimulus presentation (striped green) followed by an fMRI data acquisition period (solid green). This sparse-sampling paradigm separates the auditory stimulus presentation from the noisy environment of the MR scanner acquisition. The timing of the acquisition (2 s delay) is set to collect the approximate peak response of auditory cortex to the stimulus, in accordance with the estimated hemodynamic delay. (B) Each phase (block) of an example tonotopic stimulus is displayed within the gray box above the colored blocks; one block thus represents one stimulus position in the ‘phase-encoded’ sequence. The diagram of an example stimulus cycle below this depicts six presentation blocks (striped green + solid green) grouped together into one stimulus cycle (blue). Each block, or stimulus phase, in each cycle represents a specific frequency; e.g., for tonotopic measurements, the stimulus presented changes sequentially through each of the frequencies (Hz) listed in the gray box. The term ‘traveling-wave’ is also used to describe this type of phase-encoded stimulus presentation, as the stimuli produce a sequential activation of representations across a topographically organized cortical region. (C) Diagram shows a full, single scan comprising six cycles. (D) Legend denotes color-coding for diagrams above.
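The sparse-sampling schedule in Figure 8 can be sketched as a simple timing table. The six-blocks-per-cycle, six-cycles-per-scan structure follows the figure; the specific stimulus and acquisition durations (2 s each) are assumptions for illustration and depend in practice on the scanner protocol.

```python
# Sketch of the sparse-sampling schedule in Figure 8 (assumed durations).
N_CYCLES = 6    # cycles per scan (Figure 8C)
N_BLOCKS = 6    # stimulus phases per cycle (Figure 8B)
STIM_DUR = 2.0  # seconds of stimulus presentation per block (assumed)
ACQ_DUR = 2.0   # seconds of noisy data acquisition per block (assumed)
BLOCK_DUR = STIM_DUR + ACQ_DUR

schedule = []
for cycle in range(N_CYCLES):
    for block in range(N_BLOCKS):
        onset = (cycle * N_BLOCKS + block) * BLOCK_DUR
        schedule.append({
            "cycle": cycle,
            "phase": block,                 # stimulus position in the sequence
            "stim_onset": onset,            # stimulus plays in scanner silence
            "acq_onset": onset + STIM_DUR,  # acquisition after the stimulus ends
        })

scan_length = N_CYCLES * N_BLOCKS * BLOCK_DUR  # 144 s under these assumptions
```

Because the stimulus always ends before the acquisition begins, the recorded BOLD response is never contaminated by the acoustic noise of the scanner itself.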

The periodic stimulus allows for the use of a Fourier analysis to determine the value of the stimulus (e.g., 800 Hz frequency for tonotopy) that most effectively drives each cortical location [110]. The cortical response at a specific location is said to be ‘in phase’ throughout the scan with the stimulus value that most effectively activates it, hence the term ‘phase-encoded’ mapping. The alternate term ‘traveling-wave’ mapping arises from the consecutive activation of one neighboring cortical location after the other to create a wave-like pattern of activity across the CFM during the stimulus presentation. The phase-encoded paradigm only captures cortical activity that is at the stimulus frequency, thus excluding unrelated cortical activity and other sources of noise. Similarly, cortical regions that are not organized topographically will not be significantly activated by phase-encoded stimuli, as there would be no differential activation across the cortical representation [8, 15, 16]. The statistical threshold for phase-encoded cortical activity is commonly determined by coherence, which is a measure of the amplitude of the BOLD signal modulation at the frequency of the stimulus presentation (e.g., six stimulus cycles per scan), divided by the square root of the power over all other frequencies except the first and second harmonic (e.g., 12 and 18 cycles per scan) [15, 17, 110].
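The coherence statistic just described can be sketched as follows, using the text's example of six stimulus cycles per scan and excluding the harmonics at 12 and 18 cycles from the noise estimate. The exact bin-handling details (demeaning, DC exclusion) are assumptions of this sketch rather than a specific software implementation.

```python
import numpy as np

def coherence(ts: np.ndarray, stim_cycles: int = 6) -> float:
    """Amplitude of the BOLD time series at the stimulus frequency,
    divided by the square root of the power summed over all other
    frequencies, excluding the first and second harmonics."""
    amp = np.abs(np.fft.rfft(ts - np.mean(ts)))
    signal = amp[stim_cycles]
    exclude = {0, stim_cycles, 2 * stim_cycles, 3 * stim_cycles}
    noise = np.sqrt(sum(a * a for i, a in enumerate(amp) if i not in exclude))
    return signal / noise
```

A voxel whose time series modulates strongly at six cycles per scan yields a high coherence; a voxel dominated by unrelated activity or noise yields a low one, which is why coherence serves as the statistical threshold for phase-encoded maps.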

Measurement and analysis of phase-encoded CFM data must be performed within individual subjects rather than across group averages to avoid problematically blurring together discrete CFMs and their associated computations (for extended discussions, see [5, 15, 17]). CFMs may differ radically in size and anatomical position among individual subjects independent of brain size; this variation is reflected in associated shifts in cytoarchitectural and topographic boundaries [119, 120, 121, 122, 123, 124]. In the visual system, for example, V1 can differ in size by at least a factor of three despite its location on the relatively stable calcarine sulcus [120]. Accordingly, when such data are group-averaged across subjects, especially through such approaches as aligning data from individual brains to an average brain with atlases such as Talairach space [125] or Montreal Neurological Institute (MNI) coordinates [126], the measurements will be blurred to such a degree that the measured topography of the CFMs is inaccurate or even lost. Blurring from such whole-brain anatomical co-alignment will thus cause different CFMs to be incorrectly averaged together into a single measurement, mixing data together from adjacent CFMs within each subject and preventing the analysis of the distinct computations of each CFM.

3.2.2 Criteria for auditory field map identification

In order to avoid the imprecise application of the term ‘map’ to topographical gradients or other similar patterns of cortical organization, the designation of an AFM—and CFMs in general—should be established according to several key criteria (Figure 9) (for reviews, see [5, 8, 15]). First, by definition, each AFM must contain at least the two orthogonal, nonrepeating topographical representations of fundamental acoustic feature space described above: tonotopy and periodotopy (Figure 9A) [10, 17, 21, 108, 110, 111]. When this criterion is ignored and only one topographical representation is measured (e.g., tonotopy), it is impossible to correctly identify boundaries among cortical regions. Measurements of the organization and function of specific regions of early auditory cortex in humans have long relied mostly on tonotopic measurements alone, which has resulted in variable, conflicting, and ultimately unusable interpretations of the organization of human PAC and surrounding regions (for detailed reviews, see [5, 23]).

Figure 9.

Definition of auditory field maps (AFMs). (A) (i) Schematic of a single gradient of dimension 1 (e.g., tonotopy). Black arrow shows the low-to-high gradient for this tonotopic gradient. With only measurements of the single dimension of tonotopy, it cannot be determined whether the region within dimension 1 contains one or more cortical field maps without measuring a second, orthogonal gradient. (ii) Schematic of a single gradient of dimension 2 (e.g., periodotopy) overlapping the tonotopic gradient in (i) to form a single AFM like hA1. Black arrow shows the low-to-high gradient for this periodotopic gradient. Note the orthogonal orientation of the two gradients (i vs. ii) composing this AFM. (iii) schematic of an alternative gradient organization for periodotopy overlapping the same tonotopic gradient in (i). Black arrows now show two low-to-high gradients (G1: gradient 1, G2: gradient 2) of this second dimension within the same territory as the orthogonal low-to-high gradient in (i). The gray dotted line marks the boundary dividing this region into two AFMs. (B) (i) In a properly defined AFM, measurements along the cortical representation of a single value of tonotopy (e.g., green) span all values of periodotopy (e.g., orange to cyan to purple), and vice versa. (ii) Schematic of vectors drawn along a single CFM from centers of low-stimulus-value regions of interest (ROIs) to high-stimulus-value ROIs for dimensions 1 (e.g., red to blue) and 2 (e.g., orange to purple). The offset measured between the low-to-high vectors for each dimension should be approximately 90° to be considered orthogonal and thus allow for each voxel/portion of the map to represent a unique combination of dimension 1 and dimension 2 values. (C) The diagram demonstrates how gradient boundaries for one dimension of an AFM are determined. Black dots denote hypothetical measurement points along the cortical surface shown in (A, iii). 
Black arrows note gradient directions (low, L, to medium, M, to high, H). Dashed gray lines mark gradient reversals. Two gradients that span the full range of dimension 2 measurements can be divided into G1 and G2, with the representations of stimulus values increasing from low to high across the cortical surface in one gradient to the boundary where the representations then reverse back from high to low along the cortical surface in the next gradient. G3 and G4 (gradients 3 and 4, respectively) denote additional gradients continuing at reversals to regions outside the diagram (for review, see [23]).

The representation of one dimension of sensory space—one topographical gradient along cortex like tonotopy—is not adequate to delineate an AFM, or CFMs in any sensory system. The measurement of a singular topographical dimension merely demonstrates that this particular aspect of sensory feature space is represented along that cortical region. The CFMs within that cortical region cannot be identified without measuring an orthogonal second dimension: a region of cortex with a large, confluent gradient for one dimension could denote a single CFM (Figure 9Ai,ii) or many CFMs (Figure 9Ai,iii), depending upon the organization of the overlapping second topography. Similarly, the two overlapping gradients must be approximately orthogonal, as they will otherwise not represent all the points in sensory space uniquely (Figure 9B) [15, 16, 127, 128]. As the complexity of adjacent gradients increases, the determination of the emergent CFM organization grows increasingly complicated.
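The orthogonality requirement in Figure 9B(ii) reduces to a simple geometric check: draw a vector from the low-value to the high-value region of interest for each dimension on the flattened cortical surface and measure the angle between them. The coordinates below are hypothetical flat-map positions in arbitrary units.

```python
import numpy as np

def gradient_angle(low1, high1, low2, high2) -> float:
    """Angle in degrees between the low-to-high vectors of two gradients
    drawn on the flattened cortical surface."""
    v1 = np.asarray(high1, float) - np.asarray(low1, float)
    v2 = np.asarray(high2, float) - np.asarray(low2, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical example: tonotopy runs roughly left-to-right,
# periodotopy roughly bottom-to-top across the same patch of cortex.
angle = gradient_angle((0, 0), (10, 1), (1, 0), (0, 10))
# An offset near 90 degrees indicates that each point in the map
# represents a unique combination of the two dimensions.
```

When the measured offset departs substantially from 90°, the two gradients cannot tile the sensory feature space uniquely, and the candidate region fails this AFM criterion.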

Due to the relatively recent measurements of periodotopic representations in human auditory cortex and monkey midbrain, AFMs in core and belt regions can now be identified [10, 102]. The identification of periodotopy as the second key dimension of auditory feature space is strengthened by psychoacoustic studies, which show that separable filter banks occur not only for frequency spectra, but also temporal information, indicating the presence of neurons with receptive fields tuned to ranges of frequencies and periods [14, 103, 104, 105]. Additionally, representations of temporal acoustic information (i.e., periodicity) have been measured in the auditory system of other model organisms, including PAC in domestic cat and inferior colliculus in chinchilla [129, 130].

A second AFM criterion is that each of its topographical representations must be organized as a generally contiguous and orderly gradient [16, 128]. For such a gradient to develop, the representation must be organized such that it covers a full range of sensory space, in order from one boundary to the other (e.g., from lower to upper frequencies for tonotopy; Figure 9C). A topographical gradient is thus one of the most highly structured features of the cortical surface that can be measured using fMRI. The odds of two orderly, orthogonal gradients arising as a spurious pattern from noise in an overlapping section of cortex are extraordinarily low (for a calculation of the probability of spurious gradients arising from noise, see [19]).

Third, each CFM should contain representations of a considerable amount of sensory space. Differences in cortical magnification are likely among CFMs with different computational needs, but a large portion of sensory space is still expected to be represented (e.g., [15, 16, 19, 21, 97, 127, 131]). A high-quality fMRI measurement of the topography is necessary to adequately capture the sensory range and magnification. The quality of the measurement is dependent upon choosing an appropriate set of phase-encoded stimuli. The sampling density and range of values in the stimulus set both affect the accuracy and precision of the measurement. For example, the intensity (i.e., loudness) of the tonotopic stimulus alone can alter the width of the receptive fields of neurons in PAC and consequently increase the lateral spread of the BOLD signal measured in neuroimaging [132]. In addition, some degree of blurring in the measurements of the topography is expected due to such factors as the overlapping broad receptive fields, the inherent spatial spread of the fMRI signal, and measurement noise [64, 109, 133, 134]. The stimulus parameters and how they may affect the cortical responses should therefore be given careful consideration.

Fourth, the general features of the topographies composing the CFMs and the pattern of CFMs across cortex should both be consistent among individuals. It is essential to remember, nevertheless, that cytoarchitectural and topographic boundaries in PAC vary dramatically in size and anatomical location independent of overall brain size [119, 121, 122, 123, 124, 135], as do CFMs across visual cortex [16, 17, 120, 136]. Regardless of these variations, the overall organization among specific CFMs and cloverleaf clusters will be maintained across individuals.

3.2.3 Definition of auditory field map boundaries

The measurement of AFMs is one of the few reliable in vivo methods to localize the distinct borders of the auditory core and belt regions in individual subjects [5, 10, 12, 23]. The boundaries of an AFM—and of CFMs in general—are determined by carefully defining the edges of overlapping sections of tonotopic and periodotopic gradients within a specific cortical region in an individual hemisphere (Figure 9). If a set of overlapping representations of the two dimensions is present in isolation, the boundary of the AFM can be estimated to be where the gradient responses end, although there will likely be some spatial blurring or spreading of the representation along these edges (Figure 9Ai,ii) [16, 17, 110, 137]. Multiple adjacent representations that each span the full range of one dimension (e.g., low-to-high frequencies of tonotopy) can be divided into separate sections at the points at which the gradients reverse (Figure 9Ai,iii). At the gradient reversals, the representations of stimulus values increase from low to high (or vice versa) across the cortical surface in one section to the boundary where the representations in the next AFM then reverse back from high to low (or vice versa) along the cortical surface in the next section (Figure 9C). Such phase-encoded fMRI measurements of the boundaries of the AFMs in human auditory cortex have been shown to be closely related to those determined by invasive human cytoarchitectural studies and nonhuman primate cytoarchitectural, connectivity, and tonotopic measurements [2, 5, 10, 121, 138, 139, 140, 141, 142, 143, 144].
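The reversal logic above can be sketched for the one-dimensional case: given preferred stimulus values sampled along a line across the cortical surface, a map boundary falls wherever the low-to-high progression flips direction. The sampled frequency values below are hypothetical.

```python
import numpy as np

def reversal_indices(values) -> list:
    """Sample indices at which the gradient direction reverses,
    i.e., candidate map boundaries along a line across cortex."""
    d = np.sign(np.diff(np.asarray(values, float)))
    return [i for i in range(1, d.size)
            if d[i] != 0 and d[i - 1] != 0 and d[i] != d[i - 1]]

# Hypothetical preferred frequencies (Hz) sampled along cortex:
# one full low-to-high gradient, a reversal at the high-tone peak,
# a descent back to low tones, and the start of a third gradient.
tono = [200, 400, 800, 1600, 800, 400, 200, 400, 800, 1600]
boundaries = reversal_indices(tono)  # peak and trough sample indices
```

In practice the same idea is applied in two dimensions across the flattened cortical surface, and only reversals in the periodotopic gradient split a confluent tonotopic gradient into separate AFMs within a cluster.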

At a scale of several centimeters, groups of adjacent CFMs are organized within both auditory and visual cortex into a macrostructural pattern called the cloverleaf cluster, named for the similarity of the organization of the individual CFMs composing a cluster to the leaves of a clover plant [8, 10, 15, 16, 17, 18, 19, 20]. Within a cluster, one dimension of sensory topography is represented in concentric, circular bands from center to periphery of the cluster, and the second, orthogonal dimension separates this confluent representation into multiple CFMs with radial bands spanning the cluster center to periphery. In AFM clusters, a confluent, concentric tonotopic representation is divided into specific AFMs by reversal in the orthogonal periodotopic gradients. Neighboring cloverleaf clusters are then divided along the tonotopic reversals at the cluster boundaries.

While CFM clusters have consistent positions relative to one another across the cortical surface, CFMs within each cluster may be oriented differently among individuals, as if rotating about a cluster’s central representation. This inter-subject variability is consistent with the variability in molecular gradient expression that gives rise to the development of cortical topographical gradients [145, 146, 147, 148, 149]. This unpredictability of cluster anatomical location and rotation emphasizes the need for careful data analysis to be performed in individual subjects, in which common CFMs can be identified by analyzing the pattern of CFMs and cloverleaf clusters within that sensory system.

3.3 Organization of human auditory field maps

3.3.1 Auditory cortex organization in macaque monkey vs. human

Auditory processing in human cortex and in nonhuman primates occurs bilaterally along the temporal lobes near the lateral sulcus (Figure 1; e.g., [5, 10, 115, 121, 139, 140, 141, 142, 144, 150, 151, 152, 153]). In the macaque monkey model system upon which much of our understanding of human audition is based, converging evidence from cytoarchitectural, connectivity, electrophysiological, and neuroimaging studies has generally identified 13 auditory cortical areas grouped into core, medial and lateral belt, and parabelt regions, which are associated with primary, secondary, and tertiary levels of processing, respectively (for extended discussions, see [2, 5, 154]). Auditory processing in macaque cortex begins along the superior temporal gyrus (STG) within three primary auditory areas: A1, R, and RT [140]. In contrast to early visual processing, in which primary visual cortex is composed of V1 alone, primary auditory cortex is considered to be a core region composed of these three AFMs; all three areas contain the expanded layer IV arising from dense thalamic inputs and the high expression of cytochrome oxidase, acetylcholinesterase, and parvalbumin distinctive to primary sensory cortices [2, 142, 143, 150, 152, 154, 155, 156, 157]. The eight belt regions are divided into four areas along both the lateral (CL, ML, AL, RTL) and medial (CM, RM, MM, RTM) sides of the core [158, 159, 160]. Along the lateral belt, two additional areas create the parabelt, which allocates auditory information to neighboring auditory cortex as well as to multimodal cortical regions [2, 161].

Based on cytoarchitectural, connectivity, and neuroimaging measurements, early auditory processing in human cortex has been shown to resemble the organization of lower-level macaque auditory processing [10, 23, 121, 144, 151, 152, 153, 162]. Over the ~25 million years of evolutionary separation between the species, the core, belt, and parabelt areas have rotated from the STG to Heschl’s gyrus (HG), an anatomical feature unique to humans [11, 163]. The specific structure of HG differs across individuals, variably existing as a single or double gyrus. PAC is then either mostly centered on the single HG or overlapping both gyri in the case of two (Figure 1B,C) [122, 135, 136]. Core, belt, and parabelt areas have thus shifted in orientation from a strictly rostral-caudal axis for A1 to R to RT along macaque STG to a medial-lateral axis along human HG for hA1, hR, and hRT. The naming of the AFMs in human is based on the likely homology to macaque, but adds an ‘h’ to signify human [10].

3.3.2 Eleven human AFMs compose three cloverleaf clusters overlapping Heschl’s gyrus

With our new understanding of periodotopic representations overlapping the previously identified tonotopic gradients, in vivo fMRI measurements can now identify the 11 AFMs that compose the core and belt regions of human auditory cortex (Figure 10) [5, 10, 12, 23]. Running from STG to the circular sulcus (CiS) along HG are three distinct, concentrically organized tonotopic representations. The primary circular tonotopic gradient is one dimension of the HG cloverleaf cluster, with a confluent low-tone representation located centrally and expanding smoothly to high-tone representations at the outer edge (Figure 10B,C) [5]. The HG cluster is divided along the orthogonal periodotopic reversals into two AFMs each of core, medial belt, and lateral belt: hA1, hR, hMM, hRM, hML, and hAL (Figure 10D,E). Positioned at the tip of HG, hA1 is the largest of these core and belt AFMs, with the posterior/lateral region representing low tones and the anterior/medial region representing high ones. hA1 is involved in the most basic of cortical auditory computations, which is reflected in its representations of broad ranges of tonotopy and periodotopy [2].

Figure 10.

Auditory field maps and cloverleaf clusters in human cortex. (A) Anatomical views of Heschl’s gyrus (HG), superior temporal gyrus (STG) and surrounding auditory cortex in an individual subject’s left hemisphere (S2). (i) Inflated 3-D rendering of the cortical surface. Light gray denotes gyri; dark gray denotes sulci. The approximate region presented in the other panels is indicated by the dotted black line. Note that this subject has a double peak along HG. (ii) flattened cortical surface of the region indicated by the dotted black line in (i). AFM boundaries between maps along tonotopic reversals are indicated by solid black lines. These tonotopic reversals constitute the separation of cloverleaf clusters from one another. AFM boundaries along periodotopic reversals are indicated by dotted black lines. These periodotopic reversals compose the separation between maps within a cloverleaf cluster. Red text indicates AFM names. (B) Tonotopic gradients measured using narrowband noise stimuli with a phase-encoded fMRI paradigm (example single-subject data from [10]). Color overlay indicates the preferred frequency range for each voxel. CF: center frequency in Hz. For clarity, only voxels within the core and belt AFMs are shown. Solid and dotted black lines are as in (A). Coherence ≥0.20. Inset scale bar designates 1 cm along the flattened cortical surfaces in (B, D). Inset legend indicates anatomical directions for (B-E). M: medial; L: lateral; A: anterior; P: posterior. (C) Diagram is based on individual-subject data measured by [10] in multiple phase-encoded fMRI experiments. Approximate positions of core AFMs (hA1, hR, hRT) are shown in white, and approximate positions of belt AFMs (hML, hAL, hRTL, hRTM, hRM, hMM, hCM, hCL) are shown in gray. Darker beige background indicates the plane of the lateral sulcus, while lighter beige overlay indicates gyri. Gyri are also marked with dashed black lines. HG: Heschl’s gyrus. 
CG: circular gyrus; CiS: circular sulcus; a/p STG: anterior/posterior superior temporal gyrus. Diagram depicts the locations of tonotopic representations overlaid along the core and belt AFMs, with low (L) and high (H) tonotopic representations marked in red and blue, respectively. Dotted black lines designate the boundaries between AFMs within three cloverleaf clusters: HG cluster with hA1; hCM/hCL cluster (partial cluster defined to date); hRTM/hRT/hRTL cluster (partial cluster defined to date). (D) Periodotopic representations measured using broadband noise stimuli with a phase-encoded fMRI paradigm. Data are from the same subject as shown for tonotopy in (B), with the color overlay now indicating the preferred period range for each voxel. AM rate: amplitude modulation rate in Hz. Other details are as in (B). (E) Diagram depicts periodotopic representations overlaid on the same example region of cortex as in (C). L and H now designate the approximate locations of low (orange) and high (purple) periodotopic representations, respectively. Adapted from Barton et al. [10]. For a detailed review, see [5].

A reversal in the tonotopic gradient along the anteromedial edge of the HG cluster divides it from the CM/CL cluster just past the tip of HG (Figure 10B,C). A high-periodicity gradient reversal splits this tonotopic gradient into hCM and hCL, two regions associated with early language and speech processing as well as audiovisual integration (Figure 10D,E) [164]. Finally, the reversal in the tonotopic gradient along the posterolateral edge of the HG cluster separates it from the RT cluster positioned where HG meets STG (Figure 10B,C). Two reversals in the periodotopic representations here divide the RT cluster into hRT, hRTM, and hRTL (Figure 10D,E). In macaque, these AFMs along STG are thought to subserve lower-level processing of auditory stimuli like temporally modulated environmental sounds [158, 159]. More research is needed to determine what other AFMs form the CM/CL and RT clusters. Based on emerging data, it is likely that AFMs will also be a fundamental organizational feature of auditory cortex adjacent to these cloverleaf clusters, such as the planum temporale (PT), planum polare (PP), and STG.

3.4 Measuring attention and working memory in human AFMs

The characterization of AFMs and cloverleaf clusters will be crucial for the study of the structure and function of human auditory cortex, as these in vivo measurements allow for the systematic exploration of computations across a sensory system (for reviews, see [5, 17]). Such AFM organization provides a basic framework for the complex processing and analysis of input from the sensory receptors of the inner ear [5, 12, 17, 23]. The cloverleaf cluster organization of AFMs may also play a role in coordinating neural computations, with neurons within each cluster sharing computational resources such as common mechanisms to coordinate neural timing or short-term information storage [8, 12]. Similarly, vision studies suggest that functional specializations for perception are organized by cloverleaf clusters, as a particular cloverleaf cluster can be functionally differentiated from its neighbors by its pattern of BOLD responses, surface area, cortical magnification, processing specialization, and receptive field sizes [12, 16, 18, 19, 21, 165]. These distinctions indicate that CFMs within individual cloverleaf clusters are not only anatomically but also functionally related [15, 18, 20, 166].

The cluster organization is not necessarily thought to be driving common sensory functions, but rather reflects how multiple stages in a sensory processing pathway might arise during development across individuals and during evolution across species. It is likely that this cluster organization, like the topographic organization of CFMs, allows for efficient connectivity among neurons that represent neighboring aspects in sensory feature space [166, 167, 168, 169]. Since the axons contained within one cubic millimeter of cortex can extend 3-4 km in length, efficient connectivity is vital for sustainable energetics in cortex [170].

The definitions of AFMs and the cloverleaf clusters they compose using phase-encoded fMRI will thus serve as reliable, independent localizers for investigations of attention and working memory in early auditory cortex across individuals. Measurements of individual AFMs along the cortical hierarchy will help reveal the distinct stages of top-down and bottom-up auditory processing. In addition, changes in AFMs can be tracked to study how auditory cortex changes under various attentional and working memory tasks and disorders (e.g., [145, 171, 172, 173, 174, 175, 176, 177]).


4. Conclusion

The human brain has sophisticated systems for perception, trace memory, attention, and working memory for audition and vision, and likely the other senses as well. These systems appear to be organized in a very similar manner for each sense, despite the inputs to each system and information content being quite different. Behavioral measures of the last several decades have led to the development of well-defined models of each system. These models form the basis for the investigation of their underlying architecture in the cortical structures of the human brain. EEG and PET have allowed for spatially coarse investigation of cortical activity, but with the advent of fMRI, it has become possible to make exceptionally detailed spatial measurements. The methods of investigation must be carefully crafted to best elicit activity reflecting the desired aspects of each system; not only must the tasks be appropriate for fMRI, the stimuli and task must be closely matched not just to the system being studied, but to the inputs into that system as well.

For both audition and vision, sensory processing in cortex occurs in cloverleaf clusters of CFMs. This organizational pattern has clearly been demonstrated in the lower tiers of the processing hierarchy and very likely holds throughout. Because the CFMs across the entire hierarchy (or at least most of it) of one sense can be measured in just one session in the fMRI scanner, they make incredibly efficient localizers. CFMs can be measured in individual subjects and serve as functional localizers that allow more accurate averaging across subjects than anatomical localizers. As such, given the pervasive and fundamental role CFMs play in sensory systems, they are also excellent candidates for measuring the effects of attention and working memory in cortex. To best accomplish this, stimuli similar to those used to measure CFMs should be incorporated into the traditional tasks used to define attentional and working-memory models.



Acknowledgments

This material is based upon work supported by the National Science Foundation under Grant Number 1329255 and by startup funds from the Department of Cognitive Sciences at the University of California, Irvine.


Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.


  175. 175. Fine I, Wade AR, Brewer AA, May MG, Goodman DF, Boynton GM, et al. Long-term deprivation affects visual perception and cortex. Nature Neuroscience. 2003;6(9):915-916. DOI: 10.1038/nn1102
  176. 176. Brewer AA, Barton B. Visual cortex in aging and Alzheimer’s disease: Changes in visual field maps and population receptive fields. Frontiers in Psychology. 2014;5:74. DOI: 10.3389/fpsyg.2014.00074
  177. 177. Brewer AA, Barton B. Changes in visual cortex in healthy aging and dementia. In: Moretti DV, editor. Update on Dementia. Rijeka, Croatia: InTech; 2016. pp. 273-310. DOI: 10.5772/61983