
Interplay between Primary Cortical Areas and Crossmodal Plasticity

By Christian Xerri and Yoh’i Zennou-Azogui

Submitted: July 10th, 2020. Reviewed: December 10th, 2020. Published: January 14th, 2021.

DOI: 10.5772/intechopen.95450


Abstract

Perceptual representations are built through multisensory interactions underpinned by dense anatomical and functional neural networks that interconnect primary and associative cortical areas. There is compelling evidence that primary sensory cortical areas do not work in isolation, but play a role in early processes of multisensory integration. In this chapter, we first review previous and recent literature showing how multimodal interactions between primary cortices may contribute to refining perceptual representations. Second, we discuss findings providing evidence that, following peripheral damage to a sensory system, multimodal integration may promote sensory substitution in deprived cortical areas and favor compensatory plasticity in the spared sensory cortices.

Keywords

  • multisensory convergence
  • crossmodal processing
  • sensory loss
  • intersensory substitution

1. Introduction

The brain’s ability to generate a rich and unambiguous representation of the world requires the multimodal integration of sensory signals often co-occurring in time and space. A crucial issue is how the brain integrates the separate elements of an object perceived through individual sensory channels (vision, audition, touch, etc.) in order not only to improve detection and discrimination and evaluate crossmodal congruency, but also to form a unified percept. Over the last decades, a growing number of studies have challenged the traditional view of the sensory neocortex as a parcellation of highly specialized primary areas, each exclusively dedicated to the integration and processing of a single sensory modality. A corollary of this view is that signals conveyed through unisensory streams mainly converge and interact in higher-order association regions of the temporal and parietal cortex, in which multisensory integration culminates. Accordingly, these cortical regions tuned to integrate increasingly complex sensory signals send divergent projections back to early sensory areas to exert modulatory feedback on their constituent neurons. However, this hierarchical model of multisensory integration has been reappraised in view of accumulating evidence over recent decades that primary sensory cortical areas are anatomically and functionally interconnected. There is increasing awareness that multisensory integration starts in lower sensory areas, presumably via thalamo-cortical and direct cortico-cortical connections [1, 2, 3] (macaque; ferret) ([4, 5], for reviews). Hence, it is of primary interest to unravel how heteromodal inputs interacting with the dominant modality in primary sensory areas may contribute to improving perception. In addition, a major issue is to determine how crossmodal plasticity subserves functional compensation and behavioral recovery after the loss or impairment of a sensory organ, or following cortical damage. This review chapter has a double focus: first, the interplay between primary sensory cortices under normal conditions, and second, the crossmodal plasticity operating in primary and higher-order cortical areas following sensory loss.

2. Subcortical and intracortical connectivity between primary sensory areas

There is anatomical evidence that crossmodal inputs to primary cortical areas can be conveyed through thalamo-cortical or cortico-cortical projection fibers. There is, however, only scarce anatomical evidence for heteromodal convergence from auditory, visual and somatosensory thalamic nuclei to A1, V1 and S1 [6] (gerbil). By contrast, cortico-cortical connections underpinning plurimodal interplay between these cortical areas are well documented. Tract-tracing methods have revealed the existence of visual-somatosensory projections from V2 to areas 1/3b in S1, and somatosensory projections from S2 to A1 [2] (marmoset). Direct cortico-cortical connections between A1, V1 and S1 have been identified [1, 7] (macaque; cat). It has been shown that V1 projects mainly to S1, but receives moderate projections from A1 and S1, while A1 sends more projections to V1 than S1, but receives sparse projections from these two areas [8, 9] (mouse). These findings indicate that the connectivity network between A1, S1 and V1 is asymmetric. Overall, both thalamocortical and corticocortical connections may contribute to the occurrence of the short-latency responses to heteromodal inputs reported in these areas [10, 11, 12, 13, 14, 15, 16, 17] (monkey; human).

In the model of hierarchical organization of cortical connectivity, it is generally assumed that feedforward connections convey sensory information to higher-order areas, whereas feedback connections modulate neural activity in lower-level cortical areas [18, 19] (macaque; cat). This model is somewhat challenged by retrograde tracing studies investigating the microcircuitry of reciprocal connections between primary cortical areas. These studies have shown that A1 and S1 project in a feedback manner to V1, while V1-to-A1 projections are of the feedforward type and V1-to-S1 projections are mostly lateral [8, 9] (mouse). In addition, A1 and S1 are linked by reciprocal feedback projections [6] (gerbil). Hence, the available evidence suggests that the connections between primary sensory cortices do not all occupy the same level in the neural network. Furthermore, based on the labeling of reciprocal connections between V1 and S1 and the characterization of the size and laminar density of axonal swellings, it was concluded that S1 receives a stronger driver input from V1 and that S1 inputs to V1 have a predominantly modulatory influence [9] (mouse). Regarding the projections from the auditory cortex to V1, both types of input have been identified, with, however, a clear dominance of small-caliber axons bearing modulator boutons [8] (mouse).

Somatosensory-auditory interactions have been found at a low level of multisensory integration. Cutaneous responses were recorded in the caudo-medial auditory cortex with a feedforward laminar activation profile: the initial excitatory response was located in layer 4 and was then followed by responses in the extragranular laminae (layers 2, 3, 5 and 6), in contrast with feedback and lateral activation profiles beginning in the extragranular laminae [20] (macaque).

The intricate connectivity between unimodal primary cortical areas favors crossmodal interplay in the early stage of multisensory integration, presumably through feedforward and feedback connections [21] (for a review). The question arises whether heteromodal connections between low-level sensory cortices exert a global modulatory influence on ongoing firing or more selectively contribute to shaping the neuronal response characteristics in primary sensory areas.

3. Neurophysiological mechanisms of multimodal integration in primary sensory areas

Concurrent stimuli of sensory organs coactivate primary cortical areas and generate reciprocal influences contributing to the process of multimodal integration. It is notable that the bulk of studies on multisensory integration in early cortical areas have focused on the interplay between the visual and auditory cortices.

3.1 Visual-auditory interactions

Activation of A1 neurons by noise bursts was found to induce GABAergic inhibition of supragranular pyramidal cells in V1, via cortico-cortical connections, leading to reduced synaptic and spike activity upon bimodal stimulation [10] (mouse). Furthermore, this acoustic stimulation decreased behavioral responses to a dim flash, likely through GABAergic inhibition in V1, as this effect was prevented by acute blockade of GABAA and GABAB receptors. The authors concluded that salient auditory stimuli degrade potentially distracting sensory processing in the visual cortex. This finding was corroborated by an in vitro electrophysiological study showing that layer 1 and layer 2/3 inhibitory neurons in V1 receive direct excitatory inputs from A1 [22] (mouse). Along the same lines, intrinsic signal imaging aimed at simultaneously recording visuotopic maps in V1 and tonotopic maps in A1 showed that a high activation of A1 suppresses visually evoked responses in V1 [5] (mouse). As a result, under bimodal stimulation the global effect of auditory inputs to V1 was to weaken the neuronal firing averaged across all visual orientations. Nevertheless, the orientation selectivity of V1 excitatory neurons in layer 2/3 was found to be sharpened by concurrent sound signals or by optogenetic activation of A1-to-V1 projections [22] (mouse). Indeed, auditory signals increased the neuronal responses at the preferred visual orientation and decreased responses at the orthogonal orientation, with a stronger impact at lower visual contrast. Tracing data showed that axons from A1 layer 5 to V1 mainly terminated in superficial layers and activated layer 1 inhibitory neurons. The sharpening effect was very likely mediated by a combination of inhibitory and disinhibitory circuits: layer 1 neurons in V1, being excited by sound, presumably suppressed layer 2/3 pyramidal cell responses but also inhibited other inhibitory neurons in layer 2/3, thereby globally contributing to increasing the firing rate of the pyramidal cells at their preferred orientation. A two-photon calcium imaging study showed that when visual and auditory stimulus features are temporally congruent, neurons in V1 exhibit a balanced pattern of response enhancement and suppression compared with unimodal visual stimuli. Temporally incongruent tones or white-noise bursts in paired audiovisual stimuli mainly produce suppressive responses across the neuronal population, in particular when the visual stimulus contrast is high [23] (mouse). Neuronal mechanisms of visual–auditory integration appear to be dependent upon the behavioral context. A study investigating the modulation of V1 neurons by auditory stimuli showed no difference in the latency or strength of visual responses in monkeys trained to maintain passive central fixation while a visual–auditory stimulus was presented in the periphery [24] (monkey). By contrast, a significant reduction in latency was observed when the animal was required to orient its gaze toward the visual–auditory stimulus. This finding suggests that projections from the auditory cortex to V1 contribute to reducing the response time of head orientation during a foveation movement toward a peripheral sound source.
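To make this sharpening effect concrete, the following minimal Python sketch (illustrative only; the tuning curve, gain factors and numbers are assumed rather than taken from the cited studies) computes a standard orientation selectivity index (OSI) before and after a sound-driven modulation that boosts the response at the preferred orientation and suppresses it at the orthogonal orientation:

```python
import numpy as np

# Hypothetical V1 tuning curve: firing rate (spikes/s) across grating orientations.
orientations = np.arange(0.0, 180.0, 22.5)   # degrees
preferred = 90.0                              # assumed preferred orientation

def tuning(gain_pref=1.0, gain_orth=1.0, base=2.0, amp=10.0, sigma=30.0):
    """Gaussian tuning curve with separate (assumed) gains applied at the
    preferred and orthogonal orientations."""
    resp = base + amp * np.exp(-0.5 * ((orientations - preferred) / sigma) ** 2)
    i_pref = int(np.argmin(np.abs(orientations - preferred)))
    i_orth = int(np.argmin(np.abs(orientations - (preferred - 90.0))))
    resp[i_pref] *= gain_pref    # sound boosts the preferred-orientation response
    resp[i_orth] *= gain_orth    # ...and suppresses the orthogonal one
    return resp

def osi(resp):
    """Orientation selectivity index: (pref - orth) / (pref + orth)."""
    r_pref = resp[int(np.argmin(np.abs(orientations - preferred)))]
    r_orth = resp[int(np.argmin(np.abs(orientations - (preferred - 90.0))))]
    return (r_pref - r_orth) / (r_pref + r_orth)

visual_only = tuning()                                # unimodal visual stimulation
with_sound  = tuning(gain_pref=1.3, gain_orth=0.6)    # assumed auditory modulation

print(f"OSI, visual only:    {osi(visual_only):.2f}")
print(f"OSI, visual + sound: {osi(with_sound):.2f}")  # higher value = sharper tuning
```

With these assumed gains the OSI increases, capturing the idea that concurrent sound reshapes, rather than merely scales, the visual tuning curve.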

There is also convincing evidence that vision impacts neuronal coding in the primary auditory cortex. Neurons in A1 that are sensitive to visual stimulation convey more information about stimuli in their spike trains than neurons sensitive to either auditory or visual stimuli presented alone [3] (ferret). An intriguing study revealed that adding congruent visual signals to auditory ones enhanced weak auditory responses, had no effect on intermediate responses and suppressed strong responses [25] (macaque). In this study, measurement of the amount of information contained in visual and auditory responses showed that bimodal stimuli yielded more information, carried by firing rates and spike timing, than unimodal ones, but that the suppressed responses carried more information than the enhanced responses. This information gain was due to a reduced variability of the suppressed responses, whereas the variability of the enhanced responses was increased. The authors proposed that enhanced, but less reliable, responses may be involved in detecting rare or faint sensory events, while suppressed, more reliable, responses may be used to represent detailed characteristics of the sensory environment. Interestingly, a recent study suggests that in layers 5 and 6 of the auditory cortex, a primary locus of visual–auditory convergence, visual signals convey the presence and timing of a salient stimulus rather than specifics about that stimulus, i.e. auditory responses are not orientation-tuned to visual gratings, unlike visual cortex responses [26] (mouse).
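The trade-off between response strength and reliability can be illustrated with a simple discriminability calculation; the firing rates and standard deviations below are invented for illustration and do not come from the cited recordings. A suppressed but less variable response can separate two stimuli better than an enhanced but noisier one:

```python
import numpy as np

def dprime(mu1, mu2, sd1, sd2):
    """Discriminability (d') between two response distributions."""
    return abs(mu1 - mu2) / np.sqrt(0.5 * (sd1 ** 2 + sd2 ** 2))

# Hypothetical mean firing rates (spikes/s) to two stimuli, with trial-to-trial SD.
# Enhanced responses: higher rates, but larger variability.
d_enhanced = dprime(mu1=30.0, mu2=36.0, sd1=8.0, sd2=8.0)
# Suppressed responses: lower rates, but reduced variability.
d_suppressed = dprime(mu1=12.0, mu2=18.0, sd1=3.0, sd2=3.0)

print(f"d' for enhanced (noisy) responses:      {d_enhanced:.2f}")    # 0.75
print(f"d' for suppressed (reliable) responses: {d_suppressed:.2f}")  # 2.00
```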

3.2 Somatosensory-auditory interactions

Besides the notion that crossmodal interactions are reflected by changes in firing rates, the synchronization of neural signals has been proposed as a key mechanism for multisensory integration in distributed networks [27]. In this regard, it is relevant to mention a study exploring the influence of somatosensory inputs on the activity of A1 neurons using laminar current source density and multiunit recordings. The findings show that somatosensory inputs elicited by median nerve stimulation amplify the neuronal responses evoked by auditory inputs during a high-excitability phase of ongoing local neuronal oscillations and suppress those occurring during a low-excitability phase in the supragranular layers [28] (macaque). Further analysis indicated that this effect was mainly due to a somatosensory-induced phase resetting of auditory oscillations to an optimal excitability phase enhancing the ensemble response of temporally coherent auditory inputs.
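The proposed mechanism, a reset of ongoing oscillations so that auditory inputs arrive at a high-excitability phase, can be sketched with a toy simulation. The oscillation frequency, trial structure and gain model below are assumptions made for illustration, not parameters from the cited recordings:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.001                        # 1 ms resolution
t = np.arange(0.0, 0.5, dt)       # 500 ms trial
f_osc = 8.0                       # assumed ~8 Hz ongoing oscillation
stim_onset = 0.25                 # auditory input arrives at 250 ms

def trial(phase_reset):
    """Excitability follows the ongoing oscillation; the auditory response is
    scaled by the excitability value at stimulus onset."""
    phase0 = rng.uniform(0.0, 2.0 * np.pi)       # random phase on each trial
    if phase_reset:
        # A preceding somatosensory input resets the oscillation so that the
        # high-excitability (peak) phase coincides with the auditory input.
        phase0 = -2.0 * np.pi * f_osc * stim_onset
    excitability = 1.0 + np.cos(2.0 * np.pi * f_osc * t + phase0)
    return excitability[int(stim_onset / dt)]    # gain applied to the auditory input

n_trials = 1000
no_reset = np.mean([trial(False) for _ in range(n_trials)])
reset    = np.mean([trial(True)  for _ in range(n_trials)])
print(f"mean gain without phase reset: {no_reset:.2f}")  # ~1.0 (phases average out)
print(f"mean gain with phase reset:    {reset:.2f}")     # ~2.0 (aligned to the peak)
```

Averaged over trials, the gain is near 1 when the oscillation phase is random at stimulus onset, and approaches the oscillation peak when the phase has been reset, which is the essence of the reported amplification.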

Neurons in the posterior region of A1 display cutaneous receptive fields specifically located on the head and neck, which are in spatial register with the auditory receptive fields [29] (macaque). This result supports the view that the posterior auditory cortex may be the site for spatial-movement processing, analogous to the “where pathway” in the parietal stream of the visual system [30, 31] (macaque). An fMRI study documented a supra-additive integration of touch and auditory stimulation in a cortical region posterior and lateral to A1 [32] (macaque). This integration process was more prominent for temporally coincident bimodal stimuli and for less effective stimuli, in conformity with the principle of “inverse effectiveness”.
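The principle of inverse effectiveness, a larger multisensory gain when the unimodal responses are weak, can be expressed with a commonly used enhancement index; the response amplitudes below are invented for illustration and are not data from the cited fMRI study:

```python
def enhancement_index(bimodal, uni_a, uni_b):
    """Multisensory enhancement (%): gain of the bimodal response over the
    strongest unimodal response, a common index in multisensory studies."""
    best_unimodal = max(uni_a, uni_b)
    return 100.0 * (bimodal - best_unimodal) / best_unimodal

# Hypothetical response amplitudes (arbitrary units).
# Weak, near-threshold unimodal stimuli: the bimodal response is supra-additive.
weak = enhancement_index(bimodal=5.0, uni_a=2.0, uni_b=2.0)       # +150%
# Strong, highly effective unimodal stimuli: much smaller relative gain.
strong = enhancement_index(bimodal=22.0, uni_a=20.0, uni_b=18.0)  # +10%

print(f"enhancement for weak stimuli:   {weak:.0f}%")
print(f"enhancement for strong stimuli: {strong:.0f}%")
```

The relative enhancement is largest when the unimodal responses are near threshold, as in the result described above.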

3.3 Visual-somatosensory interactions

It is worth reporting an fMRI study comparing cortical activation in response to matching versus non-matching visual–haptic texture information in a task that did not require cognitive evaluation of roughness [33] (human). The results show an increased BOLD response in V1 when a dot pattern was presented in both visual and haptic conditions, all the more so when the visual information matched the haptic texture information. In addition, parametric BOLD signal variations with varying texture characteristics were recorded in both primary visual and somatosensory cortices. This study confirms that haptic information can modulate visual information processing at an early stage. A hierarchical feedback of top-down influences from higher sensory areas on early sensory cortices could account for the observed BOLD effects. However, according to the authors this is unlikely, as only matching visual–haptic texture information induced a parametric modulation of the BOLD response in the contralateral somatosensory cortex. An alternative, more plausible, interpretation of the crossmodal texture effects would be direct or indirect cortico-cortical connections between primary areas. This explanation is compatible with haptic texture information flowing from S1 to V1 [34] (human).

Interestingly, a shrinkage of cutaneous receptive fields in areas 3b and 1 has been recorded when tactile and visual stimulations were concomitant, both during physical touch perception and touch observation [35] (human). This sharpening of coactivated receptive fields, which reflects a suppressive interaction between tactile and visual cues presumably occurring through a GABAergic modulation of intracortical inhibition in S1, is expected to improve tactile acuity (for reviews, see [36, 37]). The functional relevance of this finding was highlighted by a study reporting that viewing the hand increased the suppression of the P50 evoked potential elicited by simultaneous electrical stimulation of adjacent fingers and enhanced tactile acuity in a task of grating orientation discrimination [38] (human). Furthermore, a recent ultra-high-resolution fMRI study provided evidence for a spatially specific visual convergence onto S1. Neurons within the somatotopically organized cutaneous representation of the fingers in areas 3b and 1 were activated when the subject observed their own fingers being touched, or the fingers of another person receiving similar tactile stimulation [39]. The visually driven map was topographically and temporally precise and was found to be in register with the cutaneous map. Further investigations of the neuronal characteristics within areas 3b and 1 are required to determine whether or not these areas may contribute to distinguishing perceived touch from observed touch.

3.4 Vestibular-somatosensory interactions

The vestibular cortex differs from other sensory cortices in that vestibular signals are distributed in an extensive network of cortical regions [40, 41, 42]. A whole-brain electrophysiological investigation using galvanic vestibular stimulation and fMRI mapping described the cortical projections of vestibular inputs to functionally diverse cortical regions that included S1 [43] (rat). In addition, a more recent investigation revealed that optogenetic stimulation of medial vestibular nucleus neurons elicited bilateral fMRI activations in the sensorimotor cortices and their thalamic nuclei [44] (rat). Nevertheless, which region of S1 receives vestibular inputs and how the bimodal interplay occurs has not yet been investigated. In a recent study in the rat, we reasoned that vestibulo-somatosensory convergence in S1 could occur in the cortical zones of the paw representations, which would be congruent with the functional role of these inputs in posturo-locomotor regulation. Accordingly, we evaluated the immediate effects of a complete unilateral vestibular neurectomy on the response properties of S1 neurons in the hindpaw cutaneous representations [45]. We found that the acute deafferentation immediately induces a bilateral expansion of the cutaneous receptive fields that exclusively concerns those located on the plantar skin surfaces. A corollary effect consisted of a dedifferentiation of the topographic organization of the cortical maps representing these surfaces (Figure 1). However, this somatotopy disruption was relatively less pronounced for the representation of the ipsilesional hindpaw, consistent with the contralateral predominance of vestibulo-thalamic projections [46, 47] (cat). The rapid deafferentation-induced expansion of cutaneous receptive fields indicates that in intact animals, vestibular inputs exert a suppressive effect onto synaptic inputs driving cutaneous responses in S1.

Figure 1.

Immediate effects of unilateral vestibular neurectomy on hindpaw cutaneous representation in S1. (A) Typical receptive fields (RFs) recorded in the S1 cortex and located on hindpaw glabrous skin surfaces of an intact rat (CTRL, left panel) and, after double mapping, on ipsilesional (ipsi) and contralesional (contra) hindpaws one hour after unilateral vestibular neurectomy (UVN) (1H, right panel). In green: small RFs covering less than 10% of the total skin surface of the paw; in purple: medium RFs covering more than 10% and less than 40% of the paw surface; in yellow: large RFs covering more than 40% of the paw surface. (B) Distribution of plantar cutaneous RFs recorded in CTRL and UVN rats. The height of each area of the stacked histogram represents the mean proportion of RFs falling into each category (green: small; purple: medium; yellow: large RFs). (C) Representative electrophysiological cortical maps obtained from an intact rat (CTRL, left panel) and from two rats in which ipsilesional and contralesional hindpaw maps were obtained starting 1 hour after UVN (1H, right panel). The map remodeling was accounted for by the expansion of plantar cutaneous RFs illustrated in A-B. Note the drastic dedifferentiation of the somatotopic maps. Simple areas correspond to neurons with RFs located on the ventral or dorsal aspect of individual fingers, or encompassing palmar pads. Mixed areas correspond to neurons displaying enlarged RFs extending beyond the somatotopic regions observed in prelesion hindlimb maps, i.e. RFs covering 2 or more skin territories of the hindpaw. (D) Stacked histogram showing the relative mean areas of the different map regions, normalized with respect to the total hindlimb area. These relative areas are color-coded (green: simple area; purple: mixed area). **: P < 0.01; ***: P < 0.001 (comparisons with control values); (Kruskal-Wallis analysis and Dunnett’s post-hoc test). Vertical bars illustrate standard errors of the means (SEM). Hairy skin RFs and representational zones were not altered after acute UVN [modified from Facchini et al. (in press)].
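As an aside, the receptive-field size categories used in panels A and B can be reproduced by classifying each RF according to the fraction of the paw surface it covers; the thresholds follow the legend above, while the example RF values are made up for illustration:

```python
def classify_rf(fraction):
    """Size categories from the Figure 1 legend (fraction of total paw skin surface)."""
    if fraction < 0.10:
        return "small"
    if fraction > 0.40:
        return "large"
    return "medium"

# Hypothetical RF sizes (fraction of hindpaw skin surface) for one mapped animal.
rf_fractions = [0.05, 0.08, 0.12, 0.25, 0.33, 0.45, 0.60, 0.07]
labels = [classify_rf(f) for f in rf_fractions]

# Proportions of each category, as plotted in the stacked histogram of panel B.
for cat in ("small", "medium", "large"):
    print(f"{cat:6s}: {labels.count(cat) / len(labels):.2f}")
```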

It is well documented that cortical maps are dynamically reshaped through ongoing adjustments in the balance of excitatory and inhibitory influences on their constituent neurons. Hence, it is very likely that the receptive field enlargement induced by the vestibular loss results from a disinhibition process, possibly via thalamo-cortical inputs onto S1 inhibitory interneurons or direct cortico-cortical connections. It has long been argued that intracortical inhibition plays a key role in controlling the spatial selectivity of cortical neurons through segregation of broad sets of converging synaptic inputs. Consistently, studies have reported a substantial enlargement of the cutaneous receptive fields of somatosensory cortical neurons when GABA-mediated local inhibition was antagonized by an intracortical bicuculline injection [48, 49, 50] (cat; raccoon), whereas injection of baclofen, a selective agonist of GABAB receptors, induced a shrinkage of these receptive fields [51] (rat). A release of afferent-driven intracortical tonic inhibition results in an enhanced effectiveness of convergent cutaneous inputs. Therefore, such a release is a likely mechanism for the rapid unmasking of previously subthreshold afferent connections, reflected by the rapid expansion of the cutaneous receptive fields of S1 neurons following the loss of vestibular inputs. Our results extend previous findings, already mentioned in the present review, showing that auditory inputs to V1 decrease visually induced activity (mouse), while acute hearing loss releases the inhibitory effects of A1 neurons on visually elicited responses in V1 and leads to a concomitant increase in V1 activation [52] (mouse). As also previously noted, auditory stimulation sharpens the orientation selectivity of V1 neurons [22] (mouse). Collectively, the available evidence supports the view that, in normal conditions, crossmodal modulation between primary sensory cortices may act to improve the tuning of neuronal response properties in these areas. In our study, the postlesion expansion of cutaneous RFs was selectively located on the hindpaw plantar skin surfaces. Hence, we hypothesize that, in normal conditions, the vestibular influences on the S1 cortex could improve tactile acuity during perceptually guided posturo-locomotor adjustments.

3.5 Crossmodal interplay in transitory cortical regions bridging primary sensory areas

The main focus of the present review is on multimodal integration in primary cortical areas. Nonetheless, while considering low-level crossmodal interplay, it is relevant to mention findings related to multimodal convergence and integration in transitional zones lying between primary areas. Visual and somatosensory inputs converge and interact in a graded multisensory zone forming a narrow strip within the associative parietal cortex (APC) of rodents. Previous studies analyzing current source density [53] (rat), or using calcium imaging [54] (mouse) or voltage-sensitive dye imaging (VSDI) [55] (rat), described this region as heteromodal. In addition, a gradual merging of modalities from the borders of the primary cortices to the middle of the APC strip has been reported [54] (mouse). Using optical imaging combined with laminar electrophysiological recordings, it was observed that both inputs elicited similar response patterns in this cortical zone [53] (rat). However, current source density analysis of event-related potentials revealed a supra-additive interaction of subthreshold activity when the somatosensory response preceded the visual response, whereas a sub-linear summation was induced by reversing the stimulus order. This finding suggests an asymmetry in the excitation-inhibition balance mediated by the underlying connectivity network, which may be consistent with the observation that visual responses were located deeper than somatosensory responses. The laminar pattern of these visual-somatosensory interactions, and the fact that they vanished upon GABAergic silencing of local post-synaptic activity, suggest their intracortical origin.

In a recent study, we investigated the neural processing of visual and somatosensory motion cues in individual neurons of the APC [56] (rat). The animals were exposed to moving visual gratings presented in different directions, with various motion speeds, and to air puffs deflecting all the whiskers bilaterally in the antero-posterior (backward) or postero-anterior (forward) direction. When delivered simultaneously, visual and tactile stimuli could be either in the same or opposite directions (congruent or incongruent). We used both voltage-sensitive dye imaging to identify the cortical zone of convergence of tactile and visual afferents, and single-unit recordings to investigate the uni- and bimodal processing of these inputs. We showed the convergence of visual and tactile information, both in layer 2/3, as revealed by VSDI, and in layer 4, as demonstrated by the single-unit recordings. Both whisker deflections and visual moving gratings evoked neural responses of similar magnitudes in the APC, reflecting the convergence of equally weighted visual and somatosensory information (Figure 2). The majority of recorded cells were bimodal, with about 50% exhibiting a directional congruence for the stimulus orientations tested, which strongly points to a potential role of the APC in heteromodal sensory integration. A machine learning approach revealed that the integration of the visual-tactile motion stimuli relies predominantly on the bimodal population, as performing decoding on the unimodal neurons did not yield accuracies above chance (see the illustrative sketch below). In addition, we found that visual neurons in the APC selectively respond to the direction (about 50%) and speed (about 30%) of visual grating motion, while somatosensory neurons display direction selectivity for whisker stimulation (about 60%). As in the study mentioned above [53], a temporal dissociation was observed between somatosensory and visual responses, both in the supragranular and granular layers, as the somatosensory stimulations evoked earlier responses than the visual stimulations. This finding underscores the importance of timing in multimodal integration, and is consistent with the view that whisker information predominantly relates to fast-changing contacts with objects or congeners, while vision mainly provides information about the physical and social environment that likely facilitates the interpretation of somatosensory information. It is plausible that the APC acts as a hub in which multisensory motion information is integrated, contributing to the elaboration in higher-order areas of a supramodal percept guiding purposeful behavior. Interestingly, these animal studies are consistent with human investigations showing the existence of a multisensory homunculus posterior to S1, along the postcentral sulcus, that overlaps the most anterior retinotopic map with a topographic alignment of tactile and visual representations [57, 58]. The authors proposed that these multisensory, topographically organized maps may play a pivotal role in perception and cognition related to peripersonal space.
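To illustrate the kind of decoding analysis mentioned above, the following sketch trains a cross-validated linear classifier on simulated spike counts. The number of neurons, the Poisson noise model, the size of the condition effect and the decoding target (which bimodal condition was presented, e.g. congruent versus incongruent motion) are all assumptions for illustration and do not reproduce the published analysis. A population whose neurons are modulated by the bimodal condition supports above-chance decoding, whereas a condition-blind population does not:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials = 200
# Assumed decoding target: bimodal condition label (0 = congruent, 1 = incongruent).
labels = rng.integers(0, 2, n_trials)

def simulate_population(n_neurons, modulation):
    """Poisson spike counts; `modulation` sets how strongly the condition shifts each rate."""
    base = rng.uniform(5.0, 15.0, n_neurons)    # baseline rate per neuron
    sign = rng.choice([-1.0, 1.0], n_neurons)   # direction of the condition effect
    rates = base[None, :] + modulation * sign[None, :] * labels[:, None]
    return rng.poisson(np.clip(rates, 0.1, None))

bimodal_pop  = simulate_population(n_neurons=40, modulation=3.0)  # condition-sensitive
unimodal_pop = simulate_population(n_neurons=40, modulation=0.0)  # condition-blind

clf = LogisticRegression(max_iter=1000)
acc_bimodal  = cross_val_score(clf, bimodal_pop, labels, cv=5).mean()
acc_unimodal = cross_val_score(clf, unimodal_pop, labels, cv=5).mean()
print(f"decoding accuracy, bimodal population:  {acc_bimodal:.2f}")   # well above 0.5
print(f"decoding accuracy, unimodal population: {acc_unimodal:.2f}")  # ~0.5 (chance)
```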

Figure 2.

Convergence of visual and somatosensory inputs in the associative parietal cortex (APC). (A) Example of cortical activation dynamics evoked by somatosensory (upper row) or visual (lower row) stimulation, revealed by voltage-sensitive dye imaging. The latency to the somatosensory stimulation (60-90 ms) is shorter than to the visual stimulation (150-350 ms). The values in ms indicate the time after stimulation onset. (B) Example of mean ΔF/F over time. Time course of responses to the unimodal stimuli in V1, S1 and APC, with a 3D representation of selectivity indices. For each pixel of the acquisition window the colormap depicts the level of selectivity. The low-selectivity belt (yellow) is characterized by comparable levels of activation (“ROI in low selectivity belt (APC)” plot), while high selectivity indices (blue) are observed in S1 and V1. (C) Spatial distribution of the recorded neurons corresponding to the visual, somatosensory and bimodal conditions for a representative animal. The heights of the histograms represent the proportions of neurons recorded at the corresponding cortical sites. The proportion of direction-selective cells in the neuronal populations is indicated (backward, green; forward, yellow; not selective, gray). Examples are shown of a recorded unit significantly responding only to the backward visual stimulus (pink line), a unit responding to both air-puff directions with a larger spiking probability for the forward direction, and a bimodal neuron responding to all 4 conditions. The inset presents the unit’s waveform. The spiking probabilities for each condition are represented as a function of time from stimulus onset. Note the latency shift between the neuronal responses to the visual and somatosensory conditions in the neurons recorded in the APC. (D) Proportions of visual (pink), somatosensory (yellow), bimodal (blue) and non-responsive (gray) neurons recorded in the APC (N = 914) (modified from Caron-Guyon et al. [56]).

The set of studies reviewed highlights the broad panoply of connectivity patterns and functional interactions between primary areas that underpin a flexible cooperation at an early stage of sensory processing. Tentatively, we propose that early crossmodal interactions in primary areas contribute to refining and sharpening neural response tuning so as to improve “immediate” perception and eliminate perceptual ambiguity. This perceptual optimization could occur through rapid neurophysiological mechanisms operating in the corticocortical circuitry (e.g., local suppressive inhibition, sub-additive or supra-additive integration, oscillatory entrainment of neuronal networks) and serve automated behavioral responses. According to this view, cognitive influences exerted onto higher-order cortical integration areas, operating through relatively slower mechanisms, would adaptively modulate early multimodal integration to support more complex integrative processing influenced by attention and motivation, so as to adjust perception to a continuously changing behavioral context.

4. Crossmodal plasticity

As sensory organs are highly specialized, the cooperative interplay of sensory systems improves perception through multimodal integration, enhancing the reliability of the information conveyed by each sensory channel, all the more so when individuals are engaged in a perceptually complex behavioral context. Hence, it has been assumed for decades that multimodal integration favors crossmodal plasticity and promotes functional compensation following partial or total deprivation of a sensory modality. Accordingly, a cortical area deprived of its dominant sensory input exhibits an increased responsiveness to stimulation of other modalities, thereby changing its functional tuning. There is a wealth of electrophysiological, neuroimaging and behavioral studies carried out in deaf and blind subjects that have provided convincing evidence for intersensory substitution in deprived cortical areas and experience-dependent reorganization in the areas taking over the defective sensory modality [59, 60, 61, 62, 63]. The plasticity mechanisms mediating these changes have also been extensively investigated, with a focus on the ingrowth of novel heteromodal projections or the unmasking of already existing heteromodal inputs. In this chapter, we focus the discussion on crossmodal plasticity occurring within primary areas.

4.1 Heteromodal recruitment of deprived visual cortex

4.1.1 Visual to somatosensory substitution

The capacity of tactile perception to substitute, at least partly, for the loss of vision has long been established [64]. Neuroimaging studies have provided evidence that the occipital visual cortex can be recruited by tactile tasks in blind subjects. For example, a PET study revealed that blind subjects display activation of primary and secondary visual cortical areas during tactile discrimination tasks, in contrast to sighted subjects who exhibited deactivation (i.e., decreased regional cerebral blood flow) in these areas [65]. In addition, this study showed that tactile recruitment of the visual cortex may be task-specific, since a non-discrimination tactile task did not activate V1 in either the blind or sighted subjects. This finding was corroborated by an fMRI study which also showed that electrical stimulation of the hand reading Braille dots did not evoke activation in the visual cortex, suggesting that the tactile recruitment of the visual cortex may result from high-order supramodal processing [66]. Interestingly, transcranial magnetic stimulation (TMS) of the occipital visual cortex overlying Brodmann areas 17, 18 and 19 in early-onset blind subjects, while they were identifying Braille or embossed Roman letters, was found to distort tactile perception [67]. By contrast, no such impairment of tactile performance was observed in sighted subjects. Furthermore, V1 was not only strongly activated during Braille reading, but also during Braille writing from memory, in the most foveal part of V1 [68]. However, activation of occipital areas during Braille reading was not found in late-onset blind subjects, and their stimulation by TMS did not disrupt Braille reading [69]. This report is at variance with other studies showing V1 activation in late-onset blind subjects during Braille reading [70, 71]. Individuals who lost sight as adults, and subsequently learned Braille, still exhibit activity in V1, although the spatial extent of the activation in the visual cortex is greater for those who became blind early in life [71]. Moreover, the early-onset blind subjects were found to display stronger activation in the occipital cortex contralateral to the hand used for reading Braille.

In late-blind patients with retinitis pigmentosa, vision deprivation leads to an elevated activation of the visual cortex in response to tactile stimuli during a discriminative task, with activation increasing with the degree of vision loss [72]. It is worth mentioning that, even in normally sighted adults, five days of complete visual deprivation combined with intensive tactile training results in an increased BOLD signal within the occipital cortex in response to tactile stimulation, reflecting the engagement of visual areas in the processing of non-visual information [73]. This crossmodal activation was reversed within 24 hours of removing the blindfold. Surprisingly, even after a short period of blindfolding (40-60 min), V1 activation was observed while the subjects performed a fine spatial tactile discrimination task [74]. Along the same lines, a one-week visual deprivation in juvenile mice was found to improve whisker function. This short deprivation period was sufficient to sharpen the tuning of layer 2/3 neurons in the barrel field of S1 [75].

Considering both the improvement in Braille character tactile discrimination after the five-day blindfolding period [73] and the impairment of Braille character recognition after disruption of the occipital cortex by TMS [67], it is reasonable to infer that the crossmodal changes taking place in the visually deprived occipital cortex are behaviorally adaptive. A further argument stems from an interesting study showing that, when the occipital cortex was systematically stimulated with single-pulse TMS, early- and late-onset blind subjects reported tactile sensations in the Braille-reading fingers that were somatotopically mapped onto the visual cortex, whereas blindfolded sighted controls reported only phosphenes [76]. Further evidence for the adaptive function of tactile information processing in the visual cortex of early blind subjects comes from a study reporting the case of a proficient Braille reader, blind from birth, who was no longer able to read Braille (Braille alexia) after bilateral ischemic stroke to the occipital cortex, while somatosensory perception was otherwise unchanged [77]. The core evidence reported herein supports the view that the recruitment of V1 by somatosensory inputs in the context of a compensatory behavioral strategy (Braille reading) accounts, at least in part, for the superior tactile perceptual abilities of blind people [67].

4.1.2 Visual to auditory substitution

Numerous studies have documented the fact that occipital cortical areas can be activated by auditory inputs in blind subjects (for reviews, see [60, 78]). For example, in the early-blind macaque, the occipital visual cortical areas were shown to respond to auditory stimulation [79]. Likewise, auditory responses have been recorded in a third of V1 neurons in neonatally enucleated rats [80]. Contrary to a prevailing view, recent studies in late blind subjects have demonstrated that crossmodal plastic changes also occur in the adult. Sound change detection was found to recruit occipital cortical areas in individuals with both early- and late-onset blindness [81]. Further evidence was provided by a positron emission tomography (PET) study showing that visual cortical areas, including V1, were activated during auditory word processing in the congenitally blind and in subjects who had become blind after puberty [70].

There has been longstanding controversy about whether auditory signal processing can compensate for impaired accuracy of spatial representation in blind subjects. For example, fMRI studies have shown that, in early-blind people, V1 is activated during auditory detection and recognition [82] as well as during auditory localization tasks [83]. Early blind subjects were found to localize sound sources with better accuracy than sighted subjects, in particular under monaural conditions [84]. In this study, it was reported that subjects displaying residual peripheral vision localized sounds less precisely than sighted or totally blind subjects. Moreover, in blind individuals who are expert at perceiving space through the sound echoes of clicks (echolocators), evidence was found for a retinotopic-like mapping of sounds in V1 [85]. This finding indicates that the early visual area can be adapted to precisely remap spatial locations after visual loss. It is worth mentioning that the degree of retinotopic-like mapping of sound echoes was positively associated with echolocation ability [85]. Overall, the findings reported above strongly suggest that crossmodal substitution leading to a functional remapping of sensory and cognitive functions in the deprived cortex depends upon the extent of sensory loss and the nature of the task to be compensated for. It turns out that crossmodal substitution is limited by the degree of functional overlap or cooperativity between sensory systems. It is worth mentioning an fMRI investigation using auditory discrimination in the congenitally blind, with a focus on the effective connectivity between different cortical and thalamic regions assessed via dynamic causal modeling [86]. The data showed a clear enhancement of BOLD responses in bilateral V1 during the auditory task, corroborating a previous study [87], and provided evidence for stronger corticocortical effective connectivity from A1 to V1 in blind than in sighted subjects. Furthermore, a combination of dynamic causal modeling with Bayesian model selection has demonstrated that auditory-driven activity in the occipital cortex of the congenitally blind is best explained by direct feedforward connections from A1 to V1, whereas it relies more on indirect feedback inputs from parietal regions in late-onset blind subjects [88]. This study suggests that visual deprivation during an early critical period induces crossmodal plasticity in the form of a transfer of spatial processing competency to a non-visual modality in the deprived cortex.

4.2 Heteromodal recruitment of deprived auditory cortex

In this section, we will not distinguish between data related to the recruitment of the deprived auditory cortex by the somatosensory or the visual modality. As found in blind subjects, animal and human studies have provided ample evidence of crossmodal plasticity after hearing loss. Recruitment of the deprived auditory cortical areas during somatosensory and visual stimulation in deaf individuals was repeatedly observed in higher-order auditory cortex (for reviews: [60, 89]). By contrast, it remains controversial whether the deafferented primary auditory cortex may be activated by spared sensory modalities. An electrophysiological investigation in congenitally deaf cats failed to detect crossmodal responses to visual or somatosensory stimuli in A1 [90]. Moreover, inactivation of A1 by cooling had no obvious effect on behaviorally tested visual functions in congenitally deaf cats [91]. Yet, after early destruction of cochlear receptors, photic stimulation was found to elicit neural activation in A1 of mature cats [92]. However, this crossmodal modification was observed after early-onset deprivation (one week of age), a period in which primary cortical areas are not yet well defined, but not after late-onset (2-month-old cats) auditory deprivation. Nevertheless, there is evidence in deaf cats for alterations in the pattern of heteromodal thalamocortical and corticocortical projections from somatosensory and visual areas to A1 [93]. Somatosensory projections were more prominent in early- and late-onset deaf animals, whereas projections originating from the visual areas were less apparent in the late-onset than in the early-onset deaf animals. These findings suggest that crossmodal anatomical plasticity in the deprived auditory cortex differs depending on the age of deafness onset and the sensory modality. Furthermore, in early-deaf cats, increased projections from neighboring visual and somatosensory areas to the core auditory cortex, including A1 and the surrounding anterior auditory field (AAF), have been described [94]. Interestingly, a study combining electrophysiological recording with a description of cortical myelo-architecture in congenitally deaf mice showed that the visual and somatosensory spatial domains had taken over auditory domains within A1 and AAF [95]. This finding demonstrates extensive re-specification of cortical fields following auditory loss. In addition, in early-deafened ferrets, recordings from single units in the core auditory cortex showed that 72% were activated by somatosensory stimulation, compared to 1% in hearing controls [96]. In adult-deafened ferrets, extensive crossmodal reorganization of the core auditory cortex was also described, characterized by a consistent somatosensory conversion of neuron responsiveness within 16 days after deafening [97], thus demonstrating that crossmodal plasticity can also occur after the period of sensory system maturation. These data suggest that subthreshold tactile inputs found in hearing animals can transform into suprathreshold responses in adult deafened animals. In this regard, it is worth mentioning that somatosensory inputs to the core auditory cortex represent the majority of non-auditory effects in hearing ferrets [96]. This specificity may be due to the greater functional similarity between the somatosensory and auditory modalities regarding the temporal precision underlying frequency perception (e.g. vibrotactile stimulation), compared to that between audition and vision.

The recruitment of A1 for the processing of visual stimuli was also revealed by fMRI investigations in congenitally or early deaf subjects [98, 99, 100]. Moreover, in adult-onset single-sided deafness (SSD), seed-based functional connectivity of visual cortices revealed enhancement in visual areas and reduction in auditory regions, suggesting adaptive functional modifications of the visual network [101]. Furthermore, V1 seeds demonstrated increased connectivity with multiple regions, including those dedicated to speech (inferior parietal lobule) or somatosensory processing (postcentral gyrus). It is also noticeable that activation of A1 was observed in deaf subjects with total hearing loss during sign language tasks, but not in subjects with residual hearing ability [102], suggesting that this crossmodal plasticity depends on the extent of hearing loss. Additional evidence of compensatory functional changes comes from the observation that congenitally deaf cats, compared with hearing cats, have superior localization abilities in the peripheral visual field and lower visual movement detection thresholds [91]. In this study, reversible deactivation of the posterior auditory cortex was found to selectively eliminate the superior visual localization abilities, whereas deactivation of the dorsal auditory cortex eliminated the superior visual motion detection. It is of interest that, when fMRI signal changes were measured in response to spatially co-registered visual, somatosensory and bimodal stimuli, the visual responses, although stronger in congenitally deaf than in hearing adults, appeared weaker than those elicited by somatosensory stimulation [103]. This is consistent with the above-mentioned finding on the prevalence of somatosensory over visual inputs in the core auditory cortex [96]. Congenital deafness was also found to enhance the accuracy of suprathreshold tactile change detection, while tactile frequency discrimination thresholds tended to be reduced [104]. Beyond noticeable interspecies differences in the potential for crossmodal reorganization [61], the aforementioned studies highlight that deprived auditory cortical areas become re-engaged in the processing of the remaining sensory modalities.

5. Compensatory plasticity in the remaining sensory cortices

The crossmodal plasticity concept has been extended by studies demonstrating that the loss of one sense induces substantial alterations in the remaining sensory cortical areas, leading to experience-dependent refinement of neuronal responses. This so-called compensatory plasticity is conceived as underlying higher-than-normal perceptual abilities. However, it is notable that the available experimental evidence of compensatory plasticity is scarce compared to that documenting crossmodal plasticity. Nonetheless, changes involving the processing of visual signals have been described following somatosensory and auditory deprivation. In adult mice, partial somatosensory deprivation (bilateral removal of the macro-whiskers) lasting 12 days induced a massive increase of V1 responses elicited by weak visual stimuli, which was accompanied by a marked improvement (40%) of the spatial frequency and contrast tuning of V1 neurons, as revealed by intrinsic signal imaging [105]. It is noteworthy that visual acuity and contrast sensitivity determined in behavioral tasks in individual animals improved by 40% and 60%, respectively, i.e., similarly to what was observed in V1. In addition, auditory deprivation in adult mice induces salient changes in visually evoked responses in V1, with improvement of spatial frequency and contrast tuning [106]. Conversely, visual deprivation (one week of dark exposure) in adult mice leads to improved frequency selectivity as well as increased frequency tuning and intensity discrimination performance of A1 neurons [107]. Collectively, these studies show that compensatory plasticity can develop after short-term deprivation in adult sensory cortices and highlight the fact that deprivation of one sense rapidly refines sensory processing in the remaining cortical areas, while improving sensory-guided behavior.

6. Putative mechanisms mediating crossmodal plasticity

A review of the literature indicates that a plethora of neuronal mechanisms are putative substrates of crossmodal plasticity: stabilization of transient connections, unmasking of silent synapses, reinforcement/reweighting of subthreshold connections, and structural changes such as axonal sprouting and dendritic arborization remodeling [59, 61, 108, 109]. Despite the wealth of data, these cellular and molecular mechanisms remain poorly understood. All of these neural mechanisms may operate in subcortical structures and thalamocortical pathways, as well as in primary and associative cortical areas.

To offer some insight into the neural mechanisms underpinning crossmodal plasticity, we will mention a limited sample of studies. For example, concerning changes within the primary sensory areas, crossmodal synaptic plasticity is thought to involve LTP/LTD mechanisms [108]. Furthermore, the improved frequency selectivity and intensity discrimination of A1 neurons following visual deprivation in adult mice was attributed to a strengthening of thalamocortical synapses in A1, but not in V1 [107]. In addition, this deprivation was found to potentiate layer 4 to layer 2/3 synapses in A1 [110]. Such a selective effect suggests that the adult brain retains the capability for crossmodal changes, whereas this capability is absent or very limited within a sensory modality. This view seems to be corroborated by the observation that visual deprivation via lid suture, which allows residual visual activity, is sufficient to trigger a scaling-up of excitatory synapses in the S1 barrel fields. By contrast, this mild deprivation fails to trigger such a scaling-up in V1, which requires a complete loss of visual activity [111]. Furthermore, dark rearing for 1 week in young rats (4 weeks old) produces changes in synaptic function in S1 and A1. It was then hypothesized that the scaling-up of V1 synapses might allow recruitment of V1 for processing previously subthreshold inputs carrying tactile or auditory information, while the scaling-down of S1 and A1 synapses may sharpen neuronal properties for enhanced perception, thus constituting a basis for sensory compensation [108, 111]. In these somatosensory and auditory cortical areas, decreased amplitudes of AMPA receptor-mediated excitatory transmission in layer 2/3 pyramidal neurons were observed, whereas opposite effects were recorded in the deprived visual cortex [112]. These changes were rapidly reversed after 2 days of light exposure. This study raises the question of whether the crossmodal plasticity induced in the somatosensory cortex is due to altered cortical processing of tactile inputs engaging cortico-cortical pathways or to differences in tactile experience involving thalamocortical projections. In addition to these alterations in the strength of intracortical synapses, visual deprivation (dark exposure for 6-8 days) was found to produce a refinement of intra- and inter-laminar functional circuitry in A1 of the adult mouse. Using in vitro whole-cell patch recordings in thalamocortical slices from the auditory cortex, it was shown that this period of dark exposure can refine ascending and intralaminar excitatory and inhibitory circuits to layer 2/3 neurons [113], as well as interlaminar excitatory and inhibitory connections from layer 2/3 to layer 4 neurons [114]. Mathematical modeling of the data shows that the observed refinements increase the firing reliability of sound-evoked responses [113]. Visual deprivation in the rat was found to produce an increase in extracellular serotonin levels facilitating synaptic strengthening at layer 4 to layer 2/3 synapses in the barrel cortex [75]. Beyond this local effect, crossmodal plasticity may also engage large-scale modulatory mechanisms mediated, for instance, by the serotonergic system to orchestrate cortical reorganization in relation to arousal and the shift of attention from the deprived to the intact sense [115, 116].
It is also worth mentioning that a positive correlation was found between behavioral performance in auditory and tactile tasks and both the myelination of intracortical neurons and the gray matter concentration measured in the occipital cortical areas of early-blind adults [117].

Even though crossmodal plasticity has been demonstrated in both early- and late-deprived subjects, the age of sensory loss onset seems to play a crucial role in the mechanisms involved [118]. In the case of congenital or early sensory loss, primary cortical areas may retune their functional specificity based on the maintenance, during the developmental period, of intermodal projecting axons that would otherwise have been pruned. By contrast, in the case of late deprivation, crossmodal plasticity may rely on the remodeling and strengthening of pre-existing inputs to the deprived or spared cortical areas. Species specificity also shapes crossmodal plasticity, as primary-to-primary area connectivity changes have been shown to occur in rodents, but less consistently in higher-order species [61].

7. Conclusion

Collectively, the reported studies on crossmodal interplay and plasticity in primary cortical areas after sensory loss challenge the view that multisensory integration and plasticity exist only in higher-order cortical areas. Future studies should investigate how the crossmodal substitution effects shown in the deprived primary areas contribute to the plasticity of higher-order cortices after sensory loss and, more generally, promote adaptive behavioral performance through functional compensation. In this respect, it will be important to decipher the neural mechanisms underpinning the necessary rebalancing of bottom-up and top-down regulatory influences in order to improve neuro-rehabilitative procedures.

Abbreviations

A1, V1, S1: primary auditory, visual and somatosensory areas of the neocortex
V2: secondary visual cortical area
1/3b: areas 1 and 3b of the primary somatosensory cortex


© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
