1. Introduction
A fundamental problem in cognitive neuroscience is how the brain realizes semantic memory. It is generally accepted that semantic memory consists of stored information about the main features of an object, together with processing mechanisms that allow a person to retrieve these features from partial cues and to link them with lexical aspects (words). The final result is a kind of knowledge which is context independent, can be shared with other people, and can support language and thought (Tulving 1972).
Many conceptual theories of semantic memory have been proposed in past decades, based on two fundamental sources of information: the behavior of patients with neurological lesions in specific brain areas, who exhibit deficits in word recognition, and the results of more recent neuroimaging studies, which reveal the different brain areas participating in semantic tasks. The interested reader can find several excellent review papers on the subject (Martin & Chao 2001, Hart et al. 2007). Only the main issues, essential for the comprehension of the present chapter, are summarized below.
First, most theories agree in assuming that semantic memory is not a localized process, but one which involves a highly distributed representation of features and engages several different cortical areas, located in the sensory and motor regions of the cortex. This concept may help explain the existence of patients with category-specific semantic deficits (for instance, patients with impaired word recognition for living things but spared recognition of non-living objects, or patients with impairment for nouns vs. verbs (Warrington & Shallice 1984, Caramazza & Shelton 1998)). Damasio (1989) suggested that semantic representation is fragmented into many motor and sensory features, which must then be integrated in a “convergence zone”. Similarly, Warrington and McCarthy (1987) assumed the presence of multiple channels which separately process sensory and motor aspects of objects; these two main systems would be especially important to identify living and non-living objects, respectively. Several subsequent extensions, improvements or variations of this theory were formulated by various groups (Caramazza et al. 1990, Lauro-Grotto et al. 1997, Humphreys & Forde 2001, Snowden et al. 2004, Gainotti 2006). Nevertheless, all these theories substantially agree in assuming that the semantic system is realized by means of an integrated multimodal network, in which different areas store different modality-specific features. Some theories can also account for the emergence of categories from features: for instance, Tyler et al. (2000), in a conceptual model named “Conceptual Structure Account”, suggested that objects are represented as patterns of activation across features, and that categories emerge from those objects that share common features and are highly correlated. Hart, Kraut et al. (Kraut et al. 2002, Hart et al. 2002) proposed that object representation is encoded not only in sensorimotor but also in higher-order cognitive areas (lexical, emotional, etc.), and that all these representations are integrated via synchronized neural firing modulated by the thalamus. Barsalou et al. (2003) assumed that groups of neurons are coactivated to represent collections of features, and that these groups encode progressively more generic information, from sensory perception to higher cognitive associations. Moreover, they assumed a topography principle, according to which the spatial proximity of neurons reflects the similarity of the encoded features.
The previous conceptual theories, and others not listed here for brevity, although of the greatest value for cognitive neuroscientists, are only qualitative. In particular, the mechanisms responsible for integrating distributed information into a coherent semantic representation, their physiological plausibility in terms of neural structures, and the learning rules for synaptic plasticity are all aspects which deserve a more quantitative analysis. Modern neural network models and computer simulation techniques now allow conceptual theories to be translated into a quantitative integrated system of neural units, and the consequent emergent behavior to be analyzed in detail, incorporating training aspects which mimic real synaptic plasticity rules.
A few previous models, based on neural networks with “hidden units”, have attempted to reproduce how sensory information (for instance, visual information) can recall linguistic information, and vice versa (Rogers et al. 2004), or to simulate category-specific semantic impairment (Small et al. 1995, McRae et al. 1997, Devlin et al. 1998, Lambon Ralph et al. 2007). All these models exploit a supervised algorithm (such as backpropagation) to train the network to solve the requested task: pathological deficits are then simulated by assuming some damage in network synapses. A more sophisticated model, able to simultaneously retrieve multiple objects stored in memory, was recently presented by Morelli et al. (2006). In that model, features are coded by neurons working in a chaotic regime, and the retrieval process is achieved via synchronization of neurons coding for the same object.
In recent years, we developed an original model (Ursino et al. 2006, Ursino et al. 2009, Cuppini et al. 2009) which aspires to explore several important issues of semantic memory, laying emphasis on the possible topological organization of the neural units involved, on their reciprocal connections and on synapse learning mechanisms. Some problems that the model aims to investigate are: how can a coherent and complete object representation be retrieved starting from partial or corrupted information? How can this representation be linked to lexical aspects (words or lemmas) of language? How can different concepts be simultaneously recalled in memory, together with the corresponding words? How can categories be represented? How can category-specific deficits be at least approximately explained? What are the mechanisms exploited in bilingualism?
The model assumes that objects are represented via different multimodal features, encoded through a distributed representation among different cortical areas: each area is devoted to a specific feature. Features are topologically organized (as in the conceptual model by Barsalou et al. (2003)) and linked together by implementing two high-level Gestalt rules: similarity and previous knowledge. Multiple object retrieval is realized by means of synchronized activity of neural oscillators in the gamma band (an idea often exploited in models for segmentation of visual or auditory scenes (von der Malsburg & Schneider 1986, Singer & Gray 1995, Wang & Terman 1997), and one which recalls the conceptual Semantic Object Model by Kraut et al. (2002)). Finally, words are represented in a separate cortical area and linked with the correct object representation via a Hebbian mechanism (Rolls & Treves 1998).
In the following, the model is first presented in a qualitative way, and some exemplary results, concerning object retrieval, connections between objects and words, and categories, are presented. All equations are reported in the Appendix. It is worth noting that the present model is significantly improved compared with previous versions (Ursino et al. 2009, Cuppini et al. 2009): the new aspects concern the possibility of representing objects with different numbers of features (whereas a fixed number of features for all objects was used in previous works) and a more physiological mechanism for recognizing words from objects. All new aspects are emphasized in the text.
2. Method and results
The model incorporates two networks of neurons, as illustrated in the schematic diagram of Fig. 1. These are briefly described below in a qualitative way, while all equations and parameters can be found in the Appendix.
1) The feature network
The network is composed of
Previous works demonstrated that, in order to solve the segmentation problem, a network of oscillating units requires the presence of a “global separator” (von der Malsburg & Schneider 1986, Wang & Terman 1997, Ursino et al. 2003). For this reason, the feature network incorporates an inhibitory unit which receives the sum of the whole excitation coming from the feature network, and sends back a strong inhibitory signal if this input exceeds a given threshold. In this way, as soon as a single object representation pops out in the network, all other object representations are momentarily inhibited, avoiding the superimposition of two simultaneous objects.
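As a minimal sketch of this mechanism (not the model's actual equations, which are given in the Appendix), the global separator can be written as a threshold unit that reads the total excitatory activity and feeds back uniform inhibition. All parameter values below are illustrative placeholders:

```python
import numpy as np

def global_inhibitor(x, theta_z=1.0):
    """Return 1 if total excitatory activity exceeds the threshold theta_z, else 0.

    x: array of excitatory-unit activities in the feature network.
    theta_z is an illustrative value, not the chapter's parameter.
    """
    return 1.0 if x.sum() > theta_z else 0.0

def step(x, w, inhibition_gain=5.0, dt=0.1, tau=1.0, inp=None):
    """One Euler step of a simplified rate network with global inhibition.

    As soon as an activation bubble pops out, the inhibitor fires and
    suppresses all units, so only one object representation is active
    at a time. The rate equation here is a toy stand-in for the
    oscillator dynamics of Eqs. (1)-(2).
    """
    if inp is None:
        inp = np.zeros_like(x)
    z = global_inhibitor(x)
    dx = (-x + np.maximum(inp + w @ x - inhibition_gain * z, 0.0)) / tau
    return x + dt * dx
```

The key design point is that the inhibitor receives the *sum* of all excitation, so its feedback is blind to which object is active; it only gates whether any second object may emerge.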
During the simulation, a feature is represented by a single input localized at a specific coordinate of the network, able to trigger the oscillatory activity of the corresponding unit. We assume that these inputs are the result of upstream processing stages which extract the main sensory-motor properties of the objects. The way these features are extracted and represented in the sensory and motor areas is well beyond the aim of the present model.
The topological organization of each cortical area is realized assuming that each oscillator is connected with the other oscillators in the same area via lateral excitatory and inhibitory synapses (
Throughout the following simulations, we assumed that the lateral intra-area synapses cannot be modified by experience, i.e., the similarity principle they implement is assigned “a priori”. This is probably not true in reality, since topological maps can be learned via classical Hebbian mechanisms (Rolls & Treves 1998, Haykin 1999). However, this choice is convenient to maintain a clear separation between different processes in our model (i.e., the implementation of the similarity principle on the one hand and of a previous knowledge principle on the other).
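The Mexican-hat lateral connectivity used for these intra-area synapses (described further in the Appendix as a difference of two Gaussians, with excitation stronger but narrower than inhibition) can be sketched as follows; the amplitudes and widths are illustrative placeholders, not the model's actual parameters:

```python
import numpy as np

def mexican_hat(dist, A_ex=2.0, sigma_ex=1.0, A_in=1.0, sigma_in=3.0):
    """Lateral weight as a difference of Gaussians of inter-unit distance.

    With A_ex > A_in and sigma_ex < sigma_in, close neighbours receive
    net excitation and distant units net inhibition, which implements
    the similarity principle: units coding similar features reinforce
    each other, dissimilar ones compete.
    """
    return (A_ex * np.exp(-dist**2 / (2 * sigma_ex**2))
            - A_in * np.exp(-dist**2 / (2 * sigma_in**2)))
```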
Besides the intra-area synapses, we also assumed the existence of excitatory long-range synapses between different feature areas (
To simplify the algorithm, in previous works we assumed that each object is described by a fixed number of features (four features in (Ursino et al. 2009, Cuppini et al. 2009)). Conversely, in the present version this constraint is removed, and we assume that the number of features describing a single object can vary from one object to the next. In the following examples, the feature network is subdivided into nine different cortical areas; hence, an object can have up to nine different features. This limit has been introduced only to reduce the computational cost.
An example of the synaptic changes obtained from the learning procedure is shown in Fig. 2, which depicts the synapses targeting a neuron. Here an object is described by means of five different features, located in five different cortical areas. After training, neurons belonging to the five activation bubbles are linked by means of excitatory connections, and synchronize their oscillatory activity. In particular, after training, a neuron coding for a single feature of the object receives excitatory synapses from four different bubbles of neurons, representing the other four features of the same object and their minimal variations.
In summary, intra-area lateral connections implement a similarity principle, while inter-area trained connections implement a “previous knowledge” principle.
An important aspect of our model is that, after training, multiple objects can be simultaneously recovered in memory, oscillating in time division with frequencies in the gamma band. Moreover, an object can be recovered even if some features are lacking, or if some features are moderately altered compared with those of the prototypical object used during the learning phase. Fig. 3 shows an example in which three previously trained objects, characterized by three, five and seven features respectively, are simultaneously presented to the network. Moreover, as described in the figure legend, the objects are recovered despite the presence of incomplete information (some features are lacking) and corrupted data (some features are slightly changed).
2) The lexical network
For the sake of simplicity, computational units in this network are described via simple first-order dynamics and a non-linear sigmoid relationship. Hence, if stimulated with a constant input, these units do not oscillate but, after a transient response, reach a given steady-state activation value (of course, they oscillate if stimulated with an oscillating input coming from the feature network).
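A minimal sketch of such a unit, with illustrative parameter values (the actual equation and parameters are given in the Appendix):

```python
import numpy as np

def sigmoid(u, slope=10.0, theta=0.5):
    """Steep sigmoidal activation with lower threshold and upper saturation."""
    return 1.0 / (1.0 + np.exp(-slope * (u - theta)))

def lexical_step(y, u, dt=0.1, tau=5.0):
    """First-order dynamics: tau * dy/dt = -y + sigmoid(u).

    With a constant input u the unit relaxes to sigmoid(u); with an
    oscillating input it follows the oscillation. dt and tau are
    illustrative values.
    """
    return y + dt * (-y + sigmoid(u)) / tau
```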
In order to associate words with their object representation, we performed a second training phase, in which the model receives a single input to the lexical network (i.e., a single word is detected) together with the features of a previously learned object. Synapses linking objects with words, in both directions (i.e., from the lexical network to the feature network and vice versa), are learned with Hebbian mechanisms.
While synapses from words to features (
In order to address these two requirements, in previous works we implemented a complex “decision network” (see (Ursino et al. 2009)). Conversely, in the present model we adopted a more straightforward and physiologically realistic solution. First, we assumed that, before training, all units in the feature network send strong inhibitory synapses to all units in the lexical network. Hence, activation of any feature potentially inhibits all lexical units. These synapses are then progressively
Moreover, we assume that all feature units can send excitatory synapses to lexical units: these are initially set at zero and are
Using quite a sharp sigmoidal characteristic for lexical units, the previous two rules ensure that a word in the lexical network is excited if and only if all its features are simultaneously active: if even a single feature is not evoked, the word does not receive enough excitation (a failure of binding); if even a spurious feature pops up, the lexical unit receives excessive inhibition (a failure of segmentation). In both conditions the word does not jump from the silent to the excited state.
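This all-or-none rule can be illustrated with a toy computation, assuming one excitatory weight per own feature and a stronger inhibitory weight per spurious feature (weights and threshold are illustrative, not the trained values of the model):

```python
def word_input(active, own, w_ex=1.0, w_in=2.0):
    """Net input to a lexical unit.

    active: set of currently active feature indices.
    own: set of features belonging to the word.
    Each of the word's own features contributes excitation w_ex; any
    spurious active feature contributes inhibition w_in.
    """
    n_own = len(active & own)
    n_spurious = len(active - own)
    return w_ex * n_own - w_in * n_spurious

def word_active(active, own, threshold=None):
    """Word fires only when all its features are active and none is spurious."""
    if threshold is None:
        threshold = len(own) - 0.5   # just below full excitation
    return word_input(active, own) > threshold
```

With this choice, a missing feature leaves the input below threshold (binding failure) and a spurious feature drags it further down (segmentation failure), mirroring the two failure modes described above.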
Two examples of model behavior are shown in Figs. 4 and 5. In the first figure, one word and two objects are simultaneously given to the network. The word evokes the corresponding object representation in the feature network, while the objects evoke the corresponding words in the lexical network. It is worth noting that all objects in the feature network oscillate in time division, and thus are all individually recognized. In the second figure, two incomplete objects are given to the feature network. The first is characterized by five features, but only four of them are given as input. However, these four features are sufficient to recover the whole object representation, and the corresponding word is activated in the lexical area. Conversely, the second object is characterized by seven features, while only three features are given as input. These are insufficient to recover the overall object representation, and the corresponding word is not activated (i.e., the subject does not recognize the object starting from such incomplete information). It is worth noting that the fraction of features necessary to recover the whole object depends on the strength of inter-area synapses in the feature network, hence on the duration of the previous training period. In our simulations, we always assumed that, after training, at least 50% of the object features are required to attain the overall object reconstruction.
The structure of the model can also be used to study the formation of categories starting from a distributed representation of features. A widely shared idea, in fact, is that a category can be realized by objects which share some common features (for instance, “dog” and “cat” belong to the category “pet” and have many common characteristics). This idea is supported by experiments on so-called “semantic priming”, i.e., object recognition can be modulated by the previous recognition of another object which is semantically congruent (Rossell et al. 2003, Matsumoto et al. 2005). The explanation may be that the two objects activate some common neural structures, resulting in a classic priming phenomenon.
Category formation can be reproduced in our model by simply assuming objects that have some common features, and assuming that these common features are associated with a specific word, having a more general meaning (i.e., denoting a category).
Let us consider an example in which two objects (for convenience, “cat” and “dog”), with seven features each, are trained and associated with two different words. Moreover, we assumed that these two objects have four common features (representing the common characteristics of “pets”). It is worth noticing that, after the training phase 1, the inter-area synapses in the feature network are the sum of synapses learned during each object presentation. As a consequence, the four common features are linked together by means of stronger synapses compared with the remaining specific features distinguishing cats from dogs, which are more weakly linked together and to the other four features. Hence, we have an irregular pattern of synapses (Fig. 6).
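The origin of this irregular pattern can be sketched numerically: summing one Hebbian increment per object presentation over hypothetical feature indices (shared features 0-3 for “pet”, three specific features per object) doubles the strength of the shared-to-shared links relative to all others:

```python
import numpy as np

# Hypothetical feature indices: 0-3 shared ("pet"),
# 4-6 dog-specific, 7-9 cat-specific.
dog = [0, 1, 2, 3, 4, 5, 6]
cat = [0, 1, 2, 3, 7, 8, 9]

W = np.zeros((10, 10))
for obj in (dog, cat):
    for i in obj:            # Hebbian co-occurrence: one increment
        for j in obj:        # per presentation for each feature pair
            if i != j:
                W[i, j] += 1.0

# Shared-to-shared links were reinforced during both presentations,
# so they end up twice as strong as links involving specific features,
# while dog-specific and cat-specific features are not linked at all.
```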
We assumed that, after the first training phase, the four common features alone are unable to recover the three remaining features (otherwise, any pet would retrieve the words “dog” and “cat”). During the second training phase, the features representing objects “dog”(7 features), “cat” (7 features) and “pet” (4 common features) were separately given to the network together with the corresponding words, to generate three distinct lexical links.
Simulation results are presented in Fig. 7. In this figure, all seven features describing a cat were initially given to the network (during the first half of the simulation). In this condition, the corresponding word is activated in the lexical area (a similar result, of course, would be obtained by giving the seven features of “dog” to the network). It is worth noticing that the word “pet” is not active in the lexical area, due to the inhibition coming from an excessive number of features. Conversely, when the number of features given to the network is reduced to four (second part of the simulation), all belonging to the category “pet”, the network does not recall the features specific to cats and dogs, and the word “pet” now emerges in the lexical area, without the words “dog” and “cat” emerging.
3. Discussion
The present work intends to summarize several different ideas on semantic memory, which have appeared in the neurocognitive and psycholinguistic literature over the past years, into a coherent and comprehensive neural network model. The main points that characterize model functioning are briefly discussed below, together with their neurophysiological support:
i) The model assumes that binding and segmentation of multiple objects occur via synchronization of neural activity in the gamma band. This idea, originally proposed with reference to vision problems (Singer & Gray 1995, Engel & Singer 2001), is now widely supported for high-level cognitive problems as well. For instance, a role of gamma activity has been demonstrated in the recognition of music (Bhattacharya et al. 2001) and faces (Rodriguez et al. 1999), as well as during visual search tasks (Tallon-Baudry et al. 1997) and delayed-matching-to-sample tasks (Tallon-Baudry et al. 1998).
ii) Each object is described by means of a different number of features, which spread over different cortical areas. This is quite a common idea in conceptual models of semantic memory (Caramazza et al. 1990, Gainotti 2006, Hart et al. 2007).
iii) Features are topographically organized. Results supporting this idea can be found in recent works by Barsalou et al. (Simmons & Barsalou 2003, Barsalou et al. 2003).
iv) Knowledge of previous objects is stored in the model by means of inter-area synapses, which realize excitatory links learned via a Hebbian mechanism. Hence, it is sufficient that several features of an object occurred together for a sufficiently long period in the past for a permanent link to be created.
v) Words are represented in a different cortical area, separate from features. Although it is difficult to find a specific cortical location for this area, the existence of cortical regions especially devoted to lexical aspects of language has been hypothesized in cognitive neuroscience for decades (Ward 2006).
vi) Links between lexical aspects (words) and semantic aspects (i.e., object representations) are also learned via Hebbian mechanisms. This requires the simultaneous presentation of an object with its representative word.
vii) Objects can evoke words only if all their features have been correctly restored in memory and segmented from the features of other objects. In other terms, the present model implies that complete semantic recognition is a prerequisite for the evocation of words.
viii) The presentation of a word (from phonemes or from written text) is able to evoke the representation of the object. Some recent data in the neurophysiological literature support this idea: presentation of an action word can evoke activity in the motor and premotor cortex (Pulvermüller et al. 2005a, Pulvermüller et al. 2005b), and reading an odour-related word can activate olfactory areas (González et al. 2006).
ix) Categories can be represented by features which belong to several objects simultaneously, assuming that these shared features can be associated with a new word, and that their activity, when they are presented alone, remains bounded without spreading to reconstruct the original individual objects.
x) The relationships from objects to words require the presence of both excitatory and inhibitory mechanisms, to evoke objects separately from their category, and to avoid that the presence of an excessive number of features (as in a failure of the segmentation task) evokes erroneous words.
Using the previous basic ideas, the model is able to simulate semantic memory and its link with lexical aspects in a variety of conditions which, although drastically simplified compared with reality, can provide some cues to drive future ideas and to test the reliability of existing theories.
The present simulations (recognition of different simultaneous words and objects, even in the case of absent or corrupted features) represent just a few aspects of the potential model applications. Future challenges may be concerned with the following major issues, which have not been explicitly treated here due to space limitations:
These last aspects are just examples of how the model may have a large applicative domain in future research. Its validation, refinement and extension, however, and its use for the analysis and theoretical formalization of different semantic/lexical problems, will necessarily require a strongly multidisciplinary approach. This should involve researchers from different domains, such as neurophysiologists, cognitive neuroscientists, experts in psycholinguistics, mathematicians and neuro-engineers.
4. Appendix
4.1. The bidimensional network of features
In the following, each oscillator will be denoted with the subscripts
Each single oscillator consists of a feedback connection between an excitatory unit,
where H() represents a sigmoidal activation function defined as
The other parameters in Eqs. (1) and (2) have the following meaning:
According to Eq. 4, the global inhibitor computes the overall excitatory activity in the network, and sends back an inhibitory signal (z = 1) when this activity exceeds a given threshold (say θz). This inhibitory signal prevents other objects from popping out as long as a previous object is still active.
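A compact form consistent with this description (a sketch only; the actual Eq. 4 may include first-order dynamics) is

$$z = H\!\left(\sum_{ij} x_{ij} - \theta_z\right), \qquad H(u) = \begin{cases} 1 & u > 0 \\ 0 & u \le 0, \end{cases}$$

where the sum runs over all excitatory units of the feature network.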
The coupling terms between elements in cortical areas, Eij and Jij in Eqs. (1) and (2), are computed as follows
where ij denotes the position of the postsynaptic (target) neuron, hk the position of the presynaptic neuron, and the sums extend to all presynaptic neurons in the feature area. The symbols
The Mexican hat disposition for the intra-area connections has been realized by means of two Gaussian functions, with excitation stronger but narrower than inhibition. Hence,
where
Finally, the term
where
4.2. The bidimensional lexical area
In the following each element of the lexical area will be denoted with the subscripts ij or hk (i, h = 1, 2, …, M1; j, k = 1,2,…, M2) and with the superscript L. In the present study we adopted M1 = M2 = 40. Each single element exhibits a sigmoidal relationship (with lower threshold and upper saturation) and a first order dynamics (with a given time constant). This is described via the following differential equation:
where
According to the previous description, the overall input,
where
4.3. Synapses training
Phase 1: Training of inter-area synapses within the feature network
In a first phase, the network is trained to recognize objects without the presence of words.
Recent experimental data suggest that synaptic potentiation occurs if the pre-synaptic inputs precede post-synaptic activity by 10 ms or less (Markram et al. 1997, Abbott & Nelson 2000). Hence, in our learning phase we assumed that the Hebbian rule depends on the present value of post-synaptic activity, xij(t), and on the moving average of the pre-synaptic activity (say mhk(t)) computed during the previous 10 ms. We define a moving average signal, reflecting the average activity during the previous 10 ms, as follows
where TS is the sampling time (in milliseconds), and NS is the number of samples contained within 10 ms (i.e., Ns = 10/TS). The synapses linking two neurons (say ij and hk) are then modified as follows during the learning phase
where βij,hk represents a learning factor.
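A discrete-time sketch of this rule, combining the 10-ms moving average of the pre-synaptic activity with the Hebbian increment; the window, learning factor and saturation value are illustrative, and the saturation gating and same-area exclusion reflect the constraints on βij,hk stated in the text:

```python
def moving_average(pre_history, Ts=1.0, window_ms=10.0):
    """Mean pre-synaptic activity over the last 10 ms (Ns = window_ms / Ts samples)."""
    Ns = int(window_ms / Ts)
    recent = pre_history[-Ns:]
    return sum(recent) / Ns

def hebbian_update(w, x_post, m_pre, beta0=0.1, w_max=1.0, same_area=False):
    """Hebbian increment gated by a saturating learning factor.

    beta shrinks to zero as w approaches w_max, so synapses cannot
    exceed the saturation value; neuron pairs in the same area are
    excluded (no long-range synapse within an area).
    """
    if same_area:
        return w
    beta = beta0 * (w_max - w)
    return w + beta * x_post * m_pre
```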
In order to assign a value for the learning factor, βij,hk, in our model we assumed that inter-area synapses cannot exceed a maximum saturation value. This is realized by assuming that the learning factor is progressively reduced to zero as the synapse approaches its maximum saturation. Furthermore, neurons belonging to the same area cannot be linked by a long-range synapse. We have
where Wmax is the maximum value allowed for any synapse, and
These synapses are trained during a second phase, in which an object is presented to the network together with its corresponding word.
Synapses from the lexical network to the feature network (i.e., parameters
where
Conversely, synapses from the feature network to the lexical network (i.e., parameters
The excitatory portion is trained (starting from initially null values) using equations similar to 16 and 17, but assuming that the sum of synapses entering a word must not exceed a saturation value (say
where the average activity mhk
The inhibitory synapses start from a high value (say
where the “positive part” function ([·]+) is used on the right-hand side of Eq. 24 to prevent these synapses from becoming negative (i.e., to prevent inhibition from being converted into excitation).
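This decay rule can be sketched in a single function; the decay rate gamma is an illustrative placeholder, not the chapter's parameter:

```python
def inhibitory_update(w_in, x_post, m_pre, gamma=0.1):
    """Reduce an initially strong feature-to-word inhibitory synapse.

    The synapse decays when pre- and post-synaptic activity coincide,
    and the positive-part operation []+ keeps it from going negative,
    so inhibition is never converted into excitation.
    """
    return max(w_in - gamma * x_post * m_pre, 0.0)
```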
References
1. Abbott L. F., Nelson S. B. (2000). Synaptic plasticity: taming the beast. 3, 1178-1183.
2. Abutalebi J. (2008). Neural aspects of second language representation and language control. 128(3), 466-478.
3. Barsalou L. W., Simmons W. K., Barbey A. K., Wilson C. D. (2003). Grounding conceptual knowledge in modality-specific systems. 7(2), 84-91.
4. Bhattacharya J., Petsche H., Pereda E. (2001). Long-range synchrony in the gamma band: role in music perception. 21, 6329-6337.
5. Caramazza A., Hillis A., Rapp B. (1990). The multiple semantics hypothesis: Multiple confusions? 7, 161-189.
6. Caramazza A., Shelton J. R. (1998). Domain-specific knowledge systems in the brain: the animate-inanimate distinction. 10, 1-34.
7. Cuppini C., Magosso E., Ursino M. (2009). A neural network model of semantic memory linking feature-based object representation and words. 96(3), 195-205.
8. Damasio A. R. (1989). Time-locked multiregional retroactivation: a systems level proposal for the neural substrates of recall and recognition. 33, 25-62.
9. Devlin J. T., Gonnerman L. M., Andersen E. S., Seidenberg M. S. (1998). Category-specific semantic deficits in focal and widespread brain damage: a computational account. 10(1), 77-94.
10. Engel A. K., Singer W. (2001). Temporal binding and the neural correlates of sensory awareness. 5(1), 16-25.
11. Gainotti G. (2006). Anatomical, functional and cognitive determinants of semantic memory disorders. 30, 577-594.
12. González J., Barros-Loscertales A., Pulvermüller F., Meseguer V., Sanjuán A., Belloch V., Avila C. (2006). Reading cinnamon activates olfactory brain regions. 32(2), 906-912.
13. Green D. W. (1998). Mental control of the bilingual lexico-semantic system. 1(2), 67-81.
14. Hart J., Anand R., Zoccoli S., Maguire M., Gamino J., Tillman G., King R., Kraut M. A. (2007). Neural substrates of semantic memory. 13(5), 865-880.
15. Hart J., Moo L. R., Segal J. B., Adkins E., Kraut M. (2002). "Neural substrates of semantics." Psychology Press, Philadelphia.
16. Haykin S. (1999). "Neural Networks: a comprehensive foundation." Prentice Hall.
17. Hopfield J. J., Brody C. D. (2001). What is a moment? Transient synchrony as a collective mechanism for spatiotemporal integration. 98(3), 1282-1287.
18. Humphreys G. W., Forde E. M. E. (2001). Hierarchies, similarity, and interactivity in object recognition: "Category-specific" neurophysiological deficits. 24, 453-509.
19. Kraut M. A., Kremen S., Segal J. B., Calhoun V., Moo L. R., Hart J. (2002). Object activation from features in the semantic system. 14(1), 24-36.
20. Lambon Ralph M. A., Lowe C., Rogers T. T. (2007). Neural basis of category-specific semantic deficits for living things: evidence from semantic dementia, HSVE and a neural network model. 130, 1127-1137.
21. Lauro-Grotto R., Reich S., Visadoro M. (1997). "The computational role of conscious processing in a model of semantic memory." In: Cognition, Computation and Consciousness, M. Ito, S. Miyashita, and E. Rolls, eds., Oxford University Press, Oxford, 249-263.
22. Markram H., Lübke J., Frotscher M., Sakmann B. (1997). Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. 275, 213-215.
23. Martin A., Chao L. L. (2001). Semantic memory and the brain: structure and processes. 11(2), 194-201.
24. Matsumoto A., Iidaka T., Haneda K., Okada T., Sadato N. (2005). Linking semantic priming effect in functional MRI and event-related potentials. 24(3), 624-634.
25. McRae K., de Sa V. R., Seidenberg M. S. (1997). On the nature and scope of featural representations of word meaning. J Exp Psychol Gen, 99-130.
26. Morelli A., Lauro-Grotto R., Arecchi F. T. (2006). Neural coding for the retrieval of multiple memory patterns. 86(1-3), 100-109.
27. Pulvermüller F., Hauk O., Nikulin V. V., Ilmoniemi R. J. (2005a). Functional links between motor and language systems. 21(3), 793-797.
28. Pulvermüller F., Shtyrov Y., Ilmoniemi R. (2005b). Brain signatures of meaning access in action word recognition. 17(6), 884-892.
29. Rodriguez E., George N., Lachaux J. P., Martinerie J., Renault B., Varela F. J. (1999). Perception's shadow: long-distance synchronization of human brain activity. 397, 430-433.
30. Rogers T. T., Lambon Ralph M. A., Garrard P., Bozeat S., McClelland J. L., Hodges J. R., Patterson K. (2004). Structure and deterioration of semantic memory: a neuropsychological and computational investigation. 111(1), 205-235.
31. Rolls E. T., Treves A. (1998). "Neural Networks and Brain Function." Oxford University Press, Oxford.
32. Rossell S. L., Price C. J., Nobre A. C. (2003). The anatomy and time course of semantic priming investigated by fMRI and ERPs. 41(5), 550-564.
33. Simmons W. K., Barsalou L. W. (2003). The similarity-in-topography principle: reconciling theories of conceptual deficits. 20, 451-486.
34. Singer W., Gray C. M. (1995). Visual feature integration and the temporal correlation hypothesis. 18, 555-586.
35. Small S. L., Hart J., Nguyen T., Gordon B. (1995). Distributed representations of semantic knowledge in the brain. 118(Pt 2), 441-453.
36. Snowden J. S., Thompson J. C., Neary D. (2004). Knowledge of famous faces and names in semantic dementia. 127, 860-872.
37. Tallon-Baudry C., Bertrand O., Delpuech C., Pernier J. (1997). Oscillatory gamma-band (30-70 Hz) activity induced by a visual search task in humans. 17, 722-734.
38. Tallon-Baudry C., Bertrand O., Peronnet F., Pernier J. (1998). Induced gamma-band activity during the delay of a visual short-term memory task in humans. 18, 4244-4254.
39. Tulving E. (1972). "Episodic and semantic memory." In: E. Tulving and W. Donaldson, eds., Academic Press, New York.
40. Tyler L. K., Moss H. E., Durrant-Peatfield M. R., Levy J. P. (2000). Conceptual structure and the structure of concepts: a distributed account of category-specific deficits. 75(2), 195-231.
41. Ursino M., La Cara G. E., Sarti A. (2003). Binding and segmentation of multiple objects through neural oscillators inhibited by contour information. 89, 56-70.
42. Ursino M., Magosso E., Cuppini C. (2009). Recognition of abstract objects via neural oscillators: interaction among topological organization, associative memory and gamma-band synchronization. 20(2), 316-335.
43. Ursino M., Magosso E., La Cara G. E., Cuppini C. (2006). Object segmentation and recovery via neural oscillators implementing the similarity and prior knowledge gestalt rules. 85, 201-218.
44. von der Malsburg C., Schneider W. (1986). A neural cocktail-party processor. 54, 29-40.
45. Wang D., Terman D. (1997). Image segmentation based on oscillatory correlation. 9, 805-836.
46. Ward J. (2006). "The student's guide to cognitive neuroscience." Psychology Press, Hove and New York.
47. Warrington E. K., McCarthy R. (1983). Category specific access dysphasia. 106, 859-878.
48. Warrington E. K., McCarthy R. (1987). Categories of knowledge: further fractionations and an attempted integration. 110, 1273-1296.
49. Warrington E. K., Shallice T. (1984). Category specific semantic impairments. 107, 829-854.