Open access peer-reviewed chapter

Perspective Chapter: Role of the Hippocampal Formation in Navigation from a Simultaneous Location and Mapping Perspective

Written By

André Pedro, Jânio Monteiro and António João Silva

Reviewed: 09 February 2023 Published: 29 March 2023

DOI: 10.5772/intechopen.110450

From the Edited Volume

Hippocampus - More than Just Memory

Edited by Douglas D. Burman


Abstract

Research on the brain has led to many questions, most of which still lack definitive answers. One of those questions concerns how the brain acts when we navigate a new space. Inside the Hippocampal structure of the Temporal Lobe, specific types of neurons and neuronal structures are responsible for identifying spatial elements. To recognize spaces, these cells require data, which is obtained from the subject’s senses. It is important to understand how these features are captured, processed and encoded, and how the Hippocampus and its neighboring elements use the information to help in the navigation and mapping of a place. While specific types of neurons seem to support an animal’s localization and spatial mapping, in other areas of research discrete global grid systems are used to increase the independence of autonomous vehicles, allowing the indexing of assets across the globe by partitioning the Earth into grids that take into account the heterogeneity of the scales of the associated geospatial data. In this context, the main objective of this chapter is to analyze the biological and technical aspects of navigation by establishing a bridge between the Hippocampus and Simultaneous Localization and Mapping (SLAM) methods.

Keywords

  • hippocampus
  • entorhinal cortex
  • SLAM
  • grid cells
  • robotics
  • spatial navigation
  • sensory processing

1. Introduction

The brain is a complex structure that is somehow able to make associations between inputs from different areas. The information used for these associations results from the body’s senses, including the visual, olfactory, auditory, gustatory, somatosensory and self-motion senses. This process is done without labeling the data entries. It is believed that most of these associations occur in the Hippocampus. The Hippocampus is a Temporal Lobe element and its functions are related to the creation of new memories [1], the control of emotions [2] and spatial navigation [3]. There are several conceptual models of the functioning of the Hippocampus; one of them is the memory index theory, in which the Hippocampus plays the role of creating an index of neocortical pattern associations related to a context or situation [4]. Since the discovery of the Place Cells in the Hippocampus and their connection to the spatial processing elements of the Entorhinal Cortex (EC), it is believed that the Hippocampus can identify places and the contextual situations associated with those areas [5]. The information about the state of the senses enters the Hippocampus through the EC, which in turn receives inputs directly from the senses or through supporting elements like the Parahippocampal Gyrus and the Perirhinal Cortex [5]. The EC area is divided into two different parts: the Lateral Entorhinal Cortex (LEC), which is responsible for handling data not related to spatial navigation, and the Medial Entorhinal Cortex (MEC). The Grid Cells are a set of cells located in the MEC, with the assumed ability to identify positions inside a spatial environment. Such behavior is possible due to their hexagonal firing pattern [5]. Also, according to [6], the Grid Cells are organized in a multi-layer format. Such organization gives those cells the necessary precision in the determination of a spatial position. The information produced by the Grid Cells is sent to the Hippocampus, especially affecting the Place Cells, where the data is used to encode and create representations of known places, adapting to environmental changes with a mechanism called global remapping [7]. It is important to understand how the senses gather information from the environment and how such data is processed, because a coherent understanding of the space depends on the acquisition of such information. Vision, audition and the other senses are crucial for the navigational calculations, because they are used to determine the environmental landmarks essential to the creation of a mental map [7].

In this scope, the purpose of this book chapter is to describe the work done so far in this area, focusing on how the obtained sensory information can be used for spatial navigation and how the Hippocampal Formation elements are used to solve navigational problems. Besides this introduction, the chapter is composed of five sections. The first contains an analysis of the importance of navigation and of how these questions are handled with the available computational resources, followed by a second and a third section about how the brain elements gather, interpret, prepare and encode the sensory information used in spatial navigation. The fourth section focuses on the bridge between the knowledge obtained from neuroscience experiments and the available computational tools used to simulate or recreate some of the Hippocampal Formation characteristics, based on computer models dedicated to the Hippocampus and the EC. It ends with a section dedicated to the discussion of the extracted scientific information.


2. The importance of navigation

The ability to navigate is an important function, common to animals. Navigation is useful to avoid dangerous areas and to find the necessary resources for survival, including food and resting places [8].

Complementary to their intrinsic abilities for navigation, humans initially had few external tools to solve navigation problems. Most of the time, the Sun and the stars in the sky were the only reliable references to determine our position [9]. However, such a navigation method is fully dependent on those celestial bodies, which can only be seen during the day or in a clear night sky, not to mention the fact that our perception of the stars changes due to the Earth’s rotation [10]. To answer such questions, several instruments were created, like the Geographical Representation Systems, which map the entire world with relatively good precision, or physical tools that helped solve possible errors in the calculation of the position. Also, the navigation task can be seen as a pure mathematical problem [11]. The Simultaneous Localization and Mapping (SLAM) approach addresses the challenge of mapping unknown environments, supported by different types of algorithms for the spatial mapping of a space, contributing to the increase of autonomy in virtual and physical agents.

The following sequence of sub-chapters starts with an analysis of the past and present of the Geographical Representation Systems, followed by a perspective on the SLAM characteristics, ending with an in-depth analysis of the available methods and strategies used to solve navigational problems.

2.1 Geographical representation systems

As mentioned previously, the initial navigation methods used by humans relied on the observation of stars, like the North Star and the Southern Cross, or of planets, like Jupiter and Saturn [12]. At the time, there were several charts with the positions of the stars and planets at the different hours of the day. The stars were a complementary element of a navigation method called dead reckoning. Dead reckoning is a method that allows the calculation of a heading without using heading instruments like the compass. It uses the distance traveled, the known positions recorded over a time period and the estimated drift to determine the direction of travel [13].
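As a minimal illustration of the idea, dead reckoning can be sketched as the repeated accumulation of estimated displacements. The Python function below is a hypothetical example written for this chapter, not code from the cited sources; the leg data are arbitrary.

```python
import math

def dead_reckon(start_x, start_y, start_heading, legs):
    """Estimate a position by accumulating travelled legs.

    Each leg is a (distance, heading_change) pair: the craft first
    turns by heading_change (radians), then travels distance units.
    Errors in either estimate accumulate, which is why dead reckoning
    drifts without external fixes (e.g. star sightings).
    """
    x, y, heading = start_x, start_y, start_heading
    for distance, heading_change in legs:
        heading += heading_change
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    return x, y, heading

# Example: two legs of 10 units each, with a 90-degree turn in between.
print(dead_reckon(0.0, 0.0, 0.0, [(10.0, 0.0), (10.0, math.pi / 2)]))
```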

In the 1500s, instruments such as the Quadrant and the Astrolabe were used to get a better position fix from the stars, and with them it was possible to discover a sea route from Europe to Brazil [14]. Later, navigational tasks became easier with the help of another instrument, the Gyroscope, which was used to determine the horizon line [15].

At that time, the necessity of a system that could represent all the regions of the globe with a certain precision became clear. The Mercator projection divides the world into a squared grid, and its basis is still used today, despite the fact that this representation method was developed in the sixteenth century [16]. Nowadays, there are new forms of finding and representing spatial information: the Global Positioning System (GPS), which uses the trilateration method, where the distances between a device and a set of three or more satellites are used to calculate the position, as demonstrated in Figure 1 [16], and the Web Mercator projection, which brings the characteristics of the Mercator projection to digital platforms. Also, the Equal Earth projection is used to represent the geographical surface while preserving the proportions of the continents’ areas. The combination of a polynomial equation with the Robinson projection forms the basis of this method, which is easy to implement and is a good attempt to represent the Earth in the fairest possible way [18].

Figure 1.

Demonstration of the difference between trilateration and triangulation, adapted from [17]. The left image illustrates the trilateration method, where distances are used to determine a location. The image on the right illustrates the triangulation method, where the position of the target (black dot) is determined by the angles a01 and a02 that connect the reference points to the target.
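For illustration purposes, the trilateration geometry of Figure 1 can be written as a small linear system: subtracting the first circle equation from the others removes the quadratic terms, leaving a system solvable in closed form. The sketch below, with hypothetical anchor coordinates, shows this in Python.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Solve a 2D position from three known anchor points and the
    measured distances to each (the geometry of Figure 1, left).

    Subtracting the first circle equation from the others yields a
    linear system in (x, y), solved here by least squares.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.lstsq(A, b, rcond=None)[0]

# A point at (2, 1) measured from three anchors is recovered exactly.
anchors = [(0, 0), (10, 0), (0, 10)]
true = np.array([2.0, 1.0])
dists = [np.linalg.norm(true - np.array(a)) for a in anchors]
print(trilaterate(anchors, dists))  # approximately [2. 1.]
```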

Also, there are other types of systems with the capability to represent the surface of the Earth. Discrete Global Grid Systems (DGGS) are a digital geographic representation in which the projections are formed using geometric shapes, called cells, organized in a hierarchical structure. The main function of a DGGS consists of creating maps of information, providing a digital framework used in Geographical Information Systems [19]. Implementations of this approach can be found in the Hexagonal Hierarchical Geospatial Indexing System (H3), developed by Uber Technologies Inc., and in the Open Equal Area Global Grid (OpenEAGGR), where the cells have the same size and it is possible to convert latitude and longitude references into cell elements [20]. Although DGGS seem well suited to the representation of geographical information, they appear to have difficulties in the aggregation of similar information. According to [21], the best methods to index spatial data are the ones that use the triangle as a base, where its edges are bisected, forming sub-triangles and allowing the increase of precision.
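As a brief illustration of how a DGGS is used in practice, the snippet below indexes a latitude/longitude pair with Uber’s H3 Python bindings. It assumes the v3 API (function names changed in v4) and purely illustrative coordinates.

```python
# A small sketch using Uber's H3 Python bindings (v3 API assumed;
# in v4 these calls were renamed, e.g. geo_to_h3 -> latlng_to_cell).
import h3

lat, lng = 37.0194, -7.9304  # illustrative coordinates (Faro, Portugal)
cell = h3.geo_to_h3(lat, lng, 9)   # index the point into a resolution-9 hexagon
neighbors = h3.k_ring(cell, 1)     # the cell plus its six hexagonal neighbors
center = h3.h3_to_geo(cell)        # convert the cell back to a lat/lng pair
print(cell, len(neighbors), center)
```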

2.2 Simultaneous localization and mapping

The SLAM problem involves creating a map of an unknown environment while simultaneously determining the location of a mobile agent, virtual or physical, within that environment, without any previous information about the space [11]. The solution to the SLAM problem increases the autonomy of the agents, but it is necessary to take into account the environment of the agent, namely whether it is indoors, outdoors, underwater or in the air. At first, the SLAM question was approached with a probabilistic mindset, taking into account that this problem presents a chicken-and-egg situation: the robot needs a position as a reference to start the mapping, but how can such a position be given when there are no map references [22, 23]?

During the mapping process, the agent extracts landmark information from its sensors as long as it stays inside the designated area. Depending on the SLAM method, the mapping procedure can be made with statistical strategies or with Machine Learning procedures, like Deep Learning neural networks, where those structures are used to recognize environmental references [24]. To better characterize the behavior of the SLAM methods, the steps can be divided into four parts: (i) the input of the data, which consists of the extraction of environmental information, (ii) the mapping of the place based on the acquired data, (iii) the determination of the location using navigation planning strategies and (iv) the output based on the results obtained from the previous steps [25], as sketched below.
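A toy sketch of this four-step loop is given below. All data structures and the averaging-based correction are illustrative simplifications for this chapter, not any specific published SLAM method.

```python
import numpy as np

def slam_step(pose, landmark_map, odometry, observations):
    """One illustrative iteration of the generic SLAM loop described
    above: (i) input, (ii) mapping, (iii) localization, (iv) output.
    observations: {landmark_id: landmark offset relative to the robot}."""
    # (i) Input: predicted pose from odometry.
    predicted = pose + odometry
    # (iii) Localization: correct the prediction using landmarks that
    # are already in the map (simple averaging instead of a filter).
    corrections = [landmark_map[k] - off for k, off in observations.items()
                   if k in landmark_map]
    if corrections:
        predicted = 0.5 * predicted + 0.5 * np.mean(corrections, axis=0)
    # (ii) Mapping: place newly seen landmarks in world coordinates.
    for k, off in observations.items():
        landmark_map.setdefault(k, predicted + off)
    # (iv) Output: the updated pose and map estimates.
    return predicted, landmark_map

pose, world = np.zeros(2), {}
pose, world = slam_step(pose, world, np.array([1.0, 0.0]),
                        {"door": np.array([2.0, 0.5])})
pose, world = slam_step(pose, world, np.array([1.0, 0.0]),
                        {"door": np.array([1.0, 0.5])})
print(pose, world)
```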

Nowadays, the SLAM methods are more accurate and precise because of the development of new types of sensors, like the Lidar, which sends light beams that reflect off objects, creating a good map of the space, and because of Visual SLAM, which captures information from the agent’s video resources, making the definition of the landmarks less probabilistic and more dependent on the surrounding space. Also, the SLAM approaches can be applied in 3D spaces, allowing vehicles, or agents, to keep track of their position and to map areas while navigating them [26]. They can be seen as a complement to the existing navigational technologies, like the GPS [27], with SLAM correcting the drifts and inaccuracies caused by the navigational instruments [28].

2.2.1 Methods and implementations of SLAM

At first, some SLAM solutions used various types of Kalman Filter methods, fusing the sensory data to determine the position based on the location of the landmarks. However, as mentioned in Section 2.2, with the increase of the computational resources and the appearance of new sensors, it was possible to develop new methods that took these technological improvements into account.
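To convey the predict/update cycle that these early Kalman-filter-based solutions relied on, the following one-dimensional sketch blends a motion prediction with a landmark-derived position measurement. It is a textbook filter with assumed noise values, not a reconstruction of any specific cited system.

```python
def kalman_1d(x, p, u, z, q=0.1, r=0.5):
    """One predict/update cycle of a 1D Kalman filter.

    x, p : state estimate and its variance
    u    : control input (commanded displacement)
    z    : position measurement, e.g. derived from a landmark at a
           known location
    q, r : process and measurement noise variances (assumed values)
    """
    # Predict: move by u; uncertainty grows by the process noise.
    x, p = x + u, p + q
    # Update: blend prediction and measurement by the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

x, p = 0.0, 1.0
for u, z in [(1.0, 1.2), (1.0, 1.9), (1.0, 3.1)]:
    x, p = kalman_1d(x, p, u, z)
    print(round(x, 3), round(p, 3))  # variance shrinks as evidence accumulates
```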

Some of these methods are the Gmapping method [29], which uses Particle Filters to estimate the position, and LagoSLAM [30], a graph-based SLAM whose cost function is non-linear and non-convex. There are also several models that were applied to three-dimensional (3D) environments, such as LOAM [31], which uses a 3D Lidar sensor to scan the space, and IMLS-SLAM [32], which has a drift reduction algorithm to optimize the results obtained from the three-dimensional Lidar. With the appearance of deep learning methods, it became possible to integrate neural networks into SLAM methods, as in VoxelNet [24], which uses 3D networks to help in the extraction of features, and SqueezeSeg [33], which uses Convolutional Neural Networks to perform real-time data segmentation [23].

In terms of Visual SLAM algorithms, there are approaches like: (i) ORB-SLAM [34], which uses the ORB computer vision method [35] to define the environmental landmarks, (ii) OpenVSLAM [36], which can be used with different types of cameras and has algorithms that handle the features in a sparse form, and (iii) VINS [37, 38], a SLAM system with the ability to work in real time, which can be used in robots running ROS. VINS can work on mobile platforms like iOS and can receive information from different types of sensors, like the GPS, to correct some of the method’s drift. Deep Learning approaches are also present in Visual SLAM methods, as in: (i) DeepSLAM [39], which handles some of the noise present in the gathered images; (ii) the ScanComplete [40] model, which calculates the position even with incomplete three-dimensional scans, and (iii) DA-RNN [41], which has a Recurrent Neural Network to label the spatial data captured by RGB-D cameras [23].

Among all the possible approaches to solve SLAM, there is one that draws on the brain’s ability to navigate through different types of spaces and environments. Biologically based SLAM started to appear when the scientific community found, in the Hippocampal Formation, different types of brain cells with the ability to solve navigational tasks [42]. Some of these methods use special types of neural networks, such as Spiking Neural Networks (SNN), to mimic the brain’s behavior in the processing of sensory data to map environments and spaces. The details of these types of models will be further discussed in Section 5.


3. The hippocampus and navigation

As mentioned in Section 1, the Hippocampus is a brain element that is present in humans and other animals. It is located in the Temporal Lobe area and is composed of a left and a right Hippocampus. Inside it there are several sub-structures essential to the creation of short-term memories, to the recognition of data patterns and to the identification of spatial areas [4, 43]. With that in mind, it is important to understand how the different sub-structures handle the information, what the functions of their neural elements are and how they work with the data sent through the EC.

The following sub-chapters are dedicated to the biological analysis of the sub-structures of the Hippocampal loop, starting with the Dentate Gyrus and ending at the Subiculum, as demonstrated in Figure 2, maintaining a constant bridge between the functions of the Hippocampus and their ability to solve navigational problems.

Figure 2.

Simple structure of the hippocampal formation, with the closure of the hippocampal loop, adapted from [44].

3.1 Dentate gyrus

The Dentate Gyrus (DG) receives information from layer II of the LEC and MEC. It is considered a sensory data pattern separator, removing the ambiguity in the sensory data [45]. The DG seems to allow the discrimination of elements, like the difference between two objects in the same space [45]. The DG is composed of Granule Cells and Mossy Cells, which are specialized in processing sensory information, making associations and detecting misplaced items, and it sends the information to the CA3 area. These cells have a low firing rate and they seem to represent information using a sparse codification [46]. The Mossy Cells act as a data hub which connects the DG to the Mossy Fibers, and the Mossy Fibers connect the DG to the CA3 area. The Mossy Fibers could act as a conditional detonator or discriminator, because they discharge their energy potential only on special occasions and such a discharge inhibits the ability of Cornu Ammonis 3 (CA3) to receive information from layer II of the LEC and MEC [47].

3.2 Cornu Ammonis 3

The CA3 region is responsible for the association of episodic events and the identification of information patterns, essential in the transformation of sensory events into short-term memories [45]. This area receives input from the DG, via the Mossy Fibers, and from layer II of the LEC and MEC. It was discovered that, inside the Hippocampus of a rat, the CA3 area has a set of recurrent connections which allow it to store the information required for the pattern completion and memory association processes [48]. The CA3 is mostly composed of Pyramidal Neurons, which are the elements responsible for the differentiation and detection of overlapping memories [49]. The CA3 structure is divided into three different parts: the CA3a and CA3b areas seem to be dedicated to the encoding of information, while the CA3c connects to the Mossy Fibers and seems to help in the separation of occurred events [50]. Also, it is believed that the CA3 could be important for the prediction of future steps and that, together with the DG, it could play a role in the correction of path integration calculations [51].

3.3 Cornu Ammonis 2

The Cornu Ammonis 2 (CA2) region acts as a hub between CA3 and CA1. It is composed of Pyramidal Neurons, which seem to characterize the patterns related to social habits [52]. The CA2 has cells that discharge their energy in different locations of a space. These elements are similar to Place Cells; however, their place field, which is the spatial area associated with the neuron, is more susceptible to changes over time, meaning that such elements can change their state in a matter of hours or days. This means that CA2 could create associations between time and place [53].

Also, it appears that CA2 could help in the organization of information in CA1, according to its periods and time characteristics, because the information inside CA1 is more stable to changes [54].

3.4 Cornu Ammonis 1

The Cornu Ammonis 1 (CA1) seems to be responsible for the retrieval of memory elements, permitting the recognition of known episodic events and giving the “where” context to those elements [7]. The CA1 component is divided into two different parts: the distal part, which is responsible for extracting information from the sensory data, and the proximal area, which isolates the spatial information to allow the recognition of events. The CA1 can discriminate ambiguous information gathered from the MEC and CA3 areas [55]. The data stored in CA1 maintains its stability for long periods of time. Also, the CA1 receives projections from the contralateral Hippocampus, via a set of fibers called the Hippocampal Commissural Fibers [56].

The CA1 could be used for navigation purposes, because it has a special group of cells called Place Cells, whose task consists of representing a specific spatial area, with the ability to adapt to environmental changes [7]. The area represented by each Place Cell is called its place field, and when a person or animal passes through that area, the corresponding Place Cell discharges its energy [57]. The deepest part of the CA1 area has a link with the Landmark Vector Cells, because their firing is stronger when the agent is at the position of a particular landmark [58].
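A common textbook abstraction, used here purely for illustration, models a Place Cell’s firing rate as a Gaussian bump centered on its place field; the parameter values in the sketch below are arbitrary.

```python
import numpy as np

def place_cell_rate(position, field_center, max_rate=20.0, field_width=0.3):
    """Toy Gaussian place-field model: the cell fires most strongly at
    the field center and decays with distance (parameters illustrative)."""
    d2 = np.sum((np.asarray(position) - np.asarray(field_center)) ** 2)
    return max_rate * np.exp(-d2 / (2 * field_width ** 2))

print(place_cell_rate([0.5, 0.5], [0.5, 0.5]))  # at the field center: ~20 Hz
print(place_cell_rate([1.0, 0.5], [0.5, 0.5]))  # half a meter away: near 0
```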

3.5 Subiculum

The Subiculum is located after the CA areas and is divided into different sections: (1) a section which receives the projections from the CA1 module, (2) a section dedicated to the reinforcement of the learned data and (3) a section that connects to the EC, closing the Hippocampal Loop. Also, the Subiculum can provide information regarding the boundaries of a spatial area [59]. Border Cells fire when a subject is facing a border in their allocentric direction [60]. These cells are also present in other areas, like the Parasubiculum and the MEC [61].

According to [62], the Subiculum may have the necessary structures to represent the memory of events with more precision and robustness. The data is converted from a sparse format to a denser one, and the processed data is then sent to another brain element, called the Nucleus Accumbens, to decide which step to take at a given time. This could mean that the Hippocampus forms a map of possible decisions valid for a time period.

Also, inside the Subiculum there are cells called Head Direction (HD) Cells, which are capable of indicating the direction in which a person or animal is facing at a given moment [63]. The HD cells have a dual-axis system used to represent complex spaces, and they could process 3D environments due to the possibility of calculating the azimuth using visual data as reference landmarks [64].


4. The connection between hippocampus and senses

The Hippocampus relies on the information gathered from the senses to understand the animal’s surroundings. The sensory information is sent to the Hippocampus via the EC. The EC is located in the Temporal Lobe area and it can process spatial and non-spatial information before it is sent to the Hippocampus. Given the importance of the EC, it is necessary to understand how the data is encoded in a format understandable by the Hippocampus and how small changes in the acquired information affect the spatial characterization.

The following sub-sections start by describing some of the properties of the EC and how it receives and interprets the data from the senses. Then, the focus is directed at how vision and audition are used for navigational purposes. The section ends with an analysis of how the brain elements encode the sensory information.

4.1 Entorhinal cortex

The EC area is divided into two distinct parts, the LEC and MEC, each one with different types of neurons.

The LEC receives inputs from the Perirhinal and Parahippocampal cortices. Those elements process data gathered from the olfactory and somatosensory senses [65]. Also, the LEC appears to encode temporal information across different periods of time [66], using a set of cells called Ramping Cells. They act as a supporting element in the acquisition of a time reference, because they discharge their energy in a periodic format. The data is further sent, in the Hippocampus, to a group of cells called Time Cells, which are useful for organizing memories according to their time reference [67].

The MEC is divided into six different layers, each one with specific types of neurons and neuronal structures. The first layer is related to the transfer of information between the second and third layers. The elements responsible for such behavior are the Chandelier Cells, because they regulate the inhibitory response of the Pyramidal Neurons [68]. The second layer is composed of Stellate Cells, which support the Grid Cells with the ability to transmit the necessary information for the path-integration calculations [69]. The third layer is a mixture of Pyramidal Neurons and Stellate Cells [70]. The fourth, fifth and sixth layers are dedicated to the association of spatial and non-spatial features, because they are essentially composed of Pyramidal Neurons and Horizontal Chandelier Cells and they can send information directly to the Hippocampus [65]. The Grid Cells appear to create an internal map structure of a spatial environment, where each point in the map is displayed in a hexagonal form when applied to a 2D environment [5]. The display of the points and the firing of these cells are determined by the velocity and the head direction of the individual, with the hexagonal shape maintained at all times, as can be observed in Figure 3, which contains images that highlight the hexagonal shape of the Grid Cells and the results that proved its properties.

Figure 3.

The Grid Cells’ main characteristics, adapted from [3, 5, 71]. The left image shows the hexagonal shape of the Grid Cells’ firing pattern, while the right one highlights the possible triangular properties. The bottom image is an adaptation of the scientific results obtained in [71].

The velocity and direction are represented by the Speed Cells and HD cells [5]. The Speed Cells and HD cells are supported by the Conjunctive Cells, which discharge their energy potential when a subject is facing a specific direction of movement. Also, it is possible that the Conjunctive Cells are responsible for the encoding of data combinations between Speed Cells, Place Cells and HD cells, allowing them to act as an information integrator [72]. The firing fields of the Grid Cells have a characteristic spacing, and the cells are organized in a multi-layered form. Each layer represents a different spatial precision of the environment, allowing an increase in the certainty about the position of the agent [73].
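A widely used mathematical idealization of this hexagonal firing map is the sum of three cosine gratings whose wave vectors are 60 degrees apart. The sketch below implements that idealization with illustrative spacing and phase values; it is a simplification, not a biophysical model.

```python
import numpy as np

def grid_cell_rate(pos, spacing=0.5, phase=(0.0, 0.0)):
    """Idealized grid-cell firing map: the sum of three cosine gratings
    whose wave vectors are separated by 60 degrees produces a hexagonal
    pattern; spacing sets the distance between firing fields."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)   # wave number for this spacing
    angles = np.deg2rad([0, 60, 120])        # three 60-degree-separated axes
    r = np.asarray(pos) - np.asarray(phase)
    total = sum(np.cos(k * (r[0] * np.cos(a) + r[1] * np.sin(a)))
                for a in angles)
    return max(0.0, total / 3.0)             # rectified, normalized rate

# Rate is maximal at a field center and at the hexagonal lattice points.
print(grid_cell_rate((0.0, 0.0)), grid_cell_rate((0.25, 0.1)))
```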

4.2 Importance of vision and audition in space characterization

As mentioned in Section 1, all the senses have a relative importance in the gathering of spatial data. In this chapter we focus on vision and audition.

Vision and audition send their information along two different pathways: the Dorsal stream, which carries information related to the “where” and “how”, and the Ventral stream, which carries the data used in the recognition and identification of objects [74].

Yet, as mentioned in the previous sections, visual cues are used by the HD cells as inputs in the determination of the heading. On the other hand, audition contributes to the spatial characterization, because some of the landmarks and spatial identifications are obtained using the auditory system [75]. Also, it is necessary to understand how the sensory cues are converted from physical stimuli into the neural language and how such data is interpreted by other brain structures [76].

4.2.1 Visual cortex

The human visual system can be seen as two distinct parts. The first part consists of an image capturing system, where the eye and its components are used to gather visual information, and the second part is dedicated to the processing and interpretation of the acquired data, in the so-called Visual Cortex.

The human eye is mostly formed by: (i) the Cornea, which refracts light as it enters the eye, focusing the image on the Retina at the back of the eye [77], (ii) the Aqueous Humour, which provides nutrition and oxygen to the eye’s tissues, as well as maintaining the shape of the eye by adjusting its internal pressure, and (iii) the Iris and Pupil, which control the opening that guides the light rays [77, 78]. The conversion of the light’s electromagnetic waves into neural signals occurs in the Retina. The Retina has Photo-Receptor Cells, Amacrine Cells, Bipolar Cells and Ganglion Cells. The Photo-Receptor Cells are sensitive to light. These are either Rods or Cones, respectively sensitive to tones of gray and to colors, at different areas of the retina. There are three types of Cones, each one processing and triggering in the presence of a different range of electromagnetic wavelengths, corresponding to the red, green and blue colors [77].

The information collected from the Cones is aggregated with the data from the Rods by the Horizontal Cells, inhibitory interneurons that balance the overall activity of the retina and improve the contrast of the visual signal that is sent to the brain. According to [79], it is possible that Horizontal Cells have some characteristics used in color opponency. Color opponency consists of creating a new color channel by suppressing the other wavelengths. Such a characteristic allows the reduction of redundant information generated by the photo-receptor electrical discharges, avoiding overlap of the cone sensitivities [80].

Also, the Amacrine Cells establish links between the different layers and could play a role in setting the contrast of the visual image. The data is sent to the Visual Cortex through the Ganglion Cells, which also have a crucial role in the regulation and synchronization of the circadian rhythm [77].

The eye and the Visual Cortex can capture and identify different image aspects, such as the edges of a shadow or an object and the motion of elements; they can distinguish large from small objects and they have the ability to process large visual fields [81].

The information regarding the recognition of objects, processed in the Visual Cortex, is sent to the Temporal Lobe elements via the Ventral pathway where the Perirhinal Cortex acts as a hub between the Hippocampus and the visual system [82].

From the image capture by the retina to the data preparation made by the Visual Cortex, visual data can be important in navigation for the identification of spatial landmarks and for gathering information present in a cartographic map, and it can correct some of the errors made during the path-integration process. Also, the visual data is used by the Hippocampal Place Cells to encode the specific area in the spatial environment [83].

4.2.2 Auditory cortex

A spatial environment can be characterized by its environmental sound. The auditory mechanism has features similar to those of the visual system. It can be divided into two parts: the first part is dedicated to the capture of the sound waves and the second part to the processing and interpretation of the information gathered from them.

The sound waves are captured by the structure called the external ear, and the sound vibrations are extracted by the tympanic membrane, which sends the vibrating rhythms into the cochlea. The cochlea has a set of elements called Hair Cells, which move with a certain vibrating frequency. Such movement causes the discharge of an energy potential, converting the sound’s vibration into a neural signal. The electrical impulses are sent to the Auditory Cortex, where the sound is classified according to its properties [84].

The Auditory Cortex seems to have several neural populations that react to different sound characteristics. Such a statement is supported by several observations of voxels in functional magnetic resonance images [85]. These elements are present in the Primary Auditory Cortex, which identifies and sorts data according to frequency [86]. The Secondary Auditory Cortex identifies the sound using pitch and melody. Also, between the Auditory Cortex and the Speech Cortex lies the language processing area, necessary for the extraction of information from the voice [85].

The auditory landmarks can help in the identification and recognition of places but, according to [75], some of these landmarks are quite difficult to gather, and sometimes the acquired information needs to be compared with that of other senses, like vision. It is also stated that the ambiguity of the information decreases with the amount of visual and auditory data. Since the audio data is stored by its frequency, the position of the sound source must be located from the sound profiles received by the two ears, taking into account that the ambiguity and the differences between the inputs of the two ears need to be resolved.

The extraction of spatial features from sound can be difficult because of the position of the ears. The cues obtained from the front and the back of the head are equal, and such ambiguity is solved by moving the head to another position [87].

The difference in sound arrival times can be important for the creation of an auditory spatial map, which can lead to the detection of important environmental cues [88], but these cues need to be fused with other elements, like the visual data, to separate the environmental data segments [75].
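One standard binaural cue behind this arrival-time difference is the interaural time difference (ITD). Under a simple far-field, two-point approximation of the head, the delay grows with the sine of the azimuth, which also reproduces the front-back ambiguity described above; the head dimensions in the sketch below are illustrative.

```python
import math

def interaural_time_difference(azimuth_deg, ear_distance=0.21,
                               speed_of_sound=343.0):
    """Far-field two-point approximation of the ITD: the extra path to
    the far ear is roughly d*sin(azimuth), so the delay is that extra
    distance divided by the speed of sound (head values illustrative).
    Note that sin(az) == sin(180 - az): a source behind the head gives
    the same ITD, which is the front-back ambiguity discussed above."""
    return ear_distance * math.sin(math.radians(azimuth_deg)) / speed_of_sound

for azimuth in (0, 30, 90, 150):  # 30 and 150 degrees yield the same delay
    print(azimuth, round(interaural_time_difference(azimuth) * 1e6, 1),
          "microseconds")
```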

4.3 Encoding of sensory information

This subject lacks consensus among the scientific community. However, there are several hypotheses about how the brain handles the sensory information [89, 90].

One of the theories with the most consensus holds that the brain encodes information in a sparse manner [89]. A sparse distributed representation can be described as a representation where not all neurons are active at a given period of time. The sparse representation, together with the firing rate of the neurons, can be used in the encoding of some of the sensory information, like the auditory data. Generalization and pattern completion depend on the sparseness of the data and on the distributed character of the neurons, and the less sparse the system, the more information can be encoded and represented. A sparse codification system can be found in different areas of the brain, such as (i) the Auditory Cortex, where the firing rate can be characterized as a binary coding, because the neuron responds to the stimulus in a fire or no-fire behavior, (ii) the Hippocampus and (iii) the motor sensory cortex [91].
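A sparse distributed representation can be illustrated with a k-winners-take-all encoding, in which only the k most strongly driven units become active. The sketch below is a generic illustration of sparseness, not a model of any specific cortical area.

```python
import numpy as np

def sparse_encode(inputs, k):
    """k-winners-take-all: only the k most strongly driven units are
    active, giving a sparse distributed (here, binary) representation."""
    code = np.zeros_like(inputs)
    winners = np.argsort(inputs)[-k:]   # indices of the k largest drives
    code[winners] = 1.0
    return code

rng = np.random.default_rng(0)
drive = rng.random(20)
print(sparse_encode(drive, k=3))  # 3 of 20 units active (~15% sparsity)
```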

On the other hand, it is not clear how the brain enforces such a codification and how this method could lead to generalization when the interactions between neurons are very low. These statements are supported by the fact that sparse coding is not present in all areas of the cortex, and it is not clear whether the brain is a discrete classifier or whether it works using continuous regression, making the system totally dependent on the size of the neuronal population [92].


5. Computational methods about hippocampal formation and spatial navigation

Nowadays, due to the increase of technological capabilities, it is possible to implement complex structures of neural networks. However, some of the used methods do not have a strong biological justification. It is also possible to simulate some of the brain’s neurological behaviors using certain types of networks. Those methods and strategies include neuronal simulators, like Nengo [93] or the Brian simulator [94], but also different types of networks, like SNNs. There are several models which try to simulate some of the characteristics of the EC, others focus on the Hippocampus, and some even apply the brain and Hippocampal features to navigation.

The following sub-sections start with an analysis of the EC models, followed by an analysis of the Hippocampal models, with emphasis on their main technological characteristics.

5.1 Entorhinal cortex models

Throughout the years, several computer models were developed that tried to replicate some or all elements of the EC. Some of the models focused on certain types of cells, like the Grid Cells. The Grid Cell models can be divided, by their main characteristics, into: (1) the Oscillatory-Interference Model, (2) the Attractor Network Model, (3) the Self-Organized Model and (4) the Path-Integration Model [95].

The Oscillatory-Interference model makes use of two different oscillators whose phase difference creates the grid pattern [95]. Both oscillators can be active at the same time, but they are required to have different frequencies. This model can respond to rapid changes in the environment and, at the same time, can provide the necessary information for a rapid remapping of the Place Cells [96].
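For illustration, the core of the oscillatory-interference idea can be sketched in one dimension: a baseline oscillator interferes with a second oscillator whose frequency is shifted in proportion to running speed, so the constructive-interference envelope repeats periodically over the distance travelled. All parameter values below are illustrative.

```python
import numpy as np

def interference_pattern(t, base_freq=8.0, velocity=0.3, beta=1.0):
    """1D sketch of the oscillatory-interference idea: a baseline theta
    oscillator interferes with a second oscillator whose frequency is
    shifted in proportion to running speed (beta is illustrative).
    The constructive-interference envelope repeats at beta*velocity Hz,
    producing periodic, grid-like firing fields along the track."""
    soma = np.cos(2 * np.pi * base_freq * t)
    dendrite = np.cos(2 * np.pi * (base_freq + beta * velocity) * t)
    return (soma + dendrite) / 2.0

t = np.linspace(0, 10, 5001)
signal = interference_pattern(t)
# With beta*velocity = 0.3 Hz there are about 3 constructive peaks in
# these 10 seconds of simulated running.
print(signal.max(), signal.min())
```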

The Attractor Network model uses simulated neurons, with different types of connections, to represent a position in a space. The firing of each neuron is directly influenced by the behavior of the neighboring neurons. However, to recreate the functions and behaviors of the EC, this network must have an element which detects and recognizes borders/obstacles to limit the initialization of hyper-parameters, avoiding the data overfit problem [95].

The Self-Organized mechanism states that the data has to be separated by a 60-degree angle. This model makes use of a special type of cells called Stripe Cells [97]. The Stripe Cells send the speed information to the Grid Cells and constrain the formation of the Grid Cells by maintaining a constant hexagonal pattern displacement [95]. Those cells have different firing behaviors, and the intersection of those elements creates the Grid Cells. This approach is sensitive to trigonometric properties and it can represent different ranges of spatial scales [98].

The Path-Integration method states that the formation of the Grid Cells depends only on the velocity and direction of the agent. Also, the displacement of the Grid Cells is derived by multiplying the previous elements by a timing factor [95].

According to [71], it is possible to recreate the EC Grid Cells using simple mathematical properties. By separating the hexagonal shape into triangles and using a specific type of Gray code, the Grid Cells are formed and characterized using a Triangular Coordinate System. The main characteristic of this method is the ability to define specific degrees of precision, because it behaves in a multi-layered form, like the real Grid Cells. It also presents a similarity property, where the codes of nearby elements have the lowest possible difference, making this method suitable for navigation and for encoding multidimensional data.

5.2 Hippocampus models

When the Hippocampus is referenced, there are several computational models which implement some of the characteristics mentioned in the Introduction. NeuroSLAM [99] and RatSLAM [100] are some of the models based on the characteristics of the Hippocampal Formation.

The RatSLAM method consists of a mapping algorithm based on the characteristics of the rat’s Hippocampal Formation. This method uses a Competitive Attractor Network (CAN) to convert the visual information into a position in space. The CAN can have different structures, and the idea consists of having each neuron excite its neighbors and inhibit the distant neurons, with the ability to handle small amounts of noisy data [101]. Also, this method uses landmarks for spatial guidance and it is not dependent on a special grid system like the Cartesian method [100]. In addition, the core of RatSLAM has been used in the development of other neural-based navigational models, like OpenRatSLAM, which has a modular structure that is easy to implement in robots [102], and DolphinSLAM, where the properties of RatSLAM are used in underwater environments [103].

NeuroSLAM is a spatial neural-inspired model that can be implemented in 2D and 3D environments. This method uses some of the previously mentioned biological elements, like the Place Cells, HD cells and Grid Cells. The Grid Cells maintain their hexagonal displacement; they represent the position and give the metric information used in the path integration. The HD cells provide information regarding the azimuth and the heading direction of the movement. The networks for this model are made using Multidimensional Continuous Attractor Networks (MD-CAN) which, with time, achieve a stable state when the right network parameters are used [101]. Each unit has a continuous activation value between zero and one. Together with the simulated neurons, this model makes use of Local View Cells, which are able to reset the errors accumulated during the path integration process, based on the rotation data provided by the 3D visual odometry [99].
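To give an intuition for the attractor dynamics used by RatSLAM and NeuroSLAM, the following toy one-dimensional ring network combines local excitation with global inhibition, so that a single activity bump forms and persists. This is a generic continuous-attractor sketch with arbitrary parameters, not the CAN or MD-CAN implementation of either system.

```python
import numpy as np

def can_step(activity, weights, inhibition=0.05):
    """One update of a toy 1D continuous attractor ring: local
    excitation plus global inhibition keeps a single stable bump."""
    activity = activity + activity @ weights - inhibition * activity.sum()
    activity = np.clip(activity, 0.0, None)       # rates cannot be negative
    return activity / (activity.sum() + 1e-12)    # normalize for stability

n = 60
idx = np.arange(n)
# Excitatory weights fall off with distance around the ring.
dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                  n - np.abs(idx[:, None] - idx[None, :]))
weights = 0.2 * np.exp(-(dist ** 2) / (2 * 3.0 ** 2))

activity = np.random.default_rng(1).random(n) * 0.01
activity[20] = 1.0                                 # seed a bump at unit 20
for _ in range(50):
    activity = can_step(activity, weights)
print(activity.argmax())                           # the bump persists near 20
```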

The Neural Engineering Framework (NEF) is a model which can be applied to navigational problems. The NEF allows the simulation of biological neural models applied to cognition and other brain tasks. It can be used to model visual attention, reinforcement learning and other cognitive jobs [104]. This framework allows a better understanding of the brain mechanisms using fast computational resources. The neural activity follows a tuning curve as the firing rule, as described in Eq. (1):

\[ a_i = G_i\left[\alpha_i \, e_i \cdot x + b_i\right] \tag{1} \]

where $G_i$ is the neuron model, $a_i$ is the resulting activity of neuron $i$, $\alpha_i$ is the gain parameter, $e_i$ is the encoding vector, $b_i$ is the background bias and $x$ is the represented stimulus. There are several available neural models, like rate-based sigmoidal neurons, spiking Leaky-Integrate-and-Fire (LIF) neurons and others [104]. The NEF properties can be used in programming libraries like Nengo, where the neural models are created with programming languages like Python or Java, and it allows the visualization of the model behavior using Graphical User Interface tools. In terms of hardware, it is possible to use FPGAs to create physical neural models, and it is also possible to convert SNNs into Artificial Neural Networks (ANN) [105].
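As a concrete illustration, the minimal Nengo script below builds an ensemble of LIF neurons that represents a one-dimensional sinusoidal signal through the tuning-curve rule of Eq. (1). It is a generic example of the library’s core API, not code from the cited works.

```python
import numpy as np
import nengo

model = nengo.Network(label="NEF encoding example")
with model:
    # A 1D stimulus x(t); each neuron encodes it through its own gain,
    # encoder and bias, following the tuning-curve rule in Eq. (1).
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    ens = nengo.Ensemble(n_neurons=100, dimensions=1,
                         neuron_type=nengo.LIF())
    nengo.Connection(stim, ens)
    probe = nengo.Probe(ens, synapse=0.01)  # filtered decoded estimate of x

with nengo.Simulator(model) as sim:
    sim.run(1.0)
print(sim.data[probe][-5:])  # decoded signal near sin(2*pi*t) at t ~ 1 s
```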

SNNs are a special type of network that tries to recreate the neural structure of the brain. They work in a sparse manner and in a continuous-time format. There are several SNN neuron models, like the Hodgkin-Huxley (HH) model, in which the flow of sodium and potassium inside the digital neurons is simulated, or the Izhikevich model, which combines biologically plausible dynamics with the efficiency of the LIF neuron, creating a more efficient digital neuron [106]. The NEF and SNN models do not have a direct connection to the previously mentioned SLAM methods. However, with the right neural configurations, those methods can solve SLAM problems, because they are simulations of the brain’s neural dynamics [107].
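A minimal leaky integrate-and-fire neuron, the building block mentioned above, can be simulated in a few lines; the parameter values below are illustrative.

```python
import numpy as np

def simulate_lif(current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron: the membrane potential
    leaks toward zero, integrates the input current, and emits a spike
    (then resets) whenever it crosses the threshold."""
    v, spikes = 0.0, []
    for i in current:
        v += dt * (-v + i) / tau        # leaky integration of the input
        if v >= v_thresh:
            spikes.append(True)
            v = v_reset                  # reset after each spike
        else:
            spikes.append(False)
    return np.array(spikes)

spikes = simulate_lif(np.full(1000, 1.5))   # 1 s of constant drive
print(spikes.sum(), "spikes in one second")  # roughly 45 Hz with these values
```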


6. Discussion and conclusions

Inside the brain’s Hippocampal structure there are some neurological tools that allow navigation. Their discovery led to some questions, namely: (i) what are the functions of the Hippocampal Formation and (ii) what is the correlation between the Hippocampus and Navigation?

The first studies of the Hippocampus showed that it was essential to the formation of short-term memories, as was observed and reported for a patient who had this neuronal structure removed to solve a problem related to seizures [1]. After the removal, he repeated routine tasks as if it were the first time ever. He could watch the same movie repeatedly, without remembering any scene, actor or detail. A similar thing happened with people and places. So, the Hippocampus has been shown to allow the identification of places and persons, meaning that it can give the context to a newly formed event [4]. Later studies made it clear that specific types of neurons, named Place Cells, with the help of the EC Grid Cells [5], allow the identification of locations based on environmental characteristics [7].

Even with the map, we need to have a sense of our current position, which is given by the EC [72], and to have the ability to find new or previously known areas based on the data patterns, which is the function of the Hippocampus [108]. In the Hippocampus, the senses, including vision and audition, allow the identification of the place, much like the SLAM approach uses spatial landmarks as a reference to determine the location and position of an agent. Like the SLAM methods, the Hippocampal Formation must have structures that can work with incomplete data. One of those structures is the CA3 area, which contains a recurrent network that works as a memory element and can complete some of the missing data [45]. The CA areas and the other Hippocampal elements process the sensory data and complete the pattern that leads to the identification of the current position in an environment.

The Hippocampal Formation is not the only part of the brain that is capable of processing spatial data. Recent works have discovered that the Retrosplenial Cortex allows the processing of spatial landmarks, creating a spatial schema of the environment, with the possibility of calculating future directions [109].

The Retrosplenial Cortex has a connection to some elements of the Hippocampal Formation. Areas 29 and 30 have a higher neural density, which could indicate that the Retrosplenial Cortex processes hippocampal-related information [110]. Its malfunction could lead to disorientation, because of the lack of ability to pair and process spatial data [109, 110].

Also, it is necessary to further analyze how other senses can contribute to the navigation tasks. The olfactory input to the Perirhinal Cortex and the Parahippocampal Cortex can provide information which can change the Place Cell mapping configuration by creating new spatial landmarks [111]. However, the landmark starts to lose strength due to the odor habituation mechanisms that occur inside the olfactory bulb [112]. In addition, the somatosensory system receives information from different types of senses and pathways, passing on or holding the sensory information according to the received stimulus [113].

The set of tools created by mankind to navigate in space has only recently evolved to use non-biological Grid Cells, as defined in discrete global grid systems, which somehow approximate the lattices used in the Entorhinal Cortex. Discrete global grid systems currently support the indexing of assets across the globe, allowing a more adequate partitioning and aggregation of the Earth into logical structures that take into account the heterogeneity of the scales of the associated geospatial data.

With the increase of information regarding the neuronal navigation capabilities, some methods have appeared that bridge neuroscience and SLAM. As previously mentioned, NeuroSLAM [99] and RatSLAM [100] can be considered good approaches to solve SLAM using biologically based methods. However, due to the complexity of the CAN networks, they cannot handle large quantities of noisy data, meaning that the data has to be cleaned, which could increase the response time. Also, the MD-CAN networks reveal some instability when there are changes in the network parameters. Such changes cause a drift in the attractor that leads to the degradation of the results [101].

Also, RatSLAM uses some cells that have no direct correlation with the Hippocampal Formation, like the Pose Cells, which represent the location and orientation of an agent [100]. In addition, NeuroSLAM considers an intermediary agent to facilitate the communication between the system and neuromorphic devices, which is not required when using SNNs [99].

On the other hand, the Triangular Coordinate System [71] can handle certain amounts of noise in the data using simple mathematical operations, without using any special equipment like neuromorphic devices. Yet, this model does not address a codification for 3D environments, and its hexagons have a perfect hexagonal shape, which contrasts with what is observed in some neurobiological studies.

The non-biologically based SLAM approaches previously mentioned can already be found in some house appliances, like autonomous vacuum cleaners. So does it make sense to create a biologically based artificial Hippocampal Formation structure in robots?

The answer to this question seems to be difficult, because SNNs have a different structure, and there are certain aspects that require attention. The first aspect is the training method: it is different from that of ANNs, because SNNs have special characteristics in the spiking process, meaning that the backpropagation mechanisms cannot be directly applied. The second aspect comes from the fact that SNNs need to be simulated in software. According to [114], it is necessary to understand how such networks could work with high-performance computing elements, like neuromorphic devices. On the other hand, an SNN would not need large energy storage units, and the robot could perform difficult tasks in a more efficient way due to the network’s performance capabilities [114].

As mentioned in the SLAM section, one of the steps of the SLAM approach is navigation planning and, according to [115], it seems that the Hippocampal Place Cells and the EC Grid Cells can be used for conceptual learning, due to their capacity to organize information and to predict future states.

From the analysis of the state of the art, it is possible to say that the Hippocampal Formation can have the required elements for the task, because it can create a data representation of the place, based on the available environmental characteristics, allowing the understanding of the current location and of the next available steps.

During this century, neuroscientists and engineers have been trying to close the gap between these fields, with better knowledge that could clarify some of the existing doubts about the functioning of the brain and with new technological approaches. By joining neuroscience with robotics, it may be possible to develop new technologies that are better adapted to people’s necessities and requirements, helping to solve their daily challenges.


Abbreviations

ANN: Artificial Neural Network
CA: Cornu Ammonis
CAN: Competitive Attractor Network
DG: Dentate Gyrus
GPS: Global Positioning System
HD: Head Direction
LEC: Lateral Entorhinal Cortex
LIF: Leaky-Integrate-and-Fire
MEC: Medial Entorhinal Cortex
MD-CAN: Multidimensional Continuous Attractor Network
NEF: Neural Engineering Framework
SLAM: Simultaneous Localization and Mapping
SNN: Spiking Neural Network

References

  1. 1. William Beecher Scoville and Brenda Milner. Loss of recent memory after bilateral hippocampal lesions. Journal of Neurology, Neurosurgery, and Psychiatry. 1957;20(1):11
  2. 2. Zhu Y, Gao H, Tong L, Li ZL, Wang L, Zhang C, et al. Emotion regulation of hippocampus using real-time fmri neurofeedback in healthy human. Frontiers in Human Neuroscience. 2019;13:242
  3. 3. Moser M-B, Rowland DC, Moser EI. Place cells, grid cells, and memory. Cold Spring Harbor Perspectives in Biology. 2015;7(2):a021808
  4. 4. Teyler TJ, Rudy JW. The hippocampal indexing theory and episodic memory: Updating the index. Hippocampus. 2007;17(12):1158-1169
  5. 5. Rowland DC, Roudi Y, Moser M-B, Moser EI, et al. Ten years of grid cells. Annual Review of Neuroscience. 2016;39:19-40
  6. 6. Bush D, Barry C, Manson D, Burgess N. Using grid cells for navigation. Neuron. 2015;87(3):507-520
  7. 7. Go MA, Rogers J, Gava GP, Davey CE, Seigfred Prado Y, Liu, and Simon R Schultz. Place cells in head-fixed mice navigating a floating real-world environment. Frontiers in Cellular Neuroscience. 2021;15:618658
  8. 8. Newcombe NS. Navigation and the developing brain. Journal of Experimental Biology. 2019;222(Suppl_1):jeb186460
  9. 9. Getting IA. Perspective/navigation-the global positioning system. IEEE Spectrum. 1993;30(12):36-38
  10. 10. McCarthy DD, Seidelmann PK. Time: From Earth Rotation to Atomic Physics. Cambridge, United Kingdom: Cambridge University Press; 2018
  11. 11. Durrant-Whyte H, Bailey T. Simultaneous localization and mapping: Part i. IEEE Robotics & Automation Magazine. 2006;13(2):99-110
  12. 12. Sadler DH. Lunar distances and the nautical almanac. Vistas in Astronomy. 1976;20:113-121
  13. 13. Roston GP, Krotkov EP. Dead Reckoning Navigation for Walking Robots. Technical Report. Pittsburgh, United States: Carnegie-Mellon University Pittsburgh PA Robotics Institute; 1991
  14. 14. Richey M. The navigational background to 1492. The Journal of Navigation. 1992;45(2):266-284
  15. 15. Wagner J, Sorg HW. The bohnenberger machine. Gyroscopy and Navigation. 2010;1(1):73-78
  16. 16. MJP Vis. History of the Mercator projection [B.S. thesis]. Heidelberglaan, Utrecht, Netherlands: Utrecht University; 2018
  17. 17. Akram M, Khiyal H, Ahmad M, Abbas S. Decision Tree for Selection Appropriate Location Estimation Technique of GSM Cellular Network. In: International conference on engineering & emerging technology, Lahore, Pakistan. March 2014
  18. 18. Šavrič BJ, Patterson T, Petrovič D, Hurni L. A polynomial equation for the natural earth projection. Cartography and Geographic Information Science. 2011;38(4):363-372
  19. 19. Peterson PR. Discrete global grid systems. In: International Encyclopedia of Geography: People, the Earth, Environment and Technology: People, the Earth, Environment and Technology. 2016. pp. 1-10
  20. 20. Bush I, Riscaware L. Openeaggr Software Design Document. GitHub-Riskaware Ltd. 2017. Available from: https://github.com/riskaware-ltd/open-eaggr [Accessed: 22 June 2019]
  21. 21. Goodchild F Michael, Kimerling A Jon. Discrete Global Grids: A Web Book. 2002.
  22. 22. Mahony R, Hamel T, Trumpf J. An homogeneous space geometry for simultaneous localisation and mapping. Annual Reviews in Control. 2021;51:254-267
  23. 23. Huang B, Zhao J, Liu J. A survey of simultaneous localization and mapping. arXiv preprint arXiv:1909.05214, 2019
  24. 24. Zhou Y, Tuzel O. Voxelnet: End-to-end learning for point cloud based 3d object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Los Alamitos, CA, USA: IEEE Computer Society. 2018. pp. 4490-4499
  25. 25. Khairuddin A, R, Talib MS, Haron H. Review on simultaneous localization and mapping (slam). In: 2015 IEEE International Conference on Control System, Computing and Engineering (ICCSCE). Penang, Malaysia: IEEE; 2015. pp. 85-90
  26. 26. Stasse O, Davison AJ, Sellaouti R, Yokoi K. Real-time 3d slam for humanoid robot considering pattern generator information. In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. Beijing, China: IEEE; 2006. pp. 348-355
  27. 27. Hening S, Ippolito CA, Krishnakumar KS, Stepanyan V, Teodorescu M. 3d lidar slam integration with gps/ins for uavs in urban gps-degraded environments. In: AIAA Information Systems-AIAA Infotech@ Aerospace. 2017. p. 0448
  28. 28. Chiang K-W, Tsai G-J, Li Y-H, Li Y, El-Sheimy N. Navigation engine design for automated driving using ins/gnss/3d lidar-slam and integrity assessment. Remote Sensing. 2020;12(10):1564
  29. 29. Grisetti G, Stachniss C, Burgard W. Improved techniques for grid mapping with rao-blackwellized particle filters. IEEE Transactions on Robotics. 2007;23(1):34-46
  30. 30. Carlone L, Aragues R, Castellanos JA, Bona B. A linear approximation for graph-based simultaneous localization and mapping. In: Robotics: Science and Systems. Vol. 7. Cambridge, Massachusetts, USA: The MIT Press; 2012. pp. 41-48
  31. 31. Zhang J, Singh S. Loam: Lidar odometry and mapping in real-time. In: Robotics: Science and Systems. Vol. 2. Berkeley, CA, Cambridge, Massachusetts, USA: MIT Press; 2014. pp. 1-9
  32. 32. Deschaud J-E. Imls-slam: Scan-to-model matching based on 3d data. In: 2018 IEEE International Conference on Robotics and Automation (ICRA). Brisbane, QLD, Australia: IEEE; 2018. pp. 2480-2485
  33. 33. Bichen W, Wan A, Yue X, Keutzer K. Squeezeseg: Convolutional neural nets with recurrent crf for real-time road-object segmentation from 3d lidar point cloud. In: 2018 IEEE International Conference on Robotics and Automation (ICRA). Brisbane, QLD, Australia: IEEE; 2018. pp. 1887-1893
34. Mur-Artal R, Tardós JD. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Transactions on Robotics. 2017;33(5):1255-1262
35. Rublee E, Rabaud V, Konolige K, Bradski G. ORB: An efficient alternative to SIFT or SURF. In: 2011 International Conference on Computer Vision. Barcelona, Spain: IEEE; 2011. pp. 2564-2571
36. Sumikura S, Shibuya M, Sakurada K. OpenVSLAM: A versatile visual SLAM framework. In: Proceedings of the 27th ACM International Conference on Multimedia. 2019. pp. 2292-2295
37. Qin T, Li P, Shen S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Transactions on Robotics. 2018;34(4):1004-1020
38. Qin T, Shen S. Online temporal calibration for monocular visual-inertial systems. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid, Spain: IEEE; 2018. pp. 3662-3669
39. DeTone D, Malisiewicz T, Rabinovich A. Toward geometric deep SLAM. arXiv preprint arXiv:1707.07410. 2017
40. Dai A, Ritchie D, Bokeloh M, Reed S, Sturm J, Nießner M. ScanComplete: Large-scale scene completion and semantic segmentation for 3D scans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE; 2018. pp. 4578-4587
41. Xiang Y, Fox D. DA-RNN: Semantic mapping with data associated recurrent neural networks. arXiv preprint arXiv:1703.03098. 2017
42. Silveira L, Guth F, Fisher D, Codevilla F, Drews P, Botelho S. Biologically inspired system for localization and mapping in underwater environments. In: 2013 OCEANS-San Diego. San Diego, CA, USA: IEEE; 2013. pp. 1-6
43. Barry C, Lever C, Hayman R, Hartley T, Burton S, O’Keefe J, et al. The boundary vector cell model of place cell firing and spatial memory. Reviews in the Neurosciences. 2006;17(1–2):71-98
44. Weilbächer RA, Gluth S. The interplay of hippocampus and ventromedial prefrontal cortex in memory-based decision making. Brain Sciences. 2016;7(1):4
45. Cherubini E, Miles R. The CA3 region of the hippocampus: How is it? What is it for? How does it do it? Frontiers in Cellular Neuroscience. 2015;9:19. DOI: 10.3389/fncel.2015.00019
46. Umschweif G, Greengard P, Sagi Y. The dentate gyrus in depression. European Journal of Neuroscience. 2021;53(1):39-64
47. Urban NN, Henze DA, Barrionuevo G. Revisiting the role of the hippocampal mossy fiber synapse. Hippocampus. 2001;11(4):408-417
48. Senzai Y. Function of local circuits in the hippocampal dentate gyrus-CA3 system. Neuroscience Research. 2019;140:43-52
49. Molitor RJ, Sherrill KR, Morton NW, Miller AA, Preston AR. Memory reactivation during learning simultaneously promotes dentate gyrus/CA2,3 pattern differentiation and CA1 memory integration. Journal of Neuroscience. 2021;41(4):726-738
50. Kesner RP. A process analysis of the CA3 subregion of the hippocampus. Frontiers in Cellular Neuroscience. 2013;7:78
51. Pata DS, Escuredo A, Lallée S, Verschure PFMJ. Hippocampal based model reveals the distinct roles of dentate gyrus and CA3 during robotic spatial navigation. In: Conference on Biomimetic and Biohybrid Systems. Milan, Italy: Springer; 2014. pp. 273-283
52. Hitti FL, Siegelbaum SA. The hippocampal CA2 region is essential for social memory. Nature. 2014;508(7494):88-92
53. Dudek SM, Alexander GM, Farris S. Rediscovering area CA2: Unique properties and functions. Nature Reviews Neuroscience. 2016;17(2):89-102
54. MacDonald CJ, Tonegawa S. Crucial role for CA2 inputs in the sequential organization of CA1 time cells supporting memory. Proceedings of the National Academy of Sciences. 2021;118(3):e2020698118
55. Deshmukh SS. Distal CA1 maintains a more coherent spatial representation than proximal CA1 when local and global cues conflict. Journal of Neuroscience. 2021;41(47):9767-9781
56. Voneida TJ, Vardaris RM, Fish SE, Reiheld CT. The origin of the hippocampal commissure in the rat. The Anatomical Record. 1981;201(1):91-103
57. Harland B, Contreras M, Souder M, Fellous J-M. Dorsal CA1 hippocampal place cells form a multi-scale representation of megaspace. Current Biology. 2021;31(10):2178-2190
58. Geiller T, Fattahi M, Choi J-S, Royer S. Place cells are more strongly tied to landmarks in deep than in superficial CA1. Nature Communications. 2017;8(1):14531
59. Poulter S, Hartley T, Lever C. The neurobiology of mammalian navigation. Current Biology. 2018;28(17):R1023-R1042
60. Lever C, Burton S, Jeewajee A, O’Keefe J, Burgess N. Boundary vector cells in the subiculum of the hippocampal formation. Journal of Neuroscience. 2009;29(31):9771-9777
61. Deshmukh SS, Knierim JJ. Influence of local objects on hippocampal representations: Landmark vectors and memory. Hippocampus. 2013;23(4):253-267
62. Kitanishi T, Umaba R, Mizuseki K. Robust information routing by dorsal subiculum neurons. Science Advances. 2021;7(11):eabf1913
63. Kazanovich YB, Mysin IE. How animals find their way in space. Experiments and modeling. Mathematical Biology and Bioinformatics. 2018;13(Suppl):132-161
64. Kim M, Maguire EA. Encoding of 3D head direction information in the human brain. Hippocampus. 2019;29(7):619-629
65. Canto CB, Wouterlood FG, Witter MP. What does the anatomical organization of the entorhinal cortex tell us? Neural Plasticity. 2008;2008
66. Tsao A, Sugar J, Li L, Wang C, Knierim JJ, Moser M-B, et al. Integrating time from experience in the lateral entorhinal cortex. Nature. 2018;561(7721):57-62
67. Rolls ET, Mills P. The generation of time in the hippocampal memory system. Cell Reports. 2019;28(7):1649-1658
68. Woodruff AR, Anderson SA, Yuste R. The enigmatic function of chandelier cells. Frontiers in Neuroscience. 2010;4:201
69. Rowland DC, Obenhaus HA, Skytøen ER, Zhang Q, Kentros CG, Moser EI, et al. Functional properties of stellate cells in medial entorhinal cortex layer II. eLife. 2018;7:e36664
70. Jones RSG. Entorhinal-hippocampal connections: A speculative view of their function. Trends in Neurosciences. 1993;16(2):58-64
71. Monteiro J, Pedro A, Silva AJ. A gray code model for the encoding of grid cells in the entorhinal cortex. Neural Computing and Applications. 2022;34(3):2287-2306
72. Tukker JJ, Beed P, Brecht M, Kempter R, Moser EI, Schmitz D. Microcircuits for spatial coding in the medial entorhinal cortex. Physiological Reviews. 2022;102(2):653-688
73. Gay S, Le Run K, Pissaloux E, Romeo K, Lecomte C. Towards a predictive bio-inspired navigation model. Information. 2021;12(3):100
74. Rauschecker JP. Where, when, and how: Are they all sensorimotor? Towards a unified view of the dorsal pathway in vision and audition. Cortex. 2018;98:262-268
75. Jetzschke S, Ernst MO, Froehlich J, Boeddeker N. Finding home: Landmark ambiguity in human navigation. Frontiers in Behavioral Neuroscience. 2017;11:132
76. Zavitz E, Price NSC. Understanding sensory information processing through simultaneous multi-area population recordings. Frontiers in Neural Circuits. 2019;12:115
77. Zhu J, Zhang E, Del Rio-Tsonis K. Eye Anatomy. In: eLS. Hoboken, NJ, USA: Wiley; 2012
78. Bouma H. Size of the static pupil as a function of wave-length and luminosity of the light incident on the human eye. Nature. 1962;193(4816):690-691
79. Twig G, Levy H, Perlman I. Color opponency in horizontal cells of the vertebrate retina. Progress in Retinal and Eye Research. 2003;22(1):31-68
80. Lee T-W, Wachtler T, Sejnowski TJ. Color opponency is an efficient representation of spectral properties in natural scenes. Vision Research. 2002;42(17):2095-2103
81. Sousa NPPA. Neural encoding models in natural vision. Porto, Portugal: Faculdade de Engenharia da Universidade do Porto; 2013
82. Turk-Browne NB. The hippocampus as a visual area organized by space and time: A spatiotemporal similarity hypothesis. Vision Research. 2019;165:123-130
83. Ekstrom AD. Why vision is important to how we navigate. Hippocampus. 2015;25(6):731-735
84. Musiek FE, Baran JA. The Auditory System: Anatomy, Physiology, and Clinical Correlates. San Diego, CA: Plural Publishing; 2018
85. Norman-Haignere SV, Feather J, Boebinger D, Brunner P, Ritaccio A, McDermott JH, et al. A neural population selective for song in human auditory cortex. Current Biology. 2022;32(7):1470-1484
86. Weinberger NM. Auditory associative memory and representational plasticity in the primary auditory cortex. Hearing Research. 2007;229(1–2):54-68
87. McAnally KI, Martin RL. Sound localization with head movement: Implications for 3-D audio displays. Frontiers in Neuroscience. 2014;8:210
88. King AJ, Schnupp JWH, Doubell TP. The shape of ears to come: Dynamic coding of auditory space. Trends in Cognitive Sciences. 2001;5(6):261-270
89. Rolls ET, Treves A. The neuronal encoding of information in the brain. Progress in Neurobiology. 2011;95(3):448-490
90. Godenzini L, Alwis D, Guzulaitis R, Honnuraiah S, Stuart GJ, Palmer LM. Auditory input enhances somatosensory encoding and tactile goal-directed behavior. Nature Communications. 2021;12(1):1-14
91. Olshausen BA, Field DJ. Sparse coding of sensory inputs. Current Opinion in Neurobiology. 2004;14(4):481-487
92. Spanne A, Jörntell H. Questioning the role of sparse coding in the brain. Trends in Neurosciences. 2015;38(7):417-427
93. Bekolay T, Bergstra J, Hunsberger E, DeWolf T, Stewart TC, Rasmussen D, et al. Nengo: A Python tool for building large-scale functional brain models. Frontiers in Neuroinformatics. 2014;7:48
94. Goodman DFM, Brette R. The Brian simulator. Frontiers in Neuroscience. 2009;3:26
95. Giocomo LM, Moser M-B, Moser EI. Computational models of grid cells. Neuron. 2011;71(4):589-603
96. Burgess N. Grid cells and theta as oscillatory interference: Theory and predictions. Hippocampus. 2008;18(12):1157-1174
97. Pilly PK, Grossberg S. Spiking neurons in a hierarchical self-organizing map model can learn to develop spatial and temporal properties of entorhinal grid cells and hippocampal place cells. PLoS One. 2013;8(4):e60599
98. Mhatre H, Gorchetchnikov A, Grossberg S. Grid cell hexagonal patterns formed by fast self-organized learning within entorhinal cortex. Hippocampus. 2012;22(2):320-334
99. Yu F, Shang J, Hu Y, Milford M. NeuroSLAM: A brain-inspired SLAM system for 3D environments. Biological Cybernetics. 2019;113(5):515-545
100. Milford MJ, Wyeth GF, Prasser D. RatSLAM: A hippocampal model for simultaneous localization and mapping. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '04). Vol. 1. IEEE; 2004. pp. 403-408
101. Latham PE, Deneve S, Pouget A. Optimal computation with attractor networks. Journal of Physiology-Paris. 2003;97(4–6):683-694
102. Ball D, Heath S, Wiles J, Wyeth G, Corke P, Milford M. OpenRatSLAM: An open source brain-based SLAM system. Autonomous Robots. 2013;34:149-176
103. Zaffari GB, dos Santos MM, Duarte AC, Fernandes DDA, Botelho SSC. Exploring the DolphinSLAM's parameters. In: OCEANS 2016-Shanghai. Shanghai, China: IEEE; 2016. pp. 1-5
104. Stewart TC. A Technical Overview of the Neural Engineering Framework. University of Waterloo. London, United Kingdom: AISB Quarterly. Vol. 110. 2012
105. DeWolf T, Jaworski P, Eliasmith C. Nengo and low-power AI hardware for robust, embedded neurorobotics. Frontiers in Neurorobotics. 2020;14:568359
106. Yamazaki K, Vo-Ho V-K, Bulsara D, Le N. Spiking neural networks and their applications: A review. Brain Sciences. 2022;12(7):863
107. Galluppi F, Conradt J, Stewart T, Eliasmith C, Horiuchi T, Tapson J, et al. Spiking ratSLAM: Modeling rat hippocampus place, grid and border cells in spiking neural hardware
108. Maurer AP, Nadel L. The continuity of context: A role for the hippocampus. Trends in Cognitive Sciences. 2021;25(3):187-199
109. Mitchell AS, Czajkowski R, Zhang N, Jeffery K, Nelson AJD. Retrosplenial cortex and its role in spatial cognition. Brain and Neuroscience Advances. 2018;2:2398212818757098
110. Vann SD, Aggleton JP, Maguire EA. What does the retrosplenial cortex do? Nature Reviews Neuroscience. 2009;10(11):792-802
111. Fischler-Ruiz W, Clark DG, Joshi NR, Devi-Chou V, Kitch L, Schnitzer M, et al. Olfactory landmarks and path integration converge to form a cognitive spatial map. Neuron. 2021;109(24):4036-4049
112. Chaudhury D, Manella L, Arellanos A, Escanilla O, Cleland TA, Linster C. Olfactory bulb habituation to odor stimuli. Behavioral Neuroscience. 2010;124(4):490
113. ten Donkelaar HJ, Broman J, van Domburg P. The somatosensory system. In: Clinical Neuroanatomy: Brain Circuitry and its Disorders. 2020. pp. 171-255
114. Bing Z, Meschede C, Röhrbein F, Huang K, Knoll AC. A survey of robotics control based on learning-inspired spiking neural networks. Frontiers in Neurorobotics. 2018;12:35
115. Mok RM, Love BC. A non-spatial account of place and grid cells based on clustering models of concept learning. Nature Communications. 2019;10(1):5685
