Open access peer-reviewed chapter - ONLINE FIRST

Recognition of Brain Wave Related to the Episode Memory by Deep Learning Methods

Written By

Takashi Kuremoto, Junko Ishikawa, Shingo Mabu and Dai Mitsushima

Submitted: 29 May 2023 Reviewed: 11 July 2023 Published: 02 August 2023

DOI: 10.5772/intechopen.112531

From the Edited Volume

Research Advances in Data Mining Techniques and Applications [Working Title], edited by Prof. Yves Philippe Rybarczyk. IntechOpen.

Abstract

The hippocampus plays an important role in memory. In this chapter, a study of brain wave recognition using deep learning methods is introduced. The purpose of the study is to match the ripple-like firings of hippocampal activity to episodic memories. Brain spike signals of rats (band-pass filtered at 300 Hz–10 kHz) were recorded, and machine learning methods such as Convolutional Neural Networks (CNN), the Support Vector Machine (SVM), the deep learning model VGG16, and hybrid models combining CNN with SVM and VGG16 with SVM were adopted as classifiers of the brain wave signals. Four kinds of episodic memories, that is, a male rat's contact with a female rat, contact with a male rat, contact with a novel object, and an experience of restraint stress, were detected from the ripple waves of the Multiple-Unit Activities (MUA) of hippocampal CA1 neurons recorded in male rats. The experimental results showed the possibility of matching ripple-like firing patterns of the hippocampus to episodic memories of rats, which suggests that disorders of memory function may be detected by the analysis of brain waves.

Keywords

  • brain wave
  • ripple-like wave
  • episodic memory
  • pattern recognition
  • convolutional neural networks

1. Introduction

Brain activities and functions can be recorded by electroencephalography (EEG). EEG signals have been widely used in clinical examination, psychophysiology, engineering, etc. To realize Brain-Computer Interfaces (BCI), there have been many studies over the last decades that recognize mental tasks from EEG signals with machine learning methods [1, 2, 3, 4, 5, 6, 7, 8, 9]. However, since EEG electrodes are usually placed on the scalp of humans or animals in noninvasive measurement methods, the activities of deep structures in the brain, such as the base of cortical gyri, the hippocampus, the thalamus, and the brain stem, are not recorded accurately in EEG data. To investigate the relationship between the hippocampus and memory function, Ishikawa, Tomokage, and Mitsushima recorded Multiple-Unit Activities (MUA) of CA1 neurons in adult rats that experienced four kinds of episodes (each for 10 minutes): restraint stress, social interaction with a female or male rat, or observation of a novel object [10]. Based on the analysis of the MUA signals, they hypothesized that an early learning process [11] exists in CA1 pyramidal neurons, involving experience-induced super bursts, synaptic plasticity, and ripple-like firings. Sharp waves and ripples, which can be observed at 140–300 Hz in the brain waves of CA1 and CA3 neurons of the hippocampus, are considered to be involved in memory consolidation [12, 13, 14, 15].

In this chapter, machine learning methods such as Support Vector Machines (SVM) [16], Convolutional Neural Networks (CNN) [17], VGG16 (a well-known deep CNN with 16 layers [18, 19]), and hybrid models, namely CNN with SVM [17] and VGG16 with SVM, are introduced to recognize the MUA patterns related to episodic memories.


2. Multiple-Unit Activities (MUA) of CA1 neurons

Signal data of Multiple-Unit Activities (MUA) of CA1 neurons in the hippocampus were recorded by vertically movable recording electrodes (Unique Medical Co., Ltd., Japan), which were implanted in adult Sprague-Dawley rats (CLEA Japan Inc., Japan) aged 15 to 25 weeks, at Yamaguchi University, Japan [10]. Neural signals were measured with a shielded cable and amplifiers. A 300–10,000 Hz band-pass filter was applied to obtain the original MUA signals, which were then digitized at a sampling rate of 25 kHz by a CED 1401 interface (Cambridge Electronic Design, U.K.). The environment of the rat and the electrode used for MUA recording are shown in Figure 1.

Figure 1.

In vivo recording of MUA [10]. (a) An adult male rat implanted with electrodes. (b) Electrode used for recording MUA.
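As an illustration of this preprocessing stage, the 300–10,000 Hz band-pass step can be sketched with SciPy. The filter order, the zero-phase filtering choice, and the synthetic input below are assumptions for illustration, not the actual hardware filter of the recording system.

```python
# Sketch of the 300-10,000 Hz band-pass filtering of a raw trace sampled
# at 25 kHz (the rates given in the text); synthetic noise stands in for data.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 25_000                    # sampling rate (Hz)
LOW, HIGH = 300.0, 10_000.0    # MUA band (Hz)

def bandpass_mua(raw, fs=FS, low=LOW, high=HIGH, order=4):
    """Zero-phase band-pass filter for a 1-D voltage trace."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, raw)

rng = np.random.default_rng(0)
raw = rng.standard_normal(FS)  # one second of dummy raw signal
mua = bandpass_mua(raw)
```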

The events for the episodic memories of the male rats performed in the experiment were restraint stress, social interaction with a female or male rat, or observation of a novel object, which was a yellow plastic brick (a LEGO brick) [10]. The schedule of MUA recording, including preparation (15 minutes), experience (10 minutes), and consolidation (30 minutes), is shown in Figure 2.

Figure 2.

Schedule of MUA recording and data used in classification experiment [16].

Four kinds of original MUA signals related to the different experiences are shown in Figure 3. The duration of each signal is 1 second, that is, 25,000 time series data points. Note that the scale of the vertical axis (amplitude) differs among these signals.

Figure 3.

Four kinds of patterns of MUA signals related to different experiences (horizontal axis: time (1/25,000 sec), vertical axis: voltage). (a) Restraint stress (b) With a female rat (c) With a male rat (d) With an object.

An example of a no-ripple MUA signal is depicted in Figure 4. It was also adopted as a kind of input data in our final classification experiment.

Figure 4.

An example of a no-ripple MUA signal (horizontal axis: time (1/25,000 sec), vertical axis: voltage).


3. Machine learning methods

To analyze brain waves, various statistical methods and machine learning approaches have been proposed since the 1950s [1, 4, 5, 6, 7, 8, 9]. To classify different EEG signals, methods such as Principal Component Analysis (PCA), Artificial Neural Networks (ANN), Support Vector Machines (SVM), and Deep Learning (DL) have been widely utilized. To recognize ripple-like waves in the MUA of the hippocampus related to episodic memories, we confirmed the performance of SVM [16], Convolutional Neural Networks (CNN) [17], VGG16 [18, 19], and hybrid models such as CNN with SVM [17] and VGG16 with SVM in experiments using the MUA data described in the previous section.

3.1 SVM

Support Vector Machines (SVM) [16] have shown higher performance on pattern recognition problems than other classifiers such as the Multi-Layered Perceptron (MLP), a kind of feedforward neural network that led to the second Artificial Intelligence (AI) boom in the mid-1980s. For a two-class classification problem, the decision function of a linear SVM is as follows.

f(x) = sign(Σ_i v_i (x · x_i) + b)   (1)

where x is the unknown input data (e.g., a high-dimensional vector) and x_i is a so-called support vector, chosen from the training data (teacher signals), which determines the separating hyperplane with the parameters v_i and b. The parameters in (1) are obtained from the training data by Quadratic Programming (QP).

A nonlinear SVM maps the training data into a higher-dimensional feature space with a nonlinear kernel function k(x, y), then classifies the data by a separating hyperplane with maximum margin in the feature space.

f(x) = sign(Σ_i v_i k(x, x_i) + b)   (2)

k(x, x_i) = exp(−‖x − x_i‖² / (2σ²))   (3)

For a multi-class problem, a one-vs-the-rest SVM algorithm needs to be adopted; that is, for an m-class problem, m SVMs are built first, and then the class of an unknown input is decided by voting among these SVMs.

In this pattern recognition problem, the four kinds of MUA signals were classified by SVM, and 50.0% accuracy was obtained when 3,600 training data (900 for each kind of MUA signal) were utilized, each with 25,000 dimensions (a one-second signal at a sampling rate of 25.0 kHz).
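The one-vs-the-rest scheme with an RBF kernel, as described above, can be sketched with scikit-learn. The synthetic Gaussian data below merely stand in for the 25,000-dimensional MUA vectors; the class count of four matches the experiment.

```python
# One-vs-the-rest RBF-kernel SVM on synthetic stand-ins for the four
# MUA classes: m binary SVMs are trained, one per class, and vote.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class, dim, n_classes = 50, 100, 4

# Each "class" is Gaussian noise around a different mean vector.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

clf = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale"))  # m binary SVMs
clf.fit(X, y)
acc = clf.score(X, y)   # training accuracy on the separable toy data
```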

3.2 CNNs

Convolutional Neural Networks (CNN) are among the most widely adopted deep learning methods. For the MUA pattern recognition in this study, two shallow CNN models, CNN-1 and CNN-2, were used, as shown in Figure 5 [17] and Figure 6. CNN-2 was structured with more convolutional and pooling layers to investigate the change in recognition accuracy. The recognition experiment using the four kinds of MUA signals shown in Figure 3 showed that the deeper model CNN-2 had higher performance than CNN-1. The identification rates (validation accuracies) of CNN-1 and CNN-2 were 65.78 and 86.09%, respectively, when 1,328 MUA signals were utilized to train the CNNs; in detail, the numbers of samples for the different experiences were 298 for restraint stress, 306 for with a female rat, 330 for with a male rat, and 270 for with a novel object. Therefore, it was necessary to investigate the performance of even deeper CNN models for this pattern recognition problem, and we tested VGG16 [18, 19, 20] and ResNet50 [21].

Figure 5.

CNN-1: a shallow neural network for MUA pattern recognition [17].

Figure 6.

CNN-2: an improved neural network for MUA pattern recognition.
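A shallow 1-D CNN of the kind described above can be sketched in PyTorch. The layer counts and kernel sizes below are placeholders, not the actual CNN-1/CNN-2 configurations (those are given in Figures 5 and 6); the sketch only shows how a 25,000-point signal passes through a convolution/pooling stack into class logits.

```python
# Hypothetical shallow 1-D CNN for 1-second, 25,000-sample MUA signals.
import torch
import torch.nn as nn

class ShallowMUACNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=16, stride=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(n_classes),   # infers the flattened feature size
        )

    def forward(self, x):               # x: (batch, 1, 25000)
        return self.classifier(self.features(x))

model = ShallowMUACNN()
logits = model(torch.randn(2, 1, 25_000))   # two dummy 1-second signals
```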

3.3 VGG and ResNet

The deep learning model VGG16 [18, 19, 20] is well-known for its high performance in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, an image pattern recognition contest held since 2010. VGG16 was trained on 14,000,000 images in 1,000 classes, and its error rate was 7.3%; for comparison, the champion's score of ILSVRC 2012 was 15.3%. The structure of VGG16 is shown in Figure 7 [20]. Convolutional layers, max-pooling layers, and fully connected layers with ReLU units are composed repeatedly in VGG16.

Figure 7.

The structure of VGG16 [20].

A more complex deep learning model, the Residual Network (ResNet), won ILSVRC 2015 with an error rate of 3.57% (top-5 validation error). A comparison of the structures of a conventional deep learning model, VGG19, and a ResNet with 34 layers is shown in Figure 8 [21].

Figure 8.

The structures of VGG19 and ResNet34 [21].

ResNet solved the degradation problem, in which deeper networks result in higher training error, by introducing a deep residual learning framework; that is, “shortcut connections” were added to conventional plain networks.

In our recognition experiment, the five kinds of MUA signals shown in Figures 3 and 4 were preprocessed by the Short-Time Fourier Transform (STFT), which resulted in higher recognition rates than inputting the original values of the signals to the classifiers. The window size of the STFT was 1,024 samples, and the shift of the windows was 256 samples for a signal with 25,000 data points. A sample of the preprocessed data is shown in Figure 9. In fact, we compared the recognition rates of a VGG16 model without fine-tuning on the original MUA signals and on the STFT data, and they were 70.92 and 74.04% (validation accuracy), respectively.

Figure 9.

Power spectrum data of an MUA signal.
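The STFT preprocessing can be sketched with SciPy. Interpreting the text's 256-sample shift as the hop size, the overlap passed to `scipy.signal.stft` is 1,024 − 256 = 768 samples; the exact windowing convention of the original experiment is an assumption here.

```python
# STFT power spectrogram of a 1-second, 25 kHz dummy signal, using a
# 1,024-sample window and a 256-sample shift (noverlap = 1024 - 256).
import numpy as np
from scipy.signal import stft

FS = 25_000
rng = np.random.default_rng(1)
signal = rng.standard_normal(FS)        # dummy 1-second MUA trace

f, t, Zxx = stft(signal, fs=FS, nperseg=1024, noverlap=1024 - 256)
power = np.abs(Zxx) ** 2                # power spectrogram fed to the CNNs
```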

The learning performance of CNN-2, VGG16, and ResNet50 using the power spectrum data is shown in Figure 10. The validation accuracies of these classifiers were 88.95, 93.57, and 62.47%, respectively. This suggests that neither too shallow nor too deep structures of deep learning models were suitable for the 5-class recognition problem of MUA signals, and VGG16 was the best of them. Note that the horizontal axes in Figure 10 show the number of training iterations: 1,000 in the case of CNN-2 and 200 in the cases of VGG16 and ResNet50. The reason for this difference is that training can be stopped once the training accuracy has converged.

Figure 10.

Learning curves of different deep learning models. (a) CNN-2 (b) VGG16 (c) ResNet50.

The training accuracies and training losses of CNN-2, VGG16, and ResNet50 all converged; however, the validation accuracy of ResNet50 oscillated intensely. A possible reason is that the model scale was too large for the problem, or in other words, the number of training samples and classes of MUA signals was too small to optimize such a large ResNet50 robustly.

3.4 Hybrid models

Generally, deep convolutional neural networks have fully connected layers, which form a classical Multi-Layered Perceptron (MLP). As the Support Vector Machine (SVM) has shown its superiority over the MLP for classification problems, we proposed a kind of hybrid model in our previous works [9, 17] by adopting an SVM in the deep learning models instead of the MLP. The hybrid models are composed of deep convolutional neural networks such as CNN-1, CNN-2, and VGG16, which are optimized by transfer learning (fine-tuning) with the training data, with the output layers (the fully connected layers and SoftMax output function) replaced by SVMs. Using the 5-class data described in Section 3.3, we investigated the recognition rates of the conventional deep convolutional neural networks CNN-2 and VGG16 and the hybrid models CNN-2 with SVM and VGG16 with SVM by 5-fold cross-validation experiments. As shown in Figure 11, the two hybrid models showed their superiority, and the validation accuracy of VGG16 with SVM reached 95.60% for the 5-class problem of MUA signals.

Figure 11.

Comparison of validation accuracies of different deep learning models.
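The hybrid idea can be sketched as follows: a convolutional stack serves as a feature extractor, and an SVM replaces the fully connected head. The tiny untrained network and random data below are placeholders standing in for the fine-tuned CNN-2/VGG16 and the real MUA spectrograms.

```python
# Hybrid model sketch: deep features from a conv stack -> SVM classifier.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

torch.manual_seed(0)
features = nn.Sequential(               # stand-in for a fine-tuned conv stack
    nn.Conv1d(1, 8, kernel_size=32, stride=8), nn.ReLU(),
    nn.AdaptiveAvgPool1d(16), nn.Flatten(),
)

X = torch.randn(40, 1, 2_000)           # 40 dummy signals
y = np.repeat(np.arange(5), 8)          # 5 classes, as in the 5-class task

with torch.no_grad():                   # extract deep features (no training)
    feats = features(X).numpy()

svm = SVC(kernel="rbf", gamma="scale").fit(feats, y)   # SVM head
preds = svm.predict(feats)
```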

The classification results of the validation data were plotted by the t-SNE method [22], as shown in Figure 12. It can be confirmed that VGG16 with SVM, which had the highest recognition rate as shown in Figure 11, separated the five kinds of MUA signals better than CNN-2 with SVM.

Figure 12.

Visual comparison of the labeled validation data given by CNN-2 with SVM and VGG16 with SVM according to the t-SNE method [22]. The letters r, f, m, o, and n denote MUA signals of restraint stress, with a female rat, with a male rat, with an object, and no ripples, respectively. (a) CNN-2 with SVM (b) VGG16 with SVM.
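A t-SNE projection of the kind used for Figure 12 can be produced with scikit-learn; the random vectors below stand in for the classifiers' feature representations of the validation data.

```python
# 2-D t-SNE embedding of dummy 64-dimensional feature vectors.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 64))      # 100 dummy feature vectors

emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(feats)
# Each row of `emb` is a 2-D point for the scatter plot.
```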


4. Ripple-like waves related to episodic memories

In the last section, the MUA signals of rats' hippocampal CA1 neurons corresponding to different episodic memories were classified by several kinds of deep learning methods. It is interesting to discover the differences between these signals, especially the patterns of the different ripple-like waves among them. The features of input data extracted by the convolutional and pooling layers of deep learning models can be expressed by a visual explanation method: Gradient-weighted Class Activation Mapping (Grad-CAM) [23].

In Figure 13, an original MUA signal (top) and its features (bottom) in a thermography expression given by Grad-CAM are shown. High temperatures mean high contributions of the features. In this case, 41 features of the convolutional layer of CNN-1 were evaluated for the original input (1 × 25,000); that is, 5,000 data points of the signal corresponded to one feature, and the convolution window slid by 500 points of the time series. The most important feature for classification by CNN-1 was No. 24, in red, corresponding to the data from 11,500 to 16,500, as shown in the top of Figure 13. This interval of the MUA signal suggests a kind of ripple-like wave pattern related to episodic memory.

Figure 13.

The relationship between an original MUA signal and its 41 features given by Grad-CAM [23].
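The core of Grad-CAM can be sketched compactly for a 1-D CNN in PyTorch: the gradients of the chosen class score are averaged per channel and used to weight the last convolutional activations, giving a per-position importance map over the signal. The tiny network below is a placeholder, not the authors' CNN-1.

```python
# Minimal Grad-CAM for a 1-D CNN: channel-wise gradient means weight the
# final conv activations; ReLU keeps only positively contributing regions.
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Sequential(nn.Conv1d(1, 8, 64, stride=8), nn.ReLU())
head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 4))

x = torch.randn(1, 1, 25_000)               # dummy 1-second signal
acts = conv(x)                              # (1, 8, L) last-conv activations
acts.retain_grad()                          # keep gradients w.r.t. acts
logits = head(acts)
logits[0, logits[0].argmax()].backward()    # gradient of the top class score

weights = acts.grad.mean(dim=2, keepdim=True)            # channel importance
cam = torch.relu((weights * acts).sum(dim=1)).squeeze(0).detach()
# cam[i] scores how much position i of the feature map drove the decision.
```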

In Ref. [17], the MUA signals related to a rat's experiences of restraint stress, with a female rat, with a male rat, and with an object, together with no-ripple signals, were analyzed by CNN-1 and Grad-CAM. Different from the case in Figure 13, in Figure 14 [17], 2,500 features were expressed with a slide number of 10, while the length of the MUA signal was 25,000 (1 sec at 25 kHz). The durations of the episodic-memory-related signals were extracted by choosing the highest temperatures in the heatmaps, as shown in Figure 15.

Figure 14.

MUA signals (top) and their heatmaps (bottom) of the event-related features in CNN-1 [17] (horizontal axis in the blue heatmap: feature number). (a) Restraint stress (b) With a male rat (c) With a female rat (d) With an object.

Figure 15.

Ripple-like waves related to the different episodic memories (scale of horizontal axis: 1/25,000 sec; vertical axis: voltage) [17]. (a) Restraint stress (b) With a male rat (c) With a female rat (d) With an object.

To examine the frequency distributions of the ripple-like waves, Fourier transforms and Cepstrum transforms were performed on the signal intervals shown in Figure 15. The highest spectra of these intervals point to frequencies of 100–400 Hz, as shown in Figure 16.

Figure 16.

Fourier transform result of the specified intervals of MUA extracted by Grad-CAM (Scale of horizontal axis: Hz) [17]. (a) Restraint stress (b) With a male rat (c) With a female rat (d) With an object.

The Cepstrum transform, another signal analysis method, estimates periodic structures in frequency spectra by taking the inverse Fourier transform of the logarithm of the Fourier spectrum. The Cepstrum transform results of the four kinds of MUA signals are shown in Figure 17. Compared to the Fourier transform results, it is easier to see that the patterns of restraint stress in Figure 17(a) and with a male rat in Figure 17(b) are similar to each other; meanwhile, the ripple-like waves of with a female rat in Figure 17(c) and with an object in Figure 17(d) are similar.

Figure 17.

Cepstrum transform results of the important intervals of MUA signals extracted by Grad-CAM (Scale of horizontal axis: msec) [17]. (a) Restraint stress (b) With a male rat (c) With a female rat (d) With a novel object.
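The real-cepstrum computation described above (inverse Fourier transform of the log magnitude spectrum) can be sketched in a few lines of NumPy; the 200 Hz sine below is a dummy input, not an actual ripple-like wave.

```python
# Real cepstrum: IFFT of the log magnitude spectrum of the signal.
import numpy as np

def real_cepstrum(x):
    spectrum = np.fft.rfft(x)
    log_mag = np.log(np.abs(spectrum) + 1e-12)   # small offset avoids log(0)
    return np.fft.irfft(log_mag, n=len(x))

fs = 25_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 200 * t)          # dummy 200 Hz oscillation
cep = real_cepstrum(x)                   # quefrency-domain representation
```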

To verify the data distributions of the Fourier transform results and the Cepstrum transform results, Principal Component Analysis (PCA) was adopted, and the 2-dimensional results of PCA are shown in Figure 18. The similarities among the four MUA signals are easier to confirm in the case of the Cepstrum transform.

Figure 18.

Data distribution of different transforms of ripple-like waves by PCA [17]. (a) The case of Fourier transform. (b) The case of Cepstrum transform.
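The 2-dimensional PCA projection used for Figure 18 can be sketched with scikit-learn; the random matrix below stands in for the stacked transform results.

```python
# Project dummy "spectra" onto the first two principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
spectra = rng.standard_normal((40, 512))     # 40 dummy transform results

pca = PCA(n_components=2)
coords = pca.fit_transform(spectra)          # 2-D points for the scatter plot
```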


5. Conclusions

It is well-known that Sharp Waves and Ripples (SWR) in brain waves are related to memory consolidation. In this chapter, to classify the Multiple-Unit Activities (MUA) of hippocampal CA1 neurons, machine learning methods such as the Support Vector Machine (SVM) and Convolutional Neural Networks (CNN), deep learning models such as VGG16, and the hybrid models CNN with SVM and VGG16 with SVM were introduced. The MUA data of rats were recorded by movable electrodes implanted above the hippocampal CA1 and fixed with dental cement. Four kinds of events were experienced by a male adult rat: restraint stress, with an intruder male rat, with a female rat, and with a novel object (a yellow LEGO® brick). The highest recognition rate (validation accuracy) for the four kinds of ripple-like waves and a no-ripple signal, 95.6%, was given by the VGG16 with SVM method. The durations of the ripple-like waves in the original MUA signals were extracted by Grad-CAM, and spectral analyses of the relationships between these episodic-memory-related signals were performed by the Fourier transform, the Cepstrum transform, and Principal Component Analysis (PCA).

Although the MUA data used in this experiment were recorded from male rats, the pattern of ripple firing associated with a particular episodic experience may be common to various animal species, including humans [24]. If commonality exists, it may be possible to identify the neural circuits that generate those patterns and extract experiential information from different animal species. These questions need to be answered in order to decipher memory information, and their resolution has the potential to open up a new era in brain sciences.


Acknowledgments

We would like to thank Mr. Takaaki Sasaki and Mr. Yuichi Kobayashi for their hard work on the recognition experiments of this study.

The study was supported by JSPS KAKENHI Grants No. 19H03402, No. 20K07724, No. 22K12152, and No. 22H03709.


Conflict of interest

The authors declare no conflict of interest.

References

  1. Blankertz B, et al. The BCI Competition 2003: Progress and perspectives in detection and discrimination of EEG single trials. IEEE Transactions on Biomedical Engineering. 2004;51(6):1044-1051
  2. Colorado State University, Brain-Computer Interfaces Laboratory. Available from: http://www.cs.colostate.edu/eeg/
  3. BCI Competition II. Available from: http://www.bbci.de/competition/ii/#datasets
  4. Chin ZY, Ang KK, Wang C, Guan C, Zhang H. Multi-class filter bank common spatial pattern for four-class motor imagery BCI. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). Minneapolis, MN, USA; 2009. pp. 571-574
  5. Tang Z, Li C, Sun S. Single-trial EEG classification of motor imagery using deep convolutional neural networks. Optik. 2017;130:11-18
  6. Schirrmeister RT, et al. Deep learning with convolutional neural networks for EEG decoding and visualization. arXiv:1703.05051v5 [cs.LG]. 2018
  7. Kuremoto T, Baba Y, Obayashi M, Mabu S, Kobayashi K. Enhancing EEG signals recognition using ROC curve. Journal of Robotics, Networking and Artificial Life. 2018;4(4):283-286
  8. Kuremoto T, Baba Y, Obayashi M, Mabu S, Kobayashi K. Mental task recognition by EEG signals: A novel approach with ROC analysis. In: Anbarjafari G, Escalera S, editors. Human-Robot Interaction – Theory and Application. Chapter 4. InTech; 2018. pp. 65-78
  9. Kuremoto T, Sasaki T, Mabu S. Mental task recognition using EEG signal and deep learning methods. Stress Brain and Behavior. 2019;1:18-23
  10. Ishikawa J, Tomokage T, Mitsushima D. A possible coding for experience: Ripple-like events and synaptic diversity. bioRxiv. 2019:1-45. DOI: 10.1101/2019.12.30.891259
  11. Mitsushima D, Sano A, Takahashi T. A cholinergic trigger drives learning-induced plasticity at hippocampal synapses. Nature Communications. 2013;4:2760. DOI: 10.1038/ncomms3760
  12. Buzsáki G. Hippocampal sharp wave-ripple: A cognitive biomarker for episodic memory and planning. Hippocampus. 2015;25:1073-1188
  13. Joo HR, Frank LM. The hippocampal sharp wave-ripple in memory retrieval for immediate use and consolidation. Nature Reviews Neuroscience. 2018;19:744-757
  14. Kay K, Frank LM. Three brain states in the hippocampus and cortex. Hippocampus. 2019;29:184-238
  15. Fernández-Ruiz A, et al. Long-duration hippocampal sharp wave ripples improve memory. Science. 2019;364:1082-1086
  16. Hearst MA, et al. Support vector machines. IEEE Intelligent Systems. July/August 1998
  17. Kuremoto T, Ishikawa J, Sasaki T, Mabu S, Mitsushima D. Matching the ripple-wave to the episodic memory – a case study of rat. Stress Brain and Behavior. 2023;1(e02204):19-30
  18. Russakovsky O, Deng J, Su H, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision. 2015;115(3):211-252
  19. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556. 2014
  20. New Technology Life Style. Available from: https://newtechnologylifestyle.net/vgg16originalpicture/. 2019
  21. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. arXiv:1512.03385v1. 2015
  22. van der Maaten L, Hinton G. Visualizing data using t-SNE. Journal of Machine Learning Research. 2008;9:2579-2606
  23. Selvaraju RR, et al. Grad-CAM: Visual explanations from deep networks via gradient-based localization. arXiv:1610.02391v4 [cs.CV]. 2019
  24. Buzsáki G, Logothetis N, Singer W. Scaling brain size, keeping timing: Evolutionary preservation of brain rhythms. Neuron. 2013;80(3):751-764
