Machine Learning and EEG for Emotional State Estimation

Written By

Krzysztof Kotowski and Katarzyna Stapor

Submitted: 30 November 2020 Reviewed: 10 March 2021 Published: 22 April 2021

DOI: 10.5772/intechopen.97133

From the Edited Volume

The Science of Emotional Intelligence

Edited by Simon George Taukeni


Abstract

Defining “emotion” and measuring it accurately is a notorious problem in psychology. It is usually addressed with subjective self-assessment forms filled in manually by participants. Machine learning methods and EEG correlates of emotions make it possible to construct automatic systems for objective emotion recognition. Such systems could help to assess emotional states and could be used to improve emotional perception. In this chapter, we present a computer system that can automatically recognize the emotional state of a human based on EEG signals induced by a standardized affective picture database. Trained deep neural networks, together with mappings between emotion models, are then used to predict the emotions perceived by the participant from the EEG signal. This, in turn, can be used, for example, to validate the standardization of affective picture databases.

Keywords

  • EEG
  • emotion recognition
  • emotion perception
  • machine learning
  • deep neural networks

1. Introduction

In psychological research, the most common method of measuring perceived emotions or emotional states is through self-assessment forms filled in manually by participants. The information they give is useful but highly subjective and dependent on many extraneous factors, e.g., the construction of the form, the instructions, and the level of emotional intelligence of the participant. Moreover, the forms cannot be used when working with children or mentally disabled people. Physiological signals can give a more objective view of the emotional reactions of the body. Among measurement techniques such as galvanic skin response (GSR), facial electromyography (EMG), electrocardiography (ECG), breathing rate, or temperature, electroencephalography (EEG) is one of the most common in emotion recognition applications. It is non-invasive and offers high-resolution, high-dimensional data about the source of the emotions itself: the brain activity. In EEG, highly conductive electrodes placed on the scalp record the electrical potentials induced by the activity of the brain.

The correlation between emotional state and EEG is widely used in cognitive psychology, psychophysiology, and medicine [1] for the examination of mental disorders like depression [2], autism spectrum disorder (ASD) [3], attention-deficit hyperactivity disorder (ADHD) [4], or schizophrenia [5]. From a psychological point of view, EEG gives insights into the mechanisms by which emotions arise. Emotion recognition systems, like the one presented in this chapter, can be used to assess the emotional perception of humans. However, the analysis of complex and high-dimensional EEG patterns and correlations would be virtually impossible without computers and computational methods like machine learning. Emotion recognition algorithms are part of a branch of computer science called affective computing [6]. It also belongs to the field of artificial intelligence, as it relates to the understanding and display of emotions by machines. Automatic emotion recognition systems based on EEG have already shown outstanding accuracy in many different applications [7] and on well-established benchmarks like DEAP (database for emotion analysis using physiological signals) [8]. Machine learning algorithms are used in the vast majority of these systems and are considered state-of-the-art in the domain. Among them, deep neural networks are the most promising emerging approach, as they do not require additional feature extraction steps [9].

The chapter presents the idea and design of a system for validation of affective picture databases by confronting their normative ratings with predictions of EEG-based deep neural networks. The following sections form a step-by-step guide for creating such a system. In Section 2, different psychological models of emotion are described, the problem of mapping between emotion models is introduced, and our new mapping is proposed. In Section 3, instructions for designing a complete EEG experiment for machine learning emotion recognition are given, together with a list of affective picture sets and state-of-the-art algorithms. In Section 4, the system for validation of affective databases is presented. The chapter ends with a summary and future work section.

2. Psychological models of emotion

Recognition of emotions must start from the definition of the model in which they are measured. This is the main dividing line in the field of emotion analysis [10]. The theory of emotions is still an open topic despite the abundance of publications and research. The reason is that human emotions are mental states generated by the central nervous system [11], and as such, they are hard to assess, nondeterministic, and subjective phenomena. Individuals with different levels of emotional intelligence may not be able to assess their emotional state accurately [ref]. Moreover, similar stimuli may induce very different states in two similar people, and the same person may respond differently to seemingly similar stimuli. Age, time of day, mood, experience, and fatigue may all affect the perception of emotions.

However, there is some evidence for neural circuits that are responsible for particular basic emotional events [3], so some assumptions and simplifications were made to extract several different emotion models. In general, they divide into discrete (or categorical) and dimensional (or continuous) models.

The discrete emotion models describe different numbers of independent emotion categories. One of the most popular models, by Paul Ekman, describes six universal basic emotions: anger, disgust, fear, happiness, sadness, and surprise [12]. The model is derived from the observation of universal facial expressions. The paper describing the model has been cited and discussed by thousands of researchers, but the existence of basic emotions is still an unsettled issue in psychology and is rejected by many researchers [13, 14, 15]. Another model, by Plutchik, describes 8 primary bipolar emotions: joy and sadness; anger and fear; surprise and anticipation; and trust and disgust [16]. However, unlike Ekman’s model, Plutchik’s wheel of emotions relates these pairs in a circumplex arrangement. Recently, a model consisting of as many as 27 categories bridged by continuous gradients was proposed [17].

The continuous models are usually represented in a numerical dimensional space. The most popular dimensions were defined by Mehrabian and Russell in [18] as pleasure, arousal, and dominance (the PAD model). The first dimension, frequently called valence in the literature, describes how pleasant (or unpleasant) the stimulus is for the participant. The arousal dimension defines the intensity of the emotion. Dominance is described as a level of control and influence over one’s surroundings and others [19]. Usually, less attention is paid to this third dimension in the literature [20]. However, only the dominance dimension makes it possible to distinguish angry from anxious, alert from surprised, or relaxed from protected [19]. The model that includes only the valence and arousal levels is called the circumplex model of affect [21] and is one of the most commonly used to describe emotions elicited with stimuli. Currently, this model faces some criticism, because complex emotions in particular are hard to define within only these two general dimensions [22, 23]. The effort to present scientific results in a simple and structured form may lead to a critical reduction of the phenomena. The newest research findings on the global meaning structure of the emotion domain point out that more than two dimensions are needed to describe the nature of human emotional experience sufficiently [23, 24].

2.1 Mappings between models

Discrete and dimensional models are not defined as contradictory. Instead, they both can give unique value that can assist in understanding the functions of emotions [25]. There are multiple works on mappings between different discrete and continuous emotion models [22, 26, 27]. They are usually based on self-assessment questionnaires in which a group of participants assesses the discrete emotions (induced or represented by words, images, videos, short stories, or facial expressions) in a few continuous dimensions of the circumplex, PAD, or similar models (e.g., Valence-Arousal-Control-Utility [22], Valence-Arousal-Approach/Avoidance [28]). Formerly, the questionnaires were based on Self-Assessment Manikins (SAMs) [29] or several-point (usually 5, 7, or 9 points) Likert scales (as in the IAPS [30] or OASIS [31] datasets). The new trend is to use more fine-grained continuous scales, such as selecting a point on a 10 cm line [22] or the Affective Slider [32].

Two popular mappings based on emotion words are presented in Figure 1. The three-dimensional visualizations are adapted from [22]. The emotion words are placed at the positions representing their average PAD assessments by 300 [19] and 70 [27] subjects, respectively. The length of the dashed lines is proportional to the pleasure/valence value. In both models, the most pleasure-inducing words are Love, Happiness, Hope, and Gratefulness. On the other end, we have the highly arousing Anger and Fear, which can be differentiated only by the dominance dimension. The least arousing and least pleasant word is Sadness, which is also low in dominance. The main difference between the mappings is the location of Hate, which is relatively less arousing in the Hoffmann mapping. Some of the emotion words, like Contempt, Disgust, or Compassion, have equivalents only in the model presented in Figure 2.

Figure 1.

Average locations of 12 emotion words in PAD dimensions according to Russell and Mehrabian lexicon [19] (on the left) and Hoffmann et al. [27] (on the right).

Figure 2.

Average locations of 16 emotion words in Valence-Arousal-Control dimensions before (on the left) and after multidimensional scaling (on the right) as calculated in [22].

Figure 2 is based on data from [22]. It presents the average assessment of 16 common emotion words by 187 subjects in the Valence-Arousal-Control dimensions (the Utility dimension was also assessed but is omitted in the figure) before and after multidimensional scaling (MDS) into 3 dimensions, which results in a far more faithful Euclidean space between emotion instances. As can be observed, the locations of the emotion words after MDS are much more scattered across the space, but they keep some basic relationships, e.g., Love, Happiness, Gratefulness, and Compassion still have larger values in Dimension 1 (similar to Valence), and pairs of similar emotions like Sadness and Disappointment, or Happiness and Love, are still relatively close to each other. Thus, this MDS mapping may be a good basis for machine learning algorithms based on dimensional proximities.
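To make the MDS step concrete, the following minimal Python sketch (scikit-learn) projects averaged emotion-word ratings into three dimensions. The word list and rating values are illustrative placeholders, not the data from [22].

```python
# Minimal sketch: projecting averaged emotion-word ratings into 3 dimensions
# with multidimensional scaling (MDS). The ratings below are illustrative
# placeholders, not the actual values reported in [22].
import numpy as np
from sklearn.manifold import MDS

emotion_words = ["Love", "Happiness", "Gratefulness", "Compassion",
                 "Sadness", "Disappointment", "Anger", "Fear"]

# Hypothetical mean ratings in Valence-Arousal-Control-Utility space,
# normalized to the [-1, 1] range.
ratings = np.array([
    [0.9, 0.4, 0.5, 0.8],      # Love
    [0.9, 0.3, 0.6, 0.8],      # Happiness
    [0.8, 0.2, 0.4, 0.7],      # Gratefulness
    [0.5, 0.1, 0.2, 0.5],      # Compassion
    [-0.7, -0.3, -0.5, -0.4],  # Sadness
    [-0.6, -0.2, -0.4, -0.3],  # Disappointment
    [-0.6, 0.7, 0.3, -0.5],    # Anger
    [-0.7, 0.8, -0.6, -0.6],   # Fear
])

# Project the 4-dimensional ratings into 3 dimensions while preserving
# pairwise Euclidean distances as well as possible.
mds = MDS(n_components=3, random_state=0)
coords_3d = mds.fit_transform(ratings)

for word, xyz in zip(emotion_words, coords_3d):
    print(f"{word:15s} {xyz.round(2)}")
```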

2.2 Own mapping between NAPS and CAP-D affective picture sets

In our example, we will use the set of 266 affective pictures from NAPS (Nencki Affective Picture System) [28] and NAPS BE (a subset of NAPS with 6 basic emotion labels added) [33] that were included in CAP-D (Categorized Affective Pictures Database) [34]. Subsets of images from this set were assessed in several emotion models by different groups of participants:

  • valence, arousal, and approach-avoidance dimensions (266 images assessed by 119 female and 85 male subjects in NAPS)

  • valence and arousal dimensions (144 images assessed by 67 female and 57 male subjects in NAPS-BE)

  • arousal dimension (266 images assessed by 73 female and 60 male subjects in CAP-D)

  • intensities of 6 basic Ekman emotions and dominant emotion (or emotions) per picture (144 images assessed by 67 female and 57 male subjects in NAPS-BE)

  • categorization and intensity in 10 emotion categories including 6 basic Ekman emotions: anger, compassion, disgust, fear, happiness, love, peacefulness, pride, sadness, surprise (266 images assessed by 73 female and 60 male subjects, and 15 clinical psychologists in CAP-D)

Several mappings between dimensional and discrete emotion models can be built on this diverse set of responses. The diagram of possible mappings is presented in Figure 3.

Figure 3.

The diagram of possible mappings between dimensional and discrete emotion models assessed by participants in NAPS, NAPS BE, and CAP-D. Arrows are directed from subsets to supersets of images. Solid and dashed lines represent mappings based on the assessments of the same and of different groups of participants, respectively.

Among the options presented in Figure 3, we selected three mappings from the 10 emotions of CAP-D onto Valence-Arousal-Approach/Avoidance from NAPS (Table 1 and Figure 4), Valence-Arousal from NAPS, and Valence-Arousal from NAPS-BE (Figure 5). To establish each mapping, the dimensional assessments of all images representing a specific discrete class in CAP-D (as the 1st emotion) were normalized to the [−1, 1] range, averaged, and placed at the calculated coordinates in the dimensional space (Figures 4 and 5). In practice, only 9 discrete emotions could be mapped for NAPS, as there were no images representing surprise as the 1st emotion in CAP-D, and only 8 discrete emotions for NAPS-BE (no surprise and pride as the 1st emotions). In all three mappings, two main groups can be observed: the group of higher-valence, lower-arousal emotion categories (happiness, love, peacefulness), and the group of lower-valence, higher-arousal emotions, with disgust and anger as the most extreme examples. The main difference from the mappings in Figure 1 (based on emotion words, not images) is that love and happiness have relatively low arousal. As commented by the authors of NAPS, it is hard to induce highly arousing positive emotions using still images alone (without erotic content such as that included in NAPS ERO [35]). Another observation is that the pairs sadness-compassion, love-peacefulness, and anger-disgust are very close to each other in the Valence-Arousal mappings. Considering that they are based on assessments by different groups of people, this may suggest that these pairs of emotions are closely related when induced using images (or at least images from NAPS).
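A minimal sketch of this averaging procedure is given below, assuming the merged NAPS/CAP-D ratings are available as a pandas DataFrame with one row per image. The column names and the 1-9 rating scale are our assumptions about the data layout, not the published file formats.

```python
# Sketch of the mapping procedure described above. Column names ("emotion_1",
# "valence", "arousal", "approach_avoidance") and the 1-9 rating scale are
# assumptions about the merged NAPS/CAP-D data, not the published formats.
import pandas as pd

def normalize_to_unit_range(series, low=1.0, high=9.0):
    """Linearly rescale ratings from [low, high] to [-1, 1]."""
    return 2.0 * (series - low) / (high - low) - 1.0

def build_mapping(images: pd.DataFrame) -> pd.DataFrame:
    """Average normalized dimensional ratings per 1st CAP-D emotion."""
    dims = ["valence", "arousal", "approach_avoidance"]
    normalized = images.copy()
    for dim in dims:
        normalized[dim] = normalize_to_unit_range(images[dim])
    # One row per discrete emotion, giving its mean position (and spread)
    # in the dimensional space.
    return normalized.groupby("emotion_1")[dims].agg(["mean", "std"])

# Usage: mapping = build_mapping(pd.read_csv("naps_capd_merged.csv"))
```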

             Anger    Compas.  Disgust  Fear     Happ.    Love     Peace.   Pride    Sadness
Valence      −0.50    −0.28    −0.47    −0.31     0.44     0.47     0.44    −0.11    −0.31
Arousal       0.38     0.25     0.36     0.34    −0.03    −0.21    −0.23     0.28     0.21
Appr/Avoid   −0.41    −0.17    −0.49    −0.17     0.36     0.38     0.42    −0.08    −0.20

Table 1.

Average assessment of 9 discrete emotions from CAP-D in Valence-Arousal-Approach/Avoidance dimensions as assessed in NAPS.

Figure 4.

Average locations of 9 discrete emotions from CAP-D in Valence-Arousal-Approach/Avoidance dimensions as assessed in NAPS.

Figure 5.

Average locations of discrete emotions as assessed in CAP-D mapped to Valence-Arousal dimensions as assessed in NAPS (on the left) and NAPS-BE (on the right).

The presented mapping will be used in Section 4 as part of the EEG system for validation of affective database standardization. When using this mapping in the system, we also need a specific method for the discretization of coordinates. We can check whether the coordinates in the dimensional space predicted by the algorithm are closer than one standard deviation to the discrete emotion position in the mapping, or we can simply assign the nearest discrete emotion in the dimensional space. A detailed discussion of discretization and precision metrics in emotion recognition can be found in [26].
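The two discretization rules could be implemented roughly as follows. The emotion centers in the usage example are taken from Table 1, while the standard deviations shown are illustrative placeholders.

```python
# Two discretization rules for mapping predicted dimensional coordinates back
# to a discrete emotion, assuming `centers` and `stds` are dictionaries of
# per-emotion mean positions and standard deviations (as in Table 1).
import numpy as np

def nearest_emotion(prediction, centers):
    """Return the emotion whose mean position is closest (Euclidean)."""
    return min(centers, key=lambda e: np.linalg.norm(prediction - centers[e]))

def within_one_std(prediction, centers, stds):
    """Return emotions whose mean position lies within one standard
    deviation of the prediction (may be empty or contain several)."""
    return [e for e in centers
            if np.all(np.abs(prediction - centers[e]) <= stds[e])]

# Usage with Valence-Arousal-Approach/Avoidance values from Table 1:
centers = {"anger": np.array([-0.50, 0.38, -0.41]),
           "happiness": np.array([0.44, -0.03, 0.36])}
stds = {"anger": np.array([0.3, 0.3, 0.3]),        # illustrative spreads
        "happiness": np.array([0.3, 0.3, 0.3])}
print(nearest_emotion(np.array([0.4, 0.0, 0.3]), centers))  # -> happiness
```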

3. Machine learning for EEG-based emotion recognition

Emotion recognition from EEG is an example of a problem that would not have a solution without modern machine learning methods. Physiological signals like EEG have very high dimensionality, a high level of noise, and physiological artifacts. It is very hard to define simple hand-crafted algorithms to deal with this kind of data. This section is a short introduction to the design of machine learning classifiers and a summary of current trends and applications of computer-aided emotion recognition.

Machine learning (ML) describes methods of automatic knowledge extraction and drawing conclusions from a provided database. It is a part of the broader domain of artificial intelligence (AI), which is concerned with automatic reasoning and higher cognitive functions in machines. The simplest ML algorithms, like k-nearest neighbors (kNN), simply compare test samples with the existing database and classify them based on similarity (the related k-means algorithm groups unlabeled samples into clusters). More complex ML algorithms induce general rules present in the database and use these rules to predict test samples (e.g., decision trees). Algorithms like support vector machines (SVMs) transform the data and separate samples of different categories with multi-dimensional hyperplanes.

All these traditional algorithms share one disadvantage: they do not work well with massive amounts of high-dimensional data like EEG. Thus, it is usually necessary to extract some lower-dimensional features, such as power or frequencies of brain waves. This is not the case for deep learning methods, which can operate on raw data. Deep learning is inherently connected with artificial neural networks, which are inspired by the biological neural networks in the brain. Deep artificial neural networks can be seen as very complex non-linear functions translating input data into output data of any kind. They encode all the features and knowledge about the data in the connections between neurons in the network. Deep neural networks have shown outstanding accuracy in different EEG applications [9]. Thus, we use them as the “core algorithm” in our examples. However, it is possible to replace them with any other traditional machine learning method based on features like brain waves, event-related potentials (ERPs) and synchronization, frontal EEG asymmetry, or steady-state visually evoked potentials (SSVEPs) [7, 36].
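For illustration, a traditional feature-based pipeline might look like the following sketch, which computes band-power features with Welch's method (SciPy) and feeds them to an SVM (scikit-learn). The sampling rate, frequency bands, and data layout are assumptions, not a prescribed recipe.

```python
# Sketch of a traditional feature-based pipeline: band-power features from
# each EEG channel (Welch PSD) fed to an SVM. Sampling rate, band limits,
# and the shape of `epochs` (trials x channels x samples) are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 256  # sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(epochs):
    """epochs: array (n_trials, n_channels, n_samples) -> (n_trials, n_features)."""
    features = []
    for trial in epochs:
        freqs, psd = welch(trial, fs=FS, nperseg=FS)  # PSD per channel
        trial_feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                       for lo, hi in BANDS.values()]
        features.append(np.concatenate(trial_feats))
    return np.array(features)

# Usage (labels = discrete emotion class for each trial):
# X = band_power_features(epochs)
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# clf.fit(X, labels)
```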

The main part of the system is an emotion recognition machine learning algorithm. The algorithm learns to translate EEG signals into values (discrete, dimensional, or both) defined by each emotion model (or combination of models). The core of the algorithm (architecture, hyperparameters, initialization) is the same for each model; only the definition of the outputs and loss functions changes. For discrete models, the traditional classification approach is applied. For dimensional models, emotion recognition becomes a regression problem [37]. There is also the possibility of designing a multi-output algorithm based on both discrete and dimensional models. If this multi-target optimization increases the generalizability of the algorithm, it may support the importance of both dimensional and discrete models of emotions [25]. In our example in Figure 6, we present an intra-subject learning approach in which the neural network is trained on a representative sample of affective images: the distribution of picture features (e.g., picture categories, emotions induced, colors, brightness) used during training should be similar to that in the affective database validated in the final system. We keep the same set of participants in training and in the final system to ensure comparability of the physiological responses.
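As an illustration only, the following PyTorch sketch shows one possible multi-output network with a shared encoder, a classification head for discrete emotions, and a regression head for dimensional coordinates. The architecture, hyperparameters, and loss weighting are illustrative assumptions, not the networks evaluated in this chapter.

```python
# Sketch (PyTorch) of a multi-output network combining a discrete
# classification head with a dimensional regression head over a shared
# 1D-convolutional encoder. Architecture and loss weighting are illustrative.
import torch
import torch.nn as nn

class MultiOutputEmotionNet(nn.Module):
    def __init__(self, n_channels=32, n_classes=9, n_dims=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.class_head = nn.Linear(128, n_classes)  # discrete emotions
        self.dim_head = nn.Linear(128, n_dims)       # valence, arousal, appr/avoid

    def forward(self, x):  # x: (batch, channels, samples)
        z = self.encoder(x)
        return self.class_head(z), self.dim_head(z)

# Combined loss: cross-entropy for the discrete target, MSE for coordinates.
model = MultiOutputEmotionNet()
logits, coords = model(torch.randn(8, 32, 1024))
loss = (nn.CrossEntropyLoss()(logits, torch.randint(0, 9, (8,)))
        + 0.5 * nn.MSELoss()(coords, torch.rand(8, 3) * 2 - 1))
loss.backward()
```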

Figure 6.

The schematic diagram of the process of training deep neural networks for EEG-based emotion recognition from affective pictures. The presented stimuli come from the NAPS set [28].

3.1 Designing an EEG experiment for emotion recognition

Perhaps the hardest, yet essential, part of creating an EEG-based classifier is the design of proper experimental procedures for data acquisition. It is a crucial part that requires specialist knowledge in psychology, hardware, and signal processing. One mistake in this phase may cause the failure of the whole study. The best way to start is to check the literature for similar experiments and learn from their ideas and mistakes. To train and then test EEG-based classifiers correctly, it is important to follow the same procedures and maintain the same conditions across experiments. Our knowledge about cognitive brain functions is incomplete, so potentially irrelevant confounding variables may have a strong impact on the brain response. The list of confounding variables typically includes: the observer-expectancy effect (the way instructions are provided, the presence of the researcher during the experiment), the age and gender of participants (many confirmed differences between women and men in the literature), the time of day, the mood, fatigue, and motivation of the participant (usually increased by some reward), left/right-handedness (if the participant responds to stimuli), and the impact of drugs and stimulants.

The dependent variable in emotion recognition EEG experiments is usually defined in the time or frequency domain, and the independent variable is usually a class of emotion, or a value in a dimensional model, that is intended to be induced by the specific stimulus. According to a thorough survey [7], the most frequently used type of stimuli is affective images (in over 35% of articles), ahead of videos, music, and other modalities like games or imagination techniques. This is partly because of the high availability of the affective picture sets described in the next section.
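On the data-acquisition side, the step from a raw recording to picture-locked trials could be sketched with MNE-Python as follows. The file name, event names, and epoch window are assumptions for illustration; any EEG reader supported by MNE could be used.

```python
# Sketch of turning a raw recording into picture-locked epochs with
# MNE-Python. File name, annotation names, and the epoch window are
# assumptions; adapt them to the actual recording setup.
import mne

raw = mne.io.read_raw_brainvision("session01.vhdr", preload=True)
raw.filter(l_freq=0.5, h_freq=45.0)          # keep emotion-relevant bands
events, event_dict = mne.events_from_annotations(raw)

# One epoch per affective picture: from 0.2 s before to 3 s after onset,
# baseline-corrected on the pre-stimulus interval.
epochs = mne.Epochs(raw, events, event_id=event_dict,
                    tmin=-0.2, tmax=3.0, baseline=(None, 0.0), preload=True)
X = epochs.get_data()                         # (n_trials, n_channels, n_samples)
```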

3.2 Affective picture databases

There are several publicly accessible affective picture sets for emotion recognition (Table 2). Arguably the most popular one in the literature is IAPS [38] (International Affective Picture System, pronounced “eye-apps”). It contains color photographs of objects, landscapes, and animals, but also dead bodies and erotic content, in order to induce a wide range of emotional states. It uses three dimensional scales of valence, arousal, and dominance/control. However, there are newer sets like NAPS (Nencki Affective Picture System) [28] and OASIS (Open Affective Standardized Image Set) [31] that contain many more pictures and/or assessments. The largest set, NAPS, also has scales in three similar dimensions of valence, arousal, and approach/avoidance, and can easily be extended with discrete emotion labels from NAPS-BE [33], erotic pictures from NAPS-ERO [35], or fear-inducing pictures from SFIP [39] (Set of Fear Inducing Pictures). The pictures in NAPS are of high quality and represent 5 main categories (people, faces, animals, objects, and landscapes). The newest CAP-D dataset [34] aggregates subsets of pictures from IAPS, NAPS, and GAPED, and extends them with discrete emotional categories.

Dataset name [ref] (Year) | Number of pictures and assessments | Assessment method | Emotion models used
IAPS [38] (2005) | 956 pictures, 100 subjects (50 women) | 5-point Self-Assessment Manikin (SAM) | Dimensional model: valence, arousal, dominance/control
NAPS [28] (2014) | 1356 pictures, 204 subjects (119 women) | 9-point sliding scale | Dimensional model: valence, arousal, approach/avoidance; 6 basic emotions (only for a subset of 510 images) [33]
OASIS [31] (2017) | 900 pictures, 822 subjects (420 women) | 7-point Likert scale | Dimensional model: valence, arousal
GAPED [40] (2011) | 730 pictures, 60 subjects (no gender given) | 100-point rating scale | Dimensional model: valence, arousal, congruence with moral and legal norms
CAP-D [34] (2018) | 513 pictures, 133 subjects (73 women), 15 clinical psychologists | Describing the picture with 1 of 10 emotion words | 10 discrete emotions, arousal and intensity dimensions
SFIP [39] (2017) | 288 pictures, 1671 subjects | 5-point Likert scale for fear, 9-point Self-Assessment Manikin for valence | Intensity of fear, valence

Table 2.

The affective picture sets for emotion recognition.

3.3 EEG devices

The selection of an EEG device depends on the purpose and goal of the study. For sophisticated psychological or medical research in emotion recognition, it is crucial to use more expensive research-grade or medical-grade hardware. Examples of EEG caps of such devices are presented in Figure 7. However, the heart of the system is not the cap but the amplifier. It should provide at least 32 electrode channels with a sampling rate of at least 256 Hz to record all relevant frequencies, and a voltage resolution of tens of nanovolts or better to capture small differences in the signal between conditions. Additional channels for electrooculogram (EOG) and accelerometers are necessary for artifact filtering algorithms.

Figure 7.

Three of the most popular EEG caps from research-grade EEG systems. From left to right: Biosemi ActiveTwo 128 channels, BrainProducts ActiCap 32 channels, and Compumedics Quik-Cap 64 channels (image source: [1]).

There is an emerging interest in low-cost solutions, especially for applications in brain-computer interfaces. One example is the Emotiv EPOC+, which has been validated for emotion recognition [41, 42].

3.4 State-of-the-art emotion recognition algorithms

There are several thorough reviews of EEG-based emotion recognition systems in the literature [1, 7, 36, 43]. The vast majority of top-performing algorithms are based on machine learning approaches. Methods from the literature achieve accuracies of up to 94% for 2-class discrete problems (such as arousal vs. neutral or happiness vs. sadness) and up to 82% for 4-class classification (such as joy, anger, sadness, and pleasure). On the DEAP database (database for emotion analysis using physiological signals) [8], the paper [44] compares different classifiers for the 4 quadrants of the circumplex model: 63% for kNN, 67% for SVM, 70% for a deep convolutional neural network, and 75% for a deep hybrid neural network. On the eNTERFACE06_EMOBRAIN database, the best classification accuracy among calm, exciting positive, and exciting negative emotional states reached around 77% [45]. On the SEED dataset, emotion classification into positive, neutral, and negative classes has achieved accuracy of up to 83% [46]. Such accuracies are virtually unreachable for human observers.

4. EEG-based system for validation of affective picture databases standardization

In this section, we present the idea of the system for EEG-based validation of affective picture databases (Figure 8). The system consists of:

  • a computer displaying affective pictures, collecting self-assessment responses, and providing feedback to the participant

  • an EEG device placed on the participant’s head

  • a set of trained deep neural networks (DNNs) for emotion recognition from EEG

  • a set of mappings between emotion models

Figure 8.

The diagram of the system for validation of affective picture databases standardization.

In our example, stimuli from the CAP-D picture set are displayed on the screen. The participant assesses each picture following the emotion categorization procedure from CAP-D. For each stimulus display period, the EEG signal is collected and passed to the input of the trained DNNs for emotion recognition. The process of training such DNNs is described in Section 3. Based on the input EEG signal, each DNN outputs coordinates in a specific dimensional emotion model. These coordinates need to be mapped onto the discrete emotions used in the emotion categorization of CAP-D. An example of such a mapping is presented in Section 2.2. The mappings are crucial when operating on datasets described using different emotion models.

In the results validation phase, the discrete emotion class label from the participant’s categorization, the output of the selected mapping, and the ground-truth (normative) label of the image are compared. There are several possible outcomes from such a comparison:

  1. All the labels are the same – the normative label from the database is in agreement with the participant’s categorization and physiological response.

  2. The normative label alone is different – the emotion induced in the participant consistently differs from the normative label.

  3. The participant’s categorization alone is different – the physiological response is in agreement with the normative label but was assessed differently by the participant.

  4. The output of the mapping alone is different – the participant’s categorization is in agreement with the normative label, but the physiological response suggests a different label.

  5. All the labels are different – there is no agreement between ground truth, self-assessment, and mappings.

Based on these outcomes, several conclusions can be drawn and translated into feedback about the database standardization. For outcome 1, the feedback should indicate positive validation of the normative label. This is the desired outcome of the system. On the other hand, outcome 2 suggests a serious problem with the normative label for the particular participant, as both the subjective and physiological responses agree on a different label. This situation by itself does not mean the validation is negative; only if the problem persists among the majority of participants should the label be reconsidered. A supporting example here is a picture of a happy dog that should induce happiness according to the normative label but induces fear in individuals with cynophobia (the fear of dogs). Outcome 3, if consistent across the population, may suggest problems with naming the proper emotion for the picture: the physiological response is as expected for the normative label, but participants do not select the expected label. A supporting example here is a picture with a normative label of “fear” presenting a wolf eating its prey, which induces fear in the physiological response, but participants may focus on the prey’s appearance in the subjective response and select the label “disgust”. In this example, we may face the problem of ambiguous labeling of the image. If outcome 3 is present only in individual participants, it may instead suggest problems with their emotion perception. Outcome 4 should be a suggestion for the authors of the database that the normative label of the picture may be biased by the subjective responses of the participants (e.g., for cultural or ethical reasons), so their physiological responses disagree with the conscious categorization; for instance, they cannot answer differently because it would put them in a bad light. Outcome 5 is the only one resulting in a clearly negative validation, where all the participant’s reactions differ. It may suggest that the normative label is too ambiguous, or the induced emotion too weak, to be perceived correctly.
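The comparison that assigns each trial to one of the five outcomes above can be expressed compactly. The sketch below assumes the three inputs are already discrete emotion labels (normative, self-reported, and DNN output after mapping); label names are placeholders.

```python
# Sketch of the per-trial comparison that assigns one of the five outcomes
# described above. Inputs are discrete emotion labels: the database's
# normative label, the participant's self-assessment, and the label obtained
# from the DNN output via the emotion-model mapping.
def validation_outcome(normative, self_report, dnn_mapped):
    agree_self = normative == self_report
    agree_dnn = normative == dnn_mapped
    if agree_self and agree_dnn:
        return 1  # full agreement: normative label positively validated
    if self_report == dnn_mapped:
        return 2  # only the normative label differs
    if agree_dnn:
        return 3  # only the participant's categorization differs
    if agree_self:
        return 4  # only the mapped physiological response differs
    return 5      # no agreement at all

# Usage:
print(validation_outcome("happiness", "happiness", "fear"))  # -> 4
```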

The system is designed to be generic. The described validation may be performed for any discrete and dimensional models with little to no modification of the flow. The only requirement is the existence of at least one algorithm trained to recognize the assessed emotions, or at least one mapping which translates recognition results (in a different emotion model) into the target model. The more algorithms and mappings, the more detailed the validation results. Also, the system can be easily adapted to video, sound, or text stimuli. Additionally, the system may select the most feasible emotion model for a given participant and can be calibrated for that participant by fine-tuning the networks on their consecutive responses.

This system may be further adapted as a tool for training emotion perception, one of the branches of emotional intelligence measured by the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) [47]. The feedback from the system provides suggestions for improving emotional perception and points to the differences between the self-assessment and the normative benchmark that should be considered by the participant.

5. Summary and future work

The chapter presents a conceptual design of a computer system that uses EEG signals and deep neural networks to assess the standardization of affective picture databases. According to the presented state-of-the-art in psychology and machine learning, this kind of system is feasible to create. All elements of the system are ready to use. The main challenge is the selection of a representative population and the collection of a sufficient amount of EEG data to train the deep neural networks.

As there are many models for describing emotions, we focused here on mappings between emotion models. Such mappings allow machine learning methods trained on one model to be used for emotion recognition in a different model. There is a lack of emotion mappings for affective picture sets, so our new mappings between the dimensions of valence, arousal, approach/avoidance and discrete emotions are the added value of the chapter. There is also a possibility that one consistent and dominant model of emotion will be established in the future. Then, the mappings may be deprecated, and one “general” model may be used to train the deep neural networks.

The genericity of the system opens many possibilities for future work and adaptation to different applications. Besides validation of emotion self-assessments, the system can be adapted to validate emotion mappings or emotional intelligence tests, e.g., the emotion perception task from MSCEIT. It may be used in the future for the rehabilitation of people with emotion perception disorders such as ASD. Also, new machine learning methods can be inserted into the system and compared with the existing deep neural networks. Even the EEG device may be replaced or extended with other physiological measurements without big changes to the system architecture.

The exploration of top-performing deep neural networks and emotion mappings may help to understand the underlying biological model of emotion, e.g. by using feature visualization approaches [48].

Acknowledgments

The authors would like to thank the Future Processing Healthcare company (https://futurehealthcare.software/) for the financial support.

References

  1. Kotowski K, Fabian P, Stapor K. Machine learning approach to automatic recognition of emotions based on bioelectrical brain activity. In: Simulations in Medicine: Computer-aided diagnostics and therapy. DeGruyter; 2020. p. 15-34
  2. Acharya UR, Sudarshan VK, Adeli H, Santhosh J, Koh JEW, Adeli A. Computer-Aided Diagnosis of Depression Using EEG Signals. Eur Neurol. 2015;73(5-6):329-36
  3. Bosl WJ, Tager-Flusberg H, Nelson CA. EEG Analytics for Early Detection of Autism Spectrum Disorder: A data-driven approach. Sci Rep [Internet]. 2018 Dec [cited 2019 May 3];8(1). Available from: http://www.nature.com/articles/s41598-018-24318-x
  4. Adeli H, Ghosh-Dastidar S. Automated EEG-Based Diagnosis of Neurological Disorders: Inventing the Future of Neurology. 1st edition. Boca Raton, FL: CRC Press; 2010. 423 p
  5. Isaac C, Januel D. Neural correlates of cognitive improvements following cognitive remediation in schizophrenia: a systematic review of randomized trials. Socioaffective Neurosci Psychol. 2016 Jan;6(1):30054
  6. Picard RW. Affective Computing. Cambridge, MA, USA: MIT Press; 1997
  7. Al-Nafjan A, Hosny M, Al-Ohali Y, Al-Wabil A. Review and Classification of Emotion Recognition Based on EEG Brain-Computer Interface System Research: A Systematic Review. Appl Sci. 2017;7(12)
  8. Koelstra S, Muhl C, Soleymani M, Lee J-S, Yazdani A, Ebrahimi T, Pun T, Nijholt A, Patras I. DEAP: A Database for Emotion Analysis Using Physiological Signals. IEEE Trans Affect Comput. 2012 Jan;3(1):18-31
  9. Kotowski K, Stapor K, Ochab J. Deep Learning Methods in Electroencephalography. In: Tsihrintzis GA, Jain LC, editors. Machine Learning Paradigms: Advances in Deep Learning-based Technological Applications [Internet]. Cham: Springer International Publishing; 2020 [cited 2020 Jul 27]. p. 191-212. (Learning and Analytics in Intelligent Systems). Available from: https://doi.org/10.1007/978-3-030-49724-8_8
  10. Calvo RA, Kim SM. Emotions in Text: Dimensional and Categorical Models. Comput Intell. 2013;29(3):527-43
  11. Panksepp J. Affective neuroscience: The foundations of human and animal emotions. New York, NY, US: Oxford University Press; 1998. xii, 466 p
  12. Ekman P. Basic Emotions. In: Handbook of Cognition and Emotion. Wiley-Blackwell; 2005. p. 45-60
  13. Gendron M, Roberson D, van der Vyver JM, Barrett LF. Perceptions of Emotion from Facial Expressions are Not Culturally Universal: Evidence from a Remote Culture. Emot Wash DC. 2014 Apr;14(2):251-62
  14. Russell JA. Core affect and the psychological construction of emotion. Psychol Rev. 2003;110(1):145-72
  15. Jack RE, Garrod OGB, Yu H, Caldara R, Schyns PG. Facial expressions of emotion are not culturally universal. Proc Natl Acad Sci U S A. 2012 May 8;109(19):7241-4
  16. Plutchik R. The Nature of Emotions: Human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. Am Sci. 2001;89(4):344-50
  17. Cowen AS, Keltner D. Self-report captures 27 distinct categories of emotion bridged by continuous gradients. Proc Natl Acad Sci. 2017 Sep 19;114(38):E7900-9
  18. Mehrabian A, Russell JA. An approach to environmental psychology. Cambridge, MA, US: The MIT Press; 1974
  19. Russell JA, Mehrabian A. Evidence for a three-factor theory of emotions. J Res Personal. 1977 Sep;11(3):273-94
  20. Bakker I, van der Voordt T, Vink P, de Boon J. Pleasure, Arousal, Dominance: Mehrabian and Russell revisited. Curr Psychol. 2014 Sep;33(3):405-21
  21. Russell JA, Lewicka M, Niit T. A cross-cultural study of a circumplex model of affect. J Pers Soc Psychol. 1989;57(5):848-56
  22. Trnka R, Lačev A, Balcar K, Kuška M, Tavel P. Modeling Semantic Emotion Space Using a 3D Hypercube-Projection: An Innovative Analytical Approach for the Psychology of Emotions. Front Psychol [Internet]. 2016 Apr 19 [cited 2020 Nov 22];7. Available from: http://journal.frontiersin.org/Article/10.3389/fpsyg.2016.00522/abstract
  23. Fontaine JJR, Scherer KR. The global meaning structure of the emotion domain: Investigating the complementarity of multiple perspectives on meaning. In: Components of emotional meaning: A sourcebook. New York, NY, US: Oxford University Press; 2013. p. 106-25. (Series in affective science)
  24. Fontaine JRJ, Scherer KR, Roesch EB, Ellsworth PC. The World of Emotions is not Two-Dimensional. Psychol Sci. 2007 Dec;18(12):1050-7
  25. Harmon-Jones E, Harmon-Jones C, Summerell E. On the Importance of Both Dimensional and Discrete Models of Emotion. Behav Sci. 2017 Sep 29;7(4):66
  26. Landowska A. Towards New Mappings between Emotion Representation Models. Appl Sci. 2018 Feb 12;8(2):274
  27. Hoffmann H, Scheck A, Schuster T, Walter S, Limbrecht K, Traue HC, Kessler H. Mapping discrete emotions into the dimensional space: An empirical approach. In: 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC) [Internet]. Seoul, Korea (South): IEEE; 2012 [cited 2020 Nov 22]. p. 3316-20. Available from: http://ieeexplore.ieee.org/document/6378303/
  28. Marchewka A, Żurawski Ł, Jednoróg K, Grabowska A. The Nencki Affective Picture System (NAPS): Introduction to a novel, standardized, wide-range, high-quality, realistic picture database. Behav Res Methods. 2014 Jun;46(2):596-610
  29. Lang PJ. Behavioral treatment and bio-behavioral assessment: Computer applications. In: Sidowski JB, Johnson JH, Williams TA, editors. Technology in mental health care delivery systems. Norwood, NJ: Ablex; 1980. p. 119-37
  30. Bradley MM, Lang PJ. International Affective Picture System. In: Zeigler-Hill V, Shackelford TK, editors. Encyclopedia of Personality and Individual Differences [Internet]. Cham: Springer International Publishing; 2017 [cited 2020 Nov 22]. p. 1-4. Available from: https://doi.org/10.1007/978-3-319-28099-8_42-1
  31. Kurdi B, Lozano S, Banaji MR. Introducing the Open Affective Standardized Image Set (OASIS). Behav Res Methods. 2017 Apr;49(2):457-70
  32. Betella A, Verschure PFMJ. The Affective Slider: A Digital Self-Assessment Scale for the Measurement of Human Emotions. Tran US, editor. PLOS ONE. 2016 Feb 5;11(2):e0148037
  33. Riegel M, Żurawski Ł, Wierzba M, Moslehi A, Klocek Ł, Horvat M, Grabowska A, Michałowski J, Jednoróg K, Marchewka A. Characterization of the Nencki Affective Picture System by discrete emotional categories (NAPS BE). Behav Res Methods. 2016 Jun;48(2):600-12
  34. Moyal N, Henik A, Anholt GE. Categorized Affective Pictures Database (CAP-D). J Cogn. 2018 Sep 26;1(1):41
  35. Wierzba M, Riegel M, Pucz A, Leśniewska Z, Dragan WŁ, Gola M, Jednoróg K, Marchewka A. Erotic subset for the Nencki Affective Picture System (NAPS ERO): cross-sexual comparison study. Front Psychol [Internet]. 2015 Sep 10 [cited 2020 Nov 25];6. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4564755/
  36. Kim M-K, Kim M, Oh E, Kim S-P. A Review on the Computational Methods for Emotional State Estimation from the Human EEG. Comput Math Methods Med. 2013;2013:1-13
  37. Buechel S, Hahn U. Emotion analysis as a regression problem - dimensional models and their implications on emotion representation and metrical evaluation. In: Proceedings of the Twenty-second European Conference on Artificial Intelligence [Internet]. NLD: IOS Press; 2016 [cited 2021 Mar 24]. p. 1114-1122. (ECAI'16). Available from: https://doi.org/10.3233/978-1-61499-672-9-1114
  38. Bradley MM, Lang PJ. The International Affective Picture System (IAPS) in the study of emotion and attention. In: Handbook of emotion elicitation and assessment. New York, NY, US: Oxford University Press; 2007. p. 29-46. (Series in affective science)
  39. Michałowski JM, Droździel D, Matuszewski J, Koziejowski W, Jednoróg K, Marchewka A. The Set of Fear Inducing Pictures (SFIP): Development and validation in fearful and nonfearful individuals. Behav Res Methods. 2017 Aug 1;49(4):1407-19
  40. Dan-Glauser ES, Scherer KR. The Geneva affective picture database (GAPED): a new 730-picture database focusing on valence and normative significance. Behav Res Methods. 2011 Jun;43(2):468-77
  41. Kotowski K, Stapor K, Leski J, Kotas M. Validation of Emotiv EPOC+ for extracting ERP correlates of emotional face processing. Biocybern Biomed Eng. 2018 Jan;38(4):773-81
  42. Pham TD, Tran D. Emotion Recognition Using the Emotiv EPOC Device. In: Huang T, Zeng Z, Li C, Leung CS, editors. Neural Information Processing. Springer Berlin Heidelberg; 2012. p. 394-9
  43. Zangeneh Soroush M, Maghooli K, Setarehdan SK, Motie Nasrabadi A. A Review on EEG Signals Based Emotion Recognition. Int Clin Neurosci J. 2017 Oct 8;4(4):118-29
  44. Li Y, Huang J, Zhou H, Zhong N. Human Emotion Recognition with Electroencephalographic Multidimensional Features by Hybrid Deep Neural Networks. Appl Sci. 2017 Oct 13;7(10):1060
  45. Khalili Z, Moradi MH. Emotion recognition system using brain and peripheral signals: Using correlation dimension to improve the results of EEG. In: 2009 International Joint Conference on Neural Networks. 2009. p. 1571-5
  46. Li X, Song D, Zhang P, Zhang Y, Hou Y, Hu B. Exploring EEG Features in Cross-Subject Emotion Recognition. Front Neurosci. 2018 Mar 19;12:162
  47. Mayer JD, Roberts RD, Barsade SG. Human Abilities: Emotional Intelligence. Annu Rev Psychol. 2007 Dec 21;59(1):507-36
  48. Nguyen A, Yosinski J, Clune J. Understanding Neural Networks via Feature Visualization: A Survey. In: Samek W, Montavon G, Vedaldi A, Hansen LK, Müller K-R, editors. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning [Internet]. Cham: Springer International Publishing; 2019 [cited 2020 Nov 29]. p. 55-76. (Lecture Notes in Computer Science). Available from: https://doi.org/10.1007/978-3-030-28954-6_4
