Emotion Recognition Based on Brain-Computer Interface Systems

Written By

Taciana Saad Rached and Angelo Perkusich

Submitted: 27 June 2012 Published: 05 June 2013

DOI: 10.5772/56227

From the Edited Volume

Brain-Computer Interface Systems - Recent Progress and Future Prospects

Edited by Reza Fazel-Rezai


1. Introduction

Emotions are intrinsically related to the way individuals interact with each other as well as with machines [1]. A human being can understand the emotional state of another human being and behave in the best manner to improve communication in a given situation. This is because emotions can be recognized through words, voice intonation, facial expressions and body language. In contrast, machines cannot understand the feelings of an individual.

In this context, affective computing aims to improve communication between individuals and machines by recognizing human emotions, thus making that interaction easier, more usable and more effective. There are several studies using different approaches to human emotion detection [2]. For instance, McDaniel et al. [3] investigated facial expressions to detect emotions in a learning activity involving interaction between university students and a computer. The students were asked to show their emotions while interacting with a software tool called AutoTutor. Facial expressions were recorded by video cameras to recognize six kinds of emotions, namely: confusion, surprise, boredom, frustration, pleasure and acceptance. The authors observed that all emotions could be detected except boredom, which was indistinguishable from a neutral facial expression.

Lee et al. [4] explored the use of information about dialogues and speech, along with voice intonation, to recognize emotions from speech signals. The focus of the study was to detect negative and non-negative emotions using information obtained from the spoken language of a call center. The main problem with emotion recognition systems based on facial expressions or spoken language is that these two sources of information are susceptible to ambiguity and to faked expressions. Bernhardt [5] developed an emotion recognition system based on body language in daily activities such as walking and picking up an object, among others.

Beyond detection systems based on affective facial expressions, spoken language and body language, there are several applications in affective computing that focus on detecting emotions through learning techniques that identify patterns in physiological activity matching the expression of different emotions. Liu et al. [6] investigated the use of cardiovascular signals, electrodermal activity, electromyography and peripheral temperature for affect detection. The aim of that study was to recognize the emotions of children affected by autism in order to develop a system that works as a therapist. Systems based on electroencephalogram (EEG) signals have also been used to detect emotions. For instance, Schaaff and Schultz [7] implemented a system based on brain signals to enable a robot to recognize human emotions. Emotions were elicited by images and classified into three categories, namely: pleasant, unpleasant and neutral. Brain signals are a reliable information source because the process of emotion interpretation starts in the central nervous system. Furthermore, individuals cannot control their brain signals to simulate a fake emotional state.

This chapter presents an emotion recognition system based on brain signals. We used a brain-computer interface (BCI) as the technique to acquire and classify brain signals into emotions. This BCI system is based on EEG signals. We used the discrete wavelet transform to select EEG features and a neural network to map these features to emotions.

The rest of this chapter is divided into six sections. Section 2 presents the main concepts related to BCIs and the key problem. Section 3 shows some application areas of emotion recognition systems based on brain-computer interfaces. Section 4 discusses the research course. Section 5 details the methods used to acquire and process the brain signals into human emotions. Section 6 presents the results achieved using our approach. Finally, Section 7 presents the conclusions.

2. Problem statement

BCIs are systems that enable a user to exchange information with the environment and control devices by using brain activity alone, i.e., without using the brain's normal neuromuscular output pathways [8]. Brain signals can be acquired by invasive or non-invasive methods. In the former, electrodes are implanted directly in the brain. In the latter, the signal is acquired from the scalp of the user. Despite the existence of several methods to acquire brain signals, the most widely used is the electroencephalogram (EEG), because it is non-invasive, portable, inexpensive, and can be used in almost all environments [9]. Moreover, low-cost and increasingly portable EEG equipment has been developed in recent years.

As discussed by Wang et al. in [10], BCI systems have been used in rehabilitation (e.g., speller systems), neuroscience (e.g., attention monitoring systems) and cognitive psychology (e.g., treatment of attention-deficit hyperactivity disorder). BCI systems have recently been investigated for recognizing emotions and are seen as a promising technique in this area because emotions are generated in the brain.

There are several challenges in using BCI systems to detect emotions, such as choosing the acquisition method and the channels that best provide information about the emotional state of an individual, as well as choosing processing techniques that achieve good accuracy in the recognition of emotions.

3. Application area

Emotion recognition systems based on BCI can be applied to many areas, such as:

  • Entertainment

  • Education

  • Medicine

  • Gaming

  • Intelligent tutoring systems

An example of an application in the field of entertainment is an EEG-based music player [11]. In this application, the current emotional state of the user is identified, and music related to this state is played. The songs are classified into six emotion types: fear, sad, frustrated, happy, satisfied and pleasant.

4. Research course

The way in which a person acts with other people, objects and situations in day-to-day life is closely connected with their emotions. In this context, the recognition of human emotions has recently been the target of several studies.

Emotions can be recognized from facial and body expressions, voice and physiological signals, among other sources. Since the focus of this work is the definition of EEG signal processing techniques that provide better results in classifying brain signals into emotions, this section presents works based on EEG signals for the recognition of emotions.

In [12], the authors used EEG signals and facial expressions together for the recognition of emotions. The authors' aim was to investigate which emotion (positive or negative) is generated by listening to a particular song. EEG signals were acquired by electrodes placed on the temporal region of the head, and facial expressions were acquired from video images. As a result of this work, the authors determined that it was not possible to distinguish the type of emotion from the two signals used together.

Liu et al. [11] presented an algorithm for the classification of brain electrical signals into human emotions. This algorithm was based on the fractal dimension model. The authors used songs in a first experiment and sounds from the international affective digitized sounds collection in a second experiment to induce certain emotions in the participants of the study. The brain signals were acquired from three channels: FC6, F4 and AF3. The FC6 channel made it possible to classify emotions with respect to the level of arousal, while the AF3 and F4 channels were used to classify emotions with respect to valence. According to the authors, the forebrain of an individual shows greater activation in one hemisphere during the feeling of a positive emotion and greater activation in the other hemisphere during the feeling of a negative emotion; which hemisphere corresponds to which kind of emotion, however, depends on the individual. Therefore, for this approach to be used, a training phase was included in the work. Using the fractal dimension model for the recognition of emotions, the authors identified six basic emotions in the bi-dimensional valence-arousal graph, as shown in Figure 1.

Figure 1.

Bi-dimensional valence-arousal approach

Bos [13] investigated the use of EEG signals for the recognition of human emotions. The author used auditory and visual stimuli, extracted from the international affective digitized sounds and international affective pictures collections respectively, to induce the feeling of a certain emotion in the participants of the experiment. The brain electrical signals were acquired through three channels, F3, F4 and FPz, according to the international 10-20 system [14]. The alpha (8-12 Hz) and beta (12-30 Hz) characteristics of the EEG signal were selected for use in the recognition of emotions. According to the author, noise due to the electrooculogram (EOG) is dominant below 4 Hz, noise related to the electrocardiogram (ECG) is around 1.2 Hz, and noise related to the electromyogram (EMG) is found above 30 Hz. Therefore, extracting only the alpha and beta features reduced the noise. Bos used a band-pass filter to extract only the features in the 8-30 Hz frequency range.

A Fourier frequency analysis algorithm divided the original signal into frequency bands, and principal component analysis reduced the number of features. Finally, Bos classified the EEG signals into emotions with a binary Fisher linear classifier. The classification of emotions was based on the bi-dimensional valence-arousal approach. As a result, Bos obtained an accuracy rate of 82.1% in the classification of emotions from brain signals.

In [7], the authors investigated the use of EEG for the recognition of human emotions by humanoid robots. The goal was to give robots the ability to detect emotion and react to it in the same way as occurs in human-human interaction. An EEG device developed by the authors was used to obtain the brain signals. The device consists of only four data acquisition channels, located on the forebrain at positions Fp1, Fp2, F7 and F8 according to the international 10-20 system.

Images from the international affective pictures collection induced the emotional states pleasant, neutral and unpleasant in the participants of the experiments. The classification of emotions was performed using the support vector machine method, and an accuracy of 47.11% was verified in the recognition of the three emotional states mentioned above. According to the authors, improving the accuracy of the system would require the development of a more complex system considering multiple data sources, such as cameras and microphones.

Savran et al. [15] studied the use of brain signals and facial expressions for the recognition of human emotions. According to the authors, EEG signals and facial expressions cannot be used together because of the sensitivity of EEG to the electrical signals generated by the facial muscles while emotions are expressed. For this reason, the authors used near-infrared spectroscopy as the technique for acquiring brain signals together with facial expressions. Although this spectroscopic technique is non-invasive and low cost, its main problem is low temporal resolution, which limits its use in real-time applications. The authors used the international affective pictures to induce emotions in the participants of the experiment and built a database with the information on facial expressions and brain signals, acquired through video and spectroscopy respectively. Despite the development of the database, the authors analyzed the data separately, reporting no results on the fusion of brain signals and facial expressions for recognizing emotions.

The authors also constructed a second database with EEG, near-infrared spectroscopy and physiological signals, such as skin conductance and heart rate. These signals were acquired during an emotion induction experiment using pictures from the international affective pictures collection as stimuli. They used an EEG device with 64 channels, but 10 channels were eliminated due to obstruction by the near-infrared spectroscopy equipment. The authors did not discuss the results obtained in the classification of the brain signals; they only presented the protocol used in the construction of the databases.

Murugappan et al. [16] presented a brain-computer interface system for the recognition of human emotions. The acquisition of brain signals was performed with a 64-channel EEG device, and a Laplacian filter was applied in the pre-processing of the EEG signals. The authors used the wavelet transform to select the characteristics of the brain signals, and two methods to classify the features: k-nearest neighbors and linear discriminant analysis. The authors chose classification using the discrete emotions approach (happiness, surprise, fear, disgust and neutral).

Twenty subjects aged between 21 and 39 years participated in this experiment, with audio-visual stimuli inducing the emotions. EEG signals were acquired at a rate of 256 Hz and were pre-processed using the Laplacian filter, as previously mentioned. The wavelet transform decomposed the EEG signals into five frequency bands (delta, theta, alpha, beta and gamma). The authors calculated statistics of the alpha band (entropy, energy, standard deviation and variance) and applied this information as input to the k-nearest neighbors and linear discriminant analysis classifiers. Table 1 presents the results obtained by the authors of this study.

Results 62 channels 24 channels 8 channels
k-nearest neighbors 78.04 % 77.61 % 71.3 %
linear discriminant analysis 77.83 % 70.65 % 56.09 %

Table 1.

Results obtained in [16]

5. Methods

This section presents the system developed for the recognition of human emotions based on a BCI. In this work we use only information from brain signals to detect the emotional state of an individual. Furthermore, since no apparatus for reading brain signals was available, we use a database of EEG signals [17]. Figure 2 shows the architecture of the emotion recognition system proposed in this chapter.

Figure 2.

Emotion recognition system architecture

The architecture presented in Figure 2 comprises a standard BCI system composed of brain signal acquisition, signal pre-processing, feature selection and classification. The output of the BCI system is one of four different kinds of emotions: positive/excited, positive/calm, negative/excited, and negative/calm. The next subsections discuss each step of our BCI-based emotion recognition system.

5.1. Database

We use an EEG database as the source of brain signals [17]. This database was recorded using music videos to induce emotions in the participants of the experiment. Initially, the authors selected 120 stimuli. Half of these stimuli were selected by a semi-automatic method [18] and the other half were selected manually. After the stimulus selection, one minute of each video was extracted to be used as a stimulus in the research. Finally, the authors of the database chose 40 stimuli [18], selected to elicit four different emotions in the individuals: calm/positive, calm/negative, excited/positive and excited/negative. Figure 3 presents the protocol used to conduct the experiment.

Figure 3.

Experiment protocol

The experiments were conducted in two laboratory environments with controlled lighting. Thirty-two participants took part in the experiment, and their EEG signals and peripheral physiological signals (eye movements, facial muscle movements, temperature and blood pressure, among others) were acquired with the Biosemi ActiveTwo system. The signals were recorded from 32 channels according to the 10-10 international system. The EEG signals in the database were filtered and the electrooculogram (EOG) artifacts were removed. A camera captured frontal images of 22 of the 32 participants. Figure 4 presents the 10-10 international system.

Figure 4.

Illustration of the 10-10 international system

Two computers were used in this experiment, one for storing data and another for the presentation of stimuli. To keep the two computers synchronized, markers were sent from one computer to the other [18].

The EEG data was stored for offline processing, as described in Section 5.2.

Some areas of the brain are related to human emotional behavior: the brainstem, hypothalamus, thalamus, prefrontal area and limbic system. In this work, we chose to use the EEG signals acquired from channel FP1, which is located over the prefrontal area of the brain. The prefrontal area is involved in the following functions:

  • The choice of the options and behavioral strategies most appropriate to the physical and social conditions of an individual, as well as the ability to change them when such conditions change.

  • Sustained attention and the ability to follow ordered sequences of thoughts.

  • Control of emotional behavior.

5.2. Signal pre-processing

In the pre-processing of the data from the database used in this study, the sample rate was first reduced from 512 Hz to 128 Hz. The authors of the database removed the artifacts due to eye movements from the EEG signals using the technique discussed in [18]. The signals were then filtered with a band-pass filter with cutoff frequencies of 4 Hz and 45 Hz, and a common reference was used for all EEG channels. The data was segmented into 60-second trials, with the first 3 seconds eliminated. This pre-processing was performed by the database authors, not in the context of this work.
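For illustration only, the following Matlab sketch reproduces the pre-processing steps described above; it is not the database authors' code, and the filter order and the use of the resample, butter and filtfilt functions are our assumptions.

```matlab
% Illustrative sketch of the database pre-processing (assumed
% implementation): downsample to 128 Hz, band-pass 4-45 Hz, drop 3 s.
fsIn  = 512;                          % original sampling rate (Hz)
fsOut = 128;                          % target sampling rate (Hz)
x = randn(1, 63*fsIn);                % placeholder for one recorded trial

x = resample(x, fsOut, fsIn);         % reduce the sample rate to 128 Hz

[b, a] = butter(4, [4 45]/(fsOut/2)); % 4th-order Butterworth band-pass
x = filtfilt(b, a, x);                % zero-phase band-pass filtering

x = x(3*fsOut+1 : end);               % eliminate the first 3 seconds
```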

The authors of the database [17] stored the pre-processed EEG data in 32 .mat (Matlab) files, one per participant. Each participant file contains two arrays, as illustrated in Table 2.

Array name Array shape Array contents
data 40 x 40 x 8064 video/trial x channel x data
labels 40 x 4 video/trial x label (valence, arousal, dominance, liking)

Table 2.

Contents of each participant file
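As a minimal sketch (assuming the per-participant file names s01.mat to s32.mat and that FP1 is the first channel of the data array, which should be verified against [17]), one participant file can be read as follows:

```matlab
% Load one participant file; array names and shapes follow Table 2.
s = load('s01.mat');                 % s.data: 40 x 40 x 8064, s.labels: 40 x 4

trial = 1;                           % first video/trial
fp1 = 1;                             % assumption: FP1 is the first channel

eegFP1 = squeeze(s.data(trial, fp1, :))';   % 1 x 8064 EEG samples
valence = s.labels(trial, 1);        % self-reported valence rating
arousal = s.labels(trial, 2);        % self-reported arousal rating
```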

5.3. Signal processing

The processing of EEG signals in a BCI system is divided into two parts: the selection of the signal characteristics and the classification of these characteristics. The choice of the method to be used in the first step depends on whether the signal characteristics are in the time or frequency domain. In the second stage, the choice of method is independent of the signal domain.

5.3.1. Signal characteristics selection

Wavelets [16] have been widely used to select the characteristics of EEG signals in emotion recognition systems. They are defined as small waves with limited duration and zero average value: mathematical functions that localize a function or data set in both time and frequency.

Wavelet analysis consists of the decomposition of a signal into shifted and scaled versions of the original (mother) wavelet. Wavelet analysis is divided into continuous and discrete.
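For reference, in the discrete case the shifted and scaled versions form the standard dyadic wavelet family, so a signal x(t) is represented by detail coefficients:

\[
\psi_{j,k}(t) = 2^{j/2}\,\psi\!\left(2^{j}t - k\right), \qquad
d_{j,k} = \int x(t)\,\psi_{j,k}(t)\,dt,
\]

where j indexes the scale (and hence a frequency band) and k indexes the time shift.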

We used the Daubechies 4 (db4) discrete wavelet in this work, chosen based on [16]. The authors of [16] performed several experiments with several families of wavelets and found that the db4 wavelet best represents EEG signals. Figure 5 illustrates the db4 wavelet.

Figure 5.

An example of the wavelet db4

The family of Daubechies wavelets was invented by Ingrid Daubechies, one of the most important researchers in the wavelet field. These wavelets are orthonormal and compactly supported, which makes the discrete wavelet transform practical.

We developed a routine in Matlab for reading the EEG signals from the database used in this work. In addition, the routine selects the delta, theta, alpha and beta features from the EEG signals. Table 3 presents the rhythmic characteristics of the EEG signals along with their frequency bands.

Characteristics Frequency
Delta 1 - 4 Hz
Theta 4 - 7 Hz
Alpha 8 - 12 Hz
Beta 12 - 30 Hz

Table 3.

Rhythmic characteristics of the EEG signal

The EEG signals from the database presented in Section 5.1 were loaded using the aforementioned routine. We extracted the EEG data of the FP1 channel from each participant file, discussed in Section 5.2, and stored the extracted data in a new file per participant (a sketch of this routine is shown after Table 4). Table 4 shows the content of each new participant file.

Array name Array shape Array contents
data 40 x 8064 video/trial x data (FP1 channel)

Table 4.

Content of the new participant file
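A minimal sketch of this extraction routine, under the same file-naming assumptions as before, is:

```matlab
% Extract the FP1 channel from every participant file and store it in a
% new per-participant file (input/output file names are assumptions).
fp1 = 1;                                      % assumed FP1 channel index
for p = 1:32
    in = load(sprintf('s%02d.mat', p));       % data: 40 x 40 x 8064
    data = squeeze(in.data(:, fp1, :));       % 40 x 8064 (trial x samples)
    labels = in.labels;                       % keep the 40 x 4 ratings
    save(sprintf('fp1_s%02d.mat', p), 'data', 'labels');
end
```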

The EEG signals were originally sampled at 512 Hz but, as discussed in Section 5.2, the rate was reduced to 128 Hz in the pre-processing. Therefore, we used a four-level db4 wavelet decomposition to select the characteristics of the signal.

First, we calculated the wavelet coefficients using two Matlab functions: detcoef (which returns the detail coefficients of a wavelet decomposition) and appcoef (which returns the approximation coefficients). Then, we calculated the details and approximations of the wavelet using the upcoef Matlab function. These steps were applied to each row of the data array.
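The following sketch combines these steps for one trial. The initial wavedec call and the approximate mapping of decomposition levels to EEG bands at 128 Hz are our assumptions; the source names only the detcoef, appcoef and upcoef functions.

```matlab
% Four-level db4 decomposition of one trial x (1 x 8064, fs = 128 Hz).
[C, L] = wavedec(x, 4, 'db4');       % assumed entry point

d2 = detcoef(C, L, 2);               % level-2 details: ~16-32 Hz (beta)
d3 = detcoef(C, L, 3);               % level-3 details: ~8-16 Hz (alpha)
d4 = detcoef(C, L, 4);               % level-4 details: ~4-8 Hz (theta)
a4 = appcoef(C, L, 'db4', 4);        % approximation: ~0-4 Hz (delta)

% Reconstruct band-limited signals at the original length.
beta  = upcoef('d', d2, 'db4', 2, numel(x));
alpha = upcoef('d', d3, 'db4', 3, numel(x));
theta = upcoef('d', d4, 'db4', 4, numel(x));
delta = upcoef('a', a4, 'db4', 4, numel(x));
```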

Human emotions are related to the theta and alpha characteristics. For that reason, we chose to select the theta and alpha features and classify them into emotions. We calculated the entropy and energy of the theta and alpha components to evaluate which parameter provides better results when classified into emotions.

We calculated the entropy and energy using the wentropy and wenergy Matlab functions, respectively. These functions receive an array as a parameter. In this work, we used arrays with the theta and alpha features to estimate the entropy and energy.
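A sketch of this computation is shown below. The 'shannon' entropy type is an assumption; note that wenergy operates on the [C, L] decomposition pair returned by wavedec rather than on an arbitrary array, so the band energies are read from its detail-energy output.

```matlab
% Entropy of the reconstructed theta and alpha signals (type assumed).
entTheta = wentropy(theta, 'shannon');
entAlpha = wentropy(alpha, 'shannon');

% Relative energy per level from the decomposition of the same trial.
[Ea, Ed] = wenergy(C, L);            % Ea: approximation, Ed: details
eTheta = Ed(4);                      % level-4 details ~ theta band
eAlpha = Ed(3);                      % level-3 details ~ alpha band
```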

5.3.2. Signal characteristics classification

The second stage in EEG signal processing is the classification of the signals into signals of interest for a given application, using translation algorithms. Examples of translation algorithms include linear discriminant analysis, k-nearest neighbors, support vector machines, and artificial neural networks [19], among others.

Artificial neural networks have been widely used to classify different kinds of human information into human emotions. We chose this technique based on the literature, where good results with artificial neural networks are reported.

Artificial neural networks are computational learning models inspired by the biology of the human brain. These models consist of neurons interconnected by synapses. From a functional point of view, neural networks mimic the ability of the brain to learn and, ideally, can be trained to recognize any information, given a set of input data, by adjusting the synaptic weights. A properly trained network should, in principle, be able to apply its knowledge and respond appropriately to completely new inputs. The most common application of neural networks is supervised classification, which requires a set of training and test data. Since learning is performed on the training data, the mathematical formalization is based on these data.

In this work, the classification of the characteristics of the brain signals into emotions was performed by a neural network algorithm. A neural network consists of an input layer, hidden layers and an output layer. The input layer is composed of neurons that receive the input stimuli. The output layer is composed of neurons whose outputs form the network output. The hidden, or intermediate, layer is composed of neurons that perform the data processing of the network; it may consist of a single layer or several layers of neurons, depending on the complexity of the network. Figure 6 shows a neural network with two layers.

Figure 6.

Representation of a neural network

In a neural network there are several parameters to be defined, such as the number of hidden layers, the number of neurons in each layer of the network, and the training method. In this study, the choice of these parameters was based on the literature and on some experiments with the data from the database discussed in Section 5.1.

There are several methods to train a neural network. In this work, experiments were performed with three of them: the Levenberg-Marquardt, Bayesian regularization and resilient propagation algorithms. The Levenberg-Marquardt algorithm, despite being the fastest, did not produce good results, since this method is suited to nonlinear regression problems rather than pattern recognition. The worst results were obtained with the Bayesian regularization algorithm. As expected, the best results were obtained with the resilient propagation technique, which according to the literature is more suitable for pattern recognition.

After the experiments, the most appropriate neural network to classify the alpha and theta EEG characteristics into emotions was defined with the following parameters:

  • An input layer with one input signal. The neural network was trained and evaluated with the energy and entropy of the theta and alpha characteristics as input data; the four data types were used one at a time.

  • Three hidden layers with 40 neurons in each hidden layer.

  • An output layer with one network output, which can be one of the four emotions: positive/excited, positive/calm, negative/excited, and negative/calm.

  • The technique used for training the network was resilient propagation.

The neural network described above, sketched below, was used to classify the data from the database into human emotions. The results of this experiment are discussed in Section 6.
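A minimal Matlab sketch of this network follows; the numeric encoding of the four emotions on the single output neuron and the train/validation/test split are assumptions, since the source does not specify them.

```matlab
% Sketch of the network described above (Neural Network Toolbox).
x = featureValues;                   % 1 x N, one feature value per trial
t = emotionTargets;                  % 1 x N, assumed class encoding 1..4

net = feedforwardnet([40 40 40], 'trainrp');   % three hidden layers of 40,
                                               % resilient propagation
net.divideParam.trainRatio = 0.70;   % assumed data division
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;

[net, tr] = train(net, x, t);        % train, validate and test
y = round(net(x));                   % predicted emotion class per trial
```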

6. Results

We chose to use the EEG signals acquired from channel FP1 due to its location: as discussed previously, the prefrontal lobe of the brain is intrinsically related to human emotions.

We processed a total of 1280 trials of EEG signals from the thirty-two participants of our experiment. We selected the theta and alpha rhythms from the EEG signals and calculated the energy and entropy of both features. We applied energy and entropy as inputs for the neural network used to classify the brain signals into four emotional states. As discussed in Section 5.3.2, the neural network used in this work has just one input; therefore, we applied those parameters as input to the neural network one at a time.

First, we trained, validated, and tested the neural network with the energy calculated from the theta feature. Then, we used the entropy obtained from the theta rhythm as the input of the neural network. The same steps were performed with the energy and entropy calculated from the alpha feature. Finally, for each classified emotion, we estimated the mean and standard deviation of the results achieved over the thirty-two participants of the experiment.
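For clarity, this evaluation loop can be sketched as follows; trainAndTest is a hypothetical helper wrapping the training, validation and testing of the network of Section 5.3.2, returning one accuracy value per emotion.

```matlab
% Evaluate the four input features across all 32 participants.
features = {'thetaEnergy', 'thetaEntropy', 'alphaEnergy', 'alphaEntropy'};
acc = zeros(32, 4);                          % participants x emotions

for f = 1:numel(features)
    for p = 1:32
        acc(p, :) = trainAndTest(features{f}, p);  % hypothetical helper
    end
    meanAcc = mean(acc);                     % per-emotion mean (Tables 5-8)
    stdAcc  = std(acc);                      % per-emotion standard deviation
    fprintf('%s: mean = %s\n', features{f}, mat2str(meanAcc, 6));
end
```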

Table 5 presents the mean and standard deviation of the results of our experiments for the emotion positive/excited.

Emotion Positive/Excited
Feature Mean Standard Deviation
Theta (Energy) 78.125 % 19.572 %
Theta (Entropy) 90.625 % 7.7771 %
Alpha (Energy) 64.8438 % 27.7586 %
Alpha (Entropy) 95.5781 % 7.6855 %

Table 5.

Mean and standard deviation of the results related to the emotion positive/excited

One can observe in Table 5 that the best result in classifying EEG signals into the emotion positive/excited was achieved when we used the entropy calculated from the alpha features as the input for the neural network. A good result was also obtained when we applied the entropy based on the theta features.

Table 6 presents the mean and standard deviation of the results of our experiments for the emotion positive/calm.

Emotion Positive/Calm
Feature Mean Standard Deviation
Theta (Energy) 65 % 27.0006 %
Theta (Entropy) 90.625 % 11.8967 %
Alpha (Energy) 71.875 % 20.8586 %
Alpha (Entropy) 86.875 % 12.2967 %

Table 6.

Mean and standard deviation of the results related to the emotion positive/calm

One can observe in Table 6 that the best result in classifying EEG signals into the emotion positive/calm was achieved when we used the entropy calculated from the theta features as the input for the neural network. A good result was also obtained when we applied the entropy based on the alpha features.

Table 7 presents the mean and standard deviation of the results of our experiments for the emotion negative/calm.

Emotion Negative/Calm
Feature Mean Standard Deviation
Theta (Energy) 76.6667 % 19.5348 %
Theta (Entropy) 90 % 10.16 %
Alpha (Energy) 72.1875 % 26.7285 %
Alpha (Entropy) 87.125 % 8.7009 %

Table 7.

Mean and standard deviation of the results related to the emotion negative/calm

One can observe in Table 7 that the best result in classifying EEG signals into the emotion negative/calm was achieved when we used the entropy calculated from the theta features as the input for the neural network. A good result was also obtained when we applied the entropy based on the alpha features.

Table 8 presents the mean and standard deviation of the results of our experiments for the emotion negative/excited.

Emotion Negative/Excited
Feature Mean Standard Deviation
Theta (Energy) 74.9719 % 16.9485 %
Theta (Entropy) 91.4031 % 7.188 %
Alpha (Energy) 81.2469 % 12.5276 %
Alpha (Entropy) 93.7562 % 6.3564 %

Table 8.

Mean and standard deviation of the results related to the emotion negative/excited

One can observe in Table 8 that the best result in classifying EEG signals into the emotion negative/excited was achieved when we used the entropy calculated from the alpha features as the input for the neural network. A good result was also obtained when we applied the entropy based on the theta features.

Finally, we observed that by using the FP1 channel for EEG acquisition, wavelets as the algorithm to select characteristics from the EEG signals, and a neural network to classify those features into emotions, we can recognize at least four human emotions with good accuracy. Furthermore, the best results were achieved when we used the entropy calculated from the theta or alpha features as the input for the neural network.

7. Conclusions

The emotional state of a person defines their interaction with other people or objects. Therefore, the recognition of human emotions is becoming a concern in the development of systems that require human-machine interaction. The goal of recognizing human emotions is, for example, to make computer use easier and more enjoyable.

There are several sources of information that can assist in the recognition of emotions, such as facial expressions, voice and physiological signals, among others. In this study we implemented an emotion recognition system based on a BCI. We used a database of EEG signals acquired during experiments designed to induce emotions in the participants.

The database includes brain signals from thirty-two subjects, recorded from thirty-two channels according to the 10-10 international system. The database signals were pre-processed and the artifacts due to eye movements were removed. We chose to use just the signals from channel FP1, because of its location and to avoid wasting time processing unnecessary information.

We selected the theta and alpha characteristics with the wavelet algorithm, using the Daubechies db4 discrete wavelet transform. We calculated the energy and entropy parameters from the theta and alpha rhythms. The classification of these parameters into emotional states was accomplished with a neural network.

We observed that our approach achieves good results in recognizing emotions. When we considered our system based on theta features with the entropy as input for the neural network, we had 90.625 %, 90.625 %, 90 % and 91.4031 % accuracy for the emotions positive/excited, positive/calm, negative/calm, and negative/excited, respectively.

When we considered our system based on alpha features with the entropy as input for the neural network, we had 95.5781 %, 86.875 %, 87.125 % and 93.7562 % accuracy for the emotions positive/excited, positive/calm, negative/calm, and negative/excited, respectively.

We recognized four different kinds of emotions based on the bi-dimensional approach: positive/excited, positive/calm, negative/excited, and negative/calm. The best result we achieved was 95.5781 %, obtained when classifying EEG signals into the emotion positive/excited using the entropy calculated from the alpha characteristics.

Therefore, we conclude that the combination of wavelet and neural network algorithms is a good choice for classifying emotions in BCI-based emotion recognition systems. Furthermore, FP1 as the signal acquisition channel was a good choice, based on the results achieved in this work.

According to [20], some individuals have their theta features more active than their alpha features during the feeling of emotions. In other cases the opposite happens, i.e., the subjects have the alpha rhythms more active than the theta rhythms.

As future work, we plan to improve our results by analyzing which EEG feature is more active during the feeling of emotion for each participant of our experiment. Based on this evaluation, we will insert a step to identify the feature that is more active during the experiments for each user, and adapt our emotion recognition system to receive the more significant rhythm for each participant.

References

  1. Scherer K. What are emotions? And how can they be measured? Social Science Information, 44(4):695-729, 2005.
  2. Calvo R. A., D'Mello S. Affect detection: an interdisciplinary review of models, methods, and their applications. IEEE Transactions on Affective Computing, 1(1):18-37, January 2010.
  3. McDaniel B., D'Mello S., King B., Chipman P., Tapp K., Graesser A. Facial features for affective state detection in learning environments. In Proceedings of the 29th Annual Meeting of the Cognitive Science Society, 2007.
  4. Lee C. M., Narayanan S. S. Toward detecting emotions in spoken dialogs. IEEE Transactions on Speech and Audio Processing, 13(2):293-303, 2005.
  5. Bernhardt D. Emotion inference from human body motion. Technical Report 787, Computer Laboratory, University of Cambridge, October 2012.
  6. Liu C., Conn K., Sarkar N., Stone W. Physiology-based affect recognition for computer-assisted intervention of children with autism spectrum disorder. International Journal of Human-Computer Studies, 66:662-677, September 2008.
  7. Schaaff K., Schultz T. Towards an EEG-based emotion recognizer for humanoid robots. In RO-MAN 2009: The 18th IEEE International Symposium on Robot and Human Interactive Communication, pages 792-796. IEEE, September 2009.
  8. Wolpaw J. R., Birbaumer N., McFarland D. J., Pfurtscheller G., Vaughan T. M. Brain-computer interfaces for communication and control. Clinical Neurophysiology, 113(6):767-791, 2002.
  9. Schlögl A., Brunner C. BioSig: a free and open source software library for BCI research. Computer, 41(10):44-50, 2008.
  10. Wang J., Yan N., Liu H., Liu M., Tai C. Brain-computer interfaces based on attention and complex mental tasks. In Proceedings of the 1st International Conference on Digital Human Modeling (ICDHM'07), pages 467-473. Springer-Verlag, Berlin, Heidelberg, 2007.
  11. Liu Y., Sourina O., Nguyen M. K. Real-time EEG-based emotion recognition and its applications. In Transactions on Computational Science XII, pages 256-277. Springer-Verlag, Berlin, Heidelberg, 2011.
  12. Suprijanto, Sari L., Nadhira V., Merthayasa IGN., Farida I. M. Development system for emotion detection based on brain signals and facial images. World Academy of Science, Engineering and Technology, 50, 2009.
  13. Bos D. O. EEG-based emotion recognition: the influence of visual and auditory stimuli. University of Twente, 2006.
  14. Kübler A., Müller K. R. An introduction to brain-computer interfacing. In Toward Brain-Computer Interfacing, pages 1-25. The MIT Press, 2007.
  15. Savran A., Ciftci K., Chanel G., Mota J. C., Viet L. H., Sankur B., Akarun L., Caplier A., Rombaut M. Emotion detection in the loop from brain signals and facial images. In Proceedings of the eNTERFACE 2006 Workshop, Dubrovnik, July 2006.
  16. Murugappan M., Nagarajan R., Yaacob S. Appraising human emotions using time frequency analysis based EEG alpha band features. In Innovative Technologies in Intelligent Systems and Industrial Applications (CITISIA 2009), pages 70-75, July 2009.
  17. DEAP dataset: a dataset for emotion analysis using EEG, physiological and video signals. http://www.eecs.qmul.ac.uk/mmv/datasets/deap/ (accessed 23 February 2012).
  18. Koelstra S., Mühl C., Soleymani M., Lee J.-S., Yazdani A., Ebrahimi T., Pun T., Nijholt A., Patras I. DEAP: a database for emotion analysis using physiological signals. IEEE Transactions on Affective Computing, 3(1):18-31, January-March 2012.
  19. Jahankhani P., Kodogiannis V., Revett K. EEG signal classification using wavelet feature extraction and neural networks. In Proceedings of the IEEE John Vincent Atanasoff 2006 International Symposium on Modern Computing, pages 120-124, Washington, DC, USA, 2006. IEEE Computer Society.
  20. Benbadis S., Husain A., Kaplan P., Tatum W. Handbook of EEG Interpretation. Demos Medical Publishing, 2007.
