
Classification of Emotional Stress Using Brain Activity

Written By

Seyyed Abed Hosseini and Mohammad Bagher Naghibi-Sistani

Submitted: 26 October 2010 Published: 23 August 2011

DOI: 10.5772/18294

From the Edited Volume

Applied Biomedical Engineering

Edited by Gaetano D. Gargiulo and Alistair McEwan


1. Introduction

Stress and emotion are complex phenomena that play significant roles in the quality of human life. Emotion plays a major role in motivation, perception, cognition, creativity, attention, learning and decision-making (Seymour et al., 2008). A major problem in understanding emotion is agreeing on its definition. In fact, even psychologists have difficulty agreeing on what is considered an emotion and how many types of emotions exist. Kleinginna gathered and analyzed 92 definitions of emotion from the literature of the time and concluded that emotion is a complex set of interactions among subjective and objective factors, mediated by neural/hormonal systems (Horlings, 2008). In fact, emotion is a subcategory of stress.

A great deal of research has been undertaken on the assessment of stress and emotion over recent years. Most studies in the domain of stress and emotional states use peripheral signals such as respiratory rate, Skin Conductance (SC), Blood Volume Pulse (BVP) (Zhai et al., 2006) and temperature (McFarland, 1985). Most previous research has investigated the use of EEG and peripheral signals separately, but little attention has been paid so far to the fusion of EEG and peripheral signals (Chanel, 2009; Chanel et al., 2009; Hosseini, 2009).

In one study, Aftanas et al. (2004) showed significant differentiation of arousal based on EEG data collected from participants watching high, intermediate and low arousal images. Chanel (2009) asked participants to remember past emotional episodes and obtained an accuracy of 88% using EEG for 3 categories with a Support Vector Machine (SVM) classifier. Hosseini et al. (2009) used an acquisition protocol based on the induction of visual images to record EEG and peripheral signals under 2 categories of emotional stress state (calm-neutral and negatively excited), and obtained an accuracy of 78.3% using EEG signals with an SVM classifier. Kim et al. (2004) used a combination of music and story as stimuli with 50 participants to introduce a user-independent system; the results showed accuracies of 78.4% and 61% for 3 and 4 categories of emotions, respectively. Takahashi (2004) used film clips to stimulate participants with five different emotions, resulting in 42% of correctly identified patterns. Schaaff & Schultz (2009) used pictures from the International Affective Picture System (IAPS) to induce three emotional states: pleasant, neutral, and unpleasant. They obtained an accuracy of 66.7% for three classes of emotion, based solely on EEG signals.

The aim of this chapter is to produce a new fusion between EEG and peripheral signals for emotional stress recognition. Since the ElectroEncephaloGram (EEG) is a reflection of brain activity and is widely used in clinical diagnosis and biomedical research, it is used as the main signal. Brain waves arise from the activity of brain cells and have a frequency range of 1 to 100 Hz. Researchers have found the following frequency bands to be of interest when interpreting the EEG signal: delta (1-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz) and gamma (> 30 Hz) (Ko et al., 2009).

One weakness of recent studies is the lack of a principled selection of the channels from which brain signals are recorded. In this study, in order to choose the proper EEG channels, a cognitive model of the brain under emotional stress has been used (Hosseini et al., 2010a).

Every standard test in stress and emotion assessment has its own advantages and disadvantages (Hosseini, 2009). An efficient acquisition protocol was designed to acquire the EEG signals in five channels and peripheral signals such as Blood Volume Pulse (BVP), Skin Conductance (SC) and respiration, under picture induction of two states (calm-neutral and negatively excited). The visual stimuli were selected from a subset of the IAPS database (Lang et al., 2005).

An important issue in every cognitive system is the correct labelling of the data. Here, labelling means the assessment of the data using a series of visual criteria used by psychologists and a proposed cognitive system for peripheral signals, in order to verify that the data correlate closely with the psychological state of the subject. In this kind of research, putting the subject in the desired psychological state is very important. The process of labelling the EEG signals consists of three stages: first, self-assessment; second, the qualitative analysis of the peripheral signals; and third, the quantitative analysis of the peripheral signals. As a result, this new fusion link between EEG and peripheral signals is more robust than the separate signals.

2. Cognitive model of emotional stress

Cognitive models (also termed agent architectures) aim to emulate cognitive processing such as attention, learning, perception, and decision-making, and are used by cognitive scientists to advance understanding of the mechanisms and structures mediating cognition (Hudlicka, 2005). In the case of fear conditioning leading to emotional stress, several hypotheses have been proposed to explain how neural changes in the different components of a circuit lead to the observed behavioural responses. In mammals, a part of the brain called the limbic system is mainly responsible for emotional processes (Xiang, 2007). We describe the development of a cognitive model of the limbic system based on these concepts. The main components of the limbic system involved in emotional stress processing are the amygdala, orbito-frontal cortex, thalamus, sensory cortex, hypothalamus, hippocampus and some other important areas. In this section, we briefly describe these components and their tasks.

The amygdala, a small structure in the temporal lobes, plays a central role in emotion. It is generally accepted that the amygdala is crucial for the acquisition and expression of conditioned fear responses (for a review, see (LeDoux, 1996)). The amygdala and orbito-frontal cortex receive highly analyzed input from the sensory cortex. The amygdala, specifically its lateral nucleus, receives inputs from all the main sensory systems, as well as from higher-order association areas of the cortex and the hippocampus (Armony et al., 1997). Sensory information reaches the amygdala from the thalamus by way of two parallel pathways: the direct pathway reaches the amygdala quickly, but it is limited in its information content, as the thalamic cells of origin are not very precise stimulus discriminators. The cortical pathway, on the other hand, is slower but capable of providing the amygdala with a much richer representation of the stimulus (Armony et al., 1997). The sensory cortex lies next to the thalamus and receives its input through this component. The orbito-frontal cortex is another component, which interacts reciprocally with the amygdala and also plays a role in the reinforcement learning of emotions.

The term prefrontal cortex refers to the very front of the brain, behind the forehead and above the eyes. It appears to play a critical role in the regulation of emotion and behaviour by anticipating the consequences of our actions, and it may play an important role in delayed gratification by maintaining emotions over time and organizing behaviour toward specific goals (Xiang, 2007). The locus ceruleus contains a large proportion of the noradrenalin cell bodies found in the brain and is a key brain stem region involved in arousal (Steimer, 2002). The Bed Nucleus of the Stria Terminalis (BNST) is considered part of the extended amygdala. It appears to be a centre for the integration of information originating from the amygdala and the hippocampus, and is clearly involved in the modulation of the neuroendocrine stress response (Steimer, 2002). The hypothalamus (ParaVentricular Nucleus (PVN) and Lateral Hypothalamus (LH)) lies below the thalamus and is believed to have various functions that regulate the endocrine system, the autonomic nervous system and primary behavioural survival states (Steimer, 2002; Schachter, 1970).
The Hypothalamic-Pituitary-Adrenal (HPA) axis ultimately regulates the secretion of glucocorticoids, which are adrenocortical steroids that act on target tissues throughout the body in order to preserve homeostasis during stress (Ramachandran, 2002). The hypothalamus releases Corticotrophin-Releasing Hormone (CRH), which travels to the anterior pituitary gland, where it triggers the release of AdrenoCorticoTropic Hormone (ACTH); ACTH, along with β-endorphin, is released into the bloodstream in response to stress. ACTH travels in the blood to the adrenal glands, where it stimulates the production and release of glucocorticoids such as cortisol. Cortisol feedback at the hypothalamus reduces CRH release; at the pituitary, it inhibits ACTH release; and at the adrenal gland, it inhibits further cortisol release. Cortisol feedback at the hippocampus also inhibits CRH secretion from the hypothalamus. The release of all these chemicals causes important changes in the body's ability to respond to threats, such as increased energy, heart rate and blood sugar, resulting in increased arousal and pain relief (Carey, 2006). In order to choose the best channels for the EEG signals, we implemented a new cognitive model (for a review of the complete model, see (Hosseini et al., 2010a; Hosseini et al., 2010d)). The details are shown in Fig. 1 (Hosseini et al., 2010a).

3. Acquisition protocol

3.1. Stimuli

Every standard test in stress and emotional state assessment has its own advantages and disadvantages (Hosseini, 2009). Most experiments that measure emotion from EEG signals use pictures from the International Affective Picture System (IAPS). The IAPS has been evaluated by several American participants on two dimensions of nine points each (1-9). The use of the IAPS allows better control of the emotional stimuli and simplifies the experimental design (Horlings, 2008).

Figure 1.

A general cognitive map of the brain in the stress state

In this study, we chose the picture presentation test based on the closeness of its assessment to our aims. The stimuli to elicit the target emotions (calm-neutral and negatively excited) were selected from the pictures at http://www.unifesp.br/dpsicobio/adap/exemplos_fotos.htm. The valence dimension ranges from negative to positive, and the arousal dimension ranges from calm to excited. Information about both dimensions has been found to be present in EEG signals, which shows that emotion assessment from EEG signals should be possible.

The participant sits in front of a portable computer screen in a relatively bare room, and the images inform him of the specific emotional event he has to think of. Each experiment consists of 8 trials. Each stimulus consists of a block of 4 pictures, which ensures stability of the emotion over time. Each picture is displayed for 3 seconds, leading to a total of 12 seconds per block. Prior to displaying the images, a dark screen with an asterisk in the middle is shown for 10 seconds to separate the trials and to attract the participant's attention. The detail of each trial is shown in Fig. 2.

Figure 2.

The protocol of data acquisition

This epoch duration was chosen to avoid participant fatigue. As shown in Fig. 3, each presentation cycle started with a black fixation cross, which was shown for ten seconds; after that, pictures were presented for twelve seconds.

Figure 3.

Process of picture presentation

3.2. Subjects

Fifteen healthy right-handed male volunteers between 20 and 24 years of age took part in the study. Most subjects were students from the biomedical engineering department of Islamic Azad University, Mashhad Branch. Each participant was examined with a dichotic listening test to identify the dominant hemisphere (Sadock, 1998; Hosseini, 2009). All subjects had normal or corrected vision, and none of them had neurological disorders; these checks were made to eliminate differences between subjects. All participants gave written informed consent. Each participant was then given a personal-details questionnaire. During the pre-test, several questionnaires were evaluated in order to choose the best psychological input for starting the protocol phase; the selected test was the State-Trait Anxiety Inventory (STAI). At the end of the experiment, participants were asked to fill in a questionnaire about the experiment and give their opinions (Hosseini, 2009), because it is possible that the emotion a participant experiences differs from the expected one. For that reason, the participant was asked to rate his emotion in a self-assessment.

3.3. Procedure

We used a 10-channel Flexcom Infiniti device with 14-bit resolution for data acquisition (http://www.thoughttechnology.com/flexinf.htm). It is connected to a PC through the USB port, and an optical cable connects it to the device to prevent any electrical charge from reaching the participant. The Flexcom Infiniti hardware only worked well with the accompanying software; two programs were available, Biograph Infiniti Acquisition and ezscan. The central activity is monitored by recording the EEG. The peripheral activity is assessed using the following sensors: a skin conductance sensor to measure sudation, a respiration belt to measure abdomen expansion, and a plethysmograph to record blood volume pulse. We recorded SC by positioning two dedicated electrodes on the tops of the left index and middle fingers. The BVP and SC signals were acquired at 2048 Hz and then down-sampled to 128 Hz, and the respiration signal was acquired at 256 Hz and then down-sampled to 128 Hz; the down-sampling was performed to reduce the computational load. EEG was recorded using electrodes placed at 5 positions. The scalp EEG was obtained at locations FP1, FP2, T3, T4 and Pz, as defined by the international 10-20 system, using Ag/AgCl electrodes. In order to obtain a reference signal that is (as much as possible) free from brain activity, two electrodes were attached to the participant's earlobes, and the average of A1 and A2 was used as the reference. The impedance of all electrodes was kept below 5 kΩ. The EEG was sampled at 256 Hz. Each recording lasted about 3 minutes. More details of the data acquisition protocol can be found in (Hosseini, 2009).

4. Labeling process of EEG signals

An important issue in every cognitive system is the correct labelling of the data. In order to choose the EEG signals best correlated with emotional stress, we implemented a new emotion-related signal recognition system, which has not been studied so far (Hosseini, 2009; Hosseini et al., 2010c). We recorded peripheral signals concomitantly in order first to recognize the correlated emotional stress state and then to label the correlated EEG signals. In other words, we used the peripheral signals as a tutor for the labeling system.

The process of labeling the EEG signals consists of three stages: first, self-assessment; second, the qualitative analysis of the peripheral signals; and third, the quantitative analysis of the peripheral signals. Fig. 4 shows the different stages of the process. After the experiment, there was also a self-assessment stage, which is a good way to gauge the emotional stimulation "level" of the subject, because emotions are known to be very subjective and dependent on previous experience (Savran et al., 2006). In this way, we are able to get a general idea of the quality of the data, i.e. whether the data are good or bad.

One kind of such data is respiration. Emotional stress processes influence respiration (Ritz et al., 2002; Wilhelm et al., 2006). Slow respiration, for example, is linked to relaxation, while an irregular rhythm, quick variations, and cessation of respiration correspond to more aroused emotions like anger or fear. Another is skin conductance, which measures the conductivity of the skin. Since sweat gland activity is known to be controlled by the sympathetic nervous system, Electro Dermal Activity (EDA) has become a common source of information for monitoring the Autonomic Nervous System (ANS). SC increases if the skin is sweaty, for example when one is experiencing emotions such as stress. Moreover, blood pressure and Heart Rate Variability (HRV) are variables that correlate with defensive reactions, pleasantness of a stimulus, and basic emotions. We obtained the Heart Rate (HR) signal from the BVP signal recorded by the plethysmograph; a method to determine HR from a BVP signal is proposed in (Wan & Woo, 2004). Analysis of HRV provides an effective way to investigate the different activities of the ANS: an increase of HR can be due to an increase of sympathetic activity or a decrease of parasympathetic activity. Two frequency bands (of the HR spectrum) are generally considered for the HR signal: a Low Frequency (LF) band ranging from 0.05 Hz to 0.15 Hz and a High Frequency (HF) band including frequencies between 0.15 Hz and 1 Hz (Hosseini, 2009). In order to analyze the peripheral signals quantitatively, we need to pre-process them to remove environmental noise by applying filters. The peripheral signals were filtered by moving average filters to remove noise.
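As an illustration of the quantitative analysis of HRV described above, the following Python sketch estimates LF (0.05-0.15 Hz) and HF (0.15-1 Hz) band power from a heart-rate series using Welch's method. It is only a minimal example under assumed conditions (an evenly resampled HR series, a hypothetical resampling rate `fs`, and the helper name `band_power`); it is not the exact pipeline used in the chapter.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, f_lo, f_hi):
    """Integrate the Welch power spectral density between f_lo and f_hi (Hz)."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 256))
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return np.trapz(psd[mask], freqs[mask])

# hr: heart-rate series derived from the BVP signal, evenly resampled to fs Hz
fs = 4.0                                  # assumed resampling rate for HR
hr = np.random.randn(600) + 70.0          # placeholder data, 150 s of HR
lf = band_power(hr, fs, 0.05, 0.15)       # low-frequency HRV power
hf = band_power(hr, fs, 0.15, 1.0)        # high-frequency HRV power
lf_hf_ratio = lf / hf                     # sympathovagal balance indicator
```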

Figure 4.

Labeling process of EEG signals

We used a common set of feature values for the analysis of the peripheral signals (Table 1) (Chanel et al., 2009; Hosseini, 2009). The respiration features are from the time and frequency domains, the skin conductance and blood volume pulse features are from the time domain, and the heart rate variability features are from the time and frequency domains and the fractal dimension.

Table 1.

Features Extracted from peripheral signals

The total number of features is 10 + 9 + 15 + 6 = 40. After extracting the features, we need to classify them using a classifier. There are several approaches to applying the SVM to multiclass classification (http://www.kernel-machines.org/software.html). The LibSVM MATLAB toolbox (Version 2.9) was used as an implementation of the SVM algorithms (Chang & Lin, 2009). In this study, the one-vs.-all method was implemented: two SVMs, one corresponding to each of the two emotional states, were used. The ith SVM was trained with all of the training data of the ith class with positive labels and all other training data with negative labels.

In the emotional stress recognition process, the feature vector was simultaneously fed into all SVMs, and the output of each SVM was examined by the decision logic algorithm to select the most likely emotional stress state (Fig. 5). A Gaussian Radial Basis Function (RBF) was used as the kernel function in the SVM classifier; the RBF projects the data to a higher dimension.
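The chapter's implementation used the LibSVM MATLAB toolbox; the following Python sketch (using scikit-learn, with placeholder data and assumed variable names) only illustrates the one-vs.-all training scheme and the decision logic described above, where the class whose SVM returns the largest decision value is selected.

```python
import numpy as np
from sklearn.svm import SVC

# X: feature matrix (n_trials x 40 peripheral features), y: labels in {0, 1}
# (0 = calm-neutral, 1 = negatively excited); placeholder data for illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 40))
y = rng.integers(0, 2, size=60)

# One-vs.-all scheme: one RBF-kernel SVM per emotional state
svms = []
for cls in (0, 1):
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X, (y == cls).astype(int))   # current class vs. the rest
    svms.append(clf)

def decide(x):
    """Decision logic: feed the feature vector to every SVM and keep the
    class whose decision function value is largest."""
    scores = [clf.decision_function(x.reshape(1, -1))[0] for clf in svms]
    return int(np.argmax(scores))

predicted_state = decide(X[0])
```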

Figure 5.

Decision logic algorithm

A confusion matrix will also be used to determine how the samples are classified into the different classes. A confusion matrix gives the percentage of samples belonging to class ωi that are classified as class ωj. The accuracy can be retrieved from the confusion matrix by summing its diagonal elements Pi,i weighted by the prior probability P(ωi) of occurrence of the class ωi. The confusion matrices of the SVM used for classification of the peripheral signals under the two emotional stress states are given in Table 2.

Table 2.

The confusion matrices across participants using peripheral signals with the RBF kernel of the SVM

The results show that the classification accuracy with the peripheral signals was 76.95% for the two categories, using the SVM classifier with the RBF kernel.

The number of rejected (badly classified) trials is lower than the number of correctly classified ones; the percentage of rejected trials is 11%. This stage was used to select suitable segments of the EEG signal, improving the accuracy of signal labeling according to the emotional stress state. More details of the labeling process can be found in (Hosseini, 2009).

5. Analysis of EEG signals

5.1. Pre-processing

Before analysis, we first removed the data segments containing obvious eye blinking. We need to pre-process the EEG signals in order to remove environmental noise and drifts. The data were filtered using a band-pass filter in the frequency band of 0.5-60 Hz. Although we studied EEG components only up to 30 Hz, we included the 30-60 Hz band because twice the maximum frequency of interest is needed when analyzing the data using HOS (Hosseini, 2009). The signals were filtered using the "filtfilt" function from the MATLAB Signal Processing Toolbox, which processes the input signal in both the forward and reverse directions. This function performs zero-phase filtering; preserving the phase information of the signal is very important for higher order spectra (Hosseini, 2009). In addition, a notch filter at 50 Hz was applied to discard the effect of the power lines.
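A minimal Python sketch of this pre-processing stage is given below, assuming SciPy filters as stand-ins for the MATLAB tools used in the study; the Butterworth order and notch quality factor are illustrative assumptions, not values reported in the chapter.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 256.0                       # EEG sampling rate used in this study (Hz)

# Zero-phase band-pass filter, 0.5-60 Hz (filtfilt preserves phase information)
b_bp, a_bp = butter(N=4, Wn=[0.5, 60.0], btype="bandpass", fs=fs)

# Notch filter at 50 Hz to suppress power-line interference
b_n, a_n = iirnotch(w0=50.0, Q=30.0, fs=fs)

def preprocess(eeg):
    """Apply the band-pass and notch filters in both directions (zero phase)."""
    filtered = filtfilt(b_bp, a_bp, eeg)
    return filtfilt(b_n, a_n, filtered)

eeg = np.random.randn(5 * int(fs))        # placeholder 5-second EEG segment
clean = preprocess(eeg)
```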

5.2. Quality aspect

The concept of phase space was first introduced by W. Gibbs in 1901. In this space, all possible states of a system are represented, with each possible state corresponding to one unique point in the phase space; together, these points trace out the trajectory of the system. A sketch of the phase portrait can give qualitative information about the dynamics of the system. The method is based on numerically reconstructing the EEG dynamics: the phase trajectory portrait is drawn in the phase space as time varies, together with the time course of the state variables (Jiu-ming et al., 2004). Chaotic behaviour is then judged through comparison, analysis and integration of these portraits. In the phase space, a closed curve corresponds to periodic motion, while chaotic motion corresponds to a trajectory that never closes (a strange attractor) and wanders within some bounded region, as shown in Fig. 6.
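The chapter does not specify how its phase portraits were generated; the sketch below is one common way to draw such a portrait from a scalar series, using time-delay embedding. The surrogate signal and the embedding parameters are illustrative assumptions (the delay τ = 6 is borrowed from the correlation-dimension settings reported later).

```python
import numpy as np
import matplotlib.pyplot as plt

def delay_embed(x, dim, tau):
    """Reconstruct the phase-space trajectory of a scalar series by time-delay
    embedding: rows are points [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

x = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * np.random.randn(2000)
traj = delay_embed(x, dim=2, tau=6)       # 2-D embedding for visualisation

plt.plot(traj[:, 0], traj[:, 1], lw=0.5)  # 2-D phase portrait
plt.xlabel("x(t)"); plt.ylabel("x(t + tau)")
plt.show()
```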

Figure 6.

Phase space state portraits for T3 channel of EEG signal for one participant in negative emotional stress state

5.3. Feature extraction

Feature extraction is the process of extracting useful information from the signal. We use a set of feature values for the brain signals, extracted for each channel of the EEG. Since brain signals essentially have a chaotic and nonlinear behaviour, we performed emotional stress state assessment using both linear and nonlinear characteristics. Nonlinear measures have received the most attention in comparison with the previously mentioned measures, such as time-domain, frequency-domain and other linear features. The set of nonlinear features used includes the fractal dimension, approximate entropy and correlation dimension of the data.

5.3.1. Fractal dimension

Fractal dimension (FD) analysis is frequently used in biomedical signal processing, including EEG analysis. Higuchi's algorithm, unlike many other methods, requires only short time intervals to calculate the fractal dimension. This is very advantageous, because the EEG signal remains stationary only during short intervals and because in EEG analysis it is often necessary to consider short, transient events.

In Higuchi's algorithm (Higuchi, 1988), k new time series are constructed from the signal x(1), x(2), …, x(N) under study:

$$x_m^k = \left\{ x(m),\, x(m+k),\, \ldots,\, x\!\left(m + \left\lfloor \tfrac{N-m}{k} \right\rfloor k\right) \right\}, \qquad m = 1, 2, \ldots, k \tag{1}$$

where m = 1, 2, …, k indicates the initial time value and k indicates the discrete time interval between points. For each of the k time series x_m^k, the length L_m(k) is computed by:

$$L_m(k) = \frac{1}{k} \left[ \sum_{i=1}^{\lfloor (N-m)/k \rfloor} \left| x(m+ik) - x(m+(i-1)k) \right| \right] \frac{N-1}{\lfloor (N-m)/k \rfloor \, k} \tag{2}$$

where N is the total length of the data sequence x and (N-1)/{⌊(N-m)/k⌋ k} is a normalization factor. The average length L(k) is computed as the mean of the k lengths L_m(k) for m = 1, 2, …, k. This procedure is repeated for each k ranging from 1 to k_max, obtaining an average length for each k. In the curve of ln(L(k)) versus ln(1/k), the slope of the least-squares linear best fit is the estimate of the fractal dimension.

In this research, the best results for estimating the FD of the EEG were obtained with kmax = 10, a rectangular window of size N = 512 samples (2 seconds) and a window overlap of 0%.
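A compact Python sketch of Higuchi's algorithm, with the parameters just reported (k_max = 10, 512-sample windows), is given below; the placeholder segment and the least-squares fit over all k values are simplifying assumptions.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Estimate the fractal dimension of series x with Higuchi's algorithm:
    build k down-sampled curves, average their lengths L(k), and fit the slope
    of ln L(k) versus ln(1/k)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    ln_k, ln_L = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(1, k + 1):                      # initial time m = 1..k
            idx = np.arange(m - 1, N, k)               # x(m), x(m+k), ...
            n_i = len(idx) - 1                         # number of increments
            if n_i < 1:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / (n_i * k)                 # normalization factor
            lengths.append(dist * norm / k)
        ln_k.append(np.log(1.0 / k))
        ln_L.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(ln_k, ln_L, 1)               # least-squares fit
    return slope

segment = np.random.randn(512)      # 2-second EEG window at 256 Hz (placeholder)
fd = higuchi_fd(segment, k_max=10)  # parameters as reported in the text
```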

5.3.2. Correlation dimension

Correlation dimension (D2) is one of the most widely used measures of a chaotic process. In this research, we used the Grassberger-Procaccia Algorithm (GPA) for estimating D2 (Grassberger & Procaccia, 1983). The choice of an appropriate time delay τ and embedding dimension m is important for successfully reconstructing the attractor from finite data. The idea is to construct a function C(r) that gives the probability that two arbitrary points on the orbit are closer together than r, where r is the radius of a sphere in the multidimensional space. This is done by calculating the separation between every pair of the N data points and sorting them into bins of width dr proportional to r. More precisely, the GPA computes the correlation integral C(r) given by,

$$C(r) = \frac{1}{N(N-1)} \sum_{i=1}^{N} \sum_{\substack{j=1 \\ j \neq i}}^{N} \theta\!\left( r - \left\| V(j) - V(i) \right\| \right) \tag{3}$$

where ||V(j)-V(i)|| is the distance between the points V(j) and V(i) and θ(·) is the Heaviside function. D2 is estimated as the slope of the log(C(r)) vs. log(r) graph as follows:

$$D_2 = \lim_{r \to 0} \frac{\log C(r)}{\log r} \tag{4}$$

Figure 7.

A schematic graph of the correlation dimension plotted as a function of the embedding dimension. When the embedding dimension is equal to or greater than twice the dimension of the state-space attractor (dsat), the correlation dimension becomes independent of m. The correlation dimension of the attractor in this case is about 8.

We calculated D2 with dsat values varying from 2 to 10 for all the subjects. It can be seen that D2 saturates after the embedding dimension of 7 (Fig. 7). Therefore, we have chosen dsat=8 for constructing the embedding space and estimation of the invariants (In the test, m = 8 and τ = 6).

The determination is based on calculating the relative number of pairs of points in the phase-space set that are separated by a distance less than r. For a self-similar attractor, the local scaling exponent is constant, and this region is called the scaling region. This scaling exponent can be used as an estimate of the correlation dimension. If, for dsat = 8, C(N, r) is plotted against r on a log-log scale, the correlation dimension is given by the slope of the log(C(r)) vs. log(r) curve over a selected range of r, and the slope of this curve in the scaling region is estimated by least-squares fitting (Fig. 8).
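The following Python sketch illustrates the Grassberger-Procaccia estimate with the embedding parameters reported above (m = 8, τ = 6). The radius grid, the placeholder segment and the fit over all positive values of C(r) (rather than a hand-picked scaling region) are simplifying assumptions.

```python
import numpy as np

def correlation_integral(x, m=8, tau=6, n_radii=20):
    """Grassberger-Procaccia correlation integral C(r) for a delay-embedded
    series (embedding dimension m, delay tau), using the maximum norm."""
    n = len(x) - (m - 1) * tau
    V = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    # pairwise maximum-norm distances between embedded points
    d = np.max(np.abs(V[:, None, :] - V[None, :, :]), axis=-1)
    d = d[np.triu_indices(n, k=1)]                 # each pair counted once
    radii = np.logspace(np.log10(d[d > 0].min()), np.log10(d.max()), n_radii)
    C = np.array([2.0 * np.sum(d < r) / (n * (n - 1)) for r in radii])
    return radii, C

x = np.random.randn(512)                           # placeholder 2-second segment
r, C = correlation_integral(x, m=8, tau=6)
good = C > 0
# D2 is the slope of log C(r) versus log r in the scaling region
D2 = np.polyfit(np.log(r[good]), np.log(C[good]), 1)[0]
```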

Figure 8.

A plot of log (C(r)) versus log(r) for logistic map data

5.3.3. Approximate entropy

Pincus introduced the idea of approximate entropy (ApEn) in 1991, and it is a useful complexity measure for biological time series data (Pincus, 1991). ApEn originates from nonlinear dynamics. It is a statistical instrument initially designed to be applied to finite-length, noisy time series; it is scale invariant and model independent, evaluates both dominant and subordinate patterns in the data, and discriminates series for which clear feature recognition is difficult. Notably, it detects changes in underlying episodic behaviour that are not reflected in peak occurrences or amplitudes. To clarify the concept of ApEn, we describe the definition step by step as follows. Let the original data be <X(n)> = x(1), x(2), …, x(N), where N is the total number of data points. The calculation of the ApEn of a signal of finite length is performed as follows. First, fix a positive integer m and a positive real number rf. Next, from the signal x, form the N-m+1 vectors defined by (5).

$$X_m(i) = \{ x(i),\, x(i+1),\, \ldots,\, x(i+m-1) \}, \qquad i = 1, 2, \ldots, N-m+1 \tag{5}$$

The quantity C_i^m(rf) is calculated as in (6).

$$C_i^m(r_f) = \frac{\text{number of } j \text{ such that } d[X_m(i), X_m(j)] \le r_f}{N-m+1}, \qquad j = 1, 2, \ldots, N-m+1 \tag{6}$$

where the distance between the vectors X_m(i) and X_m(j) is defined as in (7).

$$d[X_m(i), X_m(j)] = \max_{k=0,1,\ldots,m-1} \left| x(i+k) - x(j+k) \right| \tag{7}$$

Next, the quantity φ^m(rf) is calculated as in (8).

$$\varphi^m(r_f) = \frac{1}{N-m+1} \sum_{i=1}^{N-m+1} \ln C_i^m(r_f) \tag{8}$$

Increase the dimension to m+1, repeat the preceding steps and find φ^{m+1}(rf). Finally, ApEn is defined as in (9).

$$\mathrm{ApEn}(m, r_f) = \lim_{N \to \infty} \left[ \varphi^m(r_f) - \varphi^{m+1}(r_f) \right] \tag{9}$$

In actual operation, the number of data points is limited. When the data length is N, the result obtained through the above steps is the estimate of ApEn, which can be denoted as (10).

$$\mathrm{ApEn}(m, r_f, N) = \varphi^m(r_f) - \varphi^{m+1}(r_f) \tag{10}$$

Obviously, the value of the estimate depends on m and rf. The parameter rf corresponds to an a priori fixed distance between neighbouring trajectory points; therefore, rf can be viewed as a filtering level, and the parameter m is the embedding dimension determining the dimension of the phase space. As suggested by Pincus, rf is chosen according to the signal's standard deviation (SD); in this chapter we use the values rf = 0.2 SD and m = 2, with the SD taken over the signal segment under consideration.
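A direct Python transcription of the step-by-step definition above, with m = 2 and rf = 0.2 SD as stated, is sketched below; the placeholder segment and the inclusion of self-matches (the standard convention, which keeps the logarithms finite) are assumptions of the sketch.

```python
import numpy as np

def approximate_entropy(x, m=2, rf_factor=0.2):
    """Approximate entropy ApEn(m, rf, N) with rf = 0.2 * SD of the segment,
    following equations (5)-(10)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    rf = rf_factor * np.std(x)

    def phi(mm):
        # form the N-mm+1 embedded vectors X_mm(i)
        emb = np.array([x[i : i + mm] for i in range(N - mm + 1)])
        # Chebyshev (maximum) distance between every pair of vectors, eq. (7)
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
        # C_i^mm(rf): fraction of vectors within distance rf of vector i, eq. (6)
        C = np.sum(dist <= rf, axis=1) / (N - mm + 1)
        return np.mean(np.log(C))                      # eq. (8)

    return phi(m) - phi(m + 1)                         # eq. (10)

segment = np.random.randn(512)                    # 2-second EEG window (placeholder)
apen = approximate_entropy(segment, m=2)          # rf = 0.2 * SD, m = 2
```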

5.3.4. Wavelet coefficients

Discrete Wavelet Transform (DWT) based feature extraction has been successfully applied, with promising results, in physiological pattern recognition applications (Murugappan et al., 2009). The choice of a suitable wavelet and of the number of decomposition levels is very important in the analysis of signals using the DWT. In this study, we used the Daubechies wavelet of order 4 (db4) to extract statistical features from the EEG signal (Murugappan et al., 2009). The number of decomposition levels is chosen based on the dominant frequency components of the signal: the levels are chosen such that those parts of the signal that correlate well with the frequencies required for classification are retained in the wavelet coefficients. Since the EEG signals do not have any useful frequency components above 32 Hz, the number of levels was chosen to be 5. Thus the signal is decomposed into the details D1-D5 and one final approximation, A5. The ranges of the various frequency bands are shown in Table 3.

Table 3.

Frequencies corresponding to different levels of decomposition for “db4” wavelet with a sampling frequency of 256 Hz

The extracted wavelet coefficients provide a compact representation that shows the energy distribution of the EEG signal in time and frequency. Table 3 presents the frequencies corresponding to the different levels of decomposition for the db4 wavelet with a sampling frequency of 256 Hz. It can be seen from Table 3 that the A5 components lie within the delta band (0-4 Hz), D5 within the theta band (4-8 Hz), D4 within the alpha band (8-13 Hz) and D3 within the beta band (13-30 Hz). Lower-level decompositions, corresponding to higher frequencies, have negligible magnitudes in a normal EEG. In order to further reduce the dimensionality of the extracted feature vectors, the following statistics over the set of wavelet coefficients were used:

  • Mean of the absolute values of the wavelet coefficients in each sub-band

  • Average power of the wavelet coefficients in each sub-band

  • Standard deviation of the wavelet coefficients in each sub-band

These features are extracted for each channel, so the total number of features by this method is: [3×4] = 12.
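A short Python sketch of this wavelet feature extraction, using the PyWavelets package as a stand-in for the wavelet tools used in the study, is given below; the placeholder segment is an assumption.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_features(eeg, wavelet="db4", level=5):
    """5-level db4 decomposition of one EEG channel; returns the mean absolute
    value, average power and standard deviation of the coefficients in the
    A5, D5, D4 and D3 sub-bands (delta, theta, alpha and beta, respectively)."""
    coeffs = pywt.wavedec(eeg, wavelet, level=level)   # [A5, D5, D4, D3, D2, D1]
    bands = coeffs[:4]                                 # keep A5, D5, D4, D3
    feats = []
    for c in bands:
        c = np.asarray(c)
        feats.extend([np.mean(np.abs(c)),              # mean absolute value
                      np.mean(c ** 2),                 # average power
                      np.std(c)])                      # standard deviation
    return np.array(feats)                             # 3 stats x 4 bands = 12

eeg = np.random.randn(512)                             # 2-second segment, 256 Hz
features = wavelet_features(eeg)                       # 12 features per channel
```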

5.3.5. Higher order spectra parameters

We analyzed the EEG signal using higher order spectra, which are spectral representations of higher order moments or cumulants of a signal (Hosseini et al., 2010b). In this part of the chapter, we study features related to the third-order statistics of the signal, namely the bispectrum. The bispectrum is a complex quantity, which has both magnitude and phase. It is the Fourier transform of the third-order correlation of the signal and is given by,

$$\mathrm{Bis}(f_1, f_2) = E\!\left[ X(f_1)\, X(f_2)\, X^{*}(f_1 + f_2) \right] \tag{11}$$

where * denotes the complex conjugate, X(f) is the Fourier transform of the signal x(nT) and E[·] stands for the expectation operation. This approach is known as the direct Fast Fourier Transform (FFT) based method (Nikias & Mendel, 1993). There is also an indirect method, which is used in this study; for more details on this method please refer to (Hosseini et al., 2010b; Swami et al., 2000). If the bispectrum of a signal is zero, none of its wave components are coupled to each other.
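The study itself used the HOSA toolbox; purely as an illustration of definition (11), the following Python sketch computes a segment-averaged direct-FFT estimate of the bispectrum and one of the summary quantities used later. The segment length, the lack of windowing and the placeholder signal are assumptions of the sketch, not the HOSA settings.

```python
import numpy as np

def bispectrum_direct(x, nfft=256):
    """Segment-averaged direct-FFT estimate of the bispectrum (11):
    Bis(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)] over the first nfft/2 bins."""
    x = np.asarray(x, dtype=float)
    n_seg = len(x) // nfft
    half = nfft // 2
    idx = np.add.outer(np.arange(half), np.arange(half))   # bin index f1 + f2
    B = np.zeros((half, half), dtype=complex)
    for s in range(n_seg):
        seg = x[s * nfft : (s + 1) * nfft]
        X = np.fft.fft(seg - seg.mean())
        B += np.outer(X[:half], X[:half]) * np.conj(X[idx])
    return B / max(n_seg, 1)

eeg = np.random.randn(2048)                     # placeholder EEG channel
Bis = bispectrum_direct(eeg, nfft=256)          # 256-sample FFT as in the text
sum_bis_mag = np.sum(np.abs(Bis))               # one of the four region features
```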

Assuming that there is no bispectral aliasing, the bispectrum of a real-valued signal is uniquely defined by the triangular region f2 ≥ 0, f1 ≥ f2 and f1 + f2 ≤ π. For real processes, since the discrete bispectrum has symmetry properties, it has 12 symmetry regions in the (f1, f2) plane (Swami et al., 2000). Some of these regions can be seen in (12):

$$\mathrm{Bis}(f_1, f_2) = \mathrm{Bis}(f_2, f_1) = \mathrm{Bis}(-f_1 - f_2, f_2) = \mathrm{Bis}(-f_1 - f_2, f_1) = \mathrm{Bis}(f_1, -f_1 - f_2) = \mathrm{Bis}(f_2, -f_1 - f_2) \tag{12}$$

The normalized bispectrum (or bicoherence) is defined as

$$\mathrm{Bic}(f_1, f_2) = \frac{\mathrm{Bis}(f_1, f_2)}{P(f_1)\, P(f_2)\, P(f_1 + f_2)} \tag{13}$$

where P(f) denotes the power spectrum.

Since the bispectrum and bicoherence alone cannot fully characterize the signal, Hinich developed algorithms to test for non-skewness (referred to as Gaussianity) and for linearity (Hinich, 1982). The basic idea is that if the third-order cumulants of a process are zero, then its bispectrum is zero, and hence its bicoherence is zero. If the bispectrum is not zero, then the process is non-Gaussian; if the process is linear and non-Gaussian, then the bicoherence is a non-zero constant (Hosseini et al., 2010b).

The Gaussianity test (actually a zero-skewness test) involves deciding whether the expected value of the bicoherence is zero, that is, E{Bic(f1, f2)} = 0. The test of Gaussianity is based on the mean bicoherence power,

$$S = \sum \left| \mathrm{Bic}(f_1, f_2) \right|^{2} \tag{14}$$

The squared bicoherence is chi-squared (χ²) distributed with two degrees of freedom and non-centrality parameter λ (Swami et al., 2000). In (14), the squared bicoherence is summed over the P points in the non-redundant region; S is the estimated statistic for the Gaussianity test, which is chi-squared distributed with 2P degrees of freedom, and Pfa is the probability of false alarm in rejecting the Gaussian hypothesis. More details can be found in (Hosseini et al., 2010b; Swami et al., 2000). In order to calculate these features, we used a 256-sample FFT with the default c parameter of 0.51, for which the 2P degrees of freedom equal 96. The analysis was done using the Higher Order Spectral Analysis (HOSA) toolbox (Swami et al., 2000), and the bicoherence was computed using the direct FFT method in the toolbox.

For the whole bifrequency plane region, four quantities were calculated: sum of the bispectrum magnitudes, sum of the squares of the bispectrum magnitudes, sum of the bicoherence magnitudes, and sum of the squares of the bicoherence magnitudes.

Figure 9.

The different regions used for analysis in bifrequency plane

Since the bispectrum and bicoherence are functions of f1 and f2, in order to define the features we divide each axis into five frequency intervals, as can be seen in Fig. 9, giving 15 distinct regions. The defined quantities are then computed in each of these 15 regions and in the whole frequency range. These four quantities, together with the three features obtained from Hinich's tests for Gaussianity and linearity, make up seven types of features for each channel.

These seven types of features are extracted for each channel, so the total number of features from this method over the five channels is [5×4×(15+1)] + [5×3] = 335. Contour plots of the indirect estimate of the bispectrum are shown as examples for the T3 channel in Figs. 10 and 11.

Figure 10.

A contour plot of the magnitude of the indirect estimated bispectrum on the bifrequency plane, for T3 in calm state

Figure 11.

A contour plot of the magnitude of the indirect estimated bispectrum on the bifrequency plane, for T3 in negative emotional stress state

5.4. Normalization

In order to normalize the features to the range [-1, 1], we used (15).

$$Y_{\mathrm{norm}} = \frac{2 Y'_{s} - \left( Y'_{s,\max} + Y'_{s,\min} \right)}{Y'_{s,\max} - Y'_{s,\min}} \tag{15}$$

Here Ynorm is the normalized (relative) amplitude, Y's is the original feature value, and Y's,max and Y's,min are its maximum and minimum values.
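A one-line Python equivalent of (15), applied column-wise to a placeholder feature matrix, is sketched below for illustration.

```python
import numpy as np

def normalize_features(F):
    """Column-wise min-max scaling of a feature matrix F to [-1, 1], as in (15)."""
    f_min = F.min(axis=0)
    f_max = F.max(axis=0)
    return (2.0 * F - (f_max + f_min)) / (f_max - f_min)

F = np.random.rand(100, 82) * 10.0     # placeholder: 100 segments, 82 features
F_norm = normalize_features(F)         # every column now lies in [-1, 1]
```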

5.5. Feature selection

The feature vector presented contains 82 features for one EEG channel recorded over a period of 2 seconds. This leads to the curse of dimensionality, which is addressed in this section by selecting a subset of the best features. This is also of interest for improving the computational speed of the classification algorithm. Several methods of selecting appropriate features exist; the one used here is the Genetic Algorithm (GA) (Haupt et al., 2004). The emphasis on using the genetic algorithm for feature selection is to reduce the computational load on the training system while still allowing near-optimal results to be found relatively quickly. The GA uses a population size of 100, starting with randomly generated genomes. The probability of mutation was set to 0.01 and the probability of crossover was set to 0.4. The classification performance of the trained network on the whole dataset was returned to the GA as the value of the fitness function (Fig. 12). We attempted to detect the feature sets related to the negative/calm emotional response from the EEG signal.

Figure 12.

Combination of GA and SVM to achieve the best features

We used a genetic algorithm to assess all of the features because an optimal feature group is not necessarily obtained by simply combining a few individually superior features, since the data characteristics and features may carry overlapping information.
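The sketch below illustrates the combination of a GA with an SVM fitness function shown in Fig. 12, using the population size, mutation and crossover probabilities reported above. The number of generations, the selection and crossover operators, the placeholder data and the use of cross-validated accuracy as fitness are all assumptions of this sketch, not details reported in the chapter.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 82))                 # placeholder feature matrix
y = rng.integers(0, 2, size=120)               # calm (0) vs. negative (1)

POP, GEN, P_MUT, P_XOVER = 100, 20, 0.01, 0.4  # GA settings (GEN assumed)

def fitness(genome):
    """Classification accuracy of an RBF SVM using only the selected columns."""
    if not genome.any():
        return 0.0
    clf = SVC(kernel="rbf", C=1.0)
    return cross_val_score(clf, X[:, genome], y, cv=4).mean()

pop = rng.random((POP, X.shape[1])) < 0.5      # random binary genomes
for _ in range(GEN):
    scores = np.array([fitness(g) for g in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[: POP // 2]]           # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        if rng.random() < P_XOVER:             # single-point crossover
            cut = rng.integers(1, X.shape[1])
            child = np.concatenate([a[:cut], b[cut:]])
        else:
            child = a.copy()
        child ^= rng.random(X.shape[1]) < P_MUT    # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(g) for g in pop])]   # best feature-subset mask
```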

5.6. Classification

After extracting the desired features, we still have to relate them to the emotional stress states present in the EEG. This is done by a classifier. In this research, we used both a static and a dynamic classifier, described below.

5.6.1. Support vector machine

Support vector machines are maximum-margin classifiers that try to maximize the distance between the decision surface and the nearest data point. A nonlinear support vector machine maps the input space to a high-dimensional feature space and then constructs an optimal separating hyperplane in the feature space, which corresponds to a nonlinear decision surface in the input space. The major problem in training a learning machine to perform supervised classification is to find a (kernel) function that can not only capture the essential properties of the data distribution, but also prevent over-fitting. We used three kernel functions: linear, polynomial and radial basis function kernels. The C parameter, which regulates the trade-off between training error minimization and margin maximization, is empirically set to 1 in this study.
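A minimal Python sketch comparing the three kernels with C = 1, using scikit-learn in place of LibSVM and placeholder data, is given below; the 4-fold cross-validation mirrors the evaluation described later in the chapter.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 25))          # placeholder: GA-selected EEG features
y = rng.integers(0, 2, size=120)        # calm-neutral vs. negatively excited

for kernel in ("linear", "poly", "rbf"):       # the three kernels compared
    clf = SVC(kernel=kernel, C=1.0)            # C empirically set to 1
    acc = cross_val_score(clf, X, y, cv=4).mean()   # 4-fold cross-validation
    print(f"{kernel:7s} kernel: mean accuracy = {acc:.3f}")
```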

5.6.2. Elman Neural Network

The Elman Neural Network (ENN) is a two-layer backpropagation network with the addition of a feedback connection from the output of the hidden layer to its input. This feedback path allows the Elman network to learn to recognize and generate temporal patterns as well as spatial patterns. The Elman network has tansig neurons in its hidden (recurrent) layer and purelin neurons in its output layer. This combination is special in that two-layer networks with these transfer functions can approximate any function (with a finite number of discontinuities) with arbitrary accuracy; the only requirement is that the hidden layer must have enough neurons (Demuth et al., 2008). In this work, the number of input neurons equals the number of selected (optimal) features, the number of output neurons is 1, and the number of hidden neurons was empirically chosen as 8. A sigmoid function was applied for the hidden and output layers, because the sigmoid function is nonlinear and differentiable. The Levenberg-Marquardt back-propagation algorithm was used for training, as it has the fastest convergence compared with the other training functions. The error goal for stopping training was set to 0.001.
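The study used MATLAB's Elman network with Levenberg-Marquardt training; as a hedged illustration of the architecture only, the sketch below uses PyTorch's nn.RNN (which implements the Elman recurrence with a tanh hidden layer) followed by a single output neuron. The Adam optimizer, the binary cross-entropy loss, the sequence length and the placeholder data are stand-in assumptions, since Levenberg-Marquardt training is not available in this framework.

```python
import torch
import torch.nn as nn

class ElmanClassifier(nn.Module):
    """Elman-style network: a tanh recurrent (hidden) layer with feedback of
    its previous state, followed by a single linear read-out neuron."""
    def __init__(self, n_features, n_hidden=8):
        super().__init__()
        self.rnn = nn.RNN(n_features, n_hidden, nonlinearity="tanh",
                          batch_first=True)
        self.out = nn.Linear(n_hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        h, _ = self.rnn(x)
        return torch.sigmoid(self.out(h[:, -1, :]))   # last time step

model = ElmanClassifier(n_features=25, n_hidden=8)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

x = torch.randn(16, 10, 25)               # 16 sequences of 10 feature frames
y = torch.randint(0, 2, (16, 1)).float()  # calm vs. negative labels
for _ in range(100):                      # in practice, stop once loss < 1e-3
    optim.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optim.step()
```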

6. Results

In this research, we used 2-second time intervals with a rectangular window and no overlap, corresponding to blocks of 512 samples of the EEG signal, for data segmentation. In classification, it is important that the training set contains enough samples; on the other hand, it is also important that the test set contains enough samples to avoid a noisy estimate of the model performance. We used around 75% of the EEG signals for training, 15% of the data for testing whether the learned relationship between the data and emotional stress is correct, and the last 10% for validation. The results show that the average classification accuracies with EEG signals were 84.6% and 83.1% for the 2 categories (calm-neutral vs. negatively excited), using the SVM and ENN classifiers respectively. Data scarcity is a particular concern in our case, since the number of emotional stimulations is limited by the duration of the protocol, which should not be too long, in order to avoid participant fatigue as well as the elicitation of undesired emotions. Cross-validation methods help to address this problem by splitting the data into different training/test sets so that each sample is used at least once for training and once for testing. The two best-known cross-validation methods are k-fold and leave-one-out. The system was tested using the 4-fold cross-validation method. This method reduces the possibility of deviations in the results due to some special distribution of the training and test data, and ensures that the system is tested with samples different from those seen during training. With this method, four accuracies are obtained from the four test sets, so the average accuracy can be computed. The classification results of the EEG signals under the two emotional stress states are given in Table 4.

Table 4.

Emotional stress classification accuracy on EEG signals using the SVM for the three kernel functions

Table 5 gives the average classification accuracy for the five EEG channels under the two emotional stress states, using the SVM classifier with the RBF kernel and the ENN classifier.

Table 5.

The average classification accuracy in the different channels using the SVM classifier with the RBF kernel and the ENN classifier

Table 6 gives the average classification accuracy for the different feature sets under the two emotional stress states, using the SVM classifier with the RBF kernel and the ENN classifier.

Table 6.

The average classification accuracy for the different feature sets using the SVM classifier with the RBF kernel and the ENN classifier

7. Discussion and conclusion

In this research, we propose an approach to classify emotional stress in two main areas of the valence-arousal space by using bio-signals. Several researchers have shown that it is possible to measure emotional cues using EEG measurements, which is an important precondition for identifying emotional stress states from brain activity (Chanel, 2009; Horlings, 2008; Takahashi, 2004). We chose the picture presentation test based on the closeness of its assessment to our aims. The reason we have chosen the brain signals over purely peripheral signals is that brain signals represent behaviour directly from its source, whereas the peripheral signals are secondary manifestations of the autonomic nervous system in response to emotional stress.

Comparing the results of the analysis of the peripheral signals, we notice that the respiration and SC signals are less reliable in accuracy than the BVP and HRV signals. The results showed that the classification accuracy with the peripheral signals was 76.95% for the two categories, using the SVM classifier with the RBF kernel. In order to choose the best channels for the EEG signals, we implemented a new cognitive model (Hosseini et al., 2010a) and eventually used signals from the frontal, temporal and parietal electrodes as the most important ones. The mere use of personal moods and the subject's self-assessment to confirm the quality of the registered brain signals can cause many errors; as a result, we need to use the peripheral signals as a secondary trainer. In order to choose the EEG signals best correlated with the emotional stress state, we implemented a new emotion-related signal recognition system, which has not been studied so far (Hosseini & Khalilzadeh, 2010). We recorded peripheral signals concomitantly in order first to recognize the correlated emotional stress state and then to label the correlated EEG signal. Recent research on EEG signals has revealed the chaotic nature of this signal. It is logical not to rely only on conventional methods that assume emotion can be analyzed by linear models; because brain signals essentially have a chaotic, nonlinear behaviour, we performed emotional stress state assessment using both linear and nonlinear features. Wavelet coefficients, higher order spectra and chaotic invariants such as fractal dimension, approximate entropy and correlation dimension were used to extract the characteristics of the EEG signals. For most nonlinear measures a dimension should be defined to reconstruct the attractor in phase space, but the problem associated with all of them is that the dimension defined for the phase space is not constant across all channels of the recorded EEG signals or across subjects, and depending on the conditions the chosen dimension can differ. On the other hand, the performance of each measure can depend on the value of the dimension, so with the help of some equations and trial and error the optimum dimension for obtaining the best results can be found.

The results showed that the correlation dimension in the negative emotional stress state is lower than in the calm state, and Higuchi's algorithm indicates a similar trend of reduction in the FD value for the negative emotional stress state compared to the calm state. The reduction in the FD and D2 values characterizes a reduction in the complexity of the brain system for participants in the negative emotional stress state; therefore, the number of dynamic equations necessary to describe the brain state decreases in the negative emotional stress state. A new approach to the analysis of emotional stress states by approximate entropy is also described in this research. Approximate entropy is a quantitative parameter that characterizes the complexity (or irregularity) of EEG signals in different brain function states. The results of the analysis of the nonlinear characteristics show that, if the parameters and the length of the data are determined appropriately, the results can be a good representation of the brain's behaviour in emotional stress states. Hence, the application of nonlinear time series analysis to EEG signals offers insight into the dynamical nature and variability of the brain signals. Therefore, it seems that nonlinear features lead to a better understanding of how emotional activities work.

In this research, for the first time in this field, we performed feature extraction using higher order spectra for the assessment of emotional stress states. Contour plots for different channels of the EEG signals are shown as examples in Figs. 10 and 11. These figures show that most of the changes are amplification or attenuation of the peaks, or shifts of the peaks in the bifrequency plane. We conclude that HOS analysis can be an accurate tool in the assessment of emotional stress states.

Two advantages of this research, which support the credibility of our results, are the use of the dichotic listening test and the use of the peripheral signals to label the brain signals. We have used both a static and a dynamic classifier, and the results show no meaningful difference between them. Therefore, we can deduce that in short-term data acquisition there is no specific dynamic behaviour, which can be attributed to the short time intervals of 2 seconds. It is possible that by performing longer tests and using longer intervals, some dynamics could be identified.

The results showed the importance of EEG signals for emotional stress assessment by classification, as they have a better time response than the peripheral signals. We used 2-second time intervals with a rectangular window and no overlap to analyze the brain signals, which resulted in a time resolution of 2 seconds in emotional stress state recognition. If we had used shorter time intervals with overlap, we could have achieved a greater, although partly artificial, time resolution, which can be useful, for example, in biofeedback applications. The problem of high dimensionality was solved by using a genetic algorithm as the feature selection method. The results showed that the average classification accuracies were 84.6% and 83.1% for the two categories of emotional stress state using the SVM and ENN classifiers respectively; the two classifiers therefore give similar results in recognizing the emotional stress state. In addition, the results showed that this new fusion link between EEG and peripheral signals is more robust than the separate signals. This is a considerable improvement compared to other similar published research: we achieved a noticeable improvement of 6.3% and 10% in accuracy, using the SVM and ENN classifiers respectively, compared to our previous studies in the same field (Hosseini et al., 2009).

Analyzing the results of previous research is a difficult task, because in order to compare the results of studies that formulate emotion assessment as a classification problem, it is important to consider the way the emotions are elicited and the number of participants; the latter is especially important for introducing a user-independent system. Due to these differences, we cannot compare our results exactly with the results of other studies.

References

  1. Aftanas, L. I., Reva, N. V., Varlamov, A. A., Pavlov, S. V., & Makhnev, V. P. (2004). Analysis of evoked EEG synchronization and desynchronization in conditions of emotional activation in humans: Temporal and topographic characteristics. Neuroscience and Behavioral Physiology, 34(8), 859-867.
  2. Armony, J. L., Schreiber, D. S., Cohen, J. D., & LeDoux, J. E. (1997). Computational modeling of emotion: Explorations through the anatomy and physiology of fear conditioning. Review paper, Elsevier Science, 1(1), 28-34.
  3. Carey, J. (2006). Brain Facts: A Primer on the Brain and Nervous System. Society for Neuroscience, USA.
  4. Chanel, G. (2009). Emotion assessment for affective computing based on brain and peripheral signals. Ph.D. thesis, University of Geneva.
  5. Chanel, G., Kierkels, J. J. M., Soleymani, M., & Pun, T. (2009). Short-term emotion assessment in a recall paradigm. International Journal of Human-Computer Studies, 67, 607-627.
  6. Chang, C. C., & Lin, C. J. (2009). LIBSVM: A library for support vector machines. Software available from http://www.csie.ntu.edu.tw/~cjlin/libsvm/
  7. Dedovic, K., Renwick, R., Khalili-Mahani, N., Engert, V., Lupien, S. J., & Pruessner, J. C. (2005). The Montreal Imaging Stress Task: Using functional imaging to investigate the effects of perceiving and processing psychosocial stress in the human brain. Journal of Psychiatry and Neuroscience, 319-325.
  8. Demuth, H., Beale, M., & Hagan, M. (2008). Neural Network Toolbox 6 User's Guide. The MathWorks, Inc.
  9. Grassberger, P., & Procaccia, I. (1983). Characterization of strange attractors. Physical Review Letters, 50, 346-349.
  10. Haupt, R. L., & Haupt, S. E. (2004). Practical Genetic Algorithms (2nd ed.). John Wiley & Sons, 189-190.
  11. Higuchi, T. (1988). Approach to an irregular time series on the basis of the fractal theory. Physica D, 31, 277-283.
  12. Hinich, M. J. (1982). Testing for Gaussianity and linearity of a stationary time series. Journal of Time Series Analysis, 169-176.
  13. Horlings, R. (2008). Emotion recognition using brain activity. Delft University of Technology, Faculty of Electrical Engineering, Mathematics, and Computer Science, Man-Machine Interaction Group.
  14. Hosseini, S. A. (2009). Quantification of EEG signals for evaluation of emotional stress level. M.Sc. thesis, Biomedical Engineering, Islamic Azad University, Mashhad Branch (thesis in Persian).
  15. Hosseini, S. A., & Khalilzadeh, M. A. (2010). Emotional stress recognition system using EEG and psychophysiological signals: Using new labelling process of EEG signals in emotional stress state. Proceedings of the IEEE International Conference on Biomedical Engineering and Computer Science (ICBECS), 90-95, Wuhan, China.
  16. Hosseini, S. A., Khalilzadeh, M. A., Homam, S. M., & Azarnoosh, M. (2009). Emotional stress detection using nonlinear and higher order spectra features in EEG signal. Journal of Electrical Engineering, 3921324 (article in Persian).
  17. Hosseini, S. A., Khalilzadeh, M. A., Homam, S. M., & Azarnoosh, M. (2010a). A cognitive and computational model of brain activity during emotional stress. Advances in Cognitive Science, 122114 (article in Persian).
  18. Hosseini, S. A., Khalilzadeh, M. A., Naghibi-Sistani, M. B., & Niazmand, V. (2010b). Higher order spectra analysis of EEG signals in emotional stress state. Proceedings of the IEEE 2nd International Conference on Information Technology and Computer Science (ITCS), 60-63, Kiev, Ukraine.
  19. Hosseini, S. A., Khalilzadeh, M. A., & Changiz, S. (2010c). Emotional stress recognition system for affective computing based on bio-signals. International Journal of Biological Systems (JBS), Special Issue on Biomedical Engineering and Applied Computing, 18, 101-114.
  20. Hosseini, S. A., Homam, S. M., Khalilzadeh, M. A., & Niazmand, V. (2010d). Qualitative and quantitative evaluation of brain activity in emotional stress. Iranian Journal of Neurology, 828.
  21. Hudlicka, E. (2005). A computational model of emotion and personality: Applications to psychotherapy research and practice. Proceedings of the 10th Annual CyberTherapy Conference: A Decade of Virtual Reality, 1-7, Basel, Switzerland.
  22. Jiu-ming, L., Jing-qing, L., & Xue-hua, Y. (2004). Application of chaos analytic methods based on normal EEG. Proceedings of the IEEE 3rd International Conference on Computational Electromagnetics and Its Applications, 426-429.
  23. Kim, K. H., Bang, S. W., & Kim, S. R. (2004). Emotion recognition system using short-term monitoring of physiological signals. Medical and Biological Engineering and Computing, 42, 419-427.
  24. Ko, K. E., Yang, H. C., & Sim, K. B. (2009). Emotion recognition using EEG signals with relative power values and Bayesian network. International Journal of Control, Automation, and Systems, 7(5), 865-870.
  25. Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2005). International Affective Picture System (IAPS): Affective ratings of pictures and instruction manual. Technical Report A-6, University of Florida, Gainesville, FL.
  26. LeDoux, J. (1996). The Emotional Brain. Simon & Schuster, New York.
  27. McFarland, R. A. (1985). Relationship of skin temperature changes to the emotions accompanying music. Applied Psychophysiology and Biofeedback, 10(3), 255-267.
  28. Motie-Nasrabadi, A. (2004). Quantitative and qualitative evaluation of consciousness variation and depth of hypnosis through intelligent processing of EEG signals. Ph.D. thesis, Amirkabir University, Tehran.
  29. Murugappan, M., Rizon, M., Nagarajan, R., & Yaacob, S. (2009). FCM clustering of human emotions using wavelet based features from EEG. International Journal of Biomedical Soft Computing and Human Sciences (IJBSCHS), 14(2), 35-40.
  30. Nikias, C. L., & Mendel, J. M. (1993). Signal processing with higher order spectra. IEEE Signal Processing Magazine, 1037.
  31. Pincus, S. M. (1991). Approximate entropy as a measure of system complexity. Proceedings of the National Academy of Sciences USA, 88, 2297-2301.
  32. Ramachandran, V. S. (2002). Encyclopedia of the Human Brain (4-volume set). Elsevier, Academic Press.
  33. Ritz, T., Dahme, B., Dubois, A. B., Folgering, H., Fritz, G. K., Harver, A., Kotses, H., Lehrer, P. M., Ring, C., Steptoe, A., & van de Woestijne, K. P. (2002). Guidelines for mechanical lung function measurements in psychophysiology. International Journal of Psychophysiology, 39(5), 546-567.
  34. Sadock, B. J., Kaplan, H., & Sadock, V. A. (1998). Synopsis of Psychiatry: Behavioral Sciences/Clinical Psychiatry (8th ed.). Lippincott Williams & Wilkins.
  35. Savran, A., Ciftci, K., Chanel, G., Mota, J. C., Viet, L. H., Sankur, B., Akarun, L., Caplier, A., & Rombaut, M. (2006). Emotion detection in the loop from brain signals and facial images. Final project report, eNTERFACE'06, Dubrovnik, Croatia.
  36. Schaaff, K., & Schultz, T. (2009). Towards emotion recognition from electroencephalographic signals. 3rd International Conference on Affective Computing and Intelligent Interaction (ACII) and Workshops, 1-6.
  37. Schachter, S. (1970). Some extraordinary facts about obese humans and rats. American Psychologist, 26, 129-144.
  38. Seymour, B., & Dolan, R. (2008). Emotion, decision making, and the amygdala. Review article, Neuron, 58, 662-671.
  39. Steimer, T. (2002). The biology of fear and anxiety related behaviors. Dialogues in Clinical Neuroscience, 4(3), 225-249.
  40. Swami, A., Mendel, J. M., & Nikias, C. L. (2000). Higher-Order Spectral Analysis (HOSA) Toolbox for use with MATLAB. Available from http://www.mathworks.com/matlabcentral/fileexchange/3013/
  41. Takahashi, K. (2004). Remarks on emotion recognition from bio-potential signals. Proceedings of the 2nd International Conference on Autonomous Robots and Agents, 186-191, Palmerston North, New Zealand.
  42. Wan, R. D., & Woo, L. J. (2004). Feature extraction and emotion classification using bio-signal. Transactions on Engineering, Computing and Technology, 2, 317-320, ISSN 1305-5313.
  43. Wilhelm, F. H., Pfaltz, M. C., & Grossman, P. (2006). Continuous electronic data capture of physiology, behavior and experience in real life: Towards ecological momentary assessment of emotion. Interacting with Computers, 18(2), 171-186.
  44. Xiang, Y., & Tso, S. K. (2002). Detection and classification of flaws in concrete structure using bispectra and neural networks. NDT&E International, 35, 19-27.
  45. Xiang, H. (2007). A computational model and psychological experiment analysis on affective information processing. Doctoral dissertation, Graduate School of Engineering, University of Tokushima.
  46. Zhai, J., & Barreto, A. (2006). Stress detection in computer users based on digital signal processing of noninvasive physiological variables. Proceedings of the IEEE 28th Annual International Conference of the Engineering in Medicine and Biology Society (EMBS'06), 1355-1358, New York, USA.
  47. http://www.unifesp.br/dpsicobio/adap/exemplos_fotos.htm
  48. http://www.thoughttechnology.com/flexinf.htm
