## 1. Introduction

Stress and emotion are complex phenomena that play significant roles in the quality of human life. Emotion plays a major role in motivation, perception, cognition, creativity, attention, learning and decision-making (Seymour et al., 2008). A major problem in understanding emotion is agreeing on its definition. In fact, even psychologists have difficulty agreeing on what counts as an emotion and how many types of emotions exist. Kleinginna gathered and analyzed 92 definitions of emotion from the literature of the day, concluding that emotion is a complex set of interactions among subjective and objective factors, mediated by neural/hormonal systems (Horlings, 2008). In this chapter, emotion is considered a subcategory of stress.

A great deal of research has been undertaken on the assessment of stress and emotion over recent years. Most research in the domain of stress and emotional states uses peripheral signals such as respiratory rate, Skin Conductance (SC), Blood Volume Pulse (BVP) (Zhai et al., 2006) and temperature (McFarland, 1985). Most previous research has investigated the use of EEG and peripheral signals separately, but little attention has been paid so far to the fusion of EEG and peripheral signals (Chanel, 2009; Chanel et al., 2009; Hosseini, 2009).

In one study, Aftanas et al. (2004) showed significant differentiation of arousal based on EEG data collected from participants watching high, intermediate and low arousal images. Chanel (2009) asked participants to remember past emotional episodes and obtained an accuracy of 88% for 3 categories using EEG with a Support Vector Machine (SVM) classifier. Hosseini et al. (2009) used a visual-image induction acquisition protocol for recording EEG and peripheral signals under 2 categories of emotional stress states (calm-neutral and negatively-excited) and obtained an accuracy of 78.3% using EEG signals with an SVM classifier. Kim et al. (2004) used a combination of music and story as stimuli with 50 participants to build a user-independent system; the results showed accuracies of 78.4% and 61% for 3 and 4 categories of emotion, respectively. Takahashi (2004) used film clips to stimulate participants with five different emotions, resulting in 42% of patterns correctly identified. Schaaff & Schultz (2009) used pictures from the International Affective Picture System (IAPS) to induce three emotional states: pleasant, neutral, and unpleasant, and obtained an accuracy of 66.7% for the three classes based solely on EEG signals.

The aim of this chapter is to produce a new fusion between EEG and peripheral signals for emotional stress recognition. Since the ElectroEncephaloGram (EEG) is a reflection of brain activity and is widely used in clinical diagnosis and biomedical research, it is used as the main signal. Brain waves arise from the activity of brain cells and have a frequency range of 1 to 100 Hz. Researchers have found that the frequency bands of interest for interpreting the EEG signal are: delta (1-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz) and gamma (> 30 Hz) (Ko et al., 2009).

One weakness of recent work is the lack of a principled selection of the channels from which brain signals are recorded. In this study, in order to choose the proper EEG channels, a cognitive model of the brain under emotional stress has been used (Hosseini et al., 2010a).

Every standard test in stress and emotion assessment has its own advantages and disadvantages (Hosseini, 2009). An efficient acquisition protocol was designed to acquire EEG signals in five channels and peripheral signals such as Blood Volume Pulse (BVP), Skin Conductance (SC) and respiration, under image induction (calm-neutral and negatively excited). The visual stimuli were selected from a subset of the IAPS database (Lang et al., 2005).

An important issue in every cognitive system is the correct labelling of the data. Here, labelling means the assessment of the data using a series of visual criteria used by psychologists, together with a proposed cognitive system for peripheral signals, in order to verify the existence of a close correlation between the data and the psychological state of the subject. In this kind of research, putting the subject in the desired psychological state is very important. The process of labelling EEG signals consists of three stages: first, self-assessment; second, the qualitative analysis of peripheral signals; and third, the quantitative analysis of peripheral signals. This new fusion link between EEG and peripheral signals is therefore more robust than either type of signal used separately.

## 2. Cognitive model of emotional stress

Cognitive models (also termed agent architectures) aim to emulate cognitive processing such as attention, learning, perception, and decision-making, and are used by cognitive scientists to advance understanding of the mechanisms and structures mediating cognition (Hudlicka, 2005). In the case of fear conditioning leading to emotional stress, several hypotheses have been proposed to explain how neural changes occur in the different components of a circuit leading to the observed behavioural responses. In mammals, a part of the brain called the limbic system is mainly responsible for emotional processes (Xiang, 2007). We describe below the development of a cognitive model of the limbic system based on these concepts. The main components of the limbic system involved in emotional stress processing are the amygdala, orbito-frontal cortex, thalamus, sensory cortex, hypothalamus, hippocampus and some other important areas. In this section, we briefly describe these components and their tasks. The amygdala, a small structure in the temporal lobes, plays a central role in emotion. It is generally accepted that the amygdala is crucial for the acquisition and expression of conditioned fear responses (for a review, see (LeDoux, 1996)). The amygdala and orbito-frontal cortex receive highly analyzed input from the sensory cortex. The amygdala, specifically its lateral nucleus, receives inputs from all the main sensory systems, as well as from higher-order association areas of the cortex and the hippocampus (Armony et al., 1997). Sensory information reaches the amygdala from the thalamus by way of two parallel pathways: the direct pathway reaches the amygdala quickly, but it is limited in its information content, as the thalamic cells of origin of this pathway are not very precise stimulus discriminators.
The cortical pathway, on the other hand, is slower but capable of providing the amygdala with a much richer representation of the stimulus (Armony et al., 1997). The sensory cortex is the component next to the thalamus and receives its input through this component. The orbito-frontal cortex is another component, which interacts with the amygdala reciprocally; it also plays a role in reinforcement learning of emotions. The term prefrontal cortex refers to the very front of the brain, behind the forehead and above the eyes. It appears to play a critical role in the regulation of emotion and behaviour by anticipating the consequences of our actions, and may play an important role in delayed gratification by maintaining emotions over time and organizing behaviour toward specific goals (Xiang, 2007). The locus ceruleus contains a large proportion of the noradrenalin cell bodies found in the brain and is a key brain stem region involved in arousal (Steimer, 2002). The Bed Nucleus of the Stria Terminalis (BNST) is considered part of the extended amygdala. It appears to be a centre for the integration of information originating from the amygdala and the hippocampus, and is clearly involved in the modulation of the neuroendocrine stress response (Steimer, 2002). The hypothalamus (ParaVentricular Nucleus (PVN) and Lateral Hypothalamus (LH)) lies below the thalamus and is believed to have various functions that regulate the endocrine system, the autonomic nervous system and basic survival behaviours (Steimer, 2002; Schachter, 1970). The Hypothalamic-Pituitary-Adrenal (HPA) axis ultimately regulates the secretion of glucocorticoids, which are adrenocortical steroids that act on target tissues throughout the body in order to preserve homeostasis during stress (Ramachandran, 2002).
The hypothalamus also releases Corticotrophin-Releasing Hormone (CRH), which travels to the anterior pituitary gland, where it triggers the release of AdrenoCorticoTropic Hormone (ACTH); ACTH, along with β-endorphin, is released into the bloodstream in response to stress. ACTH travels in the blood to the adrenal glands, where it stimulates the production and release of glucocorticoids such as cortisol. Cortisol feedback at the hypothalamus reduces CRH release; at the pituitary, it inhibits ACTH release, and at the adrenal gland, it inhibits further cortisol release. Cortisol feedback at the hippocampus also inhibits CRH secretion from the hypothalamus. The release of all these chemicals causes important changes in the body’s ability to respond to threats, such as increased energy, heart rate and blood sugar, resulting in increased arousal and pain relief (Carey, 2006). In order to choose the best channels for the EEG signals, we implemented a new cognitive model (for a review of the complete model, see (Hosseini et al., 2010a; Hosseini et al., 2010d)); its detail is shown in Fig. 1 (Hosseini et al., 2010a).

## 3. Acquisition protocol

### 3.1. Stimuli

Every standard test in stress and emotional state assessment has its own advantages and disadvantages (Hosseini, 2009). Most experiments that measure emotion from EEG signals use pictures from the International Affective Picture System (IAPS). The IAPS was evaluated by several American participants on two dimensions of nine points each (1-9). The use of IAPS allows better control of emotional stimuli and simplifies the experimental design (Horlings, 2008).

In this study, we chose the picture presentation test based on the closeness of its assessment to our aims. The stimuli used to elicit the target emotions (calm-neutral and negatively excited) were pictures from (http://www.unifesp.br/dpsicobio/adap/exemplos_fotos.htm). Pictures are rated on a valence dimension, ranging from negative to positive, and an arousal dimension, ranging from calm to excited. Information about both dimensions has been found to be present in EEG signals, which suggests that emotion assessment from EEG signals should be possible.

The participant sits in front of a portable computer screen in a relatively bare room, and the images inform him of the specific emotional event he has to think of. Each experiment consists of 8 trials. Each stimulus consists of a block of 4 pictures, which ensures stability of the emotion over time. Each picture is displayed for 3 seconds, leading to a total of 12 seconds per block. Prior to displaying images, a dark screen with an asterisk in the middle is shown for 10 seconds to separate each trial and to attract the participant’s attention. The detail of each trial is shown in Fig. 2.

This epoch duration was chosen to avoid participant fatigue. In Fig. 3, each presentation cycle starts with a black fixation cross shown for ten seconds, after which the pictures are presented for twelve seconds.

### 3.2. Subjects

The fifteen healthy volunteer subjects were right-handed males between 20 and 24 years of age. Most subjects were students from the biomedical engineering department of Islamic Azad University, Mashhad Branch. Each participant was examined with a dichotic listening test to identify the dominant hemisphere (Sadock, 1998; Hosseini, 2009). All subjects had normal or corrected vision; none of them had neurological disorders. These checks were done to eliminate differences between subjects. All participants gave written informed consent. Each participant was then given a personal-particulars questionnaire. During the pre-test, several questionnaires were evaluated in order to check the best psychological input with which to start the protocol phase; the test used was the State-Trait Anxiety Inventory (STAI). At the end of the experiment, participants were asked to fill in a questionnaire about the experiment and give their opinions (Hosseini, 2009), because the emotion that a participant experiences can differ from the expected value. For that reason, each participant was asked to rate his emotion in a self-assessment.

### 3.3. Procedure

We used a 10-channel Flexcom Infiniti device with 14-bit resolution for data acquisition (http://www.thoughttechnology.com/flexinf.htm). It is connected to a PC through the USB port; an optical cable connects the device, preventing any electrical charge from reaching the participant. The Flexcom Infiniti hardware only worked well with the accompanying software; two programs were available, Biograph Infiniti Acquisition and ezscan. Central activity is monitored by recording EEG. Peripheral activity is assessed using the following sensors: a skin conductance sensor to measure sudation; a respiration belt to measure abdomen expansion; and a plethysmograph to record blood volume pulse. We recorded SC by positioning two dedicated electrodes on the tops of the left index and middle fingers. The BVP and SC signals were acquired at 2048 Hz and the respiration signal at 256 Hz; all three were then down-sampled to 128 Hz to reduce the computational load. EEG was recorded using electrodes placed at 5 positions. The scalp EEG was obtained at locations FP1, FP2, T3, T4 and Pz, as defined by the international 10-20 system, with Ag/AgCl electrodes. In order to measure a reference signal that is (as much as possible) free from brain activity, two electrodes were attached to the participant’s earlobes, and the average of A1 and A2 was used as the reference. The impedance of all electrodes was kept below 5 KΩ. The EEG signal was acquired at a sample rate of 256 Hz. Each recording lasted about 3 minutes. More details of the data acquisition protocol can be found in (Hosseini, 2009).
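The down-sampling step described above can be sketched in a few lines; this is an illustrative NumPy/SciPy reconstruction, not the actual Biograph Infiniti processing chain, and the pulse-like test signal is synthetic.

```python
import numpy as np
from scipy.signal import resample_poly

fs_bvp = 2048       # acquisition rate of the BVP and SC signals (Hz)
fs_target = 128     # common analysis rate after down-sampling (Hz)

# synthetic 3-second BVP-like signal (1.2 Hz pulse wave) for illustration
t = np.arange(0, 3, 1 / fs_bvp)
bvp = np.sin(2 * np.pi * 1.2 * t)

# polyphase resampling applies an anti-aliasing filter before decimation
bvp_128 = resample_poly(bvp, up=1, down=fs_bvp // fs_target)

print(len(bvp), len(bvp_128))  # factor-16 reduction in sample count
```

The same call with `down=2` would handle the 256 Hz respiration signal.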

## 4. Labeling process of EEG signals

An important issue in every cognitive system is the correct labelling of the data. In order to choose the EEG signals best correlated with emotional stress, we implemented a new emotion-related signal recognition system, which had not been studied before (Hosseini, 2009; Hosseini et al., 2010c). We recorded peripheral signals concomitantly in order to first recognize the correlated emotional stress state and then label the correlated EEG signals. In other words, we used the peripheral signals as a tutor for the labeling system.

The process of labeling EEG signals consists of three stages: first, self-assessment; second, the qualitative analysis of peripheral signals; and third, the quantitative analysis of peripheral signals. Fig. 4 shows the different stages of the process. After the experiment, there was also a self-assessment stage, which is a good way to gauge the emotional stimulation “level” of the subject, because emotions are known to be very subjective and dependent on previous experience (Savran et al., 2006). In this way, we are able to get a general idea of the quality of the data, i.e. whether the data are good or bad.

One kind of such data is respiration. Emotional stress processes influence respiration (Ritz et al., 2002; Wilhelm et al., 2006): slow respiration, for example, is linked to relaxation, while an irregular rhythm, quick variations, and cessation of respiration correspond to more aroused emotions like anger or fear. Another is skin conductance, which measures the conductivity of the skin. Since sweat gland activity is known to be controlled by the sympathetic nervous system, Electro Dermal Activity (EDA) has become a common source of information for measuring the activity of the Autonomic Nervous System (ANS). SC increases if the skin is sweaty, for example when one is experiencing emotions such as stress. Moreover, blood pressure and Heart Rate Variability (HRV) are variables that correlate with defensive reactions, pleasantness of a stimulus, and basic emotions. We obtained the Heart Rate (HR) signal from the BVP signal recorded by a plethysmograph; a method to determine HR from a BVP signal is proposed in (Wan & Woo, 2004). Analysis of HRV provides an effective way to investigate the different activities of the ANS: an increase of HR can be due to an increase of sympathetic activity or a decrease of parasympathetic activity. Two frequency bands of the HR spectrum are generally considered for the HR signal: a Low Frequency (LF) band ranging from 0.05 Hz to 0.15 Hz and a High Frequency (HF) band including frequencies between 0.15 Hz and 1 Hz (Hosseini, 2009). In order to analyze the peripheral signals quantitatively, we need to pre-process them to remove environmental noise by applying filters; the peripheral signals were filtered with moving average filters to remove noise.
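The LF and HF band powers described above can be estimated with a standard periodogram approach. The sketch below uses SciPy's Welch estimator on a synthetic heart-rate series; the band limits follow the text, while the 4 Hz HR resampling rate and the test signal are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import welch

fs = 4.0                              # assumed resampling rate of the HR series (Hz)
t = np.arange(0, 256, 1 / fs)
# synthetic HR trace: 70 bpm baseline with a slow 0.1 Hz oscillation (LF band)
hr = 70 + 2.0 * np.sin(2 * np.pi * 0.1 * t)

f, psd = welch(hr, fs=fs, nperseg=256)

def band_power(f, psd, lo, hi):
    # integrate the PSD over [lo, hi) by summing bins times the bin width
    mask = (f >= lo) & (f < hi)
    return psd[mask].sum() * (f[1] - f[0])

lf = band_power(f, psd, 0.05, 0.15)   # Low Frequency band (0.05-0.15 Hz)
hf = band_power(f, psd, 0.15, 1.0)    # High Frequency band (0.15-1 Hz)
print(lf, hf)                         # the 0.1 Hz oscillation falls in the LF band
```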

We used a common set of feature values for analysis of the peripheral signals (Table 1) (Chanel et al., 2009; Hosseini, 2009). The respiration features are from time and frequency domains, the skin conductance features and the blood volume pulse features are from time domain, and the heart rate variability features are from time, frequency domains and fractal dimension.

The total number of features is: [10+9+15+6=40]. After extracting the features, we need to classify them using a classifier. There are several approaches to applying the SVM to multiclass classification (http://www.kernel-machines.org/software.html). The LibSVM MATLAB toolbox (Version 2.9) was used as an implementation of the SVM algorithms (Chang & Lin, 2009). In this study, the one-vs.-all method was implemented. Two SVMs, one corresponding to each of the two emotions, were used. The i-th SVM was trained with all of the training data in the i-th class given positive labels, and all other training data given negative labels.

In the emotional stress recognition process, the feature vector was simultaneously fed into all SVMs and the output of each SVM was passed to the decision logic algorithm to select the best-matching emotional stress state (Fig. 5). In the SVM classifier, a Gaussian Radial Basis Function (RBF) was used as the kernel; the RBF projects the data to a higher dimension.
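The one-vs.-all scheme can be sketched generically: one binary scorer per class, trained with that class as positive and the rest as negative, with the decision logic picking the largest output. The sketch below substitutes a simple regularized least-squares scorer for LibSVM's RBF-SVM, purely to illustrate the scheme; the data and class names are illustrative.

```python
import numpy as np

def train_one_vs_all(X, y, classes, lam=1e-3):
    """One linear scorer per class: class-i samples get target +1, the rest -1."""
    Xb = np.hstack([X, np.ones((len(X), 1))])           # append a bias term
    W = []
    for c in classes:
        t = np.where(y == c, 1.0, -1.0)                 # +1 vs -1 targets
        w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ t)
        W.append(w)
    return np.array(W)

def predict(X, W):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    scores = Xb @ W.T                                   # one score per class
    return scores.argmax(axis=1)                        # decision logic: best scorer wins

# illustrative 2-class data standing in for calm-neutral vs negatively excited
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
W = train_one_vs_all(X, y, classes=[0, 1])
acc = np.mean(predict(X, W) == y)
print(acc)
```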

A confusion matrix will also be used to determine how the samples are classified in the different classes. A confusion matrix gives the percentage of samples belonging to class ω_{i} that are classified as class ω_{j}. The accuracy can be retrieved from the confusion matrix by summing its diagonal elements P_{i,i} weighted by the prior probability P(ω_{i}) of occurrence of the class ω_{i}. The confusion-matrix results of the SVM used for classification of the peripheral signals under the two emotional stress states are given in Table 2.

The results show that the classification accuracy with peripheral signals was 76.95% for the two categories, using an SVM classifier with an RBF kernel.

The number of rejected trials, i.e. those badly classified, is lower than the number of correctly classified trials; the percentage of rejected trials is 11%. The method at this stage has been used to select suitable segments of the EEG signal, improving the accuracy of signal labeling according to emotional stress state. More details of the labeling process can be found in (Hosseini, 2009).

## 5. Analysis of EEG signals

### 5.1. Pre-processing

Before analysis, we first remove data segments containing obvious eye blinks. We then pre-process the EEG signals in order to remove environmental noise and drifts. The data were filtered using a band-pass filter with a 0.5~60 Hz pass band. Although we studied EEG components only up to 30 Hz, we included the 30 to 60 Hz bandwidth because twice the maximum frequency content is needed when analyzing the data using HOS (Hosseini, 2009). The signals were filtered using the “filtfilt” function from the MATLAB Signal Processing Toolbox, which processes the input signal in both the forward and reverse directions and thus performs zero-phase filtering. Preserving the phase information of the signal is very important for higher order spectra (Hosseini, 2009). In addition, a notch filter at 50 Hz was applied to discard the effect of the power lines.
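The pre-processing chain (0.5-60 Hz zero-phase band-pass plus a 50 Hz notch) can be reproduced with SciPy's zero-phase filtering, the Python counterpart of the MATLAB `filtfilt` used in the study. The filter order and notch quality factor below are illustrative choices, not values from the text.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt, sosfiltfilt

fs = 256.0  # EEG sample rate (Hz)

# Butterworth band-pass, 0.5-60 Hz (second-order sections for numerical robustness)
sos_bp = butter(4, [0.5, 60.0], btype='bandpass', fs=fs, output='sos')
# 50 Hz notch for power-line interference (Q = 30 is an illustrative choice)
b_n, a_n = iirnotch(50.0, Q=30.0, fs=fs)

def preprocess(eeg):
    # forward-backward filtering -> zero phase distortion, which preserves
    # the phase information needed for the HOS analysis later on
    x = sosfiltfilt(sos_bp, eeg)
    return filtfilt(b_n, a_n, x)

# demo: a 10 Hz "alpha" component plus 50 Hz mains interference
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
clean = preprocess(eeg)
spec = np.abs(np.fft.rfft(clean)) / len(t) * 2
print(spec[40], spec[200])  # amplitudes at the 10 Hz and 50 Hz bins
```

The 10 Hz component passes almost unchanged while the 50 Hz component is strongly attenuated.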

### 5.2. Quality aspect

Phase space was first introduced by W. Gibbs in 1901; in this space all possible states of a system are represented, with each possible state corresponding to one unique point in the phase space. Together, these points trace out a trajectory. A sketch of the phase portrait may give qualitative information about the dynamics of the system. The method is based on numerical computation of the EEG dynamics: the phase trajectory portrait is drawn in the phase space as time varies, along with the time course of the state variables (Jiu-ming et al., 2004). Chaotic phenomena are identified through comparison, analysis and integration of these portraits. In phase space, a closed curve corresponds to periodic motion, while chaotic motion corresponds to a trajectory that never closes (a strange attractor), which diverges randomly within some region, as shown in Fig. 6.

### 5.3. Feature extraction

Feature extraction is the process of extracting useful information from the signal. We use a set of feature values for the brain signals, extracted for each channel of the EEG. Since brain signals essentially have chaotic and nonlinear behaviour, we performed emotional stress state assessment using both linear and nonlinear characteristics. Nonlinear measures have received the most attention in comparison with linear measures such as time-domain and frequency-domain features. The nonlinear set of features used includes the fractal dimension, approximate entropy and correlation dimension of the data.

#### 5.3.1. Fractal dimension

Fractal dimension (FD) analysis is frequently used in biomedical signal processing, including EEG analysis. Higuchi’s algorithm, unlike many other methods, requires only short time intervals to calculate the fractal dimension. This is very advantageous, because the EEG signal remains stationary only during short intervals and because in EEG analysis it is often necessary to consider short, transient events.

In Higuchi’s algorithm (Higuchi, 1988), *k* new time sequences are constructed from the signal *x*(1), *x*(2),…, *x*(*N*) under study:

where *m* = 1, 2, …, *k*; here *m* and *k* indicate the initial time value and the discrete time interval between points, respectively. For each of the *k* time series *x*^{k}_{m}, the length *L*_{m}(*k*) is computed by:

where *N* is the total length of the data sequence *x* and (*N*-1)/{[(*N*-*m*)/*k*]*k*} is a normalization factor. The average length for each *k* is computed as the mean of the *k* lengths *L*_{m}(*k*) for *m* = 1, 2, ..., *k*. This procedure is repeated for each *k* ranging from 1 to *k*_{max}, obtaining an average length for each *k*. In the curve of *ln*(*L*(*k*)) versus *ln*(1/*k*), the slope of the least-squares linear best fit is the estimate of the fractal dimension.

In this research, the best results for estimating the FD of the EEG were obtained with *k*_{max} = 10, a rectangular window of size *N* = 512 samples (2 seconds) and a window overlap of 0%.
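A compact NumPy implementation of Higuchi's procedure as described above might look as follows; this is a sketch, and the windowing of the EEG into 512-sample segments is left to the caller.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Fractal dimension of a 1-D signal by Higuchi's method (Higuchi, 1988)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    ks = np.arange(1, k_max + 1)
    L = []
    for k in ks:
        Lk = 0.0
        for m in range(1, k + 1):                 # initial time value
            n = (N - m) // k                      # number of k-steps from m
            idx = (m - 1) + np.arange(n + 1) * k  # x(m), x(m+k), x(m+2k), ...
            dist = np.abs(np.diff(x[idx])).sum()
            # (N-1)/(n*k) is the normalization factor from the text
            Lk += dist * (N - 1) / (n * k) / k
        L.append(Lk / k)                          # mean of the k curve lengths
    # FD = slope of ln(L(k)) versus ln(1/k), by least-squares fit
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(L), 1)
    return slope

print(higuchi_fd(np.arange(512.0)))               # a straight line: FD close to 1
```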

#### 5.3.2. Correlation dimension

Correlation dimension (D_{2}) is one of the most widely used measures of a chaotic process. In this research, we used the Grassberger and Procaccia Algorithm (GPA) for estimating D_{2} (Grassberger & Procaccia, 1983). The choice of an appropriate time delay τ and embedding dimension m is important for the success of reconstructing the attractor from finite data. The idea is to construct a function C(r) that gives the probability that two arbitrary points on the orbit are closer together than r, where r is the radius of a sphere in the multidimensional space. This is done by calculating the separation between every pair of N data points and sorting them into bins of width dr proportional to r. More precisely, GPA computes the correlation integral C(r) given by,

where ||V(j)-V(i)|| is the distance between the points V(j) and V(i), and Θ(.) is the Heaviside function. D_{2} is estimated as the slope of the log(C(r)) vs. log(r) graph as follows:

We calculated D_{2} with embedding dimensions varying from 2 to 10 for all the subjects. It can be seen that D_{2} saturates after an embedding dimension of 7 (Fig. 7). Therefore, we chose d_{sat} = 8 for constructing the embedding space and estimating the invariants (in the test, m = 8 and τ = 6).

The determination is based on calculating the relative number of pairs of points in the phase-space set that are separated by a distance less than *r*. For a self-similar attractor, the local scaling exponent is constant, and this region is called the scaling region; this scaling exponent can be used as an estimate of the correlation dimension. If one plots *C*(*N*, *r*) vs. *r* on a *log*-*log* scale for *d*_{sat} = 8, the correlation dimension is given by the slope of the *log*(*C*(*r*)) vs. *log*(*r*) curve over a selected range of *r*, and the slope of this curve in the scaling region is estimated by least-squares fitting (Fig. 8).
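The GPA correlation integral and the log-log slope estimate can be sketched as below. The delay embedding, radii and test data are illustrative (the study's values were m = 8, τ = 6 on EEG segments); as a sanity check, i.i.d. points filling a 3-D cube should give D_{2} close to 3.

```python
import numpy as np

def embed(x, m, tau):
    """Delay-embed a scalar series into m-dimensional phase-space vectors."""
    n = len(x) - (m - 1) * tau
    return np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])

def correlation_integral(V, r):
    """Fraction of point pairs closer than r (max-norm), excluding self-pairs."""
    d = np.abs(V[:, None, :] - V[None, :, :]).max(axis=-1)
    mask = ~np.eye(len(V), dtype=bool)
    return np.mean(d[mask] < r)

def corr_dim(V, radii):
    """D2 as the log-log slope of C(r) over the chosen scaling region."""
    C = np.array([correlation_integral(V, r) for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

rng = np.random.default_rng(1)
V = embed(rng.random(1500), m=3, tau=1)       # uniform noise fills the 3-D cube
print(corr_dim(V, radii=[0.05, 0.1, 0.2]))    # expected to be close to 3
```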

#### 5.3.3. Approximate entropy

Pincus introduced the idea of approximate entropy (ApEn) in 1991 as a useful complexity measure for biological time series data (Pincus, 1991). ApEn originates from nonlinear dynamics. It is a statistical instrument initially designed to be applied to finite-length and noisy time series data; it is scale invariant and model independent, evaluates both dominant and subordinate patterns in data, and discriminates series for which clear feature recognition is difficult. Notably, it detects changes in underlying episodic behaviour that are not reflected in peak occurrences or amplitudes. To make the concept of ApEn clearer, we describe the definition step by step as follows. Let the original data be <*X*(*n*)> = *x*(1), *x*(2),…, *x*(*N*), where *N* is the total number of data points. The calculation of the ApEn of a signal of finite length is performed as follows. First, fix a positive integer *m* and a positive real number *r*_{f}. Next, from the signal *x*, form the *N*-*m*+1 vectors defined by (5).

The quantity is calculated as in (6),

where the distance between the vectors is defined as in (7).

Next, the quantity is calculated as in (8).

Then increase the dimension to m+1, repeat the above steps, and find the corresponding quantity.

In actual operation, the number of data points is limited; when the data length is N, the result obtained through the above steps is the estimate of ApEn, which can be denoted as in (10).

Obviously, the value of the estimate depends on *m* and *r*_{f}. The parameter *r*_{f} corresponds to an a priori fixed distance between neighboring trajectory points; therefore, *r*_{f} can be viewed as a filtering level, and the parameter *m* is the embedding dimension determining the dimension of the phase space. As suggested by Pincus, *r*_{f} is chosen according to the signal’s standard deviation (SD); in this chapter we use the values *r*_{f} = 0.2 SD and *m* = 2, with the SD taken over the signal segment under consideration.
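The steps above translate directly into NumPy. This sketch follows the usual Pincus formulation (self-matches included) with r_f = 0.2 SD and m = 2; the sine and noise signals are only illustrative inputs, chosen so that the regular signal yields the lower entropy.

```python
import numpy as np

def apen(x, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r_f, N) of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r_f = r_factor * x.std()            # filtering level from the signal's SD

    def phi(mm):
        # all N-mm+1 embedding vectors of length mm
        V = np.array([x[i:i + mm] for i in range(N - mm + 1)])
        # C_i: fraction of vectors within r_f of vector i (max-norm distance)
        C = np.mean(np.abs(V[:, None, :] - V[None, :, :]).max(axis=-1) <= r_f,
                    axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

t = np.linspace(0, 10, 300)
regular = np.sin(2 * np.pi * t)                          # predictable signal
noisy = np.random.default_rng(2).standard_normal(300)    # irregular signal
print(apen(regular), apen(noisy))                        # regular << noisy
```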

#### 5.3.4. Wavelet coefficients

Discrete Wavelet Transform (DWT) based feature extraction has been successfully applied, with promising results, in physiological pattern recognition applications (Murugappan et al., 2009). The choice of a suitable wavelet and of the number of decomposition levels is very important in the analysis of signals using the DWT. In this study, we used the Daubechies wavelet of order 4 (db4) to extract the statistical features from the EEG signal (Murugappan et al., 2009). The number of decomposition levels is chosen based on the dominant frequency components of the signal: the levels are chosen such that those parts of the signal that correlate well with the frequencies required for classification are retained in the wavelet coefficients. Since the EEG signals do not have any useful frequency components above 32 Hz, the number of levels was chosen to be 5. Thus the signal is decomposed into the details D1-D5 and one final approximation, A5. The ranges of the various frequency bands are shown in Table 3.

The extracted wavelet coefficients provide a compact representation that shows the energy distribution of the EEG signal in time and frequency. Table 3 presents the frequencies corresponding to the different levels of decomposition for db4 wavelets with a sampling frequency of 256 Hz. It can be seen from Table 3 that the component A5 lies within the delta band (0-4 Hz), D5 within theta (4-8 Hz), D4 within alpha (8-13 Hz) and D3 within beta (13-30 Hz). Lower-level decompositions, corresponding to higher frequencies, have negligible magnitudes in a normal EEG. In order to further reduce the dimensionality of the extracted feature vectors, statistics over the set of wavelet coefficients were used:

- Mean of the absolute values of the wavelet coefficients in each sub-band
- Average power of the wavelet coefficients in each sub-band
- Standard deviation of the wavelet coefficients in each sub-band

These features are extracted for each channel, so the total number of features by this method is: [3×4] = 12.
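In Python, the same decomposition and statistics can be sketched with the PyWavelets package (a stand-in for the MATLAB wavelet tools presumably used in the study); the random test segment is illustrative.

```python
import numpy as np
import pywt

def wavelet_features(sig):
    """3 statistics over the A5/D5/D4/D3 sub-bands of a 5-level db4 DWT."""
    # wavedec returns [A5, D5, D4, D3, D2, D1] for level=5
    coeffs = pywt.wavedec(sig, 'db4', level=5)
    bands = coeffs[:4]   # A5 (delta), D5 (theta), D4 (alpha), D3 (beta)
    feats = []
    for c in bands:
        feats += [np.mean(np.abs(c)),   # mean absolute value
                  np.mean(c ** 2),      # average power
                  np.std(c)]            # standard deviation
    return np.array(feats)              # 3 statistics x 4 sub-bands = 12

sig = np.random.default_rng(3).standard_normal(512)  # one 2-second segment
print(wavelet_features(sig).shape)
```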

#### 5.3.5. Higher order spectra parameters

We analyzed the EEG signal using higher order spectra, which are spectral representations of higher order moments or cumulants of a signal (Hosseini et al., 2010b). In this part of the chapter, we study features related to the third order statistics of the signal, namely the bispectrum. The bispectrum is a complex quantity, which has both magnitude and phase. It is the Fourier transform of the third order correlation of the signal and is given by,

where * denotes the complex conjugate, *X*(*f*) is the Fourier transform of the signal *x*(*nT*) and *E*[.] stands for the expectation operation. This is known as the direct Fast Fourier Transform (FFT) based method (Nikias & Mendel, 1993). There is also an indirect method, which is used in this study; for more details please refer to (Hosseini et al., 2010b; Swami et al., 2000). If the bispectrum of a signal is zero, none of the wave components are coupled to each other.
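The direct FFT estimate averages the triple product X(f1)X(f2)X*(f1+f2) over signal segments. A minimal NumPy sketch is given below; the segment length and the quadratically coupled test frequencies are illustrative, and the demo shows why coupling matters: the component at f1+f2 carries phase p1+p2, so its bispectral contribution adds coherently across segments instead of averaging out.

```python
import numpy as np

def bispectrum(x, nfft=64):
    """Direct-method bispectrum estimate: mean of X(f1) X(f2) X*(f1+f2)."""
    segs = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, nfft)]
    k = np.arange(nfft)
    sum_idx = (k[:, None] + k[None, :]) % nfft   # index of f1 + f2 (mod nfft)
    B = np.zeros((nfft, nfft), dtype=complex)
    for s in segs:
        X = np.fft.fft(s - s.mean())
        B += X[:, None] * X[None, :] * np.conj(X[sum_idx])
    return B / len(segs)

rng = np.random.default_rng(4)
nfft, nseg, k1, k2 = 64, 64, 6, 10
n = np.arange(nfft)
segs = []
for _ in range(nseg):
    p1, p2 = rng.uniform(0, 2 * np.pi, 2)
    # third component at bin k1+k2 is phase-coupled: its phase is p1 + p2
    segs.append(np.cos(2 * np.pi * k1 * n / nfft + p1)
                + np.cos(2 * np.pi * k2 * n / nfft + p2)
                + np.cos(2 * np.pi * (k1 + k2) * n / nfft + p1 + p2))
B = bispectrum(np.concatenate(segs), nfft)
print(abs(B[k1, k2]))   # large peak at (f1, f2) because of the coupling
```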

Assuming that there is no bispectral aliasing, the bispectrum of a real-valued signal is uniquely defined by the triangle *f*_{2} ≥ 0, *f*_{1} ≥ *f*_{2} and *f*_{1}+*f*_{2} ≤ *π*. For real processes, the discrete bispectrum has symmetry properties that give 12 symmetry regions in the (*f*_{1}, *f*_{2}) plane (Swami et al., 2000). Some of these regions can be seen in (12):

The normalized bispectrum (or bicoherence) is defined as

where P(*f*_{1}) is the power spectrum.

Since the bispectrum and bicoherence alone cannot fully characterize a signal, Hinich developed algorithms to test for non-skewness (termed Gaussianity) and linearity (Hinich, 1982). The basic idea is that if the third-order cumulants of a process are zero, then its bispectrum is zero, and hence its bicoherence is zero. If the bispectrum is not zero, then the process is non-Gaussian; if the process is linear and non-Gaussian, then the bicoherence is a non-zero constant (Hosseini et al., 2010b).

The Gaussianity test (actually zero-skewness test) involves deciding whether the expected value of the bicoherence is zero, that is, *E*{*Bic*(*f*_{1},*f*_{2})}=0. The test of Gaussianity is based on the mean bicoherence power,

The squared bicoherence is chi-squared (χ²) distributed with two degrees of freedom and non-centrality parameter λ (Swami et al., 2000). In (14), the squared bicoherence is summed over P points in the non-redundant region, S is the estimated statistic for the Gaussianity test, which is chi-squared distributed with 2P degrees of freedom, and P_{fa} is the probability of false alarm in rejecting the Gaussian hypothesis. More details can be found in (Hosseini et al., 2010b; Swami et al., 2000). In order to calculate these features, we used a 256-sample FFT with a default C parameter of 0.51; on this basis, the 2P degrees of freedom equal 96. The analysis was done using the Higher Order Spectral Analysis (HOSA) toolbox (Swami et al., 2000). The bicoherence was computed using the direct FFT method in the toolbox.

For the whole bifrequency plane region, four quantities were calculated: sum of the bispectrum magnitudes, sum of the squares of the bispectrum magnitudes, sum of the bicoherence magnitudes, and sum of the squares of the bicoherence magnitudes.
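As an illustration of these four quantities, the following sketch estimates the bispectrum by the direct FFT (segment-averaging) method and sums the magnitudes over the non-redundant triangle. This is not the HOSA toolbox code; the function name and the particular bicoherence normalization (Hinich-style, |E[t]|/√E[|t|²]) are assumptions for illustration.

```python
import numpy as np

def bispectrum_features(x, nfft=256):
    """Direct-FFT bispectrum estimate of a 1-D signal, returning the four
    summary quantities: sum |B|, sum |B|^2, sum bic, sum bic^2 over the
    non-redundant triangle f2 <= f1, f1 + f2 < nfft/2."""
    nseg = len(x) // nfft
    segs = x[:nseg * nfft].reshape(nseg, nfft)
    segs = segs - segs.mean(axis=1, keepdims=True)   # remove segment means
    X = np.fft.fft(segs, nfft, axis=1)
    n2 = nfft // 2
    B = np.zeros((n2, n2), dtype=complex)            # bispectrum estimate
    D = np.zeros((n2, n2))                           # normalization term
    for f1 in range(n2):
        for f2 in range(f1 + 1):                     # triangle f2 <= f1
            if f1 + f2 < n2:
                t = X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2])
                B[f1, f2] = t.mean()                 # average over segments
                D[f1, f2] = (np.abs(t) ** 2).mean()
    with np.errstate(divide="ignore", invalid="ignore"):
        bic = np.where(D > 0, np.abs(B) / np.sqrt(D), 0.0)
    mag = np.abs(B)
    return (mag.sum(), (mag ** 2).sum(), bic.sum(), (bic ** 2).sum())
```

By the Cauchy-Schwarz inequality this bicoherence is bounded in [0, 1] at each bifrequency, so the four sums are always finite and non-negative.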

Since the bispectrum and bicoherence are functions of *f*_{1} and *f*_{2}, we define the features over five frequency intervals on each axis, as can be seen in Fig. 9, giving 15 distinct regions. The defined features are then computed in each of these 15 regions and over the whole frequency range. Together with the three features obtained from Hinich's tests for Gaussianity and linearity, this adds up to 7 features for each channel.

These features are extracted for each of the five channels, so the total number of features from this method is [5×4×(15+1)] + [5×3] = 335. Contour plots of the indirect estimate of the bispectrum are shown as examples for the T3 channel in Figs. 10 and 11.

### 5.4. Normalization

In order to normalize the features to the interval [-1, 1], we used (15).

Here Y_{norm} is the relative amplitude.
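Equation (15) is not reproduced in the text; a minimal sketch of a min-max scaling to [-1, 1] consistent with this description (the function name is an assumption) is:

```python
import numpy as np

def normalize(y):
    """Min-max scaling of a feature vector to the interval [-1, 1].
    Sketch of what equation (15) likely denotes; the exact form is
    not reproduced in the text."""
    y = np.asarray(y, dtype=float)
    y_min, y_max = y.min(), y.max()
    return 2.0 * (y - y_min) / (y_max - y_min) - 1.0
```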

### 5.5. Feature selection

The feature vector contains 82 features per EEG channel recorded over a period of 2 seconds. This raises the problem of dimensionality, which we address in this section by selecting a subset of the best features. Feature selection is also of interest for improving the computational speed of the classification algorithm. Several methods of selecting appropriate features exist; one of them is the Genetic Algorithm (GA) (Haupt et al., 2004). The motivation for using a genetic algorithm for feature selection is to reduce the computational load on the training system while still finding near-optimal feature subsets relatively quickly. The GA used a population of size 100, starting with randomly generated genomes. The probability of mutation was set to 0.01 and the probability of crossover to 0.4. The classification performance of the trained network on the whole dataset was returned to the GA as the value of the fitness function (Fig. 12). We attempted to detect the feature sets related to the negative/calm emotional response in the EEG signal.

We used the genetic algorithm to assess all the features jointly, because an optimal feature subset is not necessarily obtained by simply combining a few individually superior features: the features may carry overlapping information about the data.
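The GA described above can be sketched as follows, with the stated parameters (population 100, mutation 0.01, crossover 0.4). The fitness function here is a placeholder for the classifier accuracy the text describes; the generation count and selection/crossover details are assumptions, not the authors' exact implementation.

```python
import numpy as np

def ga_select(n_features, fitness, pop_size=100, p_mut=0.01, p_cross=0.4,
              n_gen=40, rng=None):
    """Binary-genome GA for feature selection: each genome is a 0/1 mask
    over the feature vector; fitness should return the classification
    performance obtained with that mask (higher is better)."""
    rng = rng or np.random.default_rng(0)
    pop = rng.integers(0, 2, size=(pop_size, n_features))
    for _ in range(n_gen):
        scores = np.array([fitness(g) for g in pop])
        # fitness-proportional (roulette-wheel) selection on shifted scores
        shifted = scores - scores.min()
        probs = shifted / shifted.sum() if shifted.sum() > 0 else None
        pop = pop[rng.choice(pop_size, size=pop_size, p=probs)]
        # single-point crossover on consecutive pairs
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                cut = int(rng.integers(1, n_features))
                tmp = pop[i, cut:].copy()
                pop[i, cut:] = pop[i + 1, cut:]
                pop[i + 1, cut:] = tmp
        # bit-flip mutation
        flip = rng.random(pop.shape) < p_mut
        pop = np.where(flip, 1 - pop, pop)
    scores = np.array([fitness(g) for g in pop])
    return pop[scores.argmax()]
```

In practice the fitness call would train the SVM or ENN on the masked features and return its accuracy, which is by far the dominant cost of the search.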

### 5.6. Classification

After extracting good features, we still have to map them to the corresponding emotional stress states in the EEG; this is the task of the classifier. In this research, we used both a static and a dynamic classifier, explained below.

#### 5.6.1. Support vector machine

Support vector machines are maximum-margin classifiers that maximize the distance between the decision surface and the nearest data point. A nonlinear support vector machine maps the input space to a high-dimensional feature space and then constructs an optimal linear hyperplane in that feature space, which corresponds to a nonlinear decision surface in the input space. The major challenge in training a learning machine for supervised classification is to find a kernel function that not only captures the essential properties of the data distribution but also prevents over-fitting. We used three kernel functions: linear, polynomial, and radial basis function (RBF) kernels. The C parameter, which regulates the trade-off between training-error minimization and margin maximization, was empirically set to 1 in this study.
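A minimal sketch of this setup using scikit-learn (an assumption; the original work does not name its SVM implementation), with the three kernels and C = 1:

```python
import numpy as np
from sklearn.svm import SVC

def train_svms(X, y, C=1.0):
    """Fit one SVM per kernel compared in the text (linear, polynomial,
    RBF), with the margin/error trade-off parameter C fixed to 1."""
    models = {}
    for kernel in ("linear", "poly", "rbf"):
        models[kernel] = SVC(kernel=kernel, C=C).fit(X, y)
    return models
```

Each fitted model can then be scored on held-out data to compare kernels, as done for the RBF kernel in the results tables.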

#### 5.6.2. Elman Neural Network

The Elman Neural Network (ENN) is a two-layer backpropagation network with an additional feedback connection from the output of the hidden layer to its input. This feedback path allows the Elman network to learn to recognize and generate temporal patterns as well as spatial patterns. The Elman network has tansig neurons in its hidden (recurrent) layer and purelin neurons in its output layer. This combination is special in that two-layer networks with these transfer functions can approximate any function (with a finite number of discontinuities) with arbitrary accuracy, provided the hidden layer has enough neurons (Demuth et al., 2008). Here, the number of input neurons equals the number of optimum features, the number of output neurons is 1, and the number of hidden neurons was empirically chosen as 8. A sigmoid function was applied in the hidden and output layers, because the sigmoid function is nonlinear and differentiable. The Levenberg-Marquardt backpropagation algorithm was used for training, as it typically converges faster than the other training functions. The error ratio for stopping training was set to 0.001.
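The recurrent structure can be illustrated with a minimal forward pass in NumPy (a sketch only: the class name and initialization scale are assumptions, and Levenberg-Marquardt training is omitted):

```python
import numpy as np

class ElmanCell:
    """Minimal Elman layer: tanh ("tansig") hidden units fed by the input
    and by a copy of the previous hidden state (the context units), with a
    linear ("purelin") output layer."""
    def __init__(self, n_in, n_hidden=8, n_out=1, rng=None):
        rng = rng or np.random.default_rng(0)
        scale = 0.1
        self.Wx = rng.standard_normal((n_hidden, n_in)) * scale    # input weights
        self.Wh = rng.standard_normal((n_hidden, n_hidden)) * scale  # context weights
        self.Wo = rng.standard_normal((n_out, n_hidden)) * scale   # output weights
        self.h = np.zeros(n_hidden)                                # context state

    def step(self, x):
        """Process one time step; the hidden state carries temporal context."""
        self.h = np.tanh(self.Wx @ x + self.Wh @ self.h)
        return self.Wo @ self.h
```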

## 6. Result

In this research, we used 2-second time intervals with a rectangular window and no overlap, corresponding to blocks of 512 samples of the EEG signals, for data segmentation. In classification, it is important that the training set contains enough samples (or instances); on the other hand, the test set must also contain enough samples to avoid a noisy estimate of the model performance. We used around 75% of the EEG signals for training, 15% of the data for testing whether the learned relationship between the data and emotional stress is correct, and the remaining 10% for validation. The results show that the average classification accuracies with EEG signals were 84.6% and 83.1% for the 2 categories (calm-neutral vs. negatively excited), using the SVM and ENN classifiers respectively. Sample size is a particular concern in our case, since the number of emotional stimulations is limited by the duration of the protocols, which should not be too long, to avoid participant fatigue as well as elicitation of undesired emotions. Cross-validation methods help to address this problem by splitting the data into different training/test sets so that each sample is used at least once for training and once for testing. The two well-known cross-validation methods are k-fold and leave-one-out. The system was tested using the 4-fold cross-validation method. This method reduces the possibility of deviations in the results due to some special distribution of training and test data, and ensures that the system is tested on samples different from those it has seen during training. With this method, four accuracies are obtained from the four test sets, and the average accuracy is computed. The classification results of the EEG signals under the two emotional stress states are given in Table 4.
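The 4-fold procedure described above can be sketched as follows with scikit-learn (an assumption; the original implementation is not specified), averaging the four fold accuracies:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVC

def cv_accuracy(X, y, n_splits=4):
    """4-fold cross-validation: each sample is used once for testing and
    three times for training; returns the mean of the fold accuracies."""
    accs = []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in kf.split(X):
        clf = SVC(kernel="rbf", C=1.0).fit(X[train_idx], y[train_idx])
        accs.append(clf.score(X[test_idx], y[test_idx]))
    return float(np.mean(accs))
```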

Table 5 gives the average classification accuracy for five different channels of the EEG signals under the two emotional stress states, using the RBF kernel in the SVM and the ENN classifier.

Table 6 gives the average classification accuracy for different feature sets under the two emotional stress states, using the RBF kernel in the SVM and the ENN classifier.

## 7. Discussion and conclusion

In this research, we propose an approach to classifying emotional stress in the two main areas of the valence-arousal space using bio-signals. Several researchers have shown that it is possible to measure emotional cues using EEG measurements, an important precondition for finding emotional stress states in brain activity (Chanel, 2009; Horlings, 2008; Takahashi, 2004). We chose the picture presentation test based on the closeness of its assessment to our aims. The reason we chose brain signals over purely peripheral signals is that brain signals represent behaviour directly at its source, whereas the peripheral signals are secondary manifestations of the autonomic nervous system in response to emotional stress.

Comparing with the analysis of the peripheral signals, the breathing and SC signals are less reliable in accuracy than the BVP and HRV signals. The results showed that the classification accuracy with peripheral signals was 76.95% for the two categories, using the SVM classifier with an RBF kernel. In order to choose the best channels for the EEG signals, we implemented a new cognitive model (Hosseini et al., 2010a) and eventually used signals from the frontal, temporal and occipital electrodes as the most important ones. Relying only on personal moods and the subject's self-assessment to confirm the quality of the registered brain signals can cause many errors; as a result, we need to use the peripheral signals as a secondary trainer. In order to choose the EEG signals best correlated with emotional stress states, we implemented a new emotion-related signal recognition system, which had not been studied before (Hosseini & Khalilzadeh, 2010). We recorded the peripheral signals concomitantly in order to first recognize the correlated emotional stress state and then label the correlated EEG signal. Recent research on EEG signals has revealed the chaotic nature of this signal. Because brain signals essentially have chaotic nonlinear behaviour, it is not reasonable to rely only on conventional methods that assume emotion can be analyzed by linear models; we therefore performed the emotional stress state assessment using both linear and nonlinear features. Wavelet coefficients, higher order spectra and chaotic invariants such as fractal dimension, approximate entropy and correlation dimension were used to extract the characteristics of the EEG signals.
For most nonlinear measures, a dimension must be defined to visualize the attractor in phase space. The problem common to all of them is that the phase-space dimension is not constant across channels of the recorded EEG signals or across subjects, and, depending on the conditions, the chosen dimension can differ. Moreover, the performance of each measure can depend on the chosen dimension, so with the help of some equations and trial and error, the optimum dimension for obtaining the best results can be found.

The results showed that the correlation dimension of the negative emotional stress state is lower than that of the calm state, and Higuchi's algorithm indicates a similar reduction in the FD value for the negative emotional stress state compared to the calm state. The reduction in the FD and *D*_{2} values characterizes a reduction in brain system complexity for participants in the negative emotional stress state; therefore, the number of dynamic equations necessary to describe the brain state decreases in the negative emotional stress state. A new approach to analyzing emotional stress states by approximate entropy is described in this research. Approximate entropy is a quantitative parameter characterizing the complexity (or irregularity) of EEG signals in different brain function states. The analysis of the nonlinear characteristics shows that, if the parameters and the length of the data are determined appropriately, the results can be a good representation of brain behaviour in emotional stress states. Hence, the application of nonlinear time-series analysis to EEG signals offers insight into the dynamical nature and variability of the brain signals. It therefore seems that nonlinear features would lead to a better understanding of how emotional activities work.

In this research, for the first time in this investigational field, we performed feature extraction using higher order spectra for the assessment of emotional stress states. Contour plots for different channels of the EEG signals are shown as examples in Figs. 10 and 11. These figures show that most of the changes are amplifications or attenuations of the peaks, or shifts of the peaks in the bifrequency plane. We conclude that HOS analysis can be an accurate tool in the assessment of emotional stress states.

Two of the advantages of this research, which support the credibility of our results, are the use of the dichotic hearing test and the use of peripheral signals to label the brain signals. We used both a static and a dynamic classifier, and the results show no meaningful difference between them. We can therefore deduce that there are no specific dynamics in short-term data acquisition, which can be attributed to the short time intervals of 2 seconds. By performing longer tests and using bigger intervals, there is hope of identifying some dynamics.

The results also demonstrate the importance of EEG signals for emotional stress assessment by classification, as they have a better time response than the peripheral signals. We used 2-second time intervals with a rectangular window and no overlap to analyze the brain signals, which resulted in a time resolution of 2 seconds in recognizing emotional stress states. If we had used shorter time intervals with overlap, we could have achieved a greater, though virtual, time resolution, which can be useful, for example, in biofeedback applications. The problem of high dimensionality was solved by using the Genetic Algorithm as a feature selection method. The results showed that the average classification accuracies were 84.6% and 83.1% for the two categories of emotional stress states using the SVM and ENN classifiers respectively; thus the two classifiers give similar results in recognizing emotional stress states. In addition, the results showed that this new fusion link between the EEG and peripheral signals is more robust than the separate signals. This is a great improvement over other similar published research: we achieved a noticeable improvement of 6.3% and 10% in accuracy, using the SVM and ENN classifiers respectively, compared to our previous studies in the same field (Hosseini et al., 2009).

Analyzing the results of previous research is a difficult task: to compare the results of studies that frame emotion assessment as a classification problem, it is important to consider the way the emotions are elicited and the number of participants; the latter is especially important for introducing a user-independent system. Due to these differences, we cannot directly compare our results with those of other studies.