Open access peer-reviewed chapter

SSVEP-Based BCIs

Written By

Rajesh Singla

Submitted: 08 December 2017 Reviewed: 20 February 2018 Published: 17 October 2018

DOI: 10.5772/intechopen.75693

From the Edited Volume

Evolving BCI Therapy - Engaging Brain State Dynamics

Edited by Denis Larrivee


Abstract

This chapter describes how flickering targets elicit frequency-locked changes in the EEG signal of the subject, which can be interpreted as the user's intentions and used to drive machine commands. The steady-state response of the EEG to a periodic event, such as a visual stimulus presented on a computer screen, is called the steady-state visually evoked potential (SSVEP). This feature of the EEG signal can form the basis of input to assistive devices that improve the quality of life of locked-in patients, as well as performance-enhancing devices for healthy subjects. The chapter covers SSVEP stimuli, feature extraction techniques, feature classification techniques and a few applications of SSVEP-based BCIs.

Keywords

  • EEG
  • SSVEP
  • BCI
  • BCI-wheelchair
  • ITR
  • evoked potential
  • EEG assistive devices

1. Introduction

1.1. Evoked potential

Evoked potentials (EPs) are electrical signals measured from the scalp in response to stimulation by some external stimulus. According to the stimulus modality, evoked potentials are distinguished as visual, auditory and somatosensory evoked potentials.

Event-related potential (ERP) is a broader term covering both EPs and brain responses prompted by cognitive processes that are triggered by external stimuli or by preparatory mechanisms for motor action [1, 2].

1.2. Visual evoked potential

Visual evoked potentials (VEPs) are the brain activity modulations occurring in the visual cortex after a visual stimulus is encountered [3]. They are easy to detect, since moving the stimulus closer to the central visual field greatly enhances the amplitude of VEPs [4].

Based on the following criteria, VEPs are classified into different categories [5]:

  1. Morphology of the optical stimuli

    1. VEPs caused by flash stimulation

    2. VEPs caused by graphic patterns such as a checkerboard lattice or grating

  2. Frequency of visual stimulation

    1. Transient VEPs (TVEPs): VEPs with visual stimulation frequency below 6 Hz

    2. Steady-state VEPs (SSVEPs): VEPs with visual stimulation frequency above 6 Hz [3, 6]

  3. Field stimulation

    1. Whole field VEPs

    2. Half field VEPs

    3. Part field VEPs

VEP-based systems require the user to gaze at the screen and keep the eyes fixed on one particular point. Such exogenous signals are therefore not suitable for advanced-stage amyotrophic lateral sclerosis (ALS) patients or for users with uncontrollable eye or neck movements [7].

1.3. Steady-state visual evoked potential

Regan experimented with long trains of stimuli comprising sinusoidally modulated monochromatic light [8]. Small-amplitude, stable VEPs were generated, which were named "steady-state" visually evoked potentials (SSVEPs) of the human visual system. Hence, steady-state visual evoked potentials (SSVEPs) are defined as the potentials elicited by changes in the visual field occurring at a frequency higher than 6 Hz.

When a user is presented with a periodic stimulus, a strong SSVEP is generated at the occipital areas of the brain [10]. The SSVEP is usually acquired from electrode sites such as Oz, O1, O2, Pz, P3 and P4 and from locations surrounding the occipital region. While the most commonly used SSVEP frequency range is 4–60 Hz, resonance phenomena are generally observed around 10, 20, 40 and 80 Hz [11].

1.4. Feature extraction

Various features are procured from the properties of the brain signals that carry discriminative information. Feature extraction techniques are used to extract such features even when they are overlapped in time and space by several brain signals.

Feature extraction from SSVEP signals has often been done by studying the amplitude at the target frequency [12, 13, 14]; alternatively, independent component analysis (ICA) [16], the fast Fourier transform (FFT) [15], the continuous wavelet transform (CWT) [17, 18], the Hilbert-Huang transform (HHT) [19, 20] or the power spectral density (PSD) [21] can be used.

1.4.1. Independent component analysis (ICA)

ICA is a statistical procedure for separating a set of mixed signals into its sources without any prior information about the nature of the signal. The only criterion is that the unknown underlying sources must be statistically mutually independent. ICA expresses an EEG signal as follows:

$x(t) = f(s(t)) + n(t)$   (E1)

where $f$ is an unknown mixing function, $s(t)$ is the source vector, $n(t)$ is an additive arbitrary noise vector and $x(t)$ is the resultant EEG signal. ICA mainly follows two approaches: spatial ICA, which extracts independent spatial maps, and temporal ICA, which extracts independent time courses.
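As a minimal sketch, temporal ICA can be applied to multichannel EEG with scikit-learn's FastICA; the channel count, sampling rate and synthetic data below are illustrative assumptions, not the pipeline of any study cited in this chapter.

```python
# Minimal sketch: temporal ICA on synthetic multichannel "EEG".
# Channel count, sampling rate and data are illustrative assumptions.
import numpy as np
from sklearn.decomposition import FastICA

fs = 256                                   # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)                # 4 s of data
n_channels = 8

# Build x(t) = f(s(t)) + n(t) with a linear mixer f: one 10 Hz
# SSVEP-like source mixed into all channels plus additive noise.
rng = np.random.default_rng(0)
source = np.sin(2 * np.pi * 10 * t)
mixing = rng.standard_normal((n_channels, 1))
x = mixing @ source[np.newaxis, :] + 0.5 * rng.standard_normal((n_channels, t.size))

# FastICA expects (n_samples, n_features); unmix into independent time courses
ica = FastICA(n_components=n_channels, max_iter=1000, random_state=0)
sources = ica.fit_transform(x.T)           # shape: (n_samples, n_components)
print(sources.shape)
```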

In the study by Wang et al., EEG recorded over the visual cortex was separated into SSVEP signals and background noise using ICA [16].

1.4.2. Fast Fourier transform (FFT)

The Fourier transform (FT) family comprises the fast Fourier transform (FFT) and the discrete Fourier transform (DFT). Wang et al. used a 256-point FFT to transform EEG signals into their frequency-domain representation at five stimulation frequencies: 9, 11, 13, 15 and 17 Hz [21]. Mouli et al. considered the maximum FFT amplitudes as the prime parameter for differentiating target stimuli of 7, 8, 9 and 10 Hz [22].
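A hedged sketch of this kind of FFT-based target detection follows: the target is taken to be the candidate frequency with the largest FFT amplitude. The sampling rate and frequency set are assumptions for illustration.

```python
# Hedged sketch: pick the SSVEP target as the candidate frequency
# with the largest FFT amplitude. Frequencies and sampling rate are
# illustrative assumptions.
import numpy as np

fs = 256                                    # assumed sampling rate (Hz)
targets = [7.0, 8.0, 9.0, 10.0]             # candidate stimulus frequencies

def fft_amplitudes(epoch, fs, freqs):
    """FFT magnitude of a single-channel epoch at each candidate frequency."""
    spectrum = np.abs(np.fft.rfft(epoch))
    bins = np.fft.rfftfreq(epoch.size, d=1 / fs)
    return [spectrum[np.argmin(np.abs(bins - f))] for f in freqs]  # nearest bin

# Example: a noisy 9 Hz response should win
t = np.arange(0, 2, 1 / fs)
epoch = np.sin(2 * np.pi * 9 * t) + 0.3 * np.random.randn(t.size)
amps = fft_amplitudes(epoch, fs, targets)
print(targets[int(np.argmax(amps))])        # -> 9.0
```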

1.4.3. Spatial filtering

The most commonly used EEG recording methods yield monopolar signals or Laplacian-filtered signals. For channel Oz, for example:

Monopolar: $y_{Oz}^{M} = y_{Oz} - y_{A2}$

Laplacian: $y_{Oz}^{L} = y_{Oz}^{M} - \dfrac{y_{O1}^{M} + y_{O2}^{M}}{2}$

Laplacian signals are extracted using the electrodes on both sides of the channel (e.g., Cz utilizes C3 and C4) [23].
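The derivations above translate directly into code. The following minimal sketch assumes a dictionary of raw per-channel recordings that includes an ear-reference electrode A2; the names follow the Oz/O1/O2 example in the text.

```python
# Minimal sketch of the monopolar and Laplacian derivations, assuming
# raw recordings stored per channel together with an ear reference A2.
import numpy as np

def monopolar(raw, ch):
    """Monopolar signal: channel minus the A2 reference."""
    return raw[ch] - raw["A2"]

def laplacian_oz(raw):
    """Laplacian Oz: monopolar Oz minus the mean of neighbours O1 and O2."""
    return monopolar(raw, "Oz") - (monopolar(raw, "O1") + monopolar(raw, "O2")) / 2

n_samples = 1024
raw = {ch: np.random.randn(n_samples) for ch in ("Oz", "O1", "O2", "A2")}
y_laplacian = laplacian_oz(raw)
```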

1.4.4. Continuous wavelet transform (CWT)

The wavelet transform (WT) is well suited to extracting information from nonstationary signals, as it provides a versatile time-frequency representation of a signal [24]. The CWT is basically the convolution of the signal with a wavelet function [25]:

$w(s, \tau) = \int x(t)\, \psi_{s,\tau}^{*}(t)\, dt$   (E2)

where $\psi_{s,\tau}^{*}(t)$ is the complex conjugate of the wavelet function, $x(t)$ is the signal under analysis and $w(s, \tau)$ is the wavelet coefficient corresponding to the frequency associated with scale $s$ and time shift $\tau$ of the wavelet function. The CWT works like template matching, much as a matched filter computes the cross-covariance between the signal and a predefined waveform [26].
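As an illustration, the sketch below computes CWT coefficients with PyWavelets at scales matched to a set of candidate stimulation frequencies and uses the mean absolute coefficient per scale as a feature; the Morlet wavelet, frequencies and sampling rate are assumptions.

```python
# Sketch: CWT features with PyWavelets at scales matched to candidate
# stimulation frequencies. Wavelet choice, frequencies and sampling
# rate are illustrative assumptions.
import numpy as np
import pywt

fs = 256
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)

targets = np.array([7.0, 8.0, 9.0, 10.0])
# Convert each target frequency to its matching Morlet scale:
# frequency = central_frequency * fs / scale
scales = pywt.central_frequency("morl") * fs / targets

coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
features = np.mean(np.abs(coeffs), axis=1)   # one feature per target frequency
print(targets[int(np.argmax(features))])     # expected: 10.0
```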

Zhang et al. established the use of the CWT for extracting and classifying features in an SSVEP-based BCI [16]. Kumari et al. transformed the CWT coefficients into feature vectors for tracing the location of high-frequency SSVEP components [17].

1.4.5. Hilbert-Huang transform (HHT)

The HHT is a self-adaptive data analysis technique comprising empirical mode decomposition (EMD) and Hilbert spectral analysis (HSA) [27]. It is applicable to both stationary and non-stationary signal analysis. An intrinsic mode function (IMF) is an oscillatory function with time-varying frequency that is capable of depicting the local properties of non-stationary signals [28].

Huang et al. identified high-frequency SSVEP signals using the HHT in an SSVEP-based BCI [18]. The HHT decomposed the original signals into 11th-order IMFs with the help of EMD.

1.5. Feature classification

Classification algorithms designate boundaries between the targets in the feature space, treating the feature vectors as independent variables.

The classification methods used for SSVEP signals are heterogeneous in nature and include the Bayesian classifier [14], the linear discriminant classifier (LDA) [29, 30, 31], the support vector machine (SVM) [32, 33, 34, 35] and the k-nearest neighbor classifier (k-NNC) [31, 36].

1.5.1. Bayesian classifier

A Bayesian statistical classifier obtains the posterior probability P(y|x) from the prior probability that a feature vector belongs to a particular class. The class with the maximum posterior probability is the one to which the feature vector is assigned:

$P(y \mid x) = \dfrac{P(y)\, P(x \mid y)}{P(x)}$   (E3)
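A worked instance of Eq. (E3) for a two-class decision, with made-up priors and likelihoods, looks as follows:

```python
# Worked instance of Eq. (E3) for a two-class decision; the priors P(y)
# and likelihoods P(x|y) are made-up numbers for illustration.
priors      = {"target": 0.5, "non-target": 0.5}
likelihoods = {"target": 0.08, "non-target": 0.02}   # P(x|y) for observed x

evidence = sum(priors[y] * likelihoods[y] for y in priors)            # P(x)
posterior = {y: priors[y] * likelihoods[y] / evidence for y in priors}
print(max(posterior, key=posterior.get), posterior)
# -> 'target', with P(target|x) = 0.8
```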

1.5.2. Linear discriminant classifier (LDA)

LDA, or Fisher's LDA (FLDA), classifies the data into classes using hyperplanes [36]. The classifier has been applied successfully in the BCI community, helped by its low computational cost. It finds an optimal projection that maximizes the distance between the classes; the decision hyperplane that divides the feature space into classes is perpendicular to this projection direction [37]. The hyperplane is expressed as:

$m(x) = w^{T} x + w_{0}$   (E4)

where $w$, $x$ and $w_{0}$ denote the weight vector, the input feature vector and the threshold, respectively.

1.5.3. Support vector machine (SVM)

The SVM classifies feature vectors into classes by constructing one or more hyperplanes. It differs from LDA in that the decision boundary, or hyperplane, maximizes the margin, that is, the distance between the decision boundary and the training sample nearest to it [38]. While the hyperplane separates the training data set with maximal margin, the data may also be mapped to a higher-dimensional space [39]. The decision boundary of an SVM may be linear or non-linear, depending upon the choice of kernel function (linear, cubic, polynomial, Gaussian or radial basis function (RBF)) [40].

1.5.4. k-Nearest neighbor (k-NNC)

The classification principle of the k-NNC is that features belonging to different classes flock into different clusters, with adjacent neighbors falling in the same cluster. To classify a test feature vector, it considers the metric distances to the k nearest training samples. Although classification with the k-NNC reduces the error probability of the decision, it is not commonly used in the BCI community [41].
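To make the comparison among these classifiers concrete, the hedged sketch below cross-validates LDA, an RBF-kernel SVM and a k-NNC on synthetic SSVEP-style feature vectors; the data, dimensions and hyperparameters are illustrative assumptions.

```python
# Hedged sketch: 10-fold cross-validation of LDA, an RBF-kernel SVM and
# k-NN on synthetic SSVEP-style feature vectors. Data, dimensions and
# hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class, n_features, n_classes = 50, 8, 4
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(n_classes)])           # class-dependent means
y = np.repeat(np.arange(n_classes), n_per_class)

for name, clf in [("LDA",  LinearDiscriminantAnalysis()),
                  ("SVM",  SVC(kernel="rbf", gamma="scale")),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```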


2. SSVEP in BCIs

A BCI is an artificial intelligence system that can identify a particular set of patterns in brain signals to provide an additional output channel for the control of artificial devices, such as systems for restoring motor function, a robot arm or a communication program [43, 44]. VEP-based BCIs are classified into the following categories:

  1. Time modulated VEP (t-VEP) BCI: the flash sequences of the various stimuli are orthogonal in time, that is, they are strictly non-overlapping or stochastic in nature.

  2. Frequency modulated VEP (f-VEP) BCI: each stimulus flashes at a unique frequency, and the evoked potential has the same fundamental frequency as the stimulus, together with its harmonics.

  3. Pseudorandom code modulated VEP (c-VEP) BCI: a pseudorandom sequence defines the duration of the ON and OFF states of each stimulus. This mode yields the highest communication speed.

2.1. Stimulus types

In SSVEP-based experiments, the user is asked to select a target with eye gaze. The user's attention is fixed visually on the target, and the target is identified through feature extraction and analysis [42]. In the case of single graphic stimuli, a stimulus appears and disappears at a particular rate, as shown in Figure 1. In the case of pattern reversal stimuli, at least two graphical patterns alternate, as shown in Figure 2. Such stimuli may be of the checkerboard or grating type.

Figure 1.

Single graphic stimuli.

Figure 2.

Pattern reversal stimuli.

With a flashing stimulus, the SSVEP appears as a sinusoid-like waveform whose fundamental frequency equals the blinking frequency of the stimulus. With a graphic pattern stimulus, SSVEPs appear at the reversal rate and its harmonics [8]. The discrete frequency components of the SSVEP remain closely constant in amplitude and phase over long periods of time [9].

2.2. Applications of SSVEP in BCI’s

2.2.1. SSVEP for BCI based wheelchair

Singla in 2014 spearheaded research on how the color of flickering target stimuli affects the accuracy of decision making when driving a wheelchair. In that study, SSVEPs were preferred over transient VEPs because they are less vulnerable to artifacts produced by eye blinks and eye movements, as well as to EMG noise [44].

SSVEP data elicited by four different flickering target frequencies were acquired from the occipital region of the brain. The frequency features of the data were extracted using the fast Fourier transform (FFT) and the wavelet transform (WT). Three different classification methods were tried: two based on ANNs with the back-propagation algorithm and one based on an SVM with the one-against-all (OAA) strategy. A control signal was assigned to each of the five detected classes (the 7, 9, 11 and 13 Hz stimuli and the rest state), corresponding to five movement commands: forward (F), backward (B), left (L), right (R) and stop (S).

The SSVEP stimulus produces a response in the EEG signal characterized by oscillations at the stimulation frequency and sometimes at its harmonics or subharmonics. The visual system can be divided into three subsystems [45].

(i) Low frequency subsystem (near 10 Hz). It gives the greatest SSVEP amplitudes.

(ii) Medium frequency subsystem (16–18 Hz). It gives medium amplitude.

(iii) High frequency subsystem (40–50 Hz). It shows the smallest response.

The ability of the human eye to distinguish colors is based upon the varying sensitivity of cone cells to light of different wavelengths [46]. There are three kinds of cone cells, conventionally labeled short (S), medium (M) and long (L) according to the wavelengths of the peaks of their spectral sensitivities. S, M and L cones are therefore sensitive to blue (short-wavelength), green (medium-wavelength) and red (long-wavelength) light, respectively. The brain combines the information from each cone type to give different perceptions for different colors, and as a result the SSVEP strength elicited by stimuli of different colors will differ [46]. In this work, blue, green, red and violet were selected as stimulus colors to explore how different colors influence the elicited SSVEPs and the performance of the SSVEP-based system.

In this research, repetitive visual stimuli (RVS) with four different flickering frequencies were designed using LabVIEW software (National Instruments Inc., USA). The front panel of the RVS is shown in Figure 3. RVS with violet, red, green and blue flickering bars were designed as four different sets, with the background color of the RVS set to black. The visual stimuli were square (4 × 4 cm) in shape and were placed at the four corners of the LCD screen. Four frequencies in the low-frequency range, 7, 9, 11 and 13 Hz, were selected considering the 60 Hz refresh rate of the LCD monitor [45]. To select any particular stimulus, the four visual stimuli were separated into two pairs: (7, 11) and (9, 13). Within an interval of 2 s, a single eye blink selects the first pair (7, 11) and a double blink selects the second pair (9, 13). Once a pair is selected, in the next 2 s interval a single blink selects the upper stimulus of that pair and a double blink selects the lower one.

Figure 3.

Visual stimuli with four different flickering frequencies.


The EEG signals recorded from each channel were digitized and segmented into 1-s time windows every 0.25 s. The coefficients at the first (fundamental) and second harmonics of all four target frequencies were taken as the feature vector for classification. It can be seen from Table 1 that for an SSVEP input of 7 Hz, the maximum amplitude occurs at 7 Hz, followed by 14 Hz.

7 Hz | 14 Hz | 9 Hz | 18 Hz | 11 Hz | 22 Hz | 13 Hz | 26 Hz | Stimulus frequency (Hz)
27.62 | 8.01 | 9.2 | 5.25 | 5.50 | 1.81 | 4.61 | 1.40 | 7
18.21 | 7.40 | 6.97 | 1.42 | 7.92 | 2.34 | 0.99 | 2.38 | 7
2.65 | 4.17 | 23.02 | 9.91 | 9.2 | 1.15 | 1.00 | 2.22 | 9
3.57 | 6.02 | 20.4 | 7.83 | 4.04 | 2.52 | 0.70 | 1.13 | 9
11.72 | 3.62 | 2.25 | 2.92 | 19.91 | 5.20 | 3.91 | 2.24 | 11
6.83 | 4.60 | 4.7 | 2.40 | 14.22 | 3.40 | 1.42 | 1.40 | 11
3.27 | 6.82 | 11.83 | 4.85 | 9.19 | 2.02 | 16.63 | 4.83 | 13
8.81 | 3.82 | 12.7 | 5.25 | 3.62 | 0.91 | 14.22 | 5.66 | 13
4.75 | 6.60 | 5.00 | 1.09 | 2.55 | 1.42 | 6.48 | 1.53 | Relax
2.44 | 3.14 | 5.06 | 2.34 | 3.62 | 1.65 | 6.36 | 3.11 | Relax

Table 1.

Samples of extracted feature components of different frequencies and relax state for two subjects.
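A minimal sketch of the windowing and harmonic-feature step described before Table 1 follows: 1-s windows every 0.25 s, with FFT amplitudes at the fundamental and second harmonic of each target frequency forming an eight-element feature vector. The sampling rate and synthetic data are assumptions.

```python
# Sketch of the windowing and harmonic-feature step: 1-s windows every
# 0.25 s; FFT amplitudes at the fundamental and second harmonic of each
# target frequency form an 8-element feature vector (cf. Table 1).
# Sampling rate and the synthetic 7 Hz "SSVEP" are assumptions.
import numpy as np

fs = 256
targets = [7, 9, 11, 13]                        # stimulus frequencies (Hz)

def harmonic_features(window, fs, targets):
    spectrum = np.abs(np.fft.rfft(window))
    bins = np.fft.rfftfreq(window.size, d=1 / fs)
    return np.array([spectrum[np.argmin(np.abs(bins - h))]
                     for f in targets for h in (f, 2 * f)])   # shape (8,)

eeg = np.sin(2 * np.pi * 7 * np.arange(0, 10, 1 / fs))        # fake 7 Hz SSVEP
win, step = fs, fs // 4                         # 1 s window, 0.25 s step
features = np.array([harmonic_features(eeg[i:i + win], fs, targets)
                     for i in range(0, eeg.size - win + 1, step)])
print(features.shape)                           # (n_windows, 8)
```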

For the ANN, there were eight input features in total (the first and second harmonics of all four frequencies), so each input vector contained eight rows. A set of Q target vectors (the correct four-digit output vector for each input vector) formed a second matrix. A wheelchair prototype was developed and controlled in the forward, backward, left, right and stop positions. The schematic representation of the BCI wheelchair control is shown in Figure 4, and the wheelchair prototype in Figure 5. A motor driver IC, the L293D, was used; by changing the polarity of the signal applied to the motors, it drives each motor in both the forward and backward directions [32].
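The following hedged sketch stands in for the ANN stage: a small multilayer perceptron takes the eight harmonic features and predicts one of the five classes, which is then mapped to a wheelchair command. The scikit-learn model, architecture and placeholder data are assumptions, not the study's actual back-propagation network.

```python
# Hedged stand-in for the ANN stage: an MLP maps the eight harmonic
# features to one of five classes, then to a wheelchair command. Model,
# architecture and placeholder data are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
commands = ["forward", "backward", "left", "right", "stop"]
X = rng.random((250, 8))                        # 8 features per epoch
y = rng.integers(0, len(commands), size=250)    # placeholder labels

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)
print(commands[int(net.predict(X[:1])[0])])     # predicted wheelchair command
```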

Figure 4.

Schematic representation of BCI-based wheelchair control.

Figure 5.

Wheelchair prototype for SSVEP-based BCI control.

2.2.2. SSVEP based BCI as independent application for locked-in syndrome

Lesenfants et al. [47] conducted studies with the basic aim of developing independent SSVEP-based BCI applications for locked-in patients. They exploited the covert attention of healthy subjects as well as locked-in patients by developing an independent, covert two-class paradigm of flashing targets. The study was divided over two groups of subjects: Group A consisted of 12 healthy subjects, and Group B consisted of 12 healthy subjects and 6 locked-in syndrome (LIS) patients. For both groups, 12 channels of EEG were recorded (P3, P1, P2, P4, PO7, PO3, POz, PO4, PO8, O1, Oz and O2).

The visual stimulation was delivered via a custom-made stimulus device with two subsystems, a control unit and a stimulation panel, based on the paradigm introduced in [48]. The panel, placed 30 cm from the subject's head, was a 7 × 7 cm2 "interlaced square" made of red and yellow 1 × 1 cm2 light-emitting diode (LED) squares with a white fixation cross in the middle (Figure 6).

Figure 6.

Electronic visual stimulation unit.

The yellow squares (represented by white squares here) flicker at the frequency of 10 Hz. The red squares (represented by grey squares here) flash at 14 Hz.

The interlaced square pattern showed a 10% improvement in accuracy compared with a "line" pattern [49]. The control unit was designed to control the red and yellow flickering frequencies precisely and independently between 1 and 99 Hz via a microcontroller-based circuit. The yellow and red squares were programmed to flicker at 10 and 14 Hz, respectively. The overt block pattern was composed of two 2 × 2 cm2 blocks made of 1 × 1 cm2 LED squares separated by 12 cm, with a white fixation cross in between (Figure 7).

Figure 7.

Overt block pattern.

The subjects were asked 33 yes/no questions (e.g., "Is your name Paul?"). To answer "yes," the subjects had to focus their attention on the yellow flashes for 7 s, or on the red flashes for "no." Epochs of 7 s were used as a single window, to which four different feature extraction algorithms were applied: the DFT, multitaper spectral analysis (PMTM) [53, 54], canonical correlation analysis (CCA) and the lock-in analyzer system (LAS) [49, 50, 51]. An automatic channel selection algorithm (ACSA) based on distinction sensitive learning vector quantization (DSLVQ) [52] selected an optimal channel set, specific to each subject, out of the 12 available channels. Classification was performed using LDA or an SVM (linear kernel) and assessed with a 10 × 10 fold cross-validation.

PMTM obtained the maximum accuracy of 77.0 ± 3.4% averaged over the subject population, while LAS produced a similar mean accuracy of 74.4 ± 3.2% (Tables 2 and 3). DFT and CCA gave worse results than PMTM and LAS (69.4 ± 3.4% and 58.4 ± 3.9%, respectively).

Subject (Group A) Nharm = 1 Nharm = 2 Nharm = 3
AC ACSA AC ACSA AC ACSA
Subject 1 78.4 ± 3.4 85.7 ± 1.8 73.9 ± 4.7 94.3 ± 1.8 94.4 ± 2.2
Subject 2 92.3 ± 2.4 94.8 ± 1.0 76.5 ± 4.9 91.3 ± 1.6 92.6 ± 1.1
Subject 3 78.4 ± 3.4 84.4 ± 2.1 73.9 ± 4.7 84.9 ± 2.6 80.6 ± 3.0
Subject 4 67.8 ± 3.5 76.0 ± 2.3 51.7 ± 4.7 76.7 ± 3.7 78.6 ± 3.2
Subject 5 62.4 ± 3.9 78.9 ± 2.3 51.6 ± 4.8 77.4 ± 2.4 70.5 ± 3.2
Subject 6 71.6 ± 3.8 85.8 ± 2.6 62.6 ± 5.0 81.9 ± 3.3 85.6 ± 2.9
Subject 7 89.5 ± 2.4 94.5 ± 1.4 76.0 ± 3.9 92.8 ± 2.4 92.6 ± 1.9
Subject 8 82.7 ± 3.3 89.2 ± 2.3 77.3 ± 4.3 94.4 ± 1.9 87.2 ± 1.7
Subject 9 91.1 ± 2.3 94.6 ± 1.6 84.0 ± 4.3 91.7 ± 1.1 91.8 ± 1.3
Subject 10 56.6 ± 4.4 63.5 ± 1.8 58.3 ± 4.7 67.5 ± 3.0 68.4 ± 2.4
Subject 11 57.0 ± 3.8 43.1 ± 5.0
Subject 12 44.5 ± 3.8 43.2 ± 5.8
Total 77.0 ± 3.4 84.7 ± 2.0 68.6 ± 4.6 85.3 ± 2.5 84.2 ± 2.4

Table 2.

Mean and standard deviation of classification accuracy (in percent) obtained with the Thomson multitaper method (PMTM) for different numbers of harmonics with (ACSA) and without (AC) the use of automatic channel selection algorithm.

Subject (Group A) Nharm = 1 Nharm = 2 Nharm = 3
AC ACSA AC ACSA AC ACSA
Subject 1 76.7 ± 3.1 85.0 ± 1.9 73.2 ± 4.9 93.9 ± 1.8 94.7 ± 1.2
Subject 2 86.5 ± 2.1 93.5 ± 0.8 74.8 ± 5.2 95.4 ± 0.9 95.0 ± 1.7
Subject 3 76.7 ± 3.1 82.5 ± 2.7 73.2 ± 4.9 79.4 ± 2.7 74.6 ± 2.6
Subject 4 61.0 ± 4.2 73.6 ± 3.5 58.3 ± 4.5 73.6 ± 2.2 77.6 ± 1.9
Subject 5 69.7 ± 2.9 80.2 ± 2.4 54.2 ± 4.6 76.1 ± 2.8 72.8 ± 1.9
Subject 6 61.4 ± 3.6 76.2 ± 2.0 60.8 ± 4.7 79.0 ± 2.2 77.7 ± 3.3
Subject 7 85.2 ± 2.8 90.0 ± 2.2 76.5 ± 4.7 91.2 ± 2.4 94.0 ± 2.4
Subject 8 83.0 ± 2.9 87.4 ± 2.2 75.9 ± 4.4 94.3 ± 2.2 91.5 ± 2.6
Subject 9 90.5 ± 2.4 92.9 ± 1.7 75.9 ± 4.2 90.9 ± 2.1 91.5 ± 1.8
Subject 10 53.7 ± 3.8 70.1 ± 2.4 61.7 ± 4.7 70.9 ± 3.0 71.4 ± 2.9
Subject 11 57.8 ± 4.3 48.5 ± 5.1
Subject 12 49.7 ± 4.3 54.0 ± 5.0
Total 74.4 ± 3.2 83.1 ± 2.8 68.4 ± 4.7 84.5 ± 2.3 84.1 ± 2.3

Table 3.

Mean and standard deviation of classification accuracy (in per cent) obtained with the lock-in analyzer system (LAS) for different numbers of harmonics with (ACSA) and without (AC) the use of automatic channel selection algorithm.

Another comparison was made between the feature extraction methods using the ACSA as well as a single harmonic. PMTM and LAS produced significantly greater accuracy than DFT and CCA, at 84.7 ± 2.0% and 83.1 ± 2.3%, respectively. DFT obtained 79.3 ± 2.7% accuracy, and CCA attained 72.4 ± 1.6%, but in only five out of the 10 subjects; the performance with and without the ACSA could therefore not be compared for CCA. With a single harmonic, significant mean accuracy increases of 7.8% for PMTM, 7.9% for LAS and 7.6% for DFT were obtained.

2.2.3. SSVEP based virtual gaming application

Martišius and Damaševičius in 2016 [55] proposed an SSVEP-based BCI gaming system. They developed a 3-class BCI based on SSVEP and the Emotiv EPOC headset. The game involved target shooting and was developed in the OpenViBE environment, which provided the user with feedback. The Emotiv EPOC, a 16-electrode gaming headset, was used in combination with the SSVEP paradigm. Raw EEG data from the headset were acquired at an internal sampling rate of 2048 Hz. Signals from O1, O2, P7 and P8 were used.

First, the data were split into three groups according to their class labels: LEFT, RIGHT and CENTER. Each group of signals was subjected to a band-pass filter centered on the target frequency of interest: 29.5–30.5 Hz for the LEFT class, 19.5–20.5 Hz for CENTER and 11.5–12.5 Hz for RIGHT.
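A sketch of this per-class band-pass step is shown below: a 1 Hz-wide Butterworth filter centered on each target band, applied with zero-phase filtering, with the mean squared output usable as a band power (BP) feature. The filter order and sampling rate are assumptions.

```python
# Sketch of the per-class band-pass step: a narrow Butterworth filter
# per target band, applied with zero-phase filtering; mean squared
# output serves as a band power (BP) feature. Filter order and sampling
# rate are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 128                                        # assumed sampling rate (Hz)
bands = {"LEFT": (29.5, 30.5), "CENTER": (19.5, 20.5), "RIGHT": (11.5, 12.5)}

def bandpass(x, low, high, fs, order=4):
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)                    # zero-phase filtering

eeg = np.random.randn(10 * fs)                  # placeholder 10 s epoch
band_power = {label: float(np.mean(bandpass(eeg, lo, hi, fs) ** 2))
              for label, (lo, hi) in bands.items()}
print(max(band_power, key=band_power.get))      # most active class
```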

Studies such as [46] have analyzed how different target colors influence classification quality. The user was presented with an LCD display containing three blinking targets on a black background and a yellow arrow. On cue, the targets started blinking at different frequencies, as shown in Figure 8(a).

Figure 8.

SSVEP BCI game interface: (a) training and (b) playing.

After classifier training, subjects were invited to participate in the game experiment. During the game, the subjects were presented with the interface in Figure 8(b): a "spaceship" with two "engines," represented by two rectangles, and a "cannon," represented by the triangle. The subject could rotate the spaceship by focusing his/her attention on one of the rectangular targets.

By focusing attention on the middle triangle, the user was able to fire the spaceship's cannon. The aim of the game is to rotate the spaceship and fire its cannon to hit the red target. Once the target was hit, it disappeared and reappeared in another position.

The system was evaluated using two subjects, S1 and S2, both unfamiliar with BCI technology. The first feature extraction algorithm used wave atom transform (WAT) coefficients, and the second used the band power (BP) in the stimulation frequency bands.

Accuracy was measured for each subject with four different classifiers: LDA, sparse LDA (sLDA), an SVM with a linear kernel, and an SVM with an RBF kernel (gamma = 10). The results are shown in Table 4.

Classifier | Features | Accuracy S1 (%) | Accuracy S2 (%) | F1 score S1 | F1 score S2
LDA | WAT | 71.5 | 78.2 | 0.64 | 0.67
LDA | BP | 66.2 | 73.2 | 0.56 | 0.62
sLDA | WAT | 70.6 | 77.4 | 0.64 | 0.68
sLDA | BP | 68.4 | 73.5 | 0.59 | 0.61
SVM, linear kernel | WAT | 75.5 | 79.3 | 0.64 | 0.68
SVM, linear kernel | BP | 74.3 | 75.1 | 0.64 | 0.71
SVM, RBF kernel | WAT | 78.7 | 82.2 | 0.68 | 0.71
SVM, RBF kernel | BP | 74.0 | 77.4 | 0.63 | 0.67

Table 4.

Comparison of classification accuracy.

S1: subject number 1, S2: subject number 2, LDA: linear discriminant analysis, sLDA: sparse LDA, SVM: support vector machine, RBF: radial basis function, WAT: wave atom transform, and BP: band power.

2.2.4. SSVEP based communicator/speller enhancement

Nakanishi et al. [56] designed a high-speed speller based on SSVEP stimuli. The study explored the feasibility of mixed frequency and phase coding to build a high-speed speller on a TFT monitor. A frequency and phase approximation approach was deployed to remove the limit on the number of targets imposed by the monitor refresh rate, resulting in a speller comprising 32 flickers specified by eight frequencies (8–15 Hz at an interval of 1 Hz) and four phases (0, 90, 180 and 270°).

Wang et al. [57] proposed an approach that generates visual flickers at a flexible frequency by approximating the frequency with a variable number of frames in a stimulation cycle. For instance, a flicker at 11 Hz under a 60 Hz refresh rate can be realized by alternating five- and six-frame cycles, as in "1110001110011100011100111…". Based on this technique, flicker frequencies up to 50% of the screen refresh rate can be generated, increasing the number of stimuli that can be presented. In general, to render a visual flicker at frequency f with an initial phase φ, a stimulus sequence s(f, φ, i) can be generated by:

$s(f, \varphi, i) = \mathrm{square}\!\left[2 \pi f \dfrac{i}{\text{RefreshRate}} + \varphi\right]$   (E5)

where the function square[·] generates a square wave with 50% duty cycle and levels 0 and 1, and i indicates the frame index. Nakanishi et al. used quad-phase coded flickering signals at phases of 0, 90, 180 and 270° for frequencies of 8–15 Hz at an interval of 1 Hz, providing 32 unique targets instead of just 8, as indicated in Figure 9.
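A small sketch of Eq. (E5) follows, generating the frame-wise on/off sequence for a flicker at frequency f and phase φ under a given refresh rate; the square wave is realized here via the sign of a sine, an implementation choice rather than the authors' code.

```python
# Sketch of Eq. (E5): frame-wise 0/1 sequence for a flicker at frequency
# f and phase phi under a given refresh rate. The 50% duty-cycle square
# wave is realized via the sign of a sine (an implementation choice).
import numpy as np

def stimulus_sequence(f, phi=0.0, n_frames=25, refresh_rate=60):
    i = np.arange(n_frames)                          # frame index
    phase = 2 * np.pi * f * i / refresh_rate + phi
    return (np.sin(phase) >= 0).astype(int)          # levels 0 and 1

# 11 Hz at 60 Hz: alternating groups of five and six frames, matching
# the "1110001110011100011100111..." example above
print("".join(map(str, stimulus_sequence(11))))
```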

The subjects were instructed to gaze at one of the 32 visual stimuli (the target stimulus) for 4 s, and the other 31 targets were indicated in random order within a run. At the beginning of each trial, a red rectangular marker (Figure 9) appeared for half a second, highlighting the target stimulus, and subjects were asked to shift their gaze to the target within the same duration. After that, all the stimuli started to flicker simultaneously for 4 s. Seven runs were carried out for each subject. EEG data were recorded from 16 electrodes, mainly over the parietal and occipital areas (FPz, F3, F4, Fz, Cz, P1, P2, Pz, PO3, PO4, PO7, PO8, POz, O1, O2 and Oz).

Figure 9.

Presentation of the 32-target visual stimuli using mixed frequency and phase coding.

All data epochs were re-referenced using a common average reference (CAR) and then band-pass filtered between 7 and 50 Hz with an infinite impulse response (IIR) filter; zero-phase forward and reverse IIR filtering was implemented.

Canonical correlation analysis (CCA) was used for target identification, with references derived from the SSVEP training data ($\hat{X}_k$) used to identify the user's intention. The study developed an ensemble classifier that correlates the test set ($X$) and training set ($\hat{X}_k$) signals with the sine-cosine reference signals $Y$. A correlation vector $\rho$ is defined as follows:

$\rho = \begin{bmatrix} \rho_1 \\ \rho_2 \\ \rho_3 \\ \rho_4 \end{bmatrix} = \begin{bmatrix} \rho\left(X^{T} W_{XY},\; Y^{T} W_{XY}\right) \\ \rho\left(X^{T} W_{X\hat{X}},\; \hat{X}^{T} W_{X\hat{X}}\right) \\ \rho\left(X^{T} W_{XY},\; \hat{X}^{T} W_{XY}\right) \\ \rho\left(X^{T} W_{\hat{X}Y},\; \hat{X}^{T} W_{\hat{X}Y}\right) \end{bmatrix}$   (E6)

where $W_{AB}$ denotes the CCA spatial filter obtained from the data sets $A$ and $B$.

To validate the efficiency of the combined method, the study compared the classification performance of the following five methods: (M1) the standard CCA-based method; (M2) correlation analysis using a spatial filter derived from the test set and the training reference signals; (M3) correlation analysis using a spatial filter derived from the test set and the sine-cosine reference signals; (M4) correlation analysis using a spatial filter derived from the training reference signals and the sine-cosine reference signals; and (M5) the combined method using the ensemble classifier described in Eq. (E6).
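As a hedged illustration of the baseline method (M1), the sketch below correlates a multichannel test epoch with sine-cosine references for each candidate frequency using scikit-learn's CCA and picks the best-matching target; the channel count, harmonics and synthetic data are assumptions.

```python
# Hedged sketch of the standard CCA method (M1): correlate a test epoch
# X with sine-cosine references for each candidate frequency and pick
# the best match. Channel count, harmonics and data are assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA

fs, n_ch = 256, 8
t = np.arange(0, 2, 1 / fs)

def references(f, t, n_harmonics=2):
    """Sine-cosine reference matrix Y for frequency f and its harmonics."""
    comps = []
    for h in range(1, n_harmonics + 1):
        comps += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
    return np.column_stack(comps)

def cca_corr(X, Y):
    """First canonical correlation between X and Y."""
    u, v = CCA(n_components=1).fit_transform(X, Y)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

X = np.random.randn(t.size, n_ch)                 # noise epoch (samples x channels)
X[:, 0] += 2 * np.sin(2 * np.pi * 10 * t)         # embed a 10 Hz response
scores = {f: cca_corr(X, references(f, t)) for f in (8, 9, 10, 11)}
print(max(scores, key=scores.get))                # -> 10
```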

Figure 10 shows the accuracy (Figure 10(a)) and ITR (Figure 10(b)) averaged across all subjects for the offline experiments. Results for the different CCA-based methods were calculated with data lengths from 1 to 4 s. The four methods M2, M3, M4 and M5 outperformed M1 under all data-length conditions.

Figure 10.

(a) Target identification accuracy, (b) ITRs as functions of data length (from 1 to 4 s) calculated by different methods.

2.3. Information transfer rate (ITR)

The dynamic performance of any BCI can be evaluated by the information transfer rate (ITR), as introduced in [58]. This parameter depends on three factors of the BCI:

  • Speed

  • Accuracy

  • Number of unique commands

ITR (B) is defined as

$B = V \left[ \log_2 N + P \log_2 P + (1 - P) \log_2\!\left(\dfrac{1 - P}{N - 1}\right) \right]$   (E7)

where V = application speed in trials per second,

P = classifier accuracy, i.e., how accurately the user's intentions are translated into commands, and

N = number of mental tasks (targets) used in the BCI application under consideration.
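For example, a 4-class BCI running at 90% accuracy with one selection every 2 s (V = 0.5 trials/s) yields roughly 0.69 bits/s, or about 41 bits/min; the helper below makes Eq. (E7) concrete (the numbers are illustrative):

```python
# Eq. (E7) as a helper; illustrative numbers: N = 4 targets, P = 0.90
# accuracy, V = 0.5 trials/s (one selection every 2 s).
from math import log2

def itr_bits_per_second(V, P, N):
    if P >= 1.0:                         # perfect accuracy: avoid log2(0)
        return V * log2(N)
    return V * (log2(N) + P * log2(P) + (1 - P) * log2((1 - P) / (N - 1)))

print(round(60 * itr_bits_per_second(0.5, 0.90, 4), 1))   # ~41.2 bits/min
```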


3. Conclusion

SSVEP has proved to be among the most widely used paradigms for BCIs, serving various applications for healthy as well as locked-in users, because SSVEP-based BCIs require minimal user training. The subject does not have to regulate his/her own brain activity to provide the controlling input for the task at hand; the subjects merely focus their attention on the flickering targets, and a trained classifier converts the response into a machine command.

The accuracy of SSVEP-based BCIs is fairly high for most subjects with substantial visual capabilities. However, some subjects are unable to produce a significant change in the EEG in response to the SSVEP stimuli, a condition termed BCI illiteracy [59]. This phenomenon causes the BCI to fail for such subjects, as the task is not performed owing to minimal EEG activity. To counter this problem, the approach of hybrid brain-computer interfacing (hBCI) was proposed [60, 61]. An hBCI combines a standard BCI paradigm (SSVEP, P300, slow cortical potentials (SCP) or event-related synchronization/desynchronization (ERS/ERD)) with another BCI signal or some other physiological signal. hBCIs are an emerging area of research in which all possible combinations are being explored to increase system accuracy and to eliminate BCI illiteracy. hBCIs also address the problem of subject fatigue caused by fixing the gaze on flickering targets for long durations; this fatigue is known to reduce BCI accuracy through an increase in the number of false positive (FP) outcomes.

References

  1. 1. Kolodziej M, Majkowski A, Rak R. A new method of EEG classification for BCI with feature extraction based on higher order statistics of wavelet components and selection with genetic algorithms. In: Adaptive and Natural Computing Algorithms. Lecture Notes in Computer Science. Warsaw, Poland: Warsaw University of Technology; 2011;6593:280-289
  2. 2. Donchin E, Ritter W, McCallu C. Cognitive Psychophysiology: The Endogenous Components of the ERP. Brain-Event Related Potentials in Man. New York: Academic; 1978
  3. 3. Regan D. Human Brain Electrophysiology: Evoked Potentials and Evoked Magnetic Fields in Science and Medicine. New York, NY, USA: Elsevier; 1989
  4. 4. Yijun W, Ruiping W, Xiaorong G, Bo H, Shangkai G. A practical VEP-based brain-computer interface. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2006;14:234-240
  5. 5. Jinghai Y, Derong J, Jianfeng H. Design and application of brain-computer InterfaceWeb browser based on VEP. In: Proceedings of the International Conference on Future Biomedical Information Engineering (FBIE’09); Sanya, China; 13-14 December, 2009. pp. 77-80
  6. 6. Xiaorong G, Dingfeng X, Ming C, Shangkai G. A BCI-based environment controller for the motion-disabled. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2003;11:137-140
  7. 7. Dan Z, Alexander M, Xiaorong G, Bo H, Andreas KE, Shangkai G. An independent brain-computer interface using covert non-spatial visual selective attention. Journal of Neural Engineering. 2010;7:016010
  8. 8. Regan D. Some characteristics of average steady-state and transient responses evoked by modulated light. Electroencephalography & Clinical Neurophysiology. 1966;20(3):238-248
  9. 9. Zhu D, Bieger J, Garcia Molina G, Aarts RM. A survey of stimulation methods used in SSVEP-based BCIs. Computational Intelligence and Neuroscience. 2010;2010:1-12. DOI: 10.1155/2010/702357
  10. 10. Jasper HH. The ten twenty electrode system of the international federation. Electroencephalography and Clinical Neurophysiology. 1958;10:371-375
  11. 11. Herrmann CS. Human EEG responses to 1-100 Hz flicker: Resonance phenomena in visual cortex and their potential correlation to cognitive phenomena. Experimental Brain Research. 2001;137(3-4):346-353
  12. 12. Mandel C, Lüth L, Laue T, Röfer T, Gräser A, Krieg-Brückner B. Navigating a smart wheelchair with a brain-computer interface interpreting steady-state visual evoked potentials. In: IEEE/RSJ International Conference on Intelligent Robotics Systems (IROS); 2009. pp. 1118-1125
  13. 13. Muller SMT, Bastos TF, Filho MS. Proposal of a SSVEP-BCI to command robotic wheelchair. Journal of Control, Automation and Electrical Systems. 2013;24:97-105
  14. 14. Ng DWK, Soh YW, Goh SY. Development of an autonomous BCI wheelchair. In: IEEE Symposium on Computational Intelligence Brain Computer Interfaces; 2014. pp. 1-4
  15. 15. Wang Y, Zhang Z, Gao X. Lead selection for SSVEP-based brain-computer interface. In: International Conference of the IEEE Engineering in Medicine & Biology Society, IEEE Engineering in Medicine & Biology Society Conference; 2004. pp. 4507-4510
  16. 16. Zhang Z, Li X, Deng Z. A CWT-based SSVEP classification method for brain-computer interface system. In: International Conference on Intelligent Control and Information Processing. IEEE; 2010. pp. 43-48
  17. 17. Kumari M, Somani SB. Enhancing the classification accuracy of SSVEP based BCI using CWT method along with ANN. International Journal of Advanced Research in Engineering & Management. 2015;01:81-89
  18. 18. Huang M, Wu P, Liu Y. Application and contrast in brain-computer interface between Hilbert-Huang transform and wavelet transform. In: international Conference for Young Computer Scientists. IEEE Computer Society; 2008. pp. 1706-1710
  19. 19. Ruan XG, Kuc K, Li M. Feature extraction of SSVEP-based brain-computer interface with ICA and HHT method. In: Intelligent Control and Automation. IEEE; 2014. pp. 2418-2423
  20. 20. Diez PF, Muller SMT, Mut VA, Laciar E, Avila E, Bastos-Filho TF, Sarcinelli-Filho M. Commanding a robotic wheelchair with a high-frequency steady-state visual evoked potential based brain-computer interface. Medical Engineering and Physics. 2013;35(8):1155-1164
  21. 21. Wang H, Lu T, Huang Z. Remote control of an electric car with SSVEP-based BCI. In: IEEE International Conference on Information Theory and Information Security. IEEE; 2010. pp. 837-840
  22. 22. Mouli S, Palaniappan R, Sillitoe IP. Performance analysis of multi-frequency SSVEP-BCI using clear and frosted colour LED stimuli. In: IEEE International Conference on Bioinformatics and Bioengineering; 2013. pp. 1-4
  23. 23. Yamaguchi T, Omori K, Irie J, Inoue K. Feature extraction from EEG signals in SSVEP spelling system. In: SICE Annual Conference; 2010. pp. 58-62
  24. 24. Ghanbari AA, Kousarrizi MRN, Teshnehlab M, Aliyari M. Wavelet and Hilbert transform-based Brain Computer Interface. In: Proceedings of the International Conference on Advances in Computational Tools for Engineering Applications (ACTEA ‘09); Beirut, Lebanon; July 2009. pp. 438-442
  25. 25. Samar VJ, Bopardikar A, Rao R, Swartz K. Wavelet analysis of neuroelectric waveforms: A conceptual tutorial. Brain and Language. 1999;66:7-60
  26. 26. Krusienski DJ, Schalk G, McFarland DJ, Wolpaw JR. A mu-rhythm matched filter for continuous control of a brain-computer interface. IEEE Transactions on Biomedical Engineering. 2007;54:273-280
  27. 27. Huang NE, Long SR, Shen Z. The mechanism for frequency downshift in nonlinear wave evolution. Advances in Applied Mechanics. 1996;32(08):59-117
  28. 28. Lee PL, Chang HC, Hsieh TY. A brain-wave-actuated small robot car using ensemble empirical mode decomposition-based approach. IEEE Transactions on Systems, Man and Cybernetics – Part A: Systems and Humans. 2012;42(5):1053-1064
  29. 29. Chu Y, Zhao X, Han J. SSVEP based brain-computer interface controlled functional electrical stimulation system for upper extremity rehabilitation. In: IEEE International Conference on Robotics and Biometrics. IEEE; 2015. pp. 2244-2249
  30. 30. Bi L, Li Y, Jie K. A new SSVEP brain-computer interface based on a head up display, In: ICME International Conference on Complex Medical Engineering; 2013. pp. 201-204
  31. 31. Oikonomou VP, Liaros G, Georgiadis K. Comparative Evaluation of State-of-the-Art Algorithms for SSVEP-Based BCIs. Beijing, China: International Conference, BI 2017; 2016
  32. 32. Singla R, Haseena BA. BCI based wheelchair control using steady state visual evoked potentials and support vector machines. International Journal of Soft Computing Engineering. 2013;3(3):46-52
  33. 33. Bi L, Fan X, Jie K. Using a head-up display-based steady-state visually evoked potential brain-computer Interface to control a stimulated vehicle. IEEE Transactions on Intelligent Transportation Systems. 2014;15(3):959-966
  34. 34. Sakurada T, Kawase T, Takano K. A BMI-based occupational therapy assist suit: Asynchronous control by SSVEP. Frontiers in Neuroscience. 2013;7:172
  35. 35. Jian HL, Tang KT. Improving classification accuracy of SSVEP based BCI using RBF SVM with signal quality evaluation. In: International Symposium on Intelligent Signal Processing and Communication Systems. IEEE; 2014
  36. 36. Ko LW, Lin SC, Liang WG. Development of SSVEP-based BCI using common frequency pattern to enhance system performance. In: 2014 IEEE Symposium on Computational Intelligence in Brain Computer Interfaces (CIBCI); Orlando, FL, USA: IEEE; 2014, pp. 30-35
  37. 37. Duda RO, Hart PE, Stork DG. Pattern Classification. 2nd ed. New York: Wiley; 2000
  38. 38. Burges CJC. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery. 1998;2:121-167
  39. 39. Xiang L, Dezhong Y, Wu D, Chaoyi L. Combining spatial filters for the classification of single-trial EEG in a finger movement task. IEEE Transactions on Biomedical Engineering. 2007;54:821-831
  40. 40. Schlogl A, Lee F, Bischof H, Pfurtscheller G. Characterization of four-class motor imagery EEG data for the BCI-competition 2005. Journal of Neural Engineering. 2005;2:L14
  41. 41. Borisoff JF, Mason SG, Bashashati A, Birch GE. Brain-computer interface design for asynchronous control applications: Improvements to the LF-ASD asynchronous brain switch. IEEE Transactions on Biomedical Engineering. 2004;51:985-992
  42. 42. Luis-Fernando NA, Jaime GG. Brain computer interfaces: A review. Sensors. 2012;12(2):1211-1279
  43. 43. Kubler A, Muller KR. Towards Brain-Computer Interfacing: An Introduction to Brain-Computer Interfacing. Cambridge, MA, USA: MIT Press; 2007. pp. 1-25. ISBN: 9870-262-04244-4
  44. 44. Nicolas-Alonso LF, Gomez-Gil J. Brain computer interfaces, a review. Sensors. 2012;12(2):1211-1279
  45. 45. Wang YJ, Wang RP, Gao XR, Hong B, Gao SK. A practical VEP-based brain-computer interface. IEEE Transactions on Neural Systems and Rehabiliation Engineering. 2006;14(2):234-240
  46. 46. Cao T, Wan F, Mak PU, Mak P, Vai MI, Hu Y. Flashing color on the performance of SSVEP-based brain-computer interfaces. In: 34th Annual International Conference of the IEEE EMBS; California, USA; 2012. pp. 1819-1822
  47. 47. Lesenfants D, Habbal D, Lugo Z, Lebeau M, Horki P, Amico E, Pokorny C, Gomez F, Soddu A, Muller-Putz G, Laureys S, Noirhomme Q. An independent SSVEP-based brain–computer interface in locked-in syndrome. Journal of Neural Engineering. 2014. DOI: 10.1088/1741-2560/11/3/035002
  48. 48. Lesenfants D, Partoune N, Soddu A, Lehembre R, M¨uller-Putz GR, Laureys S, Noirhomme Q. Design of a covert SSVEP-based BCI. In: Proceedings of 5th International Brain-Computer Interface Conference; Graz, Austria; 2011. pp. 216-219
  49. 49. Zhang D, Maye A, Gao X, Hong B, Engel AK, Gao S. An independent brain-computer interface using covert non-spatial visual selective attention. Journal of Neural Engineering. 2010;7:16010-16021
  50. 50. Muller-Putz GR, Scherer R, Brauneis C, Pfurtscheller G. Steady-state visual evoked potential (SSVEP)-based communication: Impact of harmonic frequency components. Journal of Neural Engineering. 2005;2:123-130
  51. 51. Muller-Putz GR, Eder E, Wriessnegger SC, Pfurtscheller G. Comparison of DFT and lock-in amplifier features and search for optimal electrode positions in SSVEP-based BCI. Journal of Neuroscience Methods. 2008;168:174-181
  52. 52. Pregenzer M, Pfurtscheller G, Flotzinger D. Automated feature selection with a distinction sensitive learning vector quantizer. Neurocomputing. 1996;11:19-29
  53. 53. Thomson DJ. Spectrum estimation and harmonic analysis. Proceedings of the IEEE. 1982;70:1055-1096
  54. 54. Hoogenboom N, Schoffelen JM, Oostenveld R, Parkes LM, Fries P. Localizing human visual gamma-band activity in frequency, time and space. NeuroImage. 2006;29:764-773
  55. 55. Martišius I, Damaševičius R. A prototype SSVEP based real time BCI gaming system. Computational Intelligence and Neuroscience. 2016;2016:3861425
  56. 56. Nakanishi M, Wang Y, Wang YT, Mitsukura Y, Jung TP. A high-speed brain speller using steady-state visual evoked potentials. International Journal of Neural Systems. 2014;24:1450019
  57. 57. Wang Y, Wang YT, Jung TP. Visual stimulus design for high-rate SSVEP BCI. Electronics Letters. 2010;46(15):1057-1058
  58. 58. Wolpaw R, Ramoser H, McFarland DJ, Pfurtscheller G. EEG-based communication: Improved accuracy by response verification. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 1998;6(3):326-333
  59. 59. Fazel-Rezai R, Allison BZ, Guger C, Sellers EW, Kleih SC, Kubler A. P300 brain computer interface: Current challenges and emerging trends. Frontiers in Neuroengineering. 2012;5:1-14
  60. 60. Allison BZ, Brunner C, Kaiser V, Müller-Putz GR, Neuper C, Pfurtscheller G. Toward a hybrid brain–computer interface based on imagined movement and visual attention. Journal of Neural Engineering. 2010;7(2):026007-026016
  61. 61. Allison BZ, Luth T, Valbuena D, Teymourian A, Volosyak I, Graser A. BCI demograph-ICS: How many (and what kinds of) people can use an SSVEP BCI. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2010;18(2):107-116
