EEG Based Brain-Machine Interfacing: Navigation of Mobile Robotic Device



Introduction
During the last decade, the rapid development of sophisticated methods for recording brain signals, the availability of efficient computational resources, and improving knowledge about brain dysfunctions have drawn many researchers' interest to using large-scale neurophysiological recordings for therapeutic and replacement strategies (Mason & Birch, 2003; Millán et al., 2003). Many patients with physiological disorders such as Amyotrophic Lateral Sclerosis (ALS), or injuries such as high-level spinal cord injury, suffer from disruption of the communication path between the brain and the body. People with severe motor disabilities may lose much of their voluntary muscle control. Patients with these problems are forced to accept a reduced quality of life, resulting in dependence on caretakers and escalating social costs (Vaughan et al., 2003). Most existing assistive technology devices are not usable by these patients because the devices depend on motor activity from specific parts of the body. Alternative control paradigms for these individuals are thus desirable (Fatourechi, 2008). The electrophysiological signals generated by the brain can be used to command different devices, provided that the person controlling a device is also able to control the generation of these signals. Studies have shown that, with sufficient training, people can control the generation of certain brain signals (Ohno et al., 2006). Once generated, these signals can be conditioned and processed to perform the specific task for which they were generated. In other words, the interface can be made to adapt to and understand the meaning of these signals and act accordingly. If this type of Brain-Machine Interface (BMI) is successfully implemented, it can be used to develop sophisticated assistive devices (such as a robotic wheelchair) to carry people with motor dysfunction (Ferreira et al., 2008).
Previous work on BMI development shows that signal acquisition and processing are becoming more complicated with the growing availability of sophisticated recording devices (Cheein & Postigo, 2005; Ferreira et al., 2008; Moon et al., 2005; Mourino, 2003; Rani & Sarkar, 2005). To overcome these complexities, a rather simple method is required to couple easily recordable neuronal signals with a robotic device (Mahmud et al., 2010; 2009). This chapter illustrates a simple BMI system that uses EEG signals recorded through conventional EEG acquisition devices to command a robotic device's navigation. Figure 1 shows the schematic diagram of the proposed BMI.

Fig. 1. The schematic diagram of the proposed Brain-Machine Interface system, in which the devices are controlled only with conditioned brain signals.

The BMI model was implemented using a two-layer approach. The first layer (also referred to as the upper layer) dealt with signal acquisition and the generation of control signals from the acquired EEG signals. The second layer (also referred to as the lower layer) contained the commanding and controlling modules of the robotic device, where the calculated binary control signals (BCS) were fed into the robotic device and its navigation was controlled. The two-layered approach separated the signal acquisition and processing phase from the robotic device commanding and controlling phase. This separation allowed a higher level of abstraction in programming sophisticated signal processing and analysis algorithms to calculate the BCS. Also, being separated from the signal processing and analysis layer, the robotic device interface was simplified to responding only to the BCS, thus enhancing the reusability of the method. Figure 2 shows the schematic flowchart outlining the major steps of the BMI method.
The first and second boxes from the top belong to the upper layer, performing signal acquisition and processing, while the third and fourth boxes belong to the lower layer, handling communication with the robotic device for commanding and controlling. Given the two-layered design of the BMI, the model was successfully tested with EEG signals generated by two distinctive actions, and these two approaches are elaborated in this chapter. The two distinctive types of event information extracted from the EEG to generate the BCS were:
• ERD (Event-Related Desynchronization) and ERS (Event-Related Synchronization) of the EEG signal (Pfurtscheller & da Silva, 1999).
• The event-related evoked response (as in the case of saccadic eye movement), used as a second approach (Ohno et al., 2006).
The EEG signals generated by the above-mentioned phenomena were used to calculate the BCS. Once calculated, the BCS was transmitted to another computer to command and control the mobile robot accordingly.
In the upper layer, the EEG signal acquisition was performed using a framework built in Matlab Simulink (http://www.mathworks.com), and the signal processing was performed using Matlab scripting. In the lower layer, the interfacing with the robotic device was done using an open-source program called 'IQR' (Bernardet et al., 2002; Bernardet & Verschure, 2010), capable of mimicking neuronal network behavior based on the BCS. After successful completion of signal acquisition, processing, and interfacing, the mobile robot was able to navigate through a predefined path, showing the bright possibility of this method in designing assistive devices for the disabled.

Fig. 2. Schematic flowchart of the BMI method. The curly braces on the right side categorize the steps based on the tools used in implementing those steps for the interfacing system; the curly braces on the left side show the two layers and their components.

The Electroencephalogram (EEG)
Electroencephalogram (EEG) signals are widely used in the medical field for diagnosing diseases and studying brain function. They are generated by the firing of neuronal populations in the brain, which propagates through the cortex. These propagated signals are recorded along the scalp using standard Ag-AgCl electrodes. A mapping of these electrode positions according to the international 10-20 system is shown in figure 3. In general, EEG refers to the recording of the brain's spontaneous electrical activity over a short period of time, usually 20-40 minutes. Evoked potentials (EPs) and event-related potentials (ERPs) are also recorded through the EEG technique. An EP involves averaging EEG activity time-locked to the presentation of a stimulus of some sort (visual, somatosensory, or auditory), whereas ERPs are averaged EEG responses time-locked to more complex processing of stimuli and are used in cognitive science, cognitive psychology, and psychophysiological research. Information related to a particular part of the brain can be obtained from signals recorded by electrodes placed over that part. For example, the visual cortex is located at the caudal portion of the brain (occipital lobe); thus, signals recorded by electrodes placed over the occipital lobe provide information about the activity of the occipital lobe caused by visual stimuli. The spontaneous activity of the brain produces signals with amplitudes usually under 100 µV and frequencies ranging from just above DC up to 100 Hz. The EEG signal, recorded from the scalp and extracellular in nature, consists of several individual rhythms with different frequency bands, namely: delta (δ) with a frequency range of 0-3 Hz, theta (θ) from 4-7 Hz, alpha (α) from 8-12 Hz, beta (β) from 12-30 Hz, and gamma (γ) from 30-100 Hz.
These different signals have their own clinical implications in disease diagnosis (Niedermeyer & Lopes da Silva, 2005).
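The band boundaries listed above can be captured in a small lookup table. This is only an illustrative sketch; the exact band edges vary slightly across texts, and the `band_of` helper is not part of the system described here.

```python
# EEG frequency bands as listed above (Hz); edges vary slightly across texts.
EEG_BANDS = {
    "delta": (0.0, 3.0),
    "theta": (4.0, 7.0),
    "alpha": (8.0, 12.0),
    "beta":  (12.0, 30.0),
    "gamma": (30.0, 100.0),
}

def band_of(freq_hz):
    """Name of the first band whose range contains freq_hz, else None.
    Boundary frequencies (e.g. 12 Hz) resolve to the lower band."""
    for name, (lo, hi) in EEG_BANDS.items():
        if lo <= freq_hz <= hi:
            return name
    return None

print(band_of(10))   # alpha
print(band_of(50))   # gamma
```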

Event information selection from the EEG
As mentioned earlier, in this work we used two approaches to extract two different signal events from the recorded EEG to generate the BCS:
• In the first approach, the EEG signals were processed to extract the ERD and ERS, which appear in the α band (8 to 12 Hz) of the EEG spectrum. These are event-related phenomena corresponding to a decrease (ERD) or an increase (ERS) in signal power, and can easily be visualized by observing the alpha-band power of the signal. The ERD and ERS patterns are usually associated with a decrease or an increase, respectively, in the level of synchrony of the underlying neuronal populations. During ERD and ERS the EEG signal changes sharply, which can be detected in the α band of the EEG spectrum.
• In the second approach, the EEG signals were processed to extract information related to saccadic eye movement from signals recorded over the occipital region of the scalp (indicated by 'O1' and 'O2' in the surface view of the electrode mapping, figure 3). Based on the work of K. Ohno (Ohno et al., 2006), it is evident that saccadic eye movement is well represented in the EEG during memory tasks prior to the actual eye movement. These changes in the EEG can be visualized by observing the signals recorded from the 'O1' and 'O2' electrodes (see figure 11 for the acquired signal). Depending on the direction of the saccade, the recording from the contralateral electrode changes sharply, which can be detected easily in the EEG spectrum.
These sharp changes were transformed into the BCS and sent to the robotic device for its navigation. In the case of the saccade-evoked EEG, the refresh rate and the discrimination between left and right decisions were important but, given our work's scope (proof of principle), these aspects were not considered. It should be noted that the approaches discussed here (ERS/ERD and/or the EEG evoked by saccadic eye movement) are proofs of principle for a simple BMI model. The choice of these types of signals was an arbitrary decision to show the model's workability, and thus does not restrict this BMI from using EEG signals generated by any other phenomenon that can be transformed or conditioned, using sophisticated signal processing tools, into a BCS. It is also possible to model imaginary movement in the EEG by means of Hidden Markov Models and coherence (Souza et al., 2010), which could likewise be used in this BMI.

The e-puck mobile robot
In the BMI system, we used the e-puck (Mondada et al., 2009) mobile robot to demonstrate the system's workability. The e-puck is an educational desktop mobile robot developed at the Ecole Polytechnique Fédérale de Lausanne (EPFL) for broad exploitation in teaching activities. The basic configuration of an e-puck contains:
• A microcontroller
• A set of sensors and actuators:
  - Eight infrared (IR) proximity sensors
  - A 3D accelerometer
  - Three microphones
  - A color CMOS camera
  - Two stepper motors
  - A speaker
  - Eight red light-emitting diodes (LEDs)
  - A set of green LEDs
  - A red front LED placed beside the camera
• A user interface containing:
  - Two LEDs showing the status of the battery
  - An interfacing connector to an in-circuit debugger
  - An infrared remote control receiver
  - A classic RS232 serial interface
  - A Bluetooth radio link
  - A reset button
• Mechanics

The specification provided above is the basic configuration of the e-puck. There are possibilities to extend the e-puck to perform many other tasks. Detailed information about the e-puck's design and devices can be found at http://www.e-puck.org/index.php. Figure 4 shows the e-puck robot's electronic design outline and figure 5 depicts the mechanical architecture.

Overview of the BMI system
Figure 6 shows the schematic cartoon of the BMI system, with each numbered element representing an interface in the process of communicating with the mobile robotic device. Number 1 in the diagram represents the EEG recording cap with the electrodes (figure 7, (a)); the signals acquired by the electrodes were sent to number 2, the electrode connector and/or multiplexer (figure 7, (b)). The multiplexed signal from each channel was then sent to the preamplifier (number 3 in the diagram, figure 7, (c)), where the signals were amplified with a predefined gain and transferred to the computer (number 4 in the diagram) for digitization at 256 Hz and further processing to extract the relevant information (either ERS/ERD or events caused by saccadic eye movement) from the raw EEG signals. Number 5 shows the user datagram protocol (UDP) transfer of the conditioned signal to another computer (number 6), running Linux, which generated the BCS to be sent to the mobile robotic device. Finally, these command signals were sent to the robotic device over Bluetooth (number 7). Numbers 1 to 5 show the processing interfaces at the upper layer, and 6 to 8 the interfaces at the lower layer (see figure 2).

Data acquisition
The EEG signals were acquired using a four-channel commercial EEG recording device, the g®.MOBIlab, manufactured by g.tec medical engineering GmbH, Austria (http://www.gtec.at/), as seen in figure 7. Two different channel configurations were used for the two different signal features, as listed below:
• In the case of ERS/ERD: two of the four channels were used to record simultaneous signals from two healthy subjects' scalps. One electrode was placed at 'O1' to acquire the EEG activity with reference to the other, placed at the 'F' or 'Fp' position. The acquired EEG signals were then processed to extract the ERS/ERD complex.
• In the case of the saccade-related evoked potential: three of the four channels were used during recording. Two of them recorded simultaneous signals from 'O1' and 'O2', and the third electrode, placed at the 'F' or 'Fp' position, was used as a reference. The EEG signals recorded from the 'O1' and 'O2' electrodes were processed to extract the saccade-related event information.
The manufacturer provided a module and a framework to acquire and further process the EEG signals to extract feature information. This module was designed using Matlab Simulink, with the possibility of incorporating S-functions written in Matlab script for signal processing. The module worked as an interface between the recording device and the computer used for EEG recording. It also provided a framework that could be extended to suit the needs of on-line signal processing for particular applications. The module was therefore re-engineered to analyze and process the EEG signals acquired from the occipital region of the subject. After being read, the data were amplified with a certain gain and sent on-the-fly to the computer for on-line processing. Figure 8 shows the schematic diagram of the signal acquisition and processing module. The signals detected by the recording electrodes were fed into a preamplifier, where the signals from the different channels were separated and then amplified by a predefined gain (shown as right-facing triangles). The amplified signals from both channels were then fed into the computer, at a sampling rate of 256 Hz, through a universal serial bus (USB) port for further processing.
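The per-channel scaling step described above can be sketched as follows. The framing, gain, and ADC resolution here are entirely hypothetical (the real g.MOBIlab data format is not specified in this chapter); the sketch only illustrates turning raw multiplexed counts back into scalp-referred amplitudes.

```python
import struct

# Hypothetical framing: each USB frame carries one 16-bit signed sample per
# channel, little-endian. The real device format may differ.
GAIN = 500.0     # hypothetical preamplifier gain
LSB_UV = 0.5     # hypothetical ADC resolution, microvolts per count

def parse_frame(frame, n_channels=2):
    """Convert one raw frame into per-channel amplitudes (µV at the scalp):
    unpack the multiplexed counts, then undo the predefined gain."""
    counts = struct.unpack("<%dh" % n_channels, frame)
    return [c * LSB_UV / GAIN for c in counts]

# Counts (100, -100) map back to (0.1, -0.1) µV under these assumptions:
print(parse_frame(b"\x64\x00\x9c\xff"))
```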

Processing of EEG signals and BCS generation at the upper layer
The processing of the raw EEG signals acquired by the electrodes for the generation of the BCS was performed in the upper layer of the BMI flow (see figure 2). The following subsections describe the pre- and post-processing and the extraction of event-related information from the EEG.

Pre- and post-processing
After detection of the EEG signals by the electrodes, and before extracting the signal events (ERS/ERD or events caused by saccadic eye movements) to generate the BCS, artifact removal was performed on-the-fly to remove artifacts related to eye blinks, cardiac rhythms, and body movements, using the processes proposed in Haas et al. (2003) and Rohalova et al. (2001). The power line noise (50-60 Hz components) was removed using stop-band filters at 50 and 60 Hz. As our targeted events changed sharply from the baseline, we implemented a dynamic threshold, based on the EEG signal's standard deviation, to detect the occurrence of an event.
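A minimal sketch of such a standard-deviation-based dynamic threshold is shown below. The multiplier `k` and the window length are hypothetical choices; the chapter does not state its exact parameters.

```python
import numpy as np

def detect_events(eeg, k=4.0, window=256):
    """Flag samples whose deviation from a running-window mean exceeds
    k standard deviations of that window. `k` and `window` (here one
    second at 256 Hz) are illustrative, not the chapter's actual values."""
    flags = np.zeros(len(eeg), dtype=int)
    for i in range(window, len(eeg)):
        segment = eeg[i - window:i]
        mu, sigma = segment.mean(), segment.std()
        if sigma > 0 and abs(eeg[i] - mu) > k * sigma:
            flags[i] = 1
    return flags

rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 1e-6, 1024)   # ongoing activity (~1 µV noise)
eeg[600] += 1e-4                     # a sharp event, far above baseline
events = detect_events(eeg)
print(events[600])  # 1
```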
A detected event denoted a high (i.e., 1) signal in the BCS, and the absence of an event was represented by a low (i.e., 0) signal. In figure 8, the green rectangular boxes at the center represent the signal processing steps mentioned above. The processed individual channels were then multiplexed and sent, through a UDP port over a crossover Ethernet cable, to another computer running Linux, which interfaced with the robot to control it.
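The multiplexed UDP hand-off can be sketched with standard datagram sockets. The address and the one-byte-per-channel payload layout are assumptions for illustration; the chapter does not specify the datagram format.

```python
import socket

# Hypothetical endpoint of the Linux machine running the IQR interface.
IQR_ADDR = ("192.168.0.2", 5005)

def encode_bcs(bits):
    """Pack one multiplexed BCS sample (one bit per EEG channel) into a
    datagram payload, one byte per channel."""
    return bytes(int(b) for b in bits)

def send_bcs(bits, addr=IQR_ADDR):
    """Ship the current BCS sample to the lower layer over UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(encode_bcs(bits), addr)

# Channel 1 signals an event, channel 2 does not:
print(encode_bcs([1, 0]))  # b'\x01\x00'
```

UDP fits here because each BCS sample is self-contained: a lost datagram is simply superseded by the next one, so no retransmission logic is needed.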

Processing for ERS/ERD
Generally, in the presence of visual stimuli, i.e., when the eyes are open, the signal power in the alpha band decreases, characterizing an ERD; when the eyes are closed (providing few or no visual stimuli), the visual cortex is relaxed and the signal power increases, denoting an ERS. This type of potential is believed to arise from the oscillation of postsynaptic potentials in the neocortex. Functionally, the alpha wave has been interpreted as an idling rhythm that diminishes when the eyes are opened or during mental activity. It is not clear whether alpha rhythms are pure noise or the product of chaotic processes, nor whether there are distinct generators of alpha activity (Laufs et al., 2003). The detected, artifact-removed EEG signals were continuously scanned for occurrences of ERD/ERS, to isolate them with a high signal-to-noise ratio (SNR). An important issue in EEG-based signal processing was that the EEG level changes very often; therefore, a calibration step was required to detect the basic ERS/ERD level, or baseline, before sending the signals to the robotic device. To extract only the alpha wave, the recorded EEG signals were filtered through an on-line band-pass filter with cutoff frequencies of 8 Hz and 12 Hz. Through this filtering, all kinds of artifacts were removed from the EEG, leaving only the clean alpha wave containing the ERD and the ERS. Finally, the BCS was generated using the dynamic threshold.
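The alpha-band isolation and the power contrast between eyes-closed (ERS) and eyes-open (ERD) segments can be sketched offline with an FFT mask. This is only an illustrative stand-in: an on-line system like the one described would use a causal band-pass filter, and the signal amplitudes are synthetic.

```python
import numpy as np

def alpha_band(eeg, fs=256):
    """Isolate the 8-12 Hz alpha band by zeroing all other FFT bins
    (an on-line implementation would use a causal IIR band-pass instead)."""
    spec = np.fft.rfft(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    spec[(freqs < 8) | (freqs > 12)] = 0
    return np.fft.irfft(spec, n=len(eeg))

def alpha_power(eeg, fs=256):
    """Mean power of the alpha-band component."""
    return np.mean(alpha_band(eeg, fs) ** 2)

# Eyes-closed-like segment (strong 10 Hz) vs eyes-open-like segment (weak 10 Hz):
fs = 256
t = np.arange(2 * fs) / fs
closed = 40e-6 * np.sin(2 * np.pi * 10 * t)   # ERS: alpha power rises
opened = 5e-6 * np.sin(2 * np.pi * 10 * t)    # ERD: alpha power falls
print(alpha_power(closed) > alpha_power(opened))  # True
```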

Processing for saccadic events
To extract the events related to saccadic activity from the EEG signals, both channels' signals were filtered using a band-pass filter with cutoff frequencies of 15 Hz and 100 Hz. This filtering eliminated the possibility of recording the alpha wave generated by the occipital region of the brain. The signals were then scanned for the occurrence of sharp changes in amplitude using the dynamic threshold mentioned earlier. The system was trained before the actual experiment because of signal variability from person to person. The threshold was renewed every 100 ms by calculating the ratio between the usual electrical activity and the existing threshold.
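The 100 ms threshold renewal can be sketched as follows. The exact renewal rule and the `ratio` value are hypothetical; the chapter only states that the threshold was recomputed every 100 ms from the ratio between the usual activity and the existing threshold.

```python
import numpy as np

FS = 256                     # sampling rate (Hz) used in this chapter
RENEW = int(0.100 * FS)      # renew the threshold every 100 ms (25 samples)

def saccade_bcs(eeg, ratio=4.0):
    """BCS from one saccade channel: every 100 ms the threshold is renewed
    from the mean rectified activity of the last block, scaled by a
    (hypothetical) training-derived ratio."""
    bcs = np.zeros(len(eeg), dtype=int)
    threshold = np.inf                    # no detections until calibrated
    for start in range(0, len(eeg), RENEW):
        block = eeg[start:start + RENEW]
        bcs[start:start + RENEW] = np.abs(block) > threshold
        threshold = ratio * np.mean(np.abs(block))  # renew for next 100 ms
    return bcs

rng = np.random.default_rng(1)
eeg = rng.normal(0.0, 1e-6, 512)     # band-passed occipital activity
eeg[130] += 2e-5                      # sharp contralateral deflection
bcs = saccade_bcs(eeg)
print(bcs[130])  # 1
```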

Interfacing with the robotic device at the lower layer
At the lower layer of the BMI system, interfacing with the robotic device was performed through processing of the BCS (see figure 2). To process the BCS and command the robotic device, an open-source software package named 'IQR' (Bernardet et al., 2002; Bernardet & Verschure, 2010) was used. IQR provides a flexible platform for simulating large-scale neural networks and developing robust applications to interface with robotic devices. Using this software it is possible to design large-scale neuronal populations, interconnect them, and control their activation or inactivation through excitatory or inhibitory synapses. To achieve our goal of steering the wheels of the robotic device (number 8 in the schematic diagram of figure 6) using the BCS (i.e., the information extracted from the EEG: ERS/ERD or saccadic events), three modules were developed in IQR, as shown in figure 9 (a), (b), and (c). As seen in figure 9 (a), two main processes were designed: one receiving the BCS from the UDP port (the 'Simulink IN' process) and one sending commands to the mobile robot (the 'Robot' process). These processes (figure 9 (b) and (c)) were carefully designed, each with a number of neuronal populations forming a neuronal network to perform the desired task. The 'Simulink IN' process received the binary control signals from each EEG channel in the upper layer (indicated by the 'M' and an inward arrow to the square boxes in figure 9 (c)); the 'Robot' process, on the other hand, was designed to control the movement of the robot's wheels (the communication with the mobile robotic device is indicated by the outward arrow and 'M' at the bottom square box of figure 9 (b)). Because of their binary nature, the control signals were used as synapses, either excitatory or inhibitory, to make a population of neurons active or inactive. The 'Simulink IN' process contained two neuronal populations: 'Channel 1' and 'Channel 2'.
The 'Channel 1' neuronal population received the input from one EEG channel and 'Channel 2' from the other. The main purpose of this process was to continuously scan the BCS and send the relevant synaptic signal (high in the case of a 1, and no synapse in the case of a 0) to the 'Robot' process for communication with the mobile robotic device. The 'Robot' process was designed to communicate with and command the robotic device. This was done by five neuronal populations (as seen in figure 9 (b)), connected as required using excitatory and inhibitory connections to form a network capable of driving the robotic device. The five populations performed their predefined tasks based on the input they received from the 'Simulink IN' process. The names and purposes of the neuronal populations are:
• Motor: This neuronal population represented and communicated with the motor of the robotic device. The robotic device was mainly commanded through this population.
• Left Wheel: This neuronal population received input from 'Channel 1' and was connected to three other populations. It was mainly responsible for making the left wheel move forward.

Fig. 9. IQR modules for generating commands and commanding the robotic device. The square boxes represent neuronal populations and the lines represent connectivity between two populations. The excitatory and inhibitory connections are shown using red and blue lines, respectively.
• Right Wheel: In contrast, this neuronal population received input from 'Channel 2' and was mainly responsible for the right wheel's forward movement. It was also connected to three other neuronal populations.
• FWD: This neuronal population represented the combined movement of the left and right wheels. When a synapse was made to the 'Motor' neuronal population, both wheels moved forward.
• Robot's Will: This neuronal population represented the robot's will to move, thus, always provided high synapses to the population it was connected to (i.e., 'FWD').
At rest (when there was no input), the 'Robot's Will' neuronal population constantly generated an excitatory synapse to the 'Motor' through the 'FWD' neuronal population, which kept the robotic device moving forward. To enable turning of the robotic device, however, the input control signals from the channels were fed to the neuronal populations representing the wheels. These two populations were connected to the 'Motor', which controlled the movement of the individual wheels. Each time a neuronal population corresponding to a wheel fired, an excitatory synapse was sent to the 'Motor', an inhibitory synapse to the other wheel (inactivating that wheel's activity), and another inhibitory synapse to the 'FWD' neuronal population, making the physical stepper motor stop executing the previous command and get ready to process the new one.

Fig. 10. Flowchart of the IQR modules' communication for the robotic device's navigation.
Figure 10 depicts the flowchart of the IQR modules' communication strategy for the robotic device's navigation. As explained earlier, at the beginning the robotic device kept moving in the forward direction while waiting for a command. Once a neuronal population corresponding to a wheel received a synaptic input, the wheel corresponding to the other channel was stopped and forward driving continued for the wheel corresponding to the control-signal-generating channel. This caused the robot to take a turn (right or left) based on the received control signal. During the turn, the robot constantly scanned for continuation of the high control signal (i.e., a stream of continuous 1s); once the control signal was low (i.e., 0), the stopped wheel was restarted and the forward movement continued. For instance, assume that the 'Left Wheel' neuronal population received a synaptic input. This inactivated the 'Right Wheel' and 'FWD' neuronal populations through inhibitory synapses and activated the 'Motor' neuronal population through an excitatory synapse; as a result, the right wheel was stopped, but the left wheel kept rotating, allowing the robotic device to turn right. A synapse from one wheel was thus required to stop the other wheel, steering the robotic device to follow a predefined course by taking left and right turns.
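The steering rule above can be condensed into a small decision function. This is a plain-Python paraphrase of the IQR network's behavior, not IQR code; the behavior for simultaneous events on both channels is not defined in the chapter, so it conservatively falls back to forward motion here.

```python
def wheel_command(ch1, ch2):
    """Map a two-channel BCS sample to wheel states, mirroring the IQR
    network described above: with no event both wheels run forward
    ('Robot's Will' via 'FWD'); an event on one channel keeps that
    channel's wheel running and inhibits the other, producing a turn."""
    if ch1 and not ch2:   # 'Left Wheel' fires: right wheel inhibited
        return {"left": "forward", "right": "stopped"}   # robot turns right
    if ch2 and not ch1:   # 'Right Wheel' fires: left wheel inhibited
        return {"left": "stopped", "right": "forward"}   # robot turns left
    return {"left": "forward", "right": "forward"}       # keep going straight

print(wheel_command(0, 0))  # {'left': 'forward', 'right': 'forward'}
print(wheel_command(1, 0))  # {'left': 'forward', 'right': 'stopped'}
```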

Discussion
The BMI model was thoroughly tested with EEG signals containing both types of event-evoked responses (ERD/ERS and events caused by saccadic eye movement). As an example of the BMI system's workability, here we demonstrate it using the event information caused by saccadic eye movements; the ERD/ERS-based interfacing can be found in a previous publication (Mahmud et al., 2009).

Fig. 11. Signals recorded by the EOG and by the 'O1' and 'O2' EEG electrodes from one subject performing saccadic movements during an experiment.
Healthy subjects were fitted with the necessary equipment for EEG signal detection (see figure 7). The recorded, artifact-removed EEG signals related to saccadic eye movement are shown in figure 11. The direction of the subject's saccadic movement was clearly reflected through sharp changes in the amplitude of the signals recorded from the contralateral electrodes. The dynamic threshold detected these sharp changes and generated the control signals, which in turn triggered the activation/inactivation of particular neuronal populations, through excitatory/inhibitory synapses, to make a turn. The excitatory synapse from the forward-movement neuronal population kept the robot moving forward; to turn the robotic device left, the left wheel was stopped through an inhibitory synapse generated by the right-wheel neuronal population, and vice versa for a right turn. A combination of these saccadic movements can guide the robot to follow a predefined course, as seen in figure 12. A time-course analysis of the robot's movement was not performed during the experiments. However, the robot could move at a speed of 13.816 cm/s, and in an ideal situation it could travel a 1 m straight path in around 8 s.

Conclusion
The Brain-Machine Interface based on the EEG demonstrated here is a proof of principle intended to reduce the complexity that is gradually prevailing in the very promising field of rehabilitation. By applying this technique it is possible to provide mobility to people with motor dysfunction. The system was tested thoroughly using ERD/ERS events of the α band and the events caused by saccadic eye movement. However, the usage of saccade-evoked EEG signals in this work is only to demonstrate the proposed model's workability. It is very much possible to adapt and extend the BMI model to use any other type of signal (voluntary or involuntary) through proper signal processing techniques capable of transforming the input signal into a binary decision signal. Work in progress aims to extend this technique to control assistive robotic devices (e.g., a robotic wheelchair) for the disabled.