Open access peer-reviewed chapter

EEG Control of a Robotic Wheelchair

Written By

Ashok Kumar Chaudhary, Vinay Gupta, Kumar Gaurav, Tharun Kumar Reddy and Laxmidhar Behera

Reviewed: 24 February 2023 Published: 15 April 2023

DOI: 10.5772/intechopen.110679

From the Edited Volume

Human-Robot Interaction - Perspectives and Applications

Edited by Ramana Vinjamuri


Abstract

Brain-Computer Interface (BCI) technology has been widely used in clinical research; however, its adoption in consumer devices has been hindered by high costs, poor reliability, and limited autonomy. In this study, we introduce a low-cost, open-source hardware-based, consumer-grade product that brings BCI technologies closer to elderly and motor-impaired individuals. Specifically, we developed an autonomous motorized wheelchair with BCI-based input capabilities. The system employs the ROS-backend navigation stack, which integrates RTAB-Map for mapping, localization, and visual odometry, as well as the A* global and DWA local path planning algorithms for seamless indoor autonomous operation. Data acquisition is accomplished using a 16-channel OpenBCI EEG sensor, while an Ensemble-Subspace KNN machine learning model is used for intent prediction, particularly goal selection. The system offers active obstacle avoidance and mapping in all environments, while hybrid BCI Motor Imagery-based control is implemented in a known, mapped environment. This prototype offers remarkable autonomy while ensuring user safety and granting unparalleled independent mobility to the motor-impaired and elderly.

Keywords

  • brain-computer interface
  • motor-impaired
  • wheelchair
  • ensemble-subspace KNN

1. Introduction

The development of Brain-Computer Interface (BCI) technology has led to a wide range of scientific and practical applications since its inception in the 1970s. One of the key areas of focus for BCI technology is wheelchair systems, where ease of use and efficiency for the user are of paramount importance. The system should be designed so that it is simple for the user to operate the wheelchair and achieve their desired objectives. EEG-based BCIs are particularly well suited to this application because they offer a high degree of convenience and efficiency for the user. However, earlier wheelchair systems based on the P300 [1, 2] and on steady-state visual evoked potentials (SSVEP) [3] did not provide the same level of convenience because they required the user to continuously watch a screen in order to decipher commands from EEG signals. The user's field of vision is restricted and fixed on the BCI feedback, making some situations difficult to handle. Wheelchair systems based on motor imagery have been suggested to address this [4, 5]. One such system offered fixed-direction steering of the wheelchair [6], but it had several shortcomings, chief among them its design approach: it was non-autonomous and required constant attention, it lacked simultaneous localization and mapping (SLAM), and it suffered from a low information transfer rate. To overcome these shortcomings, LiDAR-based SLAM was introduced [7] in an SSVEP-based system, but it relied on shared autonomy, still suffered from a low information transfer rate, and was not cost-effective. A wheelchair system utilizing motor imagery [8] was later introduced, offering a significant enhancement in functionality compared to previous models. Even more advanced technology followed, combining motor imagery and P300 to provide further gains in performance [9]. That system offers laser range finder and encoder-based localization as well as autonomous capabilities; despite these advancements, its 2D mapping and localization remain inferior, and it is not cost-effective. Additionally, an SSVEP-based direction and angle control system for wheelchair design [10] has been proposed, utilizing visual landmarks for feedback. However, this system is not fully autonomous, also relies on inferior mapping and localization techniques, and is cost-ineffective due to its low information transfer rate. Overall, while there have been significant advancements in wheelchair technology, there is still room for improvement in terms of autonomy and cost-effectiveness. A wheelchair based on eye-blink steering control has also been proposed [11]; this is an innovative approach, but it does not solve the problem of spontaneous natural eye blinks. Because the system is non-autonomous, high-amplitude natural eye-blink signals lead to undesired motor control of the wheelchair. To overcome this, the eye-blinking method has been integrated with electroencephalography (EEG) to control wheelchair movement [12], but it can be very difficult for a person to blink their eyes continuously. As the technology progressed, a wheelchair was designed to drive in four directions [13]. It was proposed using time-frequency-domain analysis of EEG signals with a neural network, but it is only a prototype, not a full-size wheelchair, and is not suitable for real-time implementation. A more recent wheelchair is based on computer-vision navigation [14]; it uses tags and computer vision to localize itself, but it suffers from low maneuverability and cannot handle dynamic obstacles accurately in real time. An omnidirectional wheelchair design based on Mecanum wheels has also been proposed [15]; it uses SSVEP and alpha-wave-based asynchronous control, but its major disadvantage is that it is non-autonomous and requires constant attention for control.

The literature reviewed above presents a comprehensive comparison of the various approaches that have been proposed for brain-controlled robotic wheelchairs. It is evident that most of the earlier methods relied heavily on Motor Imagery or P300 for classifying navigation commands. A significant number of researchers employed SSVEP to select from a fixed set of predefined commands, and some studies captured EEG signals corresponding to eye blinks for prediction purposes. Regarding the chair's autonomy, most early researchers limited themselves to a fixed set of controls, while others attempted fully autonomous navigation; however, the techniques they used were not as advanced as those available today, making them unsuitable for current use. The proposed method, on the other hand, addresses the challenges of signal acquisition, goal prediction, mapping, localization, and autonomous navigation, and is therefore a substantial improvement over earlier approaches to controlling a brain-controlled robotic wheelchair.

The proposed method for the design and development of a bio-signal enabled robotic wheelchair for motor-disabled and elderly care includes the following novelties over other existing methods.

  1. The proposed low-cost bio-signal enabled robotic wheelchair is designed with close attention to hardware and electronic safety features, specifically for human use.

  2. The wheelchair’s built-in control system enables rapid and precise point-to-point positioning with minimal manual control through a joystick/directional pad.

  3. Maps of the wheelchair’s environment can be created using the newly introduced mapping module, which can also query working memory for matching point clouds. Once the environment has been recognized or mapped, the wheelchair can localize itself and maintain highly precise odometry of its movement.

  4. Through manual goal selection via a GUI touch-screen module, point-to-point path planning is possible in any realized or imported map.

  5. Selected endpoints are identified as accessible goals within known surroundings, which the user can choose via BCI-based control.

  6. With these extra features over a standard motorized wheelchair, the elderly and others with motor impairments can independently carry out daily tasks and participate in social activities.


2. Proposed methodology for bio-signal enabled control system

To achieve our objective, we have devised a two-part solution. The first part involves acquiring the goal state through a GUI-based display module (Figure 1). This module displays a map obtained from the working memory and provides the user with various goal point options. For instance, in a hospital scenario, these options may include patient rooms, gardens, toilets, cafeterias, etc. The user selects a goal point, which triggers a neural depolarization pattern within the cortex. We use a 16-channel OpenBCI cap to record this pattern. By analyzing the time-series data generated by this pattern, we use Motor Imagery (MI)-based prediction and machine learning to estimate the probability of reaching the selected goal.

Figure 1.

Graphical user interface (GUI) of wheelchair display.

The navigation module is responsible for devising a trajectory toward the target state while ensuring the safety of the user and avoiding any potential static or dynamic obstacles. The module first obtains drift-free odometry by employing a multi-sensor system, in which LiDAR, depth camera, IMU, and wheel odometry data pass through an Extended Kalman Filter (EKF). RTAB-Map then maintains a live map within the odom frame of the robot to enable obstacle avoidance via the local planner. The global planner formulates an optimal path from the start state to the goal state based on the static map. The local planner leverages this live map and the navigation waypoints to maneuver in the immediate vicinity while respecting the kinematic constraints of the wheelchair, preserving a pre-set safety buffer, and accounting for the motion of moving obstacles. It constantly updates the path to the goal, adhering to the imposed environmental and kinematic constraints to ensure complete safety for the occupant until they reach their destination. Figure 2 shows all the components of the bio-signal enabled control system.

Figure 2.

Block diagram for bio-signal enabled (BSE) control system.
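To make the EKF-based odometry fusion described above concrete, the following minimal Python sketch illustrates the predict/update cycle for a planar robot, fusing wheel-odometry velocities with an IMU yaw measurement. It is an illustration of the idea only, not the robot_localization configuration used on the wheelchair; the state vector, noise covariances, and measurement model are assumed values.

```python
import numpy as np

# Minimal planar EKF sketch: state = [x, y, theta].
# Noise covariances and the measurement model are illustrative choices,
# not the parameters used on the actual wheelchair.

class PlanarEKF:
    def __init__(self):
        self.x = np.zeros(3)                  # [x, y, theta]
        self.P = np.eye(3) * 0.01             # state covariance
        self.Q = np.diag([0.02, 0.02, 0.01])  # process noise (assumed)
        self.R_yaw = np.array([[0.005]])      # IMU yaw noise (assumed)

    def predict(self, v, w, dt):
        """Propagate the state with wheel-odometry velocities (v, w)."""
        theta = self.x[2]
        self.x += np.array([v * np.cos(theta) * dt,
                            v * np.sin(theta) * dt,
                            w * dt])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1, 0, -v * np.sin(theta) * dt],
                      [0, 1,  v * np.cos(theta) * dt],
                      [0, 0,  1]])
        self.P = F @ self.P @ F.T + self.Q

    def update_yaw(self, yaw_meas):
        """Correct the heading with an absolute yaw measurement from the IMU.
        (Angle wrapping is omitted for brevity.)"""
        H = np.array([[0.0, 0.0, 1.0]])
        y = np.array([yaw_meas - self.x[2]])      # innovation
        S = H @ self.P @ H.T + self.R_yaw
        K = self.P @ H.T @ np.linalg.inv(S)       # Kalman gain
        self.x += (K @ y).ravel()
        self.P = (np.eye(3) - K @ H) @ self.P

ekf = PlanarEKF()
ekf.predict(v=0.4, w=0.1, dt=0.05)   # wheel odometry step
ekf.update_yaw(0.004)                # IMU yaw correction
```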

2.1 Hardware control module (HCM)

The hardware infrastructure of a brain-computer interface (BCI) wheelchair is a critical determinant of its operational and practical efficiency. It encompasses the sensors and electrodes that capture brain signals, the control mechanism that deciphers these signals and translates them into motion instructions, and the actuators that execute the movement of the wheelchair. The absence of a properly functioning hardware infrastructure could impede the BCI wheelchair’s ability to respond to user input and hinder its mobility capabilities. Furthermore, the hardware infrastructure must be robust and dependable to guarantee user safety and wellbeing. Thus, it is essential to utilize high-quality components and to regularly service and upgrade the hardware infrastructure to maintain its ongoing functionality.

2.1.1 Frame dimension and sensor

The wheelchair has a frame made of iron, with dimensions of 114×64×93.5 cm. It is equipped with two motors and a differential drive system, with drive wheels 13 inches in diameter. The maximum weight it can carry is 100 kilograms, and it weighs 45 kilograms without any sensors. The wheelchair is powered by a 24 V battery and can travel up to 20 km on a full charge. Overall, these hardware specifications make the wheelchair sturdy, reliable, and suitable for providing assistance to individuals with mobility impairments. The wheelchair has been modified to house the sensors shown in Figure 3.

  1. LiDAR Sensor: A 2D Light Detection and Ranging (LiDAR) sensor, the Slamtec RPLIDAR A1, is used to obtain the approximate locations of static and dynamic obstacles with respect to the wheelchair. It has a range of 12 m with a depth resolution of 0.1 mm. The sampling frequency is 4000 Hz in normal mode and 8000 Hz in boost mode. The horizontal field of view is 360°.

  2. RGB-D Camera: The wheelchair is equipped with an RGB-D camera, the Intel RealSense D435i, to obtain a 3D point cloud of the environment. An RGB-D camera is used instead of a 3D LiDAR for the point cloud because an equivalent 3D LiDAR costs roughly 20 times more. It has a resolution of 1280×720 pixels at 30 frames per second with an operating range of 0.11–10 m. The horizontal depth field of view is 85.2°, and the vertical is 58°.

  3. IMU: SparkFun 9DoF Razor IMU M0 is used as Inertial Measurement Unit. The MPU-9250 in the 9DoF Razor is equipped with three 3-axis sensors: an accelerometer, a gyroscope, and a magnetometer, which allow it to detect linear acceleration, angular rotation velocity, and magnetic field vectors.

  4. Encoder: Autonics’ E30S4 incremental rotary encoder is used for motor feedback. Its maximum response frequency is 300 kHz and its maximum allowable speed is 5000 rpm.

  5. BCI Cap: An OpenBCI Ultracortex Mark IV EEG headset is used for signal acquisition, and an OpenBCI Cyton biosensing board samples and feeds the EEG signals over Bluetooth to the main computing device. It is capable of collecting up to 16 channels of EEG data from up to 35 distinct 10–20 sites.

Figure 3.

Various electronic sensors: (a) LiDAR, (b) RGB-D camera, (c) IMU (d) encoder and (e) EEG headset.

2.1.2 Electronics control module (ECM)

  • Motors: We are using two Robodo MY1016ZL 250 W PMDC motors as the primary drive motors for the wheelchair. Each operates at 24 V, with a rated torque of 12.7 Nm and a rated speed of 120 RPM.

  • Microcontroller: An Arduino Mega is used as the microcontroller board to control the motors and read encoder data. It uses a power-efficient 8-bit ATmega2560 chip running instructions at 16 MHz. The board provides 54 digital I/O pins and 16 analog inputs, four hardware UARTs, and SPI and I2C interfaces for connecting various sensors and actuators; it is connected to the rest of the system through a USB-serial link.

  • Motor Driver: To control the speed and direction of the brushed DC motors, a motor driver of the appropriate specification is needed. We are using Cytron’s MDDS30 Smart Drive, which accepts PWM control signals from the microcontroller. It is rated for motors with a rated current of 30 A and a peak current of 80 A, with an operating voltage of 7–35 VDC. It also features regenerative braking technology, which charges the battery when the brake is applied.

  • Power Distribution Module: Different sensors and peripherals need different voltages and have different current requirements. The Jetson Nano works on 5 V with a peak current of 4 A, while the Intel NUC requires 19.5 V. We developed a power distribution module that provides stable, appropriately regulated power to all of these components.

  • Display: To output different destination goals and feedback, we incorporated a 7-inch HDMI touch display connected to the NUC.

The electronics components depicted in Figure 4 include state-of-the-art sensors and controllers that are designed to respond to the unique needs of the elderly disabled population. These devices work together seamlessly to provide the necessary support and assistance required for a range of mobility tasks.

Figure 4.

Various electronics component placement.

2.2 Communication control module (CCM)

The system has been designed with a modular approach to actively distribute workloads among sensors, actuators, display devices, and compute modules. This design makes the system easy to troubleshoot and repair, allows for efficient communication among the components, and enables effective load balancing. Figure 5 illustrates the overall sensor interface and network communication infrastructure and how the different components communicate with one another. The display flashes probable destinations to which the wheelchair can navigate. The destination goal is captured from the brain by the BCI headset, which transfers this information wirelessly over Bluetooth to the Intel NUC, where path planning algorithms use it to plan the path. Feedback from sensor-data fusion of the camera attached to the NUC, the LiDAR and IMU connected to Jetson Nano 2, and the encoders connected to Jetson Nano 1 is used by the system to generate forward and angular velocities for the wheelchair. These velocities are sent to the microcontroller connected to Jetson Nano 1 via a serial interface and converted into individual wheel velocities set by the motor driver. All three computing devices are connected through a router via Ethernet.

Figure 5.

Sensor interface and network communication.


3. Movement control system

The motor control design for a brain control wheelchair involves using signals from the brain to control the movement of the wheelchair. It includes safety features such as obstacle detection and emergency stop mechanisms. The control algorithms used to interpret the brain signals and control the motors are crucial for smooth and responsive movement. The design also focuses on making the wheelchair easy to operate and user-friendly.

3.1 Wheelchair movement analysis

The robot has two motor-controlled wheels at the back and two castors at the front. The two motor-controlled wheels determine the kinematics of the chair (Figure 6), which is modeled using differential drive kinematics. The following variables are used: ICC, the instantaneous centre of curvature; R, the radius of curvature; v_l and v_r, the velocities of the left and right wheels; v_f, the linear velocity of the chair; w_l and w_r, the angular velocities of the left and right wheels; w, the angular velocity of the chair; D_w, the distance between the left and right wheels; and d_w, the radius of the wheels.

Figure 6.

Wheelchair kinematic model.

$v_f = \dfrac{v_l + v_r}{2}$  (1)

$w = \dfrac{v_l - v_r}{D_w}$  (2)

$R = \dfrac{D_w\,(v_l + v_r)}{2\,(v_l - v_r)}$  (3)

$v_l = d_w\, w_l$  (4)

$v_r = d_w\, w_r$  (5)
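Equations (1)–(5) can be expressed compactly in code. The following Python sketch uses the chapter’s symbols, with D_w the wheel separation and d_w the wheel radius; the numeric values are placeholders rather than the wheelchair’s actual dimensions.

```python
# Differential-drive kinematics sketch for Eqs. (1)-(5).
# D_W and D_WHEEL are placeholder values, not the measured wheelchair geometry.

D_W = 0.55       # distance between left and right wheels (m, placeholder)
D_WHEEL = 0.165  # wheel radius (m, placeholder; half of a 13-inch wheel diameter)

def forward_kinematics(v_l, v_r):
    """Eqs. (1)-(2): wheel velocities -> body linear/angular velocity."""
    v_f = (v_l + v_r) / 2.0
    w = (v_l - v_r) / D_W
    return v_f, w

def inverse_kinematics(v_f, w):
    """Invert Eqs. (1)-(2): body velocities -> individual wheel velocities."""
    v_l = v_f + (w * D_W) / 2.0
    v_r = v_f - (w * D_W) / 2.0
    return v_l, v_r

def wheel_angular_velocity(v):
    """Eqs. (4)-(5): linear wheel velocity -> wheel angular velocity."""
    return v / D_WHEEL
```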

3.2 PWM based control mechanism

Once the wheel angular velocities have been determined, motor control is achieved using our control system, which consists of a microcontroller connected to one of the Jetson Nano processors via USB. The microcontroller communicates with the motor drivers using Pulse Width Modulation (PWM) signals. Each motor channel on the driver has a PWM pin and an enable pin: the first determines motor speed, while the second determines motor direction. Based on the input from these two pins, delivered via the microcontroller, the motor driver regulates the current delivered to the motor from the main power supply battery. The frequency and width of the PWM pulses determine the speed of rotation of the motors. Here, vmax and vmin have been defined for safety as the maximum and minimum velocities for normal operation.

$\omega_l = \dfrac{2\pi P_l}{P_n}$  (6)

P_l − number of pulses sent to the left wheel in 1 s.

P_n − number of pulses needed for one complete revolution of the wheel.

3.3 Velocity communication

The computer publishes the reference speed commands as a Twist message on the cmd_vel topic. The microcontroller then reads the cmd_vel values from the USB serial link and generates the PWM signals for motor control. The required linear velocity v_f is in msg.linear.x and the required angular velocity w is in msg.angular.z. Solving Eqs. (1) and (2) gives the required individual wheel velocities, and Eq. (6) then yields the required PWM values. A minimal sketch of this node is given below.
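As an illustration of this pipeline, the sketch below subscribes to cmd_vel with rospy, inverts Eqs. (1) and (2) to obtain per-wheel velocities, and forwards them to the microcontroller over a serial link. The serial port, baud rate, and the simple text protocol are assumptions made for the example; the real firmware interface may differ.

```python
import rospy
import serial
from geometry_msgs.msg import Twist

D_W = 0.55                 # wheel separation (m, placeholder)
V_MAX, V_MIN = 1.0, -1.0   # safety limits on wheel velocity (m/s, assumed)

# Serial link to the Arduino Mega; port and message format are illustrative.
mcu = serial.Serial('/dev/ttyACM0', baudrate=115200, timeout=0.1)

def clamp(v):
    return max(V_MIN, min(V_MAX, v))

def cmd_vel_callback(msg):
    v_f = msg.linear.x     # required linear velocity
    w = msg.angular.z      # required angular velocity
    # Invert Eqs. (1)-(2) to obtain individual wheel velocities.
    # The sign convention follows Eq. (2); swap if the platform differs.
    v_l = clamp(v_f + w * D_W / 2.0)
    v_r = clamp(v_f - w * D_W / 2.0)
    # Send wheel targets; the microcontroller converts them to PWM duty cycles.
    mcu.write(f"{v_l:.3f},{v_r:.3f}\n".encode())

rospy.init_node('velocity_bridge')
rospy.Subscriber('cmd_vel', Twist, cmd_vel_callback)
rospy.spin()
```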


4. Autonomous navigation

Simultaneous localization and map building (SLAM) [16] and path planning are at the core of any autonomous or assisted system. To provide reliable locomotion in a dynamic environment up to a determined goal, we have designed a robust navigation stack. The primary step in navigation is determination of the environment. This is followed by pinpointing the ego position within the realized environment and updating it as the robot relocates within the environment. Any dynamic obstacles must be detected in this process and added to the point cloud. Finally, using the determined map, the procured goal, and a constantly updated laser scan, we can navigate as required.

4.1 Various SLAM approaches

We can also objectively evaluate the functionality of common SLAM algorithms on the basis of their input/output capabilities. Figure 7 shows a comparison of various SLAM algorithms.

  • Gmapping [17]: This is the native SLAM approach implemented out of the box in ROS. It is a particle filter based approach that maps 2-D LiDAR scans to a 2-D cost map. This cost map is available online, along with the pose, and has been widely adopted in conjunction with amcl.

  • Hector SLAM [18]: This is a significant improvement over the above particle filter approach and can accept IMU input for odometry; however, external odometry estimates are not accepted by Hector SLAM. Moreover, there is no loop closure support in either of these algorithms, so earlier SLAM estimates are not updated upon return.

  • Google Cartographer [19]: This is one of the earliest open-source SLAM implementations in ROS to implement loop closure detection, thus constantly refining mapping estimates. It uses a pose graph to generate a 2-D or 3-D map from the laser scans of a 2-D or 3-D LiDAR, respectively.

  • ORB SLAM2 [20]: It is an improvement on the above graph based approach, and is also a visual SLAM approach, accepting both depth and stereo cameras as inputs.

  • RGBD-SLAMv2 [21]: This is another visual SLAM implementation that can handle full occlusion and white noise added to the visual data stream. It compensates for the loss of data in such situations through multi-sensor fusion with IMU data, implemented using the robot_localization package in ROS.

Figure 7.

Comparison of various SLAM algorithms.


5. Integration of BCI with wheelchair

The goal is selected from the display GUI (Figure 1) using the BCI module. Once the BCI module gives the final goal prediction, the autonomous path planning module kicks in. The global planner, A*, generates an efficient path to the goal, and the DWA local planner handles any dynamic changes in the environment and obstacle avoidance. The motion control system sets the wheel velocities, while odometry from sensor fusion and localization gives feedback on pose and velocities. This loop continues until the wheelchair reaches its destination. If any module fails, recovery behaviors take control for the user’s safety. After reaching the destination goal, the system waits for the next goal. Figure 8 shows the BCI integration with the wheelchair.

Figure 8.

BCI integration with the wheelchair.
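Once the BCI module settles on a goal label, the corresponding pose can be dispatched to the navigation stack. A minimal sketch using the standard move_base action interface is shown below; the goal coordinates and the mapping from predicted class to map pose are hypothetical.

```python
import math
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

# Hypothetical lookup from a BCI-predicted class label to a pose in the map frame.
GOALS = {
    'cafeteria': (4.2, 1.5, 0.0),
    'garden':    (9.0, -3.1, 1.57),
}

def send_goal(label):
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    x, y, yaw = GOALS[label]
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    # Planar orientation expressed as a quaternion about the z-axis.
    goal.target_pose.pose.orientation.z = math.sin(yaw / 2.0)
    goal.target_pose.pose.orientation.w = math.cos(yaw / 2.0)

    client.send_goal(goal)
    client.wait_for_result()   # blocks until the planner reports success or failure
    return client.get_state()

rospy.init_node('bci_goal_dispatcher')
send_goal('cafeteria')
```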

5.1 Brain-computer interface (BCI)

A Brain-Computer Interface (BCI) creates an interface between the brain and a computer. This is possible because every task we perform produces different and distinguishable signals. We analyze these signals and translate them into commands that are sent to an output device to perform a desired action. In our case, we use these signals to drive a wheelchair.

5.2 Motor imagery (MI)

MI is one of the standard techniques in BCI in which the user is asked to imagine a motor action, such as raising the left hand or right limb, without actually performing that action. This produces potential changes that are captured by an EEG headset. During the imagination process, event-related synchronization and desynchronization occur, which lie in the Mu/Alpha (8–12 Hz) and Beta (16–25 Hz) frequency bands [22].

5.3 Collection and initial processing of bio-signal recordings

We take the signals from a 16-channel OpenBCI Ultracortex Mark IV EEG headset (Figure 3e), which acquires signals from the scalp and wirelessly sends them to the computing device over Bluetooth. The placement of electrodes is very important for motor imagery applications. The standard 10–20 electrode placement can be found in Figure 9. We used the C3, C4, CP1, and CP2 electrodes because they provide better signals for motor imagery applications.

Figure 9.

EEG electrode placement [Source:commons.wikimedia.org].

Once we obtain the raw data, we filter it to remove noise and artifacts. Artifacts need to be removed before feeding the data to feature extraction; otherwise they interfere with the signal of interest and decrease the signal-to-noise ratio (SNR). Some of them are [23]:

  • ECG Artifacts: Arise due to cardiac activity and pulsation.

  • Muscular Artifacts (MA): Arise due to muscular movement.

  • Ocular Artifacts (OA): Arise due to eye blinking and movement.

  • Electrode Contact Artifacts: Arise due to improper electrode contact and due to salt bridges forming between the scalp and the electrode because of sweat.

The frequency band required for our operation is 8 to 50 Hz, so we apply a fifth-order Butterworth band-pass filter from 4 to 60 Hz to remove unwanted frequencies. This band-pass filter also removes some of the low-frequency components that arise from head and body movements. A minimal filtering sketch is given below.
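As a hedged illustration, the 4–60 Hz fifth-order Butterworth band-pass described above can be applied per channel with SciPy. The 125 Hz sampling rate is an assumption (the typical Cyton + Daisy rate for 16 channels), and second-order sections are used for numerical stability.

```python
import numpy as np
from scipy import signal

FS = 125.0   # sampling rate in Hz (assumed Cyton + Daisy default)

# Fifth-order Butterworth band-pass, 4-60 Hz, expressed as second-order sections.
sos = signal.butter(5, [4.0, 60.0], btype='bandpass', fs=FS, output='sos')

def preprocess(eeg):
    """Zero-phase band-pass filtering of raw EEG with shape (channels, samples)."""
    return signal.sosfiltfilt(sos, eeg, axis=-1)

raw = np.random.randn(16, int(5 * FS))   # stand-in for a 5-second, 16-channel recording
clean = preprocess(raw)
```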

5.4 Data collection sequence

For Motor Imagery-based data collection, we divided the training sequence into four classes: left hand, right hand, left foot, and right foot (Figure 10). A blank screen was followed by one of the four cues flashed on the display. EEG data of 5 seconds were collected per trial; we trim a total of 1 second from the start and end and divide the remainder into four 1-second parts. Slicing allows us to collect data efficiently; a short slicing sketch is given after Figure 10.

Figure 10.

Motor imagery training sequence.
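The trimming-and-slicing step can be written directly as array slicing. In the sketch below, half a second is trimmed from each end of a 5-second trial (1 second in total, consistent with Section 6) and the remainder is split into four roughly 1-second epochs; the sampling rate is an assumed value.

```python
import numpy as np

FS = 125               # sampling rate (Hz, assumed)
TRIM = int(0.5 * FS)   # samples trimmed from each end (assumption: 1 s total)

def slice_trial(trial):
    """Trim the ends of a 5 s trial and split the rest into four ~1 s epochs.

    trial: array of shape (channels, 5 * FS)
    returns: array of shape (4, channels, samples_per_epoch)
    """
    core = trial[:, TRIM:-TRIM]        # drop cue onset/offset transients
    seg = core.shape[-1] // 4          # samples per epoch (approximately FS)
    return np.stack([core[:, i * seg:(i + 1) * seg] for i in range(4)])

epochs = slice_trial(np.random.randn(16, 5 * FS))
print(epochs.shape)   # (4, 16, 125)
```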

5.5 Feature extraction

Once we have the filtered data, we operate on each data segment and extract features for our machine learning model to classify the segments into different classes. The features we used are as follows:

  • Mean:
    $\bar{x} = \dfrac{1}{N}\sum_{i=1}^{N} x_i$  (7)

  • Median: $\mathrm{med}(x)$

  • Root mean square:
    $\mathrm{RMS}(x) = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N} x_i^{2}}$  (8)

  • Variance:
    $\mathrm{Var}(x) = \dfrac{\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^{2}}{N-1}$  (9)

  • Skewness:
    $\mathrm{Skew}(x) = \dfrac{\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^{3}}{(N-1)\,\sigma^{3}}$  (10)
    where $\sigma$ is the standard deviation.

  • Kurtosis:
    $\mathrm{kurt}(x) = E\left[\left(\dfrac{X-\mu}{\sigma}\right)^{4}\right]$  (11)

  • Integral features such as the area under the curve and the waveform length.

  • Slope sign change:
    $\mathrm{SSC} = \sum_{n=2}^{N-1} f\left[\left(x_n - x_{n-1}\right)\left(x_n - x_{n+1}\right)\right]$  (12)
    where
    $f(x) = \begin{cases} 1, & \text{if } x \geq \text{threshold} \\ 0, & \text{otherwise} \end{cases}$

Apart from these time-domain features, we also used frequency-domain features. To transform the time-domain EEG signals into the frequency domain, we apply a discrete Fourier transform, computed with the Fast Fourier Transform (FFT) [24]:

$X_k = \sum_{n=0}^{N-1} x_n\, e^{-j 2\pi k n / N}$  (13)

  • Mean frequency:
    $f_{\mathrm{mean}} = \dfrac{\sum_{i=0}^{N} I_i f_i}{\sum_{i=0}^{N} I_i}$  (14)
    where $I$ is the spectrogram intensity (in dB).

  • Median frequency: $\mathrm{median}(f)$

  • Spectral power density [25], estimated over L overlapping segments of length M:
    $x_i(n) = x(n + iD), \quad n = 0, 1, \ldots, M-1; \quad i = 0, 1, \ldots, L-1$  (15)
    $P_{xx}^{i}(f) = \dfrac{1}{MU}\left| \sum_{n=0}^{M-1} x_i(n)\, w(n)\, e^{-j 2\pi f n} \right|^{2}$  (16)
    $P_{xx}^{W}(f) = \dfrac{1}{L}\sum_{i=0}^{L-1} P_{xx}^{i}(f)$  (17)

  • Peak frequency: the frequency corresponding to the peak of the spectral power distribution.
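A condensed sketch of the per-channel feature computation is given below, covering the time-domain statistics and the Welch-style spectral features above. The slope-sign-change threshold and the Welch segment length are illustrative choices, not the values used in the actual pipeline.

```python
import numpy as np
from scipy import signal, stats

FS = 125.0          # sampling rate (Hz, assumed)
SSC_THRESH = 1e-7   # slope-sign-change threshold (illustrative)

def extract_features(x):
    """Feature vector for a single 1-second, single-channel epoch x."""
    diffs = np.diff(x)
    waveform_length = np.sum(np.abs(diffs))
    # Slope sign change, Eq. (12): count reversals of consecutive slopes.
    ssc = np.sum(diffs[:-1] * diffs[1:] < -SSC_THRESH)

    # Spectral power density, Eqs. (15)-(17), via Welch's averaged periodogram.
    freqs, pxx = signal.welch(x, fs=FS, nperseg=64)
    pxx_norm = pxx / np.sum(pxx)
    mean_freq = np.sum(freqs * pxx_norm)            # Eq. (14)
    median_freq = freqs[np.searchsorted(np.cumsum(pxx_norm), 0.5)]
    peak_freq = freqs[np.argmax(pxx)]

    return np.array([
        np.mean(x),                        # Eq. (7)
        np.median(x),
        np.sqrt(np.mean(x ** 2)),          # RMS, Eq. (8)
        np.var(x, ddof=1),                 # Eq. (9)
        stats.skew(x),                     # Eq. (10)
        stats.kurtosis(x, fisher=False),   # Eq. (11)
        np.trapz(np.abs(x)),               # area under the curve
        waveform_length,
        ssc,                               # Eq. (12)
        mean_freq, median_freq, peak_freq,
    ])
```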

5.6 Training and classification

In a training routine, we show one of the four MI training-sequence images and ask the user to imagine the corresponding movement. The EEG headset captures this data and wirelessly sends it for pre-processing and feature extraction. After cleaning, the data are labeled and the data points are added to the dataset. After collecting data from all subjects, we feed this dataset into training. Figure 11 shows the flow of the training module.

Figure 11.

Block diagram for BCI training module.

We tried many classifiers, such as SVM with a polynomial kernel, but settled on Ensemble Subspace k-NN (k = 3) [26], with an accuracy of 91.4% on 5-fold cross-validation (Figure 12). After creating the dataset and training, we run our model for real-time prediction. In the goal prediction pipeline, we continuously take raw data, store 1-second windows in a buffer, and perform pre-processing and feature extraction. The feature-extracted, unlabelled data are then fed to the same classifier used in the training stage for class (here, goal) prediction. Figure 13 shows the goal prediction module, and a training sketch follows the figures.

Figure 12.

Our BCI flowchart.

Figure 13.

Block diagram for BCI target prediction module.
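An ensemble-subspace k-NN of the kind reported above can be approximated in scikit-learn by bagging k-NN base learners over random feature subspaces. The sketch below assumes the feature matrix and labels have already been assembled; the ensemble size, subspace fraction, and file names are illustrative and may not match the original configuration.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# X: feature matrix (n_epochs x n_features), y: goal/class labels.
X = np.load('features.npy')   # placeholder paths for the assembled dataset
y = np.load('labels.npy')

# Random-subspace ensemble of k-NN (k = 3) learners: each base classifier
# sees all training samples but only a random subset of the features.
clf = BaggingClassifier(
    estimator=KNeighborsClassifier(n_neighbors=3),  # base_estimator= on scikit-learn < 1.2
    n_estimators=30,          # illustrative ensemble size
    max_features=0.5,         # each learner uses half of the features (assumed)
    bootstrap=False,          # keep all samples; subsample features only
    bootstrap_features=False,
    random_state=0,
)

scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.3f}")
```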


6. Results

We collected MI data from eight healthy subjects in two sessions, as multi-session data collection improves overall real-world performance on new data. For each subject, in each session, we take 100 raw readings of 5 seconds each; after trimming 1 second and slicing the remainder into four segments, this yields 800 labeled data points per class. This dataset is then used for training.

The accuracy of different classifiers under 5-fold cross-validation is presented in Table 1 below.

S. No.   Classifier                        Accuracy
1        k-NN (k = 3)                      86.7%
2        Normal SVM                        85.4%
3        SVM with poly kernel              88.2%
4        Ensemble Subspace kNN (k = 3)     91.4%

Table 1.

Performance comparison of different classifier models.


7. Discussion

The development of an advanced wheelchair is a significant breakthrough in enabling independent mobility for elderly and physically impaired individuals. However, it represents only a modest step towards the creation of an empowering and intelligent assistive technology.

There are certain limitations to our approach, such as the manual addition of key points within a known map by developers or admins, which the user can then select through BCI-based control. In contrast, a newly realized map can only accept goal coordinates and orientation via manual touch entry. The main obstacle to achieving a higher level of freedom through BCI control is the intent prediction models associated with BCI. To overcome this, future research can focus on improving the granularity of motor imagery estimates to enable the selection of any point in all maps, both stored and newly realized, via BCI-based input.


8. Conclusion

In empirical investigations using Motor Imagery with healthy volunteers, we were able to achieve significant results within the confines of our laboratory’s limited navigation space. Although the trials were successful in terms of goal acquisition, it may be necessary to retrain and redevelop the goal selection pipeline based on data obtained from motor-impaired individuals. Further research in this area could be expanded to include real-world settings such as hospitals and airports, in the hopes of establishing wider acceptance for this technology in the future. With these efforts, we envision a future where Motor Imagery becomes a widely recognized tool in the realm of rehabilitation for motor-impaired individuals.

While our wheelchair is well-suited for indoor use and provides excellent user convenience, additional improvements are necessary to ensure the safety of both the user and pedestrians in outdoor environments. A more efficient suspension and braking system can be developed to address this need. The current active obstacle avoidance system incorporates a 2D LiDAR sensor sampling at 8000 samples per second with a detection range of up to 6 m. For outdoor environments with larger navigable spaces and sparser point clouds, the obstacle detection range would have to be significantly increased. Our detection suite is supported by an Intel RealSense depth camera, which has been tested and performs efficiently even in outdoor settings.

Our fundamental objective is to highlight the seriousness of the problem we are attempting to address and the relatively straightforward way in which a suitable solution can ensure autonomous mobility for elderly and motor-impaired individuals. We contend that targeted research and sound engineering, focused on these methods, will meaningfully improve the quality of life for this population. We therefore aim to carefully examine and develop these techniques to change the way we facilitate independent movement for individuals with limited mobility, and in doing so make a lasting contribution to society at large.

References

  1. Iturrate I, Antelis JM, Kubler A, Minguez J. A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation. IEEE Transactions on Robotics. 2009;25(3):614-627
  2. Rebsamen B, Guan C, Zhang H, Wang C, Teo C, Ang MH, et al. A brain controlled wheelchair to navigate in familiar environments. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2010;18(6):590-598
  3. Müller ST, Celeste WC, Bastos-Filho TF, Sarcinelli-Filho M. Brain-computer interface based on visual evoked potentials to command autonomous robotic wheelchair. Journal of Medical and Biological Engineering. 2010;30(6):407-415
  4. Millán JR, Renkens F, Mourino J, Gerstner W. Noninvasive brain-actuated control of a mobile robot by human EEG. IEEE Transactions on Biomedical Engineering. 2004;51(6):1026-1033
  5. Choi K. Control of a vehicle with EEG signals in real-time and system evaluation. European Journal of Applied Physiology. 2012;112:755-766
  6. Kim KT, Carlson T, Lee SW. Design of a robotic wheelchair with a motor imagery based brain-computer interface. In: 2013 International Winter Workshop on Brain-Computer Interface (BCI). Gangwon Province, South Korea: IEEE; 2013. pp. 46-48
  7. Duan J, Li Z, Yang C, Xu P. Shared control of a brain-actuated intelligent wheelchair. In: Proceedings of the 11th World Congress on Intelligent Control and Automation. Shenyang, China: IEEE; 2014. pp. 341-346
  8. Andronicus S, Harjanto NC, Widyotriatmo A. Heuristic steady state visual evoked potential based brain computer interface system for robotic wheelchair application. In: 2015 4th International Conference on Instrumentation, Communications, Information Technology, and Biomedical Engineering (ICICI-BME). Bandung, Indonesia: IEEE; 2015. pp. 94-97
  9. Zhang R, Li Y, Yan Y, Zhang H, Wu S, Yu T, et al. Control of a wheelchair in an indoor environment based on a brain-computer interface and automated navigation. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2015;24(1):128-139
  10. Li Z, Zhao S, Duan J, Su CY, Yang C, Zhao X. Human cooperative wheelchair with brain-machine interaction based on shared control strategy. IEEE/ASME Transactions on Mechatronics. 2016;22(1):185-195
  11. Lahane P, Adavadkar SP, Tendulkar SV, Shah BV, Singhal S. Innovative approach to control wheelchair for disabled people using BCI. In: 2018 3rd International Conference for Convergence in Technology (I2CT). Pune, India: IEEE; 2018. pp. 1-5
  12. Xin L, Gao S, Tang J, Xu X. Design of a brain controlled wheelchair. In: 2018 IEEE 4th International Conference on Control Science and Systems Engineering (ICCSSE). Wuhan, China: IEEE; 2018. pp. 112-116
  13. Zgallai W, Brown JT, Ibrahim A, Mahmood F, Mohammad K, Khalfan M, et al. Deep learning AI application to an EEG driven BCI smart wheelchair. In: 2019 Advances in Science and Engineering Technology International Conferences (ASET). Dubai, United Arab Emirates: IEEE; 2019. pp. 1-5
  14. Alkhatib R, Swaidan A, Marzouk J, Sabbah M, Berjaoui S, Diab MO. Smart autonomous wheelchair. In: 2019 3rd International Conference on Bio-Engineering for Smart Technologies (BioSMART). Paris, France: IEEE; 2019. pp. 1-5
  15. Nuo G, Wenwen Z, Shouyin L. Asynchronous brain-computer interface intelligent wheelchair system based on alpha wave and SSVEP EEG signals. In: 2019 IEEE 4th International Conference on Signal and Image Processing (ICSIP). Wuxi, China: IEEE; 2019. pp. 611-616
  16. Dissanayake MG, Newman P, Clark S, Durrant-Whyte HF, Csorba M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Transactions on Robotics and Automation. 2001;17(3):229-241
  17. Grisetti G, Stachniss C, Burgard W. Improved techniques for grid mapping with Rao-Blackwellized particle filters. IEEE Transactions on Robotics. 2007;23(1):34-46
  18. Kohlbrecher S, Von Stryk O, Meyer J, Klingauf U. A flexible and scalable SLAM system with full 3D motion estimation. In: 2011 IEEE International Symposium on Safety, Security, and Rescue Robotics. Kyoto, Japan: IEEE; 2011. pp. 155-160
  19. Hess W, Kohler D, Rapp H, Andor D. Real-time loop closure in 2D LIDAR SLAM. In: 2016 IEEE International Conference on Robotics and Automation (ICRA). Stockholm, Sweden
  20. Mur-Artal R, Tardós JD. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Transactions on Robotics. 2017;33(5):1255-1262
  21. Endres F, Hess J, Sturm J, Cremers D, Burgard W. 3-D mapping with an RGB-D camera. IEEE Transactions on Robotics. 2013;30(1):177-187
  22. Pfurtscheller G, Da Silva FL. Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clinical Neurophysiology. 1999;110(11):1842-1857
  23. Zhang C, Lian Y, Wang G. ARDER: An automatic EEG artifacts detection and removal system. In: 2020 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS). Glasgow, Scotland: IEEE; 2020. pp. 1-2
  24. Brigham EO, Morrow RE. The fast Fourier transform. IEEE Spectrum. 1967;4(12):63-70
  25. Alam MN, Ibrahimy MI, Motakabber SM. Feature extraction of EEG signal by power spectral density for motor imagery based BCI. In: 2021 8th International Conference on Computer and Communication Engineering (ICCCE). Kuala Lumpur, Malaysia: IEEE; 2021. pp. 234-237
  26. Bavkar S, Iyer B, Deosarkar S. Rapid screening of alcoholism: An EEG based optimal channel selection approach. IEEE Access. 2019;7:99670-99682
