Open access peer-reviewed chapter

Conversion of a Conventional Wheelchair into an Autonomous Personal Transportation Testbed

Written By

Volkan Sezer, Rahman Salim Zengin, Hosein Houshyari and Murat Cenk Yilmaz

Submitted: 18 May 2020 Reviewed: 04 June 2020 Published: 07 July 2020

DOI: 10.5772/intechopen.93117

From the Edited Volume

Service Robotics

Edited by Volkan Sezer, Sinan Öncü and Pınar Boyraz Baykas


Abstract

Personal transportation is the act of transporting an individual using a small, low-speed vehicle. It is an active research topic in both industry and academia. There are many types of personal transportation vehicles, and wheelchairs are one of them. Autonomous driving is another very popular subject that applies to personal transportation vehicles, and autonomous personal transportation vehicles are good examples of service robotics applications. In this study, the conversion procedure of a conventional electric wheelchair into an autonomous personal transportation testbed is explained, together with the application of some basic autonomous driving algorithms on the developed testbed. In the literature, there are several studies describing wheelchairs' autonomy but little detailed information about the conversion itself. In this chapter, the conversion process is investigated in detail under two main sections. The first part is the by-wire conversion, which allows the wheelchair to be controlled via computer commands. The second part covers the sensors, the computational system, and the human-machine interface. After these modifications, fundamental algorithms required for autonomy, such as mapping and localization, are implemented successfully. The results are promising for the use of the developed system as a testbed for examining new autonomous algorithms and for evaluating the performance of perception and computational components.

Keywords

  • autonomous wheelchair
  • navigation
  • localization
  • drive by wire
  • personal transportation

1. Introduction

Traveling autonomously from one place to another is an extremely important subject that has been a dream for years and has been studied intensively in recent years. Research on the subject generally focuses on the autonomy of passenger cars in traffic. The first serious results came from the DARPA Grand Challenge, a race organized in 2004 by the Defense Advanced Research Projects Agency (DARPA), the agency responsible for the United States' advanced defense technology projects, and repeated in 2005 and 2007 [1]. After the DARPA competitions, many autonomous driving studies have been carried out all over the world, such as [2, 3, 4].

Autonomous traveling is both possible and a serious need, not only for passenger cars in traffic but also in indoor environments where the Global Positioning System (GPS) is unavailable and the density of people is high. People may prefer to travel somewhere autonomously in an indoor environment even if they have no walking disability. Wheelchairs are very suitable vehicles for this kind of personal transportation, and for some people with disabilities, they are the only option for getting from one place to another. Unfortunately, many people with disabilities are unable to use their wheelchairs safely, which poses serious risks to themselves, to the people around them, and to other elements of the environment. In line with these needs, this chapter explains the design and development of a fully autonomous smart wheelchair.

In the literature, single-person autonomous/semi-autonomous vehicle studies are generally carried out on wheelchair platforms. The wheelchair named TetraNauta [5] was developed at the University of Seville between 1998 and 2004 and operates autonomously on a known map. The study in [6] describes the smart wheelchair project started in 2004 at the Massachusetts Institute of Technology, in which an efficient, socially acceptable autonomous tour-following behavior was developed. In another work on autonomous wheelchairs [7], an RGB-D camera was used as the main perception sensor, and the map of the environment was constructed from its data. Ref. [8] provides information on the development of a Robot Operating System (ROS)-based autonomous wheelchair for indoor use. Ref. [9] describes an autonomous wheelchair that uses only light detection and ranging (LIDAR) as its environmental measurement unit; this study was also carried out on the ROS platform. In [10, 11], a semi-autonomous wheelchair was designed in which the chair is controlled via head movements. In [12], the aim is to estimate the chair pose from video data using machine learning methods based on artificial neural networks. Another autonomous wheelchair study [12] provides results on navigation in cluttered environments without explicit object detection and tracking.

All of these studies use autonomous/semi-autonomous wheelchairs converted from conventional wheelchair platforms, and they provide information about the wheelchairs' autonomy but not detailed information about the conversion itself. The conversion of autonomous systems is explained in some papers for autonomous automobiles [13, 14] but not in much detail for wheelchairs. Their dimensions, velocity capabilities, and differential drive architecture make wheelchairs different from standard automobiles. In this chapter, we explain how to convert a conventional wheelchair into an autonomous one intended to be used as a testbed for advanced autonomous algorithms. The results of the localization and mapping algorithms applied on this testbed are presented at the end of the work.


2. By-wire conversion

The conventional wheelchair used in this work is a standard differential drive platform with 2 × 500 W electric motors and a 24 V, 40 Ah lead acid battery. Its weight (without a person) is 80 kg, and its dimensions are 120 cm × 65 cm × 100 cm. After the conversion described below, it can be driven by the outputs of the autonomy algorithms. Figure 1 shows the conventional version of the wheelchair, which is controlled by a joystick.

Figure 1.

Conventional electric wheelchair used as a base system for conversion.

The original motor driver on this wheelchair communicates with the joystick via a non-standard protocol and was a black box for us, since no information is available about how it operates. For this reason, as a first step toward full autonomy, it had to be replaced by a well-documented driver that communicates via a standard protocol. We therefore replaced it with a standard brushed DC motor driver, the Roboteq MDC2230, a dual-channel driver able to provide 50 A of continuous current per channel.

In robotic applications, electric motors are generally used in "velocity control mode" instead of "torque mode." In this mode, the motor driver tracks a reference velocity, so the actual motor velocity must be measured and fed back to the controller. For this reason, an additional encoder (Atek ARCB50360HPL33MY8FZ) was added for each electric motor. The joystick was removed, since it is no longer necessary in the autonomous concept. As a result, we mounted one motor driver box containing two separate driver channels and two encoders. The motor driver box and one of the encoders are shown in Figure 2.
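As an illustration of this feedback loop, the sketch below shows how encoder tick increments can be converted into a wheel velocity estimate and tracked by a simple PI controller. It is only a minimal sketch under assumed parameters: the tick resolution, wheel radius, gains, and command scaling are placeholders, and in practice the MDC2230's built-in closed-loop speed mode can perform this task.

```python
# Minimal velocity-feedback sketch; TICKS_PER_REV, WHEEL_RADIUS, gains,
# and the command range are assumptions, not measured platform values.
import math

TICKS_PER_REV = 360 * 4      # quadrature counts per wheel revolution (assumed)
WHEEL_RADIUS = 0.17          # wheel radius in meters (assumed)
KP, KI = 2.0, 0.5            # PI gains, to be tuned on the real platform
DT = 0.02                    # 50 Hz control period

def ticks_to_velocity(delta_ticks, dt):
    """Convert encoder tick increments over dt to wheel velocity [m/s]."""
    revs = delta_ticks / float(TICKS_PER_REV)
    return 2.0 * math.pi * WHEEL_RADIUS * revs / dt

integral = 0.0
def pi_step(v_ref, v_meas):
    """One PI update; returns a motor command clipped to driver units."""
    global integral
    error = v_ref - v_meas
    integral += error * DT
    cmd = KP * error + KI * integral
    return max(-1000.0, min(1000.0, cmd))
```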

Figure 2.

Motor driver, encoder, on-off, and emergency buttons mounted.

According to the motor driver's data interface, the driver can communicate via a serial port, which suits our computational system. Since the Robot Operating System (ROS) [15] is used to develop and implement the autonomy algorithms, a ROS node was written to convert ROS commands into serial port data packages using "rosserial," a protocol for wrapping standard ROS serialized messages and multiplexing multiple topics and services over a serial port. As a result, by mounting additional components (the dual-channel motor driver and two encoders) and writing a ROS node, the wheelchair was converted into a drive-by-wire system: it takes reference velocity commands from ROS and applies them successfully.
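A minimal sketch of such a bridge node is given below, assuming a Python/rospy setup. It subscribes to the standard /cmd_vel topic, splits the command into left/right wheel speeds with differential drive kinematics, and writes speed commands over the serial port. The "!G <channel> <value>" ASCII command, the serial device name, and the scaling constants are assumptions based on typical Roboteq usage, not the exact protocol of our node.

```python
#!/usr/bin/env python
# Hedged drive-by-wire bridge sketch: /cmd_vel -> wheel speeds -> serial.
import rospy
import serial
from geometry_msgs.msg import Twist

TRACK_WIDTH = 0.55           # distance between drive wheels [m] (assumed)
MAX_WHEEL_SPEED = 1.5        # wheel speed mapped to full command (assumed)

port = serial.Serial('/dev/ttyUSB0', 115200, timeout=0.1)  # assumed device

def cmd_vel_cb(msg):
    # Differential-drive kinematics: split (v, w) into wheel speeds.
    v_left = msg.linear.x - msg.angular.z * TRACK_WIDTH / 2.0
    v_right = msg.linear.x + msg.angular.z * TRACK_WIDTH / 2.0
    for channel, v in ((1, v_left), (2, v_right)):
        cmd = int(max(-1000, min(1000, 1000 * v / MAX_WHEEL_SPEED)))
        # "!G <ch> <value>" is the typical Roboteq ASCII "go" command.
        port.write(('!G %d %d\r' % (channel, cmd)).encode('ascii'))

rospy.init_node('wheelchair_drive_by_wire')
rospy.Subscriber('/cmd_vel', Twist, cmd_vel_cb)
rospy.spin()
```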

Finally, we added an on-off button to power the whole system up and down. In addition, an emergency button is mounted near the seat, within easy reach of the user. The emergency button is connected to the appropriate input of the motor driver, which cuts power to the motors immediately when the emergency signal arrives. The on-off and emergency buttons are shown in Figure 2.


3. Sensors, computational system, and human-machine interface

The system is planned to be used as a research and development testbed after the conversion. For this reason, several sensors and computational hardware components with different characteristics are mounted on the chair; these devices will be used for performance comparison in the future. Tables 1 and 2 list the perception sensors and the computational hardware added to the autonomous wheelchair testbed. More information about these components is given in the following subsections.

Device | Summary
Intel RealSense D435i | RGB-Depth camera with 87° × 58° × 95° field of view and 10 m range
SICK LMS151 | 2D Lidar with a 270° field of view and a maximum 50 m range; scanning frequency 25–50 Hz
RPLIDAR A2M6 | 2D Lidar with a 360° field of view and a maximum 18 m range; scanning frequency 5–15 Hz

Table 1.

Perception sensors used in the testbed.

Device | Summary
NVIDIA Jetson TX2 | GPU: 256 NVIDIA CUDA cores; CPU: dual-core NVIDIA Denver 2 and quad-core ARM® Cortex®-A57 MPCore, 64-bit
AAEON UP Board | 64-bit Intel® Atom x5-Z8350 processor (Cherry Trail), up to 1.92 GHz
ST B-F446E-96B01A | Based on the STM32F446 microcontroller; includes a 9-axis accelerometer/gyroscope/magnetometer

Table 2.

Computational hardware used in the testbed.

3.1 Perception system

In this study, we have three main perception sensors: an RGB-Depth camera (Intel RealSense D435i) that provides point cloud data with an 87° × 58° × 95° field of view and a 10 m range; a relatively expensive industrial 2D Lidar (SICK LMS151) with a 270° field of view, a maximum range of 50 m, and a 50 Hz scan rate; and a low-cost 2D Lidar (RPLIDAR A2M6) with a 360° field of view, a maximum range of 18 m, and a 15 Hz scan rate.

The aim is to analyze the performance of these sensors, with their different characteristics, on global mapping, local mapping, tracking, and localization. Depending on their perception performance, some of these sensors may be removed from the system or used together. The angle of the RGB-D camera can be adjusted manually. Figure 3 shows the sensors mounted on the chair using mechanical parts designed specifically for it.
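For such side-by-side comparisons, a small logging node is useful. The sketch below, with assumed topic names /scan_sick and /scan_rplidar, reports the number of beams, valid returns, and the maximum valid range of each scanner; it is illustrative only and not part of the chapter's software.

```python
#!/usr/bin/env python
# Hedged lidar-comparison sketch; the two topic names are assumptions.
import rospy
from sensor_msgs.msg import LaserScan

def make_cb(name):
    def cb(scan):
        # Keep only returns inside the scanner's valid range interval.
        valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
        rospy.loginfo_throttle(
            5.0, '%s: %d beams, %d valid, max valid range %.2f m',
            name, len(scan.ranges), len(valid), max(valid) if valid else 0.0)
    return cb

rospy.init_node('lidar_comparison')
rospy.Subscriber('/scan_sick', LaserScan, make_cb('SICK LMS151'))
rospy.Subscriber('/scan_rplidar', LaserScan, make_cb('RPLIDAR A2M6'))
rospy.spin()
```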

Figure 3.

Perception sensors on wheelchair.

3.2 Computational system

In our design, three computational subsystems are added to the system, two of which are planned for running the ROS-based autonomous driving algorithms. The first option is the NVIDIA Jetson TX2 embedded computer. This low-power board is built around an NVIDIA Pascal-family graphics processing unit (GPU) and is very suitable for the matrix operations used in deep learning. The second option is another low-power, small-size embedded computer, the AAEON UP Board, with a 64-bit Intel® Atom x5-Z8350 processor (Cherry Trail) running at up to 1.92 GHz. The third board will be used as a real-time low-level controller if the velocity control performance of the motor drivers proves insufficient; it is the ST B-F446E-96B01A, based on the STM32F446 microcontroller and including a 9-axis accelerometer/gyroscope/magnetometer. As with the perception sensors, finding the optimum combination of computational hardware for the autonomy algorithms is another objective of the project. Since the computational power requirements cannot be known before the autonomous driving algorithms are chosen, we will analyze the performance of each board; in the end, some boards may be removed from the system or used together depending on their performance. Figure 4 shows the computational hardware mounted in a drawer designed specifically for the chair, together with the gray 3D-printed protection boxes for the Jetson TX2 and the UP Board.

Figure 4.

Computational hardware of autonomous wheelchair (drawer is open).

3.3 Power distribution

The main energy source of the system consists of two series-connected lead acid batteries, giving a main voltage level of 24 V for the traction motors. There are also several other components for autonomous operation, and DC/DC converters are used to provide the appropriate voltage level for each sensor and computational board. Figure 5 illustrates the power distribution scheme of the wheelchair components, including the voltage levels. An additional electric circuit was designed for power distribution using appropriate connectors and fuses; the DC/DC converters and this circuit can be seen in Figure 4.

Figure 5.

Power distribution of autonomous wheelchair.

3.4 Communication structure

So far, all the electronic components have been presented and their purposes explained. To use all of these devices properly, a reliable communication network needs to be constructed. Since each component has a different interface, five different communication interfaces are used in the system: Ethernet, USB 2.0, USB 3.1, RS232, and analog signals. Figure 6 illustrates the overall communication architecture of the developed autonomous wheelchair.

Figure 6.

Overall communication architecture of the system.

3.5 Human-machine interface

The final requirement for the autonomous wheelchair is a user interface that allows the user to send a desired goal point on the map. After this selection, the wheelchair plans its trajectory and tracks the path continuously. Instead of providing only a goal point, the user can also set waypoints to be tracked by touching the desired coordinates on the map. Additional features of the interface are sending an emergency signal to the chair and displaying critical information such as the wheelchair's velocity and its actual position on the map. The interface software runs on a touchscreen tablet PC. The tablet mounted on the wheelchair and a screenshot of the interface software are shown in Figure 7.
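As an illustration of what happens when the user touches a goal on the map, the following sketch publishes a geometry_msgs/PoseStamped on /move_base_simple/goal, the standard ROS navigation stack convention. The topic name and the example coordinates are assumptions; the actual interface software of the chapter is more elaborate.

```python
#!/usr/bin/env python
# Hedged sketch of goal sending from the HMI; coordinates are placeholders.
import rospy
from geometry_msgs.msg import PoseStamped

def send_goal(x, y):
    goal = PoseStamped()
    goal.header.frame_id = 'map'          # goals are expressed in the map frame
    goal.header.stamp = rospy.Time.now()
    goal.pose.position.x = x
    goal.pose.position.y = y
    goal.pose.orientation.w = 1.0         # identity orientation for simplicity
    pub.publish(goal)

rospy.init_node('hmi_goal_sender')
pub = rospy.Publisher('/move_base_simple/goal', PoseStamped, queue_size=1)
rospy.sleep(1.0)                          # let the connection establish
send_goal(3.0, 1.5)                       # example goal coordinates in meters
```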

Figure 7.

Rear view of the autonomous wheelchair and screenshot of the human-machine interface.


4. Experiments

After the modifications described above, we now have a testbed for autonomous driving algorithms such as simultaneous localization and mapping (SLAM), trajectory planning, trajectory tracking, and low-level control. So far, the map of the test environment has been constructed on the testbed using the "gmapping" package [16] in the ROS development environment. Figure 8 shows part of the environment where the mapping and localization algorithms were tested.

Figure 8.

Real test environment.

Figure 9 shows the real map of the environment, drawn in AutoCAD® software from accurate manual measurements, next to the map constructed by the autonomous wheelchair using SLAM. The well-known ROS SLAM package "gmapping" was used for map construction, based on the SICK LMS151 Lidar and the odometry information. As seen in Figure 9, the constructed map and the real map are almost the same. Since the office doors are closed in the AutoCAD version, the constructed map used for comparison was prepared under the same condition. According to the structural similarity index method (SSIM) [17], with a window size of 5 × 5 pixels, the similarity of the two maps is 94%.
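A comparable similarity check can be reproduced in a few lines of Python. The sketch below uses scikit-image's structural_similarity with a 5 × 5 window, assuming the two maps have been exported as co-registered grayscale images; the file names are placeholders.

```python
# Hedged map-similarity sketch: compare the CAD ground-truth map and the
# SLAM map with SSIM over a 5x5 window, as reported in the chapter.
import numpy as np
from skimage.io import imread
from skimage.metrics import structural_similarity

real_map = imread('real_map.png', as_gray=True)   # placeholder file names
slam_map = imread('slam_map.png', as_gray=True)
assert real_map.shape == slam_map.shape, 'maps must be co-registered'

score = structural_similarity(
    real_map.astype(np.float64), slam_map.astype(np.float64),
    win_size=5, data_range=1.0)    # 5x5 window; grayscale values in [0, 1]
print('SSIM: %.2f' % score)        # the chapter reports a similarity of 0.94
```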

Figure 9.

Real map drawn using AutoCAD® software (left) and the map constructed by SLAM (right).

Figure 10.

Adaptive particle filter localization application (blue arrows: particle poses, yellow: real pose).

Figure 11.

Localization performance of the wheelchair.

Another application is the global localization of the wheelchair. To plan a path and track it, an autonomous agent must know its pose in the map. Adaptive Monte Carlo Localization (AMCL), which is based on a particle filter, is used for pose estimation. The localization algorithm was run separately after map construction; this time, a map constructed with the doors open is used, since the localization tests were performed in the open-door condition. Figure 10 shows the poses of the particles (blue) and the real pose (yellow) during the wheelchair's motion. It should be noted that the wheelchair was driven manually for both mapping and localization.

In Figure 10, the left image shows the initial phase, in which the particles are scattered uniformly across the environment; no prior information is provided as an initial pose. The middle image shows the particles concentrating on poses consistent with the Lidar and odometry measurements. Finally, the right image shows the particles gathered around the real position of the wheelchair after 9 seconds. The real position of the wheelchair was obtained from manual measurements.

Figure 11 shows the localization performance of the wheelchair in the same real environment. The average localization error after convergence is below 10 cm, which is acceptable for our future studies.
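This experiment can be reproduced with the standard ROS amcl node, which provides a /global_localization service that scatters the particles uniformly over the map, as in the left image of Figure 10. The sketch below triggers that service and logs the pose estimate and its x/y variances from /amcl_pose as the filter converges; it is a minimal illustration, not our full evaluation script.

```python
#!/usr/bin/env python
# Hedged global-localization sketch built on the standard ROS amcl node.
import rospy
from std_srvs.srv import Empty
from geometry_msgs.msg import PoseWithCovarianceStamped

def pose_cb(msg):
    cov = msg.pose.covariance          # 6x6 row-major covariance matrix
    rospy.loginfo('pose: (%.2f, %.2f), var_x=%.3f, var_y=%.3f',
                  msg.pose.pose.position.x, msg.pose.pose.position.y,
                  cov[0], cov[7])      # diagonal x and y variances

rospy.init_node('global_localization_test')
rospy.wait_for_service('/global_localization')
rospy.ServiceProxy('/global_localization', Empty)()   # scatter the particles
rospy.Subscriber('/amcl_pose', PoseWithCovarianceStamped, pose_cb)
rospy.spin()
```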


5. Conclusion

In this study, the conversion procedure of a conventional electric wheelchair into an autonomous personal transportation testbed is described in detail. The conversion process is investigated under two main sections. The first part is the by-wire conversion, which allows the wheelchair to be controlled via digital commands. The second part covers the sensors, the computational system, and the human-machine interface. Sensors and computational hardware with different characteristics are included in the system so that their performance can be compared and optimized in the future. The platform was successfully tested using SLAM and AMCL algorithms for mapping and localization. The map constructed using the industrial LIDAR and odometry data is almost the same as the real map, and the localization performance is acceptable for the next studies.

In the future, we plan to use the wheelchair as a research platform to further improve autonomous personal transportation algorithms for narrow and cluttered environments. The localization and mapping performance of different methods and sensors in such environments will be compared as well.


Acknowledgments

This work was supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) under project no. 118E809.

References

  1. Buehler M, Iagnemma K, Singh S, editors. The 2005 DARPA Grand Challenge: The Great Robot Race. Vol. 36. Springer; 2007
  2. Broggi A, Medici P, Zani P, Coati A, Panciroli M. Autonomous vehicles control in the VisLab intercontinental autonomous challenge. Annual Reviews in Control. 2012;36(1):161-171
  3. Poczter SL, Jankovic LM. The Google car: Driving toward a better future? Journal of Business Case Studies (JBCS). 2013;10(1):7
  4. Sezer V, Bandyopadhyay T, Rus D, Frazzoli E, Hsu D. Towards autonomous navigation of unsignalized intersections under uncertainty of human driver intent. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Hamburg, Germany: IEEE; 2015
  5. Diaz SV, Rodriguez CA, Rio FDD, Balcells AC, Muniz DC. TetraNauta: An intelligent wheelchair for users with very severe mobility restrictions. In: Proceedings of the International Conference on Control Applications. Vol. 2. pp. 778-783
  6. Hemachandra S, Kollar T, Roy N, Teller S. Following and interpreting narrated guided tours. In: 2011 IEEE International Conference on Robotics and Automation. Shanghai, China: IEEE; 2011
  7. Baklouti E, Amor NB, Jallouli M. Autonomous wheelchair navigation with real time obstacle detection using 3D sensor. Automatika. 2016;57(3):761-773
  8. Li Z, Xiong Y, Zhou L. ROS-based indoor autonomous exploration and navigation wheelchair. In: 2017 10th International Symposium on Computational Intelligence and Design (ISCID). Hawaii, USA: IEEE; 2017
  9. Grewal H, Matthews A, Tea R, George K. LIDAR-based autonomous wheelchair. In: 2017 IEEE Sensors Applications Symposium (SAS). New Jersey, USA: IEEE; 2017
  10. Demir M, Sezer V. Design and implementation of a new speed planner for semiautonomous systems. Turkish Journal of Electrical Engineering & Computer Sciences. 2018;26(2):693-706
  11. Mavus U, Sezer V. Head gesture recognition via dynamic time warping and threshold optimization. In: 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA). Georgia, USA: IEEE; 2017
  12. Pierson A, Vasile C-I, Gandhi A, Schwarting W, Karaman S, Rus D. Dynamic risk density for autonomous navigation in cluttered environments without object detection. In: 2019 International Conference on Robotics and Automation (ICRA). Montreal, Canada: IEEE; 2019
  13. Sezer V, Dikilitas C, Ercan Z, Heceoglu H, Oner A, Apak A, et al. Conversion of a conventional electric automobile into an unmanned ground vehicle (UGV). In: 2011 IEEE International Conference on Mechatronics. Istanbul, Turkey: IEEE; 2011
  14. Babu GA, Guruvayoorappan K, Variyar VS, Soman K. Design and fabrication of robotic systems: Converting a conventional car to a driverless car. In: 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI). Manipal, India: IEEE; 2017
  15. Quigley M, Conley K, Gerkey B, Faust J, Foote T, Leibs J, et al. ROS: An open-source robot operating system. In: ICRA Workshop on Open Source Software. 2009;3(3.2):5
  16. Abdelrasoul Y, Saman ABSH, Sebastian P. A quantitative study of tuning ROS gmapping parameters and their effect on performing indoor 2D SLAM. In: 2016 2nd IEEE International Symposium on Robotics and Manufacturing Automation (ROMA). Ipoh, Malaysia: IEEE; 2016
  17. Wang Z, Simoncelli E, Bovik A. Multiscale structural similarity for image quality assessment. In: The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers. California, USA: IEEE; 2003
