
Mechatronic Design of a Lower Limb Exoskeleton

Written By

Luis I. Minchala, Fabián Astudillo‐Salinas, Kenneth Palacio‐Baus and Andrés Vazquez‐Rodas

Submitted: 13 September 2016 Reviewed: 12 January 2017 Published: 03 May 2017

DOI: 10.5772/67460

From the Edited Volume

Design, Control and Applications of Mechatronic Systems in Engineering

Edited by Sahin Yildirim


Abstract

This chapter presents the mechatronic design of a lower limb exoskeleton. The design aims at a walking support device for patients who suffer from partial lower body paralysis due to spinal injuries or stroke. First, the mechanical design is presented, and the results are validated through dynamical simulations performed in Autodesk Inventor and MATLAB. Second, a communication network design is proposed in order to establish a secure and fast data link between sensors, actuators, and microprocessors. Finally, the patient-exoskeleton interaction system is presented and detailed. Movement generation is performed by means of digital signal processing techniques applied to electromyography (EMG) and electroencephalography (EEG) signals. This interaction system design is tested and evaluated in MATLAB, and the results are presented and explained. A proposal for real-time supervisory control is also presented as part of the integration of every component of the exoskeleton.

Keywords

  • lower limb
  • exoskeleton
  • control
  • design
  • kinematics
  • dynamics
  • analysis

1. Introduction

Exoskeletons have lately been designed to provide strength during gait and to transport heavy loads. There are also designs for assisting people with motion disorders or elderly people. Gait rehabilitation is one of the greatest challenges for society in the upcoming years, due to population aging and the increase of diseases affecting motion. Hemiplegia is a partial or total paralysis of one side of the body, caused by injuries to the motor centers of the brain, in which one half of the human body fails to perform its functions. This disorder is caused mainly by stroke, and in many cases it is hereditary. Recovery from a stroke is difficult and the treatment is prolonged. Generally, an injury to the right side of the brain causes left-sided hemiplegia, while an injury to the left side of the brain causes right-sided hemiplegia [1].

Wearable robotics is an area that provides solutions for such problems. A wearable robot extends, complements, or fully substitutes human function or empowers the human limb where it is worn. These kinds of robots are classified according to the function they perform [2]:

  • Empowering robotic exoskeletons: These kinds of robots are known as extenders since they extend the strength of the human hand beyond its natural ability while maintaining human control of the robot.

  • Orthotic robots: An orthosis maps onto the anatomy of a limb to restore lost or weak functions. The robotic counterpart of the orthosis is the robotic exoskeleton, which complements the ability of the limb. Exoskeletons are also capable of restoring handicapped functions.

  • Prosthetic robots: These robots are devices that fully substitute lost limbs.

Figure 1 shows two examples of wearable robots. The scientific community differentiates exoskeletons from orthoses by defining the former as devices that enhance the physical capabilities of able-bodied users, and the latter as devices that assist persons with limb impairments [3]. In spite of their differences, both devices act in parallel with the limb. The applications of exoskeleton robots are varied, ranging from military applications to the medical field. In the medical field, exoskeletons in conjunction with rehabilitation therapies can assist patients with spinal cord injury, stroke, and lower limb paralysis caused by hemiplegia to perform gait training [4].

Figure 1.

(a) Lower limb orthotic exoskeleton; (b) lower limb prosthetic robot.

1.1. Lower limb exoskeleton prototypes around the world

There are many designs of robotic exoskeletons for different purposes around the world. Table 1 shows a summary of some of the main active projects in the scientific community which have been reported in journal and conference papers.

1. Yonsei University, South Korea. Walking assistance lower limb exoskeleton.
   Purpose: patients with lower limb paralysis.
   Technical data: 200 W brushless DC motor; harmonic drives; torques of 79.3 N·m (hip) and 42.2 N·m (knee).
   Control method: active kinematic control; stability control.

2. University of Tsukuba, Japan. HAL-3.
   Purpose: patients with lower limb paralysis.
   Technical data: DC servo motors; harmonic reducers in the joints.
   Control method: conscious recognition based on plantar pressure and torso angle.

3. Centre for Automation and Robotics, Spain. ATLAS.
   Purpose: quadriplegic patients.
   Technical data: brushless motors (Maxon); harmonic reducers; peak torque of 57 N·m (hip).
   Control method: CoP (center of pressure) stability control; conscious recognition.

4. Berkeley Robotics and Human Engineering Laboratory, USA. Lower extremity exoskeleton (BLEEX).
   Purpose: allow personnel to carry major loads such as food, rescue equipment, first-aid supplies, etc.
   Technical data: DC brushless motors; harmonic reducers.
   Control method: CoP stability control; conscious recognition.

Table 1.

Robotic exoskeletons around the world.

1.2. Lower limb exoskeleton features

Exoskeletons are defined as anthropomorphic mechanical devices worn by an operator that fit closely to the body anatomy and work in coordination with the movements of the user. The main features of an exoskeleton to be considered during the design are:

  • Design must be anthropomorphic: Current designs are unnatural in shape. Another limitation of exoskeletons is the lack of direct information exchange between the human nervous system and the wearable robot. Motion intentions originate with the user whose physiological state and desires must be discerned and interpreted.

  • Design must be flexible: The length of the thigh, shank and waist should be adjustable [5]. The length variation of thigh and shank is about 6 cm for average people with height from 1.60 m to 1.80 m. The shank length is about 0.246 times the stature, and thigh length is about 0.245 times the stature.

  • Increase joint strength: Exoskeletons do not transfer substantial load to the ground but augment joint torque. This capability can be used to reduce joint pain or to increase joint strength in paralyzed or weak joints.

  • Selection of the degrees of freedom (DOF): The exoskeleton must comply with the natural freedom of motion of the joints. Table 2 shows the DOF of a lower limb exoskeleton.

  • The actuators of the exoskeleton robot should have a large output power-to-weight ratio and characteristics such as low inertia, fast response, and high precision.

Joint    Motion (DOF)               Range of motion        Driving force needed
Hip      Flexion/extension          −120° ≤ θ ≤ 65°        80–100 N·m
Hip      Adduction/abduction        −30° ≤ θ ≤ 40°         Spring
Hip      Rotation                   −30° ≤ θ ≤ 30°         Spring
Knee     Flexion/extension          −120° ≤ θ ≤ 0°         45–70 N·m
Ankle    Pronation/rotation         −15° ≤ θ ≤ 30°         Spring
Ankle    Dorsiflexion/toe flexion   −20° ≤ θ ≤ 50°         Spring
Ankle    Varus/valgus               −30° ≤ θ ≤ 20°         Spring

Table 2.

Design of DOF for a lower limb exoskeleton.


2. Lower limb exoskeleton mechanical design

2.1. Kinematic analysis

Kinematics gives the relationship between the articular space and the end effector of the robot. This approach is useful for generating trajectories and the joint actuator set points for controlling them. Figure 2 illustrates this relationship.

Figure 2.

Kinematics of a robot.

Section 2.1.1 presents the direct kinematics analysis of the exoskeleton, and Section 2.1.2 its inverse kinematics.

2.1.1. Direct kinematics

The main objective of the direct kinematics analysis is to find the position and orientation of the last link of the robot from the values of the articular coordinates. Direct kinematics equations are defined by

$$
\begin{aligned}
x &= f_x(q_1, q_2, \ldots, q_n), & \alpha &= f_\alpha(q_1, q_2, \ldots, q_n),\\
y &= f_y(q_1, q_2, \ldots, q_n), & \beta &= f_\beta(q_1, q_2, \ldots, q_n),\\
z &= f_z(q_1, q_2, \ldots, q_n), & \gamma &= f_\gamma(q_1, q_2, \ldots, q_n)
\end{aligned} \tag{1}
$$

Figure 3 shows the reference axes placed on each joint of the exoskeleton by using the Denavit–Hartenberg (DH) algorithm [6]. Table 3 shows the DH parameters for each link of the exoskeleton.

Figure 3.

Allocation of reference axes on the lower limb exoskeleton.

Right limb   a_i   α_i   d_i   θ_i      Left limb   a_i   α_i   d_i   θ_i
1            l_1   0     0     θ_1      4           l_1   0     0     θ_4
2            l_2   0     0     θ_2      5           l_2   0     0     θ_5
3            l_3   0     0     θ_3      6           l_3   0     0     θ_6

Table 3.

DH parameters for the exoskeleton links.

The exoskeleton has six DOFs. Every joint is revolute, and the distribution of the DOF considering displacement of the exoskeleton in the sagittal plane is the following:

  • Two DOFs at the hip, one per side

  • Two DOFs at the knees, one per joint

  • Two DOFs at the ankles, one per joint

Due to symmetry, the analysis is presented for the right limb only, which has the articular variables q1, q2, and q3. The transformation matrix that relates the position of the third link of the right limb is:

$$
T=\begin{bmatrix}
C(\theta_1{+}\theta_2{+}\theta_3) & -S(\theta_1{+}\theta_2{+}\theta_3) & 0 & l_1C(\theta_1)+l_2C(\theta_1{+}\theta_2)+l_3C(\theta_1{+}\theta_2{+}\theta_3)\\
S(\theta_1{+}\theta_2{+}\theta_3) & C(\theta_1{+}\theta_2{+}\theta_3) & 0 & l_1S(\theta_1)+l_2S(\theta_1{+}\theta_2)+l_3S(\theta_1{+}\theta_2{+}\theta_3)\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix} \tag{2}
$$

where $C(\theta_i)$ represents the cosine and $S(\theta_i)$ the sine of articular variable $i$. In what follows, the shorthand $C_{12\ldots}=C(\theta_1+\theta_2+\cdots)$ and $S_{12\ldots}=S(\theta_1+\theta_2+\cdots)$ is also used.

The fourth column of matrix T in Eq. (2) represents the position of the third link of the right limb. The analysis for obtaining Eq. (2) is performed in the xy (sagittal) plane. Therefore, the direct kinematic equations of the right limb are:

$$
\begin{aligned}
x_d &= l_1C(\theta_1)+l_2C(\theta_1+\theta_2)+l_3C(\theta_1+\theta_2+\theta_3)\\
y_d &= l_1S(\theta_1)+l_2S(\theta_1+\theta_2)+l_3S(\theta_1+\theta_2+\theta_3)\\
z_d &= d
\end{aligned} \tag{3}
$$
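To make Eq. (3) concrete, the following Python sketch evaluates the direct kinematics of one limb. The chapter performs its simulations in MATLAB; this standalone version and its link lengths are illustrative assumptions only.

```python
import numpy as np

def forward_kinematics(theta, lengths, d=0.0):
    """Position of the third link in the sagittal (xy) plane, Eq. (3)."""
    t1, t2, t3 = theta
    l1, l2, l3 = lengths
    xd = l1 * np.cos(t1) + l2 * np.cos(t1 + t2) + l3 * np.cos(t1 + t2 + t3)
    yd = l1 * np.sin(t1) + l2 * np.sin(t1 + t2) + l3 * np.sin(t1 + t2 + t3)
    return np.array([xd, yd, d])

# Example: assumed thigh, shank, and foot lengths (meters)
links = (0.44, 0.44, 0.12)
print(forward_kinematics([np.pi / 2, 0.0, np.pi / 2], links))
```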

2.1.2. Inverse kinematics

Inverse kinematics provides the values of the articular variables $q=[q_1, q_2, \ldots, q_n]^T$ that place the last link of the robot's kinematic chain at a specific pose $[x,y,z,\alpha,\beta,\gamma]^T$. The inverse kinematics equations are defined by

$$
\begin{aligned}
q_1 &= f_1(x,y,z,\alpha,\beta,\gamma)\\
q_2 &= f_2(x,y,z,\alpha,\beta,\gamma)\\
&\;\;\vdots\\
q_n &= f_n(x,y,z,\alpha,\beta,\gamma)
\end{aligned} \tag{4}
$$

The inverse kinematics equations for the lower limb exoskeleton are obtained by the geometric method. It is convenient to perform the analysis in the sagittal plane, as shown in Figure 4.

Figure 4.

Geometric diagram to perform the inverse kinematics analysis.

Considering Figure 4, the following variables are defined:

Final link position $=[x, y]$; third link angle $=\delta$; link lengths $= l_1, l_2, l_3$.

By applying triangle's laws and proportions, we obtain:

$$
\begin{aligned}
L_A &= y - l_3\sin(\delta), & L_B &= x - l_3\cos(\delta),\\
c &= \sqrt{L_A^2+L_B^2}, & \alpha &= \tan^{-1}\!\left(\frac{L_A}{L_B}\right)
\end{aligned} \tag{5}
$$

$$
\begin{aligned}
\theta_1 &= \alpha + \beta, & \beta &= \cos^{-1}\!\left(\frac{l_1^2 + c^2 - l_2^2}{2\,l_1 c}\right),\\
\gamma &= \cos^{-1}\!\left(\frac{l_1^2 + l_2^2 - c^2}{2\,l_1 l_2}\right), & \theta_2 &= \gamma - \pi,\\
\theta_3 &= \delta - \theta_1 - \theta_2
\end{aligned} \tag{6}
$$

Finally, by substituting variables, we obtain:

$$
\begin{aligned}
\theta_1 &= \tan^{-1}\!\left(\frac{y - l_3\sin(\delta)}{x - l_3\cos(\delta)}\right) + \cos^{-1}\!\left(\frac{l_1^2 - l_2^2 + \big(y - l_3\sin(\delta)\big)^2 + \big(x - l_3\cos(\delta)\big)^2}{2\,l_1\sqrt{\big(y - l_3\sin(\delta)\big)^2 + \big(x - l_3\cos(\delta)\big)^2}}\right)\\
\theta_2 &= -\cos^{-1}\!\left(\frac{\big(y - l_3\sin(\delta)\big)^2 + \big(x - l_3\cos(\delta)\big)^2 - l_1^2 - l_2^2}{2\,l_1 l_2}\right)\\
\theta_3 &= \delta - \theta_1 - \theta_2
\end{aligned} \tag{7}
$$
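The geometric solution of Eqs. (5)-(7) can be checked numerically. The sketch below implements the inverse kinematics in Python and verifies it against the direct kinematics of Eq. (3); the link lengths and target pose are assumed values.

```python
import numpy as np

def inverse_kinematics(x, y, delta, lengths):
    """Joint angles from a desired foot position (x, y) and third-link
    angle delta in the sagittal plane, following Eqs. (5)-(7)."""
    l1, l2, l3 = lengths
    la = y - l3 * np.sin(delta)                # Eq. (5)
    lb = x - l3 * np.cos(delta)
    c = np.hypot(la, lb)
    alpha = np.arctan2(la, lb)                 # robust form of tan^-1(LA/LB)
    beta = np.arccos((l1**2 + c**2 - l2**2) / (2 * l1 * c))    # Eq. (6)
    gamma = np.arccos((l1**2 + l2**2 - c**2) / (2 * l1 * l2))
    t1 = alpha + beta
    t2 = gamma - np.pi
    t3 = delta - t1 - t2
    return np.array([t1, t2, t3])

# Round-trip check against the direct kinematics of Eq. (3)
links = (0.44, 0.44, 0.12)                     # assumed link lengths, meters
q = inverse_kinematics(0.3, -0.7, -np.pi / 2, links)
l1, l2, l3 = links
x = l1*np.cos(q[0]) + l2*np.cos(q[0]+q[1]) + l3*np.cos(q.sum())
y = l1*np.sin(q[0]) + l2*np.sin(q[0]+q[1]) + l3*np.sin(q.sum())
print(q, x, y)                                 # x, y reproduce (0.3, -0.7)
```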

2.1.3. Kinematics simulation

The Robotics Toolbox for MATLAB is used for the kinematic simulation. Figure 5a shows the result of applying $[\pi/2, 0, \pi/2]$ as the articular variable values. For the inverse kinematics simulation, the third link is located at

Left limb = [0.9, 0.3, 0.5]
Right limb = [0.9, 0.3, 0.5]

Figure 5.

(a) Direct kinematics simulation; (b) inverse kinematics simulation.

The resulting joint values are:

θ1 = 0, θ2 = π/2, θ3 = π/2 (right limb); θ4 = 0, θ5 = π/2, θ6 = π/2 (left limb)

2.2. Dynamic analysis

There are two well-known methodologies for obtaining the dynamic model of any mechanism: the Euler-Lagrange method, which is an energy-based method, and the Newton-Euler method, which is based on the equilibrium of forces and torques. Hereafter, the Euler-Lagrange method is used for obtaining the dynamic equations of the exoskeleton:

$$
\tau_i = \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}_i}\right) - \frac{\partial L}{\partial q_i}, \qquad L = K - P \tag{8}
$$

where i represents the DOF, qi represents the generalized coordinates (articular coordinates θi), τi represents the force or torque applied at joint i, L is the Lagrangian function, K is the kinetic energy of the system, and P is the potential energy of the system.

According to [6], a dynamic model is based on:

  • The spatial location of a robot is defined by its articular variables or by the coordinates of its last link, as well as their derivatives;

  • The forces and torques applied in the joints or in the last link of the robot; and

  • The robot dimensions.

There are two types of dynamic models:

  • Direct dynamic model: Expresses the transient evolution of the articular coordinates of the robot as a function of the forces and torques involved in the system.

  • Inverse dynamic model: Expresses the forces and torques at every joint of the system as a function of the transient evolution of the articular coordinates and their derivatives.

The direct dynamic model is obtained as follows, starting with the kinetic energy calculation:

$$
K_i = \frac{1}{2}\, m_i\, \dot{s}_i^{T}\dot{s}_i + \frac{1}{2}\,\omega_i^{T} I_i\, \omega_i \tag{9}
$$

where $m_i$ is the mass of the i-link, $\dot{s}_i$ is the translational speed of the center of gravity G of the i-link, $I_i$ is the inertia matrix of the link, and $\omega_i$ is the angular speed.

The potential energy of an i‐link is defined by,

$$
P_i = m_i\, g\, h_{ci} \tag{10}
$$

where $m_i$ is the mass of the link, $g$ is the gravity acceleration, and $h_{ci}$ is the distance, measured parallel to the gravity vector, between the origin of the link and the center of gravity of the i-link.

Therefore, the following Lagrangian function is obtained:

$$
L = \sum_{i=1}^{n} K_i - \sum_{i=1}^{n} P_i \tag{11}
$$

By applying Eqs. (9)-(11) with the parameters shown in Figure 6, we obtain the dynamic equations:

$$
\begin{aligned}
L ={}& K_1 + K_2 + K_3 - P_1 - P_2 - P_3\\
L ={}& \tfrac{1}{2}m_1 l_{g1}^2\dot\theta_1^2 + \tfrac{1}{2}I_1\dot\theta_1^2 + \tfrac{1}{2}m_2\big[l_1^2\dot\theta_1^2 + l_{g2}^2(\dot\theta_1+\dot\theta_2)^2 + 2l_1 l_{g2}\dot\theta_1(\dot\theta_1+\dot\theta_2)C_2\big]\\
&+ \tfrac{1}{2}I_2(\dot\theta_1+\dot\theta_2)^2 + \tfrac{1}{2}m_3\big[l_1^2\dot\theta_1^2 + l_2^2(\dot\theta_1+\dot\theta_2)^2 + l_{g3}^2(\dot\theta_1+\dot\theta_2+\dot\theta_3)^2 + 2l_1l_2\dot\theta_1(\dot\theta_1+\dot\theta_2)C_2\\
&+ 2l_1l_{g3}\dot\theta_1(\dot\theta_1+\dot\theta_2+\dot\theta_3)C_{23} + 2l_2l_{g3}(\dot\theta_1+\dot\theta_2)(\dot\theta_1+\dot\theta_2+\dot\theta_3)C_3\big] + \tfrac{1}{2}I_3(\dot\theta_1+\dot\theta_2+\dot\theta_3)^2\\
&- m_1 g\, l_{g1}S_1 - m_2 g(l_1 S_1 + l_{g2}S_{12}) - m_3 g(l_1 S_1 + l_2 S_{12} + l_{g3}S_{123})
\end{aligned} \tag{12}
$$

Figure 6.

i‐link parameters for dynamic modeling.

The torque at joint 1, τ1, is obtained by applying Eq. (8) to the Lagrangian of Eq. (12). The partial derivative with respect to $\dot\theta_1$ is:

$$
\begin{aligned}
\frac{\partial L}{\partial \dot\theta_1} ={}& m_1 l_{g1}^2\dot\theta_1 + I_1\dot\theta_1 + m_2\big[l_1^2\dot\theta_1 + l_{g2}^2(\dot\theta_1+\dot\theta_2) + l_1 l_{g2}C_2(2\dot\theta_1+\dot\theta_2)\big] + I_2(\dot\theta_1+\dot\theta_2)\\
&+ m_3\big[l_1^2\dot\theta_1 + l_2^2(\dot\theta_1+\dot\theta_2) + l_{g3}^2(\dot\theta_1+\dot\theta_2+\dot\theta_3) + l_1l_2C_2(2\dot\theta_1+\dot\theta_2)\\
&+ l_1l_{g3}C_{23}(2\dot\theta_1+\dot\theta_2+\dot\theta_3) + l_2l_{g3}C_3(2\dot\theta_1+2\dot\theta_2+\dot\theta_3)\big] + I_3(\dot\theta_1+\dot\theta_2+\dot\theta_3)
\end{aligned} \tag{13}
$$

The time derivative of Eq. (13) is:

$$
\begin{aligned}
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot\theta_1}\right) ={}& \ddot\theta_1\big[m_1l_{g1}^2 + I_1 + m_2(l_1^2+l_{g2}^2+2l_1l_{g2}C_2) + I_2 + m_3(l_1^2+l_2^2+l_{g3}^2+2l_1l_2C_2+2l_1l_{g3}C_{23}+2l_2l_{g3}C_3) + I_3\big]\\
&+ \ddot\theta_2\big[m_2(l_{g2}^2+l_1l_{g2}C_2) + I_2 + m_3(l_2^2+l_{g3}^2+l_1l_2C_2+l_1l_{g3}C_{23}+2l_2l_{g3}C_3) + I_3\big]\\
&+ \ddot\theta_3\big[m_3(l_{g3}^2+l_1l_{g3}C_{23}+l_2l_{g3}C_3) + I_3\big]\\
&- m_2l_1l_{g2}S_2(2\dot\theta_1\dot\theta_2+\dot\theta_2^2) - m_3l_1l_2S_2(2\dot\theta_1\dot\theta_2+\dot\theta_2^2)\\
&- m_3l_1l_{g3}S_{23}(2\dot\theta_1+\dot\theta_2+\dot\theta_3)(\dot\theta_2+\dot\theta_3) - m_3l_2l_{g3}S_3(2\dot\theta_1\dot\theta_3+2\dot\theta_2\dot\theta_3+\dot\theta_3^2)
\end{aligned} \tag{14}
$$

The partial derivative $\partial L/\partial \theta_1$ is:

$$
\frac{\partial L}{\partial \theta_1} = -m_1 g\, l_{g1}C_1 - m_2 g(l_1C_1 + l_{g2}C_{12}) - m_3 g(l_1C_1 + l_2C_{12} + l_{g3}C_{123}) \tag{15}
$$

Finally, τ1 is calculated:

$$
\begin{aligned}
\tau_1 ={}& \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot\theta_1}\right) - \frac{\partial L}{\partial \theta_1}\\
\tau_1 ={}& \ddot\theta_1\big[m_1l_{g1}^2 + I_1 + m_2(l_1^2+l_{g2}^2+2l_1l_{g2}C_2) + I_2 + m_3(l_1^2+l_2^2+l_{g3}^2+2l_1l_2C_2+2l_1l_{g3}C_{23}+2l_2l_{g3}C_3) + I_3\big]\\
&+ \ddot\theta_2\big[m_2(l_{g2}^2+l_1l_{g2}C_2) + I_2 + m_3(l_2^2+l_{g3}^2+l_1l_2C_2+l_1l_{g3}C_{23}+2l_2l_{g3}C_3) + I_3\big]\\
&+ \ddot\theta_3\big[m_3(l_{g3}^2+l_1l_{g3}C_{23}+l_2l_{g3}C_3) + I_3\big]\\
&- m_2l_1l_{g2}S_2(2\dot\theta_1\dot\theta_2+\dot\theta_2^2) - m_3l_1l_2S_2(2\dot\theta_1\dot\theta_2+\dot\theta_2^2)\\
&- m_3l_1l_{g3}S_{23}(2\dot\theta_1+\dot\theta_2+\dot\theta_3)(\dot\theta_2+\dot\theta_3) - m_3l_2l_{g3}S_3(2\dot\theta_1\dot\theta_3+2\dot\theta_2\dot\theta_3+\dot\theta_3^2)\\
&+ m_1g\,l_{g1}C_1 + m_2g(l_1C_1+l_{g2}C_{12}) + m_3g(l_1C_1+l_2C_{12}+l_{g3}C_{123})
\end{aligned} \tag{16}
$$

The same procedure is applied for joints 2 and 3:

$$
\begin{aligned}
\tau_2 ={}& \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot\theta_2}\right) - \frac{\partial L}{\partial \theta_2}\\
\tau_2 ={}& \ddot\theta_1\big[m_2(l_{g2}^2+l_1l_{g2}C_2) + I_2 + m_3(l_2^2+l_{g3}^2+l_1l_2C_2+l_1l_{g3}C_{23}+2l_2l_{g3}C_3) + I_3\big]\\
&+ \ddot\theta_2\big[m_2l_{g2}^2 + I_2 + m_3(l_2^2+l_{g3}^2+2l_2l_{g3}C_3) + I_3\big] + \ddot\theta_3\big[m_3(l_{g3}^2+l_2l_{g3}C_3) + I_3\big]\\
&- m_2l_1l_{g2}S_2\dot\theta_1\dot\theta_2 - m_3l_1l_2S_2\dot\theta_1\dot\theta_2 - m_3l_1l_{g3}S_{23}\dot\theta_1(\dot\theta_2+\dot\theta_3) - m_3l_2l_{g3}S_3(2\dot\theta_1+2\dot\theta_2+\dot\theta_3)\dot\theta_3\\
&+ m_2l_1l_{g2}S_2\dot\theta_1(\dot\theta_1+\dot\theta_2) + m_3l_1l_2S_2\dot\theta_1(\dot\theta_1+\dot\theta_2) + m_3l_1l_{g3}S_{23}\dot\theta_1(\dot\theta_1+\dot\theta_2+\dot\theta_3)\\
&+ m_2g\,l_{g2}C_{12} + m_3g(l_2C_{12}+l_{g3}C_{123})
\end{aligned} \tag{17}
$$
$$
\begin{aligned}
\tau_3 ={}& \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot\theta_3}\right) - \frac{\partial L}{\partial \theta_3}\\
\tau_3 ={}& \ddot\theta_1\big[m_3(l_{g3}^2+l_1l_{g3}C_{23}+l_2l_{g3}C_3) + I_3\big] + \ddot\theta_2\big[m_3(l_{g3}^2+l_2l_{g3}C_3) + I_3\big] + \ddot\theta_3\big[m_3l_{g3}^2 + I_3\big]\\
&+ m_3l_1l_{g3}S_{23}\dot\theta_1^2 + m_3l_2l_{g3}S_3(\dot\theta_1+\dot\theta_2)^2 + m_3g\,l_{g3}C_{123}
\end{aligned} \tag{18}
$$
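Eqs. (16)-(18) can be evaluated directly for inverse dynamics, that is, computing the joint torques required for a given motion. The Python sketch below does so for one limb, grouping the inertia-matrix entries shared by the three equations and algebraically simplifying the velocity-product terms of Eq. (17); all numerical parameters in the example are assumptions, since the chapter does not list them.

```python
import numpy as np

def joint_torques(theta, dtheta, ddtheta, m, l, lg, I, g=9.81):
    """Inverse dynamics of one limb: evaluates Eqs. (16)-(18) for a given
    state (angles, velocities, accelerations) of the three joints."""
    t1, t2, t3 = theta
    d1, d2, d3 = dtheta
    a1, a2, a3 = ddtheta
    m1, m2, m3 = m; l1, l2, l3 = l
    lg1, lg2, lg3 = lg; I1, I2, I3 = I
    C1, C2, C3 = np.cos(t1), np.cos(t2), np.cos(t3)
    S2, S3 = np.sin(t2), np.sin(t3)
    C12, C23 = np.cos(t1 + t2), np.cos(t2 + t3)
    S23, C123 = np.sin(t2 + t3), np.cos(t1 + t2 + t3)

    # Inertia (mass) matrix entries shared by Eqs. (16)-(18)
    M11 = (m1*lg1**2 + I1 + m2*(l1**2 + lg2**2 + 2*l1*lg2*C2) + I2
           + m3*(l1**2 + l2**2 + lg3**2 + 2*l1*l2*C2 + 2*l1*lg3*C23
                 + 2*l2*lg3*C3) + I3)
    M12 = (m2*(lg2**2 + l1*lg2*C2) + I2
           + m3*(l2**2 + lg3**2 + l1*l2*C2 + l1*lg3*C23 + 2*l2*lg3*C3) + I3)
    M13 = m3*(lg3**2 + l1*lg3*C23 + l2*lg3*C3) + I3
    M22 = m2*lg2**2 + I2 + m3*(l2**2 + lg3**2 + 2*l2*lg3*C3) + I3
    M23 = m3*(lg3**2 + l2*lg3*C3) + I3
    M33 = m3*lg3**2 + I3

    tau1 = (M11*a1 + M12*a2 + M13*a3                       # Eq. (16)
            - (m2*l1*lg2 + m3*l1*l2)*S2*(2*d1*d2 + d2**2)
            - m3*l1*lg3*S23*(2*d1 + d2 + d3)*(d2 + d3)
            - m3*l2*lg3*S3*(2*d1*d3 + 2*d2*d3 + d3**2)
            + m1*g*lg1*C1 + m2*g*(l1*C1 + lg2*C12)
            + m3*g*(l1*C1 + l2*C12 + lg3*C123))
    tau2 = (M12*a1 + M22*a2 + M23*a3                       # Eq. (17), with
            + (m2*l1*lg2 + m3*l1*l2)*S2*d1**2              # velocity terms
            + m3*l1*lg3*S23*d1**2                          # simplified
            - m3*l2*lg3*S3*(2*d1 + 2*d2 + d3)*d3
            + m2*g*lg2*C12 + m3*g*(l2*C12 + lg3*C123))
    tau3 = (M13*a1 + M23*a2 + M33*a3                       # Eq. (18)
            + m3*l1*lg3*S23*d1**2 + m3*l2*lg3*S3*(d1 + d2)**2
            + m3*g*lg3*C123)
    return np.array([tau1, tau2, tau3])

# Example with assumed leg parameters (masses kg, lengths m, inertias kg m^2)
tau = joint_torques([0.1, -0.3, 0.2], [0.5, 0.2, -0.1], [1.0, 0.0, 0.5],
                    m=(8.0, 4.0, 1.0), l=(0.44, 0.44, 0.12),
                    lg=(0.22, 0.22, 0.06), I=(0.15, 0.08, 0.01))
print(tau)
```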

2.3. Dynamic simulation

The Simulink graphical programming environment is used for simulating the dynamic response of the exoskeleton when a variable torque is applied to every joint. Figure 7 shows the transient response of the system for this simulation.

Figure 7.

Dynamic simulation of the exoskeleton in Simulink.


3. Communication networks for robotics

The exoskeleton communication network is one of the key design issues to be solved. The main goal of such a network is the proper integration of the exoskeleton mechanism, sensors, actuators, and MPU (Main Process Unit). Several state-of-the-art works describe details of the communication system; the following section reviews some relevant recent publications.

3.1. Related work

CORBYS is a cognitive robotic system architecture proposed for a robot-assisted gait rehabilitation device [7]. Its purpose is to allow the integration of high-level cognitive control modules, a semantically driven self-awareness module, and a cognitive framework for the anticipation of human behavior based on biologically inspired information-theoretic principles. The main focus of the work is on the low-level real-time physical layer control system. The proposed CORBYS architecture consists of cognitive, executive, control, and physical layers. At the cognitive level, the decision on actuation is made by evaluating the human state through sensor data processing. This level is also responsible for learning the system model and generating a reference trajectory. The main objective of this layer is to anticipate human behavior and adapt accordingly the commands sent to the real-time control module. The executive layer is responsible for the communication between all layers and for coordinating the general system functionality. Heartbeat messages are used to verify the proper functioning of each module. The CORBYS modules are deployed on individual computers connected to a general-purpose network. A parameter server stores and distributes application parameters. A real-time control system executes high-level commands from the cognitive layer and controls the physical layer actuators. Sensor data are gathered and centralized by a real-time data server. These four layers are integrated using the robot operating system (ROS).

At the physical layer, smart actuators consist of a brushless DC motor, gearbox, sensors, logic, and power electronics. Additionally, they integrate three communication protocols: RS-232, CANopen, and Profibus DP; the last two are suitable for real-time operation. Regarding smart sensors, the authors consider two categories: one for non-real-time applications, intended for fatigue and intention/attention-to-motion sensing, and another for the real-time control subsystem. Sensor signals are processed by a local microcontroller and provided digitally through the communication network. Sensors include magnetic encoders at the orthosis joints and 6-DOF force/torque sensors whose data are processed by self-contained hardware. The communication network for the cognitive and executive layers is based on Ethernet, while the control and physical layers use the EtherCAT (Ethernet for Control Automation Technology) protocol. This is a real-time industrial Ethernet fieldbus with a data rate of 100 Mbps [8], optimized for short cyclic process data (10 μs), low jitter for accurate synchronization (1 μs), and low hardware cost [9]. Since the sensors and actuators of the system are not EtherCAT enabled, the authors of CORBYS designed a dedicated low-level control module that acts simultaneously as a gateway and as a local controller.

The authors in [10] propose a lower limb exoskeleton for physical assistance and rehabilitation. The purpose of the system is to provide powered assistance in the sagittal plane at both hip and knee joints. It essentially consists of a motorized leg device with four degrees of freedom covering hip, knee, and ankle. The system combines an exoskeleton robot for the legs with an external supporting end-effector robot for the pelvis. The exoskeleton sensors are Hall-effect sensors and high-resolution encoders of 2048 pulses per cycle at each joint. It also includes a custom distributed embedded drive system. The actuation elements are Maxon DC flat brushless motors. The proposed architecture includes a four-layer control scheme: exoskeleton, distributed embedded system, servomotor, and control computer. Specifically, the robot control architecture that interfaces the control algorithm with the robot consists of the exoskeleton, sensors, and actuators; a control unit for real-time algorithm execution, which includes the acquisition card interfacing the sensors, actuators, and controller; and a host computer running the application that serves as the doctor-user interface. The main function of the control system is, in the first place, to gather the position, angular velocity, and acceleration signals needed for control, data collection, and evaluation. This information is used to generate the signals required to activate the drive motors. A low-level driver connected to a Windows XP PC handles the non-real-time tasks. The interface between this WinXP controller and the active orthosis is based on a controller area network (CAN). Specifically, the communication is done through a wired CAN bus using the CANopen protocol and a dynamic link library developed by the authors. During the execution of the control strategy, this interface is used for real-time signal monitoring and controller parameter tuning. The main objective is that the user should be able to track any continuous desired trajectory, with assistance of the knee and ankle movement. For this, sensors attached to the limb measure its motion, an error-canceling algorithm performs real-time discrimination of the undesired motion components, and the error information is the input to the controller, which generates the actuator action required to minimize the error.

In [11], with the purpose of enhancing the real-time movement detection of a lower limb exoskeleton, the authors propose an inertial sensor network based on CAN. To acquire the inertial signals, sensor nodes are distributed over the lower limbs, trunk, and waist of the exoskeleton. Each inertial sensor includes a three-axis gyroscope, accelerometer, and magnetometer, together with an ARM microprocessor and a CAN communication module. The microprocessor is responsible for signal conversion, filtering, and data merging. The complete body area network is built on CAN to transfer the sensor data to a central perception node. A workstation for data analysis receives the data from the central perception node and records the exoskeleton postures. In more detail, the CAN bus network gathers and integrates the data from eight sensor nodes placed on the left and right feet, calves, and thighs, on the waist, and on the back of the exoskeleton system. The communication protocol is based on CAN 2.0a, whose frame structure defines an 11-bit identifier. Among others, messages can be data, remote, or error frames. To improve the real-time performance, the medium access control for the shared bus is collision free. Specifically, it is based on time division multiplexing (TDM), i.e., tCAN, so each sensor node has an exclusive time slot for data transmission. The required synchronization is carried out by means of a special synchronous message and, alternatively, by the local clock in each sensor node. The authors define the perception cycle to be 10 ms, divided into 10 time slots (one for the synchronization message, one as a backup time slot, and eight for the sensors). The authors claim that this protocol reduces the data load by 16.7% compared with a system that transmits the original raw sensor data.

A lower limb rehabilitation exoskeleton suit that includes a biomimetic framework, perception subsystem, controller, electrical motors, driving system, and power supply subsystem is presented in [12]. The perception system includes sensors (pressure sensors, gyroscopes, encoders, and pressure switches) and a CAN-based communication network. Data from the sensors are gathered by seven sensor-node microsystems and are fundamental for obtaining the movement position and perceiving the movement intention. The CAN communication bus uses a single-line topology. The master node consists of an embedded controller that uses a CAN bus adapter for network connection, while the sensors transmit their information directly onto the bus.

A bed-type lower limb rehabilitation training robot consisting of a robot bed and an exoskeleton is presented in [13]. The perceptual system is a multisource information perception system, which includes detection methods for bioelectric (EEG, EMG, oxygen) and physical (joint angles, interaction forces) signals and the integration of a sensor network based on a multiagent system. The integration of the several control software components, the variety of data, and the different communication protocols is achieved by means of a multi-agent software system. A modular design was selected for the subsystems, and the communication between them is based on TCP/IP. Data communication is asynchronous, and the heterogeneous devices are integrated by means of an Ethernet protocol.

As an alternative to a wired exoskeleton communication network, the authors in [14] propose a wirelessly remote-controlled arm exoskeleton intended for upper limb rehabilitation. The robot system can be remotely controlled through a ZigBee-based network [15] implemented with low-energy-consumption XBee modules. The physiotherapist controls the robot and receives feedback information by means of a LabVIEW-based human-machine interface.

Under these premises, and regarding the exoskeleton communication network, our future line of work will focus on the performance comparison between wired and wireless solutions.

3.2. Architecture

Figure 8 shows the architecture of the communication system. It has one main process unit (MPU) and six sensor and actuator units (SAU). The MPU components are a microcomputer, an accelerometer, a module for biological signal acquisition (EEG and EMG), and a communication device. Each SAU comprises a motor, an encoder, and a communication device; the SAU+ units additionally have two strain gauges at the ankles.

Figure 8.

Architecture of the exoskeleton system.

The MPU performs the control of the exoskeleton using the data generated by the SAUs. The SAUs implement the signal acquisition from the encoder and the strain gauges (if the SAU is located at the ankle), the communication (transmission and reception), and the actuation commanded by the control running on the MPU.

The microcomputer used in the MPU is an embedded device with enough processing power to carry out the control task. Commercially available options include the Raspberry Pi, BeagleBone, and Odroid, among others.

Communication devices are used to reduce the wiring in the exoskeleton. The communication device in the MPU has two interfaces, while each SAU has one. A bus network topology is used for the communication, and each communication interface in the MPU operates one leg.

3.3. Communication protocol

The protocol selected for the required real-time communications is CAN, a serial communication protocol. Its advantage is the support of real-time control with a high level of reliability, a necessary feature for the exoskeleton system: it offers a guaranteed maximum latency.

CAN is standardized in ISO 11898-1, whose latest revision dates from 2015. The standard defines a multicast-based communication protocol originally intended for the automotive industry; currently, its applications include factory and plant control, robotics, medical devices, and avionics systems [16]. CAN is a broadcast digital bus that supports data rates from 20 kbps to 1 Mbps, depending on the bus length and transceiver speed. At the maximum speed of 1 Mbps the bus length is limited to about 40 meters, far more than the exoskeleton requires, so 1 Mbps is selected. The selection of a CAN-based communication network was mainly motivated by its reliability and deterministic operation. Such behavior is based on node priorities, which depend on the node identification (node ID): the node with the lowest ID value has the highest priority. If two or more nodes attempt to transmit simultaneously over the shared bus, the node with the lower ID automatically gains access to the bus. In this way, there is no need for master or slave CAN devices. Version CAN 2.0a uses 11-bit identifiers (standard frame), and version CAN 2.0b uses 29-bit identifiers (extended frame). Since the exoskeleton system requires only seven units, CAN 2.0a can be used.

3.4. Data types

The three types of units (MPU, SAU, and SAU+) generate data. The MPU sends actuation data with the position, in degrees, of the motors, while the SAUs and SAUs+ send sensor information from the encoders and strain gauges. All units send their information in the data field of a CAN 2.0a frame, shown in Figure 9. The data field of a CAN frame can hold from 0 to 64 bits, which means a maximum of 8 bytes can be sent per frame.

Figure 9.

CAN 2.0a frame.

The data transmitted by the MPU are basically the positions, in degrees, of the motors located at the two SAUs and the SAU+ of each leg. The MPU broadcasts two MPU frames (Figure 10) on the bus, one per leg; the total size of the MPU data field is 24 bits (3 bytes). Each frame is received by both SAUs and the SAU+ of the corresponding leg, which read the same frame: the first eight bits are processed by the SAU located at the hip, the next eight bits by the SAU located at the knee, and the last eight bits by the SAU+, located at the ankle.

Figure 10.

MPU frame.

SAUs send a data field of eight bits, called the SAU frame. The SAU frame (Figure 11) carries the position of the motor in degrees, obtained from the encoder; these data are collected by the MPU to perform the control. The SAU+ frame (Figure 11) additionally has one byte per strain gauge, and there are two gauges per leg, so the total size of its data field is 24 bits. Each unit has a time slot on the bus to transmit its data (a packing sketch follows Figure 11).

Figure 11.

SAU frame.
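The byte layout of the MPU and SAU+ data fields can be expressed compactly in code. The sketch below assumes a one-byte-per-signal encoding (the chapter defines the field sizes but not the scaling), and the commented python-can usage shows how such a payload would travel in a standard CAN 2.0a frame.

```python
import struct

def pack_mpu_frame(hip_deg, knee_deg, ankle_deg):
    """Build the 3-byte MPU data field: hip, knee, ankle set points.
    One byte per joint, clamped to 0-255 degrees (scaling assumed)."""
    to_byte = lambda d: max(0, min(255, int(round(d))))
    return bytes([to_byte(hip_deg), to_byte(knee_deg), to_byte(ankle_deg)])

def unpack_sau_plus_frame(data):
    """Split the 3-byte SAU+ data field: encoder angle plus two gauges."""
    angle, gauge_a, gauge_b = struct.unpack("BBB", data)
    return angle, gauge_a, gauge_b

print(unpack_sau_plus_frame(pack_mpu_frame(30, 45, 10)))

# With the python-can library (an assumption; any CAN stack would do), the
# payload travels in a standard 11-bit-identifier CAN 2.0a frame:
# import can
# msg = can.Message(arbitration_id=0x01, data=pack_mpu_frame(30, 45, 10),
#                   is_extended_id=False)   # low ID = high bus priority
```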


4. Patient‐exoskeleton interaction system design

Since a robotic structure such as an exoskeleton includes a human component, the amount of interaction the individual may have with it is directly related to the degree of impairment. A patient wearing an exoskeleton becomes part of it, and the way the patient interacts with the device must be considered a core part of the whole system. Since the ability to walk depends on the degree of impairment of the subject, exoskeletons are built either to work independently as isolated structures with auto-equilibrium mechanisms, where the patient becomes only a passenger, or as a complementary structure, where the patient controls how the exoskeleton must behave. We consider a patient with partial leg impairment, who can communicate his motion intention to the exoskeleton by attempting to walk.

4.1. Signal acquisition and feedback

An exoskeleton structure involves many input signals related to two main sources: first, structural feedback, which is mainly related to the actual movement and relative position of the mobile parts of the exoskeleton; and second, human feedback, which is related to the biological electrical signals generated by the nervous system as a result of the exoskeleton's action on the human body and of the subject's intention to move. While the first set of signals can be obtained using rotation encoders, torque sensors, force sensors, etc., the latter must be acquired directly from the subject wearing the exoskeleton. For instance, to register the electrical activity generated within muscles, electromyographic (EMG) signals are retrieved through wired electrodes placed directly on the subject's skin, specifically on the muscles related to the exoskeleton movement, such as those of the legs and hips. Movement intention may be inferred by analyzing electroencephalogram (EEG) signals. Brain-computer interfaces are regularly used to allow computers to acquire brain activity related to actual body translation or motion intention. Some external interfaces may also be used to recognize human motion intention, because they are quick to put on and easy to learn to control. Among the many proposals found in the literature, joystick-like devices stand out as the most relevant. Normally, these devices take advantage of the body parts that remain fully controllable by partially impaired persons, such as eye blinking, tongue movement, hands, arms, shoulders, etc. They usually exploit electromyographic signals captured from muscles to generate linearly separable patterns that are associated with a limited set of motion functions. Basic motion function commands are: go forward, go backward, sit down, stand up, stop, etc.

There are many other sources of information that can be leveraged to help impaired persons regain independent walking. For instance, the use of cameras and ultrasonic sensors has become more frequent among researchers and computer vision enthusiasts, mainly due to the high computational power reached in recent years. Although their study has gained attention, their analysis goes beyond the scope of this chapter. The interested reader may refer to [17–20].

4.2. EMG interface and signal conditioning

Electromyographic (EMG) signals represent the electrical activity found in body muscles in relation to motion and sensation. These signals travel between the brain and the muscles through the spinal cord to control body movement. Normally, acquired EMG signals are converted into muscle force variables that feed the overall exoskeleton control strategy.

Several authors have proposed the use of EMG signals in the implementation of lower body exoskeletons, since human motion intention is clearly reflected in the leg muscles.

EMG signals occupy a relatively low frequency band, so it is possible to acquire them using sampling rates of 512–1000 samples per second. These signals are also low-pass filtered to remove high-frequency and noise components. More details on conditioning EMG signals can be found in [21].
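A typical conditioning chain of this kind (low-pass filtering followed by rectification and envelope extraction) might look as follows; the cutoff frequencies are assumptions consistent with the sampling rates quoted above.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def emg_envelope(raw, fs=1000.0):
    """Condition a raw EMG record: low-pass filter, full-wave rectify,
    and extract a smooth amplitude envelope usable as a muscle-activity
    variable."""
    # Low-pass at 450 Hz removes high-frequency noise (cutoff assumed; the
    # chapter specifies only 512-1000 Hz sampling and low-pass filtering).
    sos = butter(4, 450.0, btype="low", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, raw)
    rectified = np.abs(filtered)
    # A 5 Hz low-pass over the rectified signal gives the linear envelope.
    sos = butter(2, 5.0, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, rectified)

# Example on two seconds of synthetic, amplitude-modulated noise
t = np.arange(0, 2.0, 1.0 / 1000.0)
raw = np.random.randn(t.size) * (0.5 + 0.5 * np.sin(2 * np.pi * t))
print(emg_envelope(raw)[:5])
```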

The use of EMG signals for controlling exoskeleton structures emerged from the observation that these signals, although present in the muscle (the actuator), are unable to generate movement. The basic idea is therefore to use these residual motion-related electrical signals to drive external mechanisms based on electric motors, pneumatic pistons, etc.

In [22], EMG signals are used to control a lower body exoskeleton, and the system is tested on the right leg of a subject. The authors placed differential electrodes on the rectus femoris, vastus lateralis, and semitendinosus muscles to extract information related to knee flexion. Signals are acquired with a 12-bit analog-to-digital converter (ADC). Moreover, they explore two different control methods: one based on a human body model that is used to determine the actuator controllers' inputs, and a second in which EMG signals are converted into forces that are compared with the forces present in the exoskeleton, so that the difference can be used as the controller input. The work presented in [23] describes the use of EMG signals and the Bayesian information criterion (BIC) combined with well-known feature extraction techniques to feed a linear discriminant function that can effectively classify eight distinct phases of gait.

In [24], the authors use surface electromyography (sEMG) signals and a least-squares support vector machine classifier to recognize six different motion patterns. The feature extraction phase involves the wavelet transform and two different decomposition substages, which yielded results with very high accuracy. Once the sEMG signals are classified, they can easily be linked to a specific motion pattern of the exoskeleton. Another control strategy based on EMG signals is presented in [25].

4.3. EEG interface and signal conditioning

Like many other functions of the human body, motion originates in the brain. Electroencephalogram (EEG) signals reflect brain activity; therefore, they are normally considered whenever it is necessary to associate some human experience with a particular pattern. Although thoughts of moving a specific body part for deliberate activities, such as shaking hands or playing video games, are caused by human will, many activities that imply body motion seem to be generated unconsciously. Nobody thinks about how to walk unless interested in doing so in a very specific way, different from the habitual one. Nonetheless, the muscles react to an order that comes from the brain, which indicates that the kinematic components of the human musculoskeletal system must work harmonically together to move the body to another location. We may think about taking a glass from a table; however, we do not really care about the actual path our hand follows from wherever it is located to the position of the glass. Moreover, certain translation activities rely heavily on our natural feedback senses of sight and equilibrium.

EEG signals are retrieved by brain-computer interfaces, which can use invasive or non-invasive methods to register electrical brain activity. The latter are preferred, since they do not require introducing foreign objects, such as electrodes, into the human body. Tests in animals have shown that although the insertion of physical electrodes into the brain may be a suitable way to obtain and introduce electrical signals, the risks involved are still too high, mainly because the long-term effects and possible brain damage in humans are still unknown. Therefore, non-invasive methods involving devices that require only a set of small electrodes installed on the head have gained the attention of researchers and have reported satisfactory preliminary results without surgery or physical invasion of the brain. Devices such as the MindWave from NeuroSky or the Emotiv EPOC have proven to be effective ways to acquire human brain activity [26]. As the complete understanding of brain functioning is still in development, we are currently on a path of discovery and research guided by experimentation.

4.3.1. EEG signals

Human brain activity can be acquired through an EEG device. These signals comprise components in different frequency bands, ranging from 0 to 30 Hz, conventionally separated into five well-known kinds of waves called α, β, γ, δ, and θ. Because of this very low frequency range, brain-computer interface (BCI) sampling rates of 512–1000 samples per second are sufficient. Normally, a raw signal is acquired and passed through a filter bank that separates the waves of interest.
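Such a filter bank might be sketched as follows; the exact band edges are conventional values assumed for illustration, since the chapter only names the five waves (and the γ band, in particular, is often taken above the 30 Hz figure quoted here).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Nominal EEG band edges in Hz (assumed conventional values)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def filter_bank(raw, fs=512.0):
    """Split a raw EEG channel into the five conventional wave bands."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        out[name] = sosfiltfilt(sos, raw)
    return out

# Example on one second of synthetic data sampled at 512 Hz
waves = filter_bank(np.random.randn(512))
print({name: wave.shape for name, wave in waves.items()})
```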

4.4. Related work

Current brain-computer interfaces require subjects to wear a cap-like device that includes a set of electrodes located at different positions, aiming to capture the activity generated in the brain in relation to what subjects are actually experiencing. It is well known that specific parts of the head reflect brain activity related to specific activities such as concentration, movement, and speech.

The authors in [27] use the Emotiv EPOC device to control a robotic arm. They point out that body motion is known to be reflected at the standardized 10-20 brain-computer interface (BCI) electrode locations; for instance, the left front side of the head, with electrode locations AF3, F3, F7, and FC5, effectively reflects brain activity related to movement of the right side of the body. The general strategy is to use three different feature extraction methods on the brain signals: the wavelet packet decomposition transform, to separate the acquired signal into a vector space; the first 10 coefficients of the fast Fourier transform, as the relevant components of each EEG derivation; and finally, principal component analysis (PCA), to reduce the dimensionality of the space. A neural network classifier is then used to determine the different subject movement intentions, comparing the results obtained with the three feature extraction methods.

In [28], the authors propose the use of a commercial, self-balancing exoskeleton that is tested with a BCI as the acquisition device feeding an artificial neural decoder that determines the user's motion intention. They also propose the use of intracortical electrodes inserted through the cranium to study how intracortical networks evolve with extended use of the exoskeleton.

The authors of [29] reach 98% accuracy in predicting motion intention using a 64-channel BCI following the standard 10-20 placement. The interface provides signals sampled at 100 Hz, which are sent wirelessly to a computer. They use digital filters to extract the δ wave over a 200 ms sliding window and then apply a local Fisher linear discriminant (FLD) classifier.

4.5. General approach

To approach the use of EEG and EMG signals for predicting human motion intention in a general way, one must first select the best BCI device, considering how invasive it is and how easy it is to wear on a regular basis, and second, choose the set of EMG electrodes and their relative locations. Most BCI devices return a raw EEG signal that must be filtered to extract the waves of interest. IIR digital filters are normally adopted because of their flexibility and the ease with which they can be implemented purely in software.

The feature extraction stage is one of the most important phases in the design of a classifier. Current research focuses on domain-change strategies such as the Fourier, wavelet, and discrete cosine transforms. As for the classifier, neural networks have shown good results; nonetheless, researchers are now also using techniques such as support vector machines (SVMs). A sketch of such a pipeline follows.
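The sketch below combines FFT features, PCA, and an SVM on synthetic stand-in data; the window length, feature count, and labels are all assumptions, and real data would come from the BCI device.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fft_features(window, n_coeffs=10):
    """Magnitudes of the first FFT coefficients of an EEG window, one of
    the feature extraction strategies discussed above."""
    return np.abs(np.fft.rfft(window))[:n_coeffs]

# Hypothetical training set: EEG windows with intention labels
# (e.g., 0 = rest, 1 = walk)
rng = np.random.default_rng(0)
windows = rng.standard_normal((200, 256))
labels = rng.integers(0, 2, 200)

X = np.array([fft_features(w) for w in windows])
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.predict(X[:5]))
```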


5. Control design

Figure 12 shows a hierarchical control architecture for efficient management of the interaction between the lower limb exoskeleton and the user. The architecture proposes three layers of execution, each with specific functionalities:

  • High level: This layer performs activity-mode recognition, which enables the controller to switch among mid-level controllers appropriate for different locomotion tasks, such as level walking, stair ascent, standing, hopping, etc.

  • Mid level: This layer maps the user's intentions into orders/set points sent to the local controllers installed at every joint of the exoskeleton.

  • Low level: This layer performs real-time control at each joint by executing feedforward and feedback control loops (a minimal sketch follows Figure 12).

Figure 12.

Hierarchical control architecture for efficient control of lower limb exoskeleton.
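A minimal sketch of the low-level layer is given below, assuming a PD feedback loop around each joint plus a model-based feedforward torque (for instance, the gravity term of Eq. (16)); the gains and numbers are illustrative only.

```python
import numpy as np

class JointController:
    """Low-level loop for one joint: feedforward torque plus PD feedback,
    matching the feedforward/feedback structure of the low-level layer.
    Gains and the feedforward model are illustrative assumptions."""
    def __init__(self, kp=80.0, kd=2.0):
        self.kp, self.kd = kp, kd

    def update(self, theta_ref, dtheta_ref, theta, dtheta, tau_ff):
        # tau_ff: model-based feedforward (e.g., gravity term of Eq. (16));
        # the PD terms correct the remaining tracking error.
        return (tau_ff + self.kp * (theta_ref - theta)
                + self.kd * (dtheta_ref - dtheta))

# One control tick: track a mid-level set point of 30 degrees at the knee
ctrl = JointController()
tau = ctrl.update(np.deg2rad(30), 0.0, np.deg2rad(25), 0.1, tau_ff=12.0)
print(tau)
```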

Several approaches can perform the switching among mid-level controllers. The easiest and safest choice for this layer is a discrete-event controller. Figure 13 shows the proposed transition states of such a controller when performing gait routines; a minimal sketch follows the figure.

Figure 13.

Transition states for gait control in the exoskeleton.
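A discrete-event selector of this kind can be sketched as a small finite-state machine; the state and event names below are assumptions for illustration, since Figure 13 is not reproduced in text form.

```python
# Transition table: (current state, event) -> next state
TRANSITIONS = {
    ("standing", "start_gait"): "left_swing",
    ("left_swing", "heel_strike"): "right_swing",
    ("right_swing", "heel_strike"): "left_swing",
    ("left_swing", "stop_request"): "standing",
    ("right_swing", "stop_request"): "standing",
}

class GaitStateMachine:
    def __init__(self):
        self.state = "standing"

    def on_event(self, event):
        """Switch mid-level controllers only on recognized events;
        unknown events leave the current state unchanged."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

fsm = GaitStateMachine()
for ev in ["start_gait", "heel_strike", "heel_strike", "stop_request"]:
    print(fsm.on_event(ev))
```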

References

  1. S. M. Rajesh, "Design of human exo-skeleton suit for rehabilitation of hemiplegic people," Procedia Engineering, vol. 51, pp. 544–553, 2013.
  2. J. L. Pons, Wearable Robots: Biomechatronic Exoskeletons. Wiley, 2008. doi: 10.1002/9780470987667.
  3. M. R. Tucker et al., "Control strategies for active lower extremity prosthetics and orthotics: a review," Journal of NeuroEngineering and Rehabilitation, vol. 12, p. 1, 2015.
  4. N. Li, L. Yan, H. Qian, H. Wu, J. Wu, and S. Men, "Review on lower extremity exoskeleton robot," Open Automation and Control Systems Journal, vol. 7, pp. 441–453, 2015.
  5. U. Onen, F. M. Botsali, M. Kalyoncu, M. Tinkir, N. Yilmaz, and Y. Sahin, "Design and actuator selection of a lower extremity exoskeleton," IEEE/ASME Transactions on Mechatronics, vol. 19, no. 2, pp. 623–632, Apr. 2014.
  6. T. Yoshikawa, Foundations of Robotics: Analysis and Control. MIT Press, Cambridge, MA, USA, 1990.
  7. V. Grosu et al., "Real-time physical layer architecture for CORBYS gait rehabilitation robot," IEEE International Conference on Rehabilitation Robotics (ICORR), 2015, pp. 606–611.
  8. I. K. Jung and S. Lim, "An EtherCAT based control system for human-robot cooperation," 16th International Conference on Methods and Models in Automation and Robotics, 2011, pp. 341–344.
  9. EtherCAT Technology Group, "EtherCAT - the Ethernet Fieldbus." [Online]. Available: https://www.ethercat.org/en/technology.html. [Accessed: 29-Dec-2016].
  10. R. Lu, Z. Li, C. Y. Su, and A. Xue, "Development and learning control of a human limb with a rehabilitation exoskeleton," IEEE Transactions on Industrial Electronics, vol. 61, no. 7, pp. 3776–3785, 2014.
  11. X. Zhang, Y. Shi, and Y. Zhang, "A CAN-based inertial sensor network for lower limb exoskeleton," IEEE International Conference on Communication Problem-Solving, 2014, pp. 473–476.
  12. X. Zhang, Z. Xiang, Q. Lin, and Q. Zhou, "The design and development of a lower limbs rehabilitation exoskeleton suit," ICME International Conference on Complex Medical Engineering, 2013, pp. 307–312.
  13. J. Chen, X. Zhang, and R. Li, "A novel design approach for lower limb rehabilitation training robot," IEEE International Conference on Automation Science and Engineering (CASE), 2013, pp. 554–557.
  14. Y. Bouteraa and I. Ben Abdallah, "Exoskeleton robots for upper-limb rehabilitation," 13th International Multi-Conference on Systems, Signals and Devices (SSD), 2016, pp. 1–6.
  15. ZigBee Alliance. [Online]. Available: http://www.zigbee.org/. [Accessed: 29-Dec-2016].
  16. M. Di Natale, H. Zeng, P. Giusto, and A. Ghosal, Understanding and Using the Controller Area Network Communication Protocol: Theory and Practice. Springer Science & Business Media, New York, 2012.
  17. S. Qiu, Z. Li, W. He, L. Zhang, C. Yang, and C. Y. Su, "Brain-machine interface and visual compressive sensing-based teleoperation control of an exoskeleton robot," IEEE Transactions on Fuzzy Systems, vol. 25, no. 1, pp. 58–69, Feb. 2017.
  18. K. Kiguchi and M. Liyanage, "A study of a 4DOF upper-limb power-assist intelligent exoskeleton with visual information for perception-assist," IEEE International Conference on Robotics and Automation, 2008, pp. 3666–3671.
  19. T. Tanaka, S. Hori, R. Yamaguchi, M. Q. Feng, and S. Moromugi, "Ultrasonic sensor disk for detecting muscular force," 12th IEEE International Workshop on Robot and Human Interactive Communication (ROMAN), 2003, pp. 291–295.
  20. G. S. Sawicki and N. S. Khan, "A simple model to estimate plantarflexor muscle-tendon mechanics and energetics during walking with elastic ankle exoskeletons," IEEE Transactions on Biomedical Engineering, vol. 63, no. 5, pp. 914–923, May 2016.
  21. G. De Luca, Fundamental Concepts in EMG Signal Acquisition. DelSys Inc., 2003.
  22. C. Fleischer, A. Wege, K. Kondak, and G. Hommel, "Application of EMG signals for controlling exoskeleton robots," Biomedical Techniques, vol. 51, no. 5/6, pp. 314–319, 2006.
  23. C. D. Joshi, U. Lahiri, and N. V. Thakor, "Classification of gait phases from lower limb EMG: application to exoskeleton orthosis," IEEE Point-of-Care Healthcare Technologies (PHT), 2013, pp. 228–231.
  24. H. Wang, T. Mu, H. Li, and X. Zhang, "Research on surface EMG based accurate perception method for exoskeleton robot control," IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2015, pp. 1900–1905.
  25. D. Chen, M. Ning, B. Zhang, and G. Yang, "Control strategy of the lower-limb exoskeleton based on the EMG signals," IEEE International Conference on Robotics and Biomimetics (ROBIO), 2014, pp. 2416–2420.
  26. F. Torres et al., "Adquisición y análisis de señales cerebrales utilizando el dispositivo MindWave" [Acquisition and analysis of brain signals using the MindWave device], Maskana, Revista Científica, 2014.
  27. H. A. Shedeed, M. F. Issa, and S. M. El-Sayed, "Brain EEG signal processing for controlling a robotic arm," 8th International Conference on Computer Engineering and Systems (ICCES), 2013, pp. 152–157.
  28. J. L. Contreras-Vidal and R. G. Grossman, "NeuroRex: a clinical neural interface roadmap for EEG-based brain machine interfaces to a lower body robotic exoskeleton," 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2013, pp. 1579–1582.
  29. A. Kilicarslan, S. Prasad, R. G. Grossman, and J. L. Contreras-Vidal, "High accuracy decoding of user intentions using EEG to control a lower-body exoskeleton," 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2013, pp. 5606–5609.
