In recent years, detecting upper-limb motion intention has attracted growing research attention as a means to improve the manipulation of prosthetic hands. Recordings of forearm muscle activity have been used as a signal source to detect wrist and hand motions using different pattern recognition techniques. However, it is difficult to account for coordinated body motions from these signals alone; as a result, the movement of the artificial limb can appear unnatural when considered as part of whole-body coordination, and a dynamical coupling between the user and the prosthesis becomes impossible. Moreover, relying only on forearm muscle activity to drive the artificial limb excludes higher-level amputees from using these systems.
It is well known that most daily-life upper-limb activities involve coordination of the shoulder-arm-hand complex. For example, the trajectories of the shoulder, elbow and hand are tightly coupled when reaching and grasping an object, or when throwing and catching a ball. Because of this dependency, research efforts have been made to differentiate hand motions using EMG activity of proximal muscles. For example, C. Martelloni et al. were able to discriminate different grip types from EMG signals of proximal and distal muscles by statistical means. Likewise, Xiao Hu et al. compared the performance of a scalar autoregressive (AR) model with multivariate AR modeling using EMG data obtained from the biceps, triceps, deltoid, and brachioradialis, successfully classifying different arm movements. Although the results obtained are encouraging, relying only on EMG information is not accurate enough for robust dynamical control of a prosthetic hand, since classifying and interpreting the acquired information in real time remains complex.
In a previous study, we showed that the discrimination rate can be improved by using accelerometers to capture kinematical information from the around-shoulder muscles alongside the EMG signals. In that study we obtained EMG and accelerometer information from proximal muscles only and used an off-line neural network classifier to discriminate different grips and arm positions. The objective of the present study was therefore to investigate the possibility of associating around-shoulder muscle activity with different grasps and arm positions while reaching for an object, using an on-line recognition method. In this way we may be able to deduce the user's motion intention and dynamically coordinate the prosthetic arm's position and movements with the body.
2. Measurement experiment and analysis
2.1. Experimental setup
Three male subjects, all 23 years old, participated in the experiments. They were informed about the experimental procedures and asked to provide their consent. All subjects were healthy with no known history of neurological abnormalities or musculoskeletal disorders.
2.1.2. Experiment procedure
The subjects were asked to sit comfortably in front of a table. They were then asked to move their dominant arm from a fixed start position towards an end position and grasp an object (figure 1). Before starting a trial, they had to press a switch button by resting their dominant hand on it in a natural, open, flat position. When the trial started, they extended their arm to reach and grasp the object. Three different object-related grips were used during the experiments, and the object was placed in one of five positions relative to the subject, as denoted in figure 1. The object was placed so as to allow maximal elbow extension in each of the five directions. Moreover, the height of the chair was adjusted for each subject to obtain an elbow angle of 90° while keeping the trunk erect. Each subject reached and grasped 20 times for each position and grip, giving a total of 300 trials (3 objects × 5 positions × 20 repetitions). They were allowed to rest for a few seconds between trials and were requested not to bend or rotate the trunk, in order to prevent translational motions of the shoulder.
In the experiment, TYE-1000M EMG sensors (Sikikou Engineering, Japan) were used. The sensor has a band-pass filter with a pass band of 10–1000 Hz. Muscle activity signals were recorded at a sampling frequency of 400 Hz. EMG is generally recorded at 1 kHz or more, but the sampling frequency was lowered here in order to reduce the processing time and load of the on-line discrimination system; preliminary experiments showed that this reduction did not significantly affect the recognition. The accelerometer used for measuring the kinematical information from around-shoulder muscles was the MMA7260Q (Freescale Semiconductor Inc.), sampled at 100 Hz.
In total, 5 EMG sensors and 4 accelerometers were placed on the skin surface over different muscles around the shoulder, as shown in figure 3. The EMG sensors were placed over the following muscles:
The clavicular part of pectoralis major muscle.
Acromial part of deltoid muscle (central fibers).
Descending fibers of trapezius muscle.
Ascending fibers of trapezius muscle.
Teres major muscle.
The accelerometers were attached to the skin above the following muscles:
Short head of biceps brachii muscle.
Long head of triceps brachii muscle.
Greater rhomboid muscle.
Infraspinatus muscle and infraspinous fascia.
2.2. Signal processing and feature extraction
To choose a suitable filter for signal processing, a fast Fourier transform was applied to the recorded EMG signals; as shown in figure 4 (a), the highest power spectrum lies below 150 Hz. Accordingly, the EMG signals were filtered with a 150 Hz low-pass filter, then a 2 Hz high-pass filter was applied, and finally the signal was smoothed with a 50-point moving-average window (figure 4 (b)).
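The filtering chain described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the filter family and order (4th-order Butterworth) and the rectification step before smoothing are assumptions, as the text does not specify them.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_emg(raw, fs=400, hp_cut=2.0, lp_cut=150.0, smooth_n=50):
    """Apply the chain described in the text: 150 Hz low-pass,
    2 Hz high-pass, then a 50-point moving-average smoothing."""
    # 150 Hz low-pass (4th-order Butterworth is an assumption)
    b, a = butter(4, lp_cut / (fs / 2), btype="low")
    x = filtfilt(b, a, raw)
    # 2 Hz high-pass to remove baseline drift
    b, a = butter(4, hp_cut / (fs / 2), btype="high")
    x = filtfilt(b, a, x)
    # full-wave rectification before smoothing (assumed, common in EMG work)
    x = np.abs(x)
    # 50-point moving-average window
    return np.convolve(x, np.ones(smooth_n) / smooth_n, mode="same")
```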
Three features, drawn from the literature on motion recognition from EMG signals, were extracted from the data and compared in order to investigate their effect. The first is the Mean Value (MV), described by equation (1), which reflects the average amplitude of a biological signal. The second is the Subtract Value (SV), described by equation (2), which expresses the amount of change of a biological signal. The third is the Point Value (PV), described by equation (3), which captures the instantaneous amplitude of the biological signal.
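Since equations (1)-(3) are not reproduced here, the following sketch uses common forms of these features consistent with their descriptions; the exact definitions (windowing, rectification) are assumptions, not the paper's equations.

```python
import numpy as np

def mean_value(x):
    """MV: average amplitude over the window (assumed form of eq. 1)."""
    return np.mean(np.abs(x))

def subtract_value(x):
    """SV: amount of change across the window, taken here as the
    difference between the mean amplitudes of its second and first
    halves (assumed form of eq. 2)."""
    half = len(x) // 2
    return np.mean(np.abs(x[half:])) - np.mean(np.abs(x[:half]))

def point_value(x):
    """PV: instantaneous amplitude at the most recent sample
    (assumed form of eq. 3)."""
    return np.abs(x[-1])
```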
2.3. Motion discrimination
A neural network (NN) trained by back-propagation was used for analysis and classification of the data. It consisted of three layers (input layer, middle layer and output layer). The input layer contained 9 neurons (5 for the EMG inputs and 4 for the accelerometers), the middle layer had twice as many neurons as the input layer, and the output layer had 3 neurons when classifying grips (g1, g2, g3) or 5 neurons when classifying positions (p1, p2, p3, p4, p5). The maximum number of learning iterations was 2000.
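The network topology above can be sketched with a standard back-propagation classifier. This is an illustrative stand-in, not the authors' implementation: the activation function, solver and learning rate are assumptions, as only the layer sizes and the 2000-iteration cap are given in the text.

```python
from sklearn.neural_network import MLPClassifier

def build_classifier(n_outputs):
    """Three-layer NN: 9 inputs (5 EMG + 4 accelerometer features),
    a middle layer with twice as many neurons (18), and `n_outputs`
    output neurons (3 for grips, 5 for positions), trained by
    back-propagation for at most 2000 iterations."""
    # logistic activation and SGD solver are assumptions
    return MLPClassifier(hidden_layer_sizes=(18,),
                         activation="logistic",
                         solver="sgd",
                         max_iter=2000,
                         random_state=0)
```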
A total of 300 measurements were recorded from each subject. A set of 285 measurements (5 positions × 3 grips × 19 repetitions) was used as the neural network training data and the remaining 15 measurements (5 positions × 3 grips × 1 repetition) as the test data, following a leave-one-out training scheme. A different set of 285 measurements was then used to train a new neural network, with the remaining 15 measurements used to test it. Following this scheme, 20 different neural networks were obtained for each subject. For the off-line analysis, the average discrimination rate of the 20 neural networks was used as the final result, whereas the network with the best discrimination rate was used as the on-line classifier.
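The fold structure described above (hold out one repetition of every position/grip combination per fold) can be generated as follows; the ordering of trials as (position, grip, repetition) is an assumption for illustration.

```python
import numpy as np

def repetition_folds(n_positions=5, n_grips=3, n_reps=20):
    """Yield (train_idx, test_idx) pairs: each of the 20 folds holds
    out one repetition of every position/grip combination (15 test
    trials) and trains on the remaining 285."""
    # repetition index of each trial, assuming trials are ordered
    # by (position, grip, repetition)
    reps = np.array([r for _ in range(n_positions)
                       for _ in range(n_grips)
                       for r in range(n_reps)])
    for held_out in range(n_reps):
        test = np.where(reps == held_out)[0]
        train = np.where(reps != held_out)[0]
        yield train, test
```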
An off-line analysis was carried out on the data extracted from the EMG sensors and the accelerometers. The neural network was tested using different features in order to explore the feasibility of recognizing the different motions. Thereafter, an on-line discrimination experiment was designed to validate whether different arm motions can be classified using EMG and acceleration signals from around-shoulder muscles.
3. Results
3.1. Off-line analysis
For demonstration purposes, only the data of one subject are shown in this section; the results are consistent across the other two subjects. Figure 6 shows the results obtained when discriminating the different arm positions, and figure 7 shows the discrimination rates of the grips used in this study.
It can be observed from these figures that combining the data obtained from the accelerometers and the EMG greatly improves the discrimination rate of the different motions.
3.2. On-line discrimination result
The on-line recognition system used for this experiment consisted of four stages: sampling the signal, processing the signal, extracting signal features, and finally, classifying the signal. A total of 300 trials were performed by each subject (5 positions × 3 grips × 20 repetitions).
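The four-stage on-line pipeline can be sketched as a sliding-window loop. The `process`, `extract` and `classify` callables are hypothetical stand-ins for the stages the system implements; the window length and buffering strategy are assumptions.

```python
import numpy as np
from collections import deque

class OnlineRecognizer:
    """Sketch of the four stages: sampling, signal processing,
    feature extraction, and classification."""

    def __init__(self, process, extract, classify, window=400):
        # sliding window of the most recent `window` samples
        self.buffer = deque(maxlen=window)
        self.process, self.extract, self.classify = process, extract, classify

    def push(self, sample):
        """Feed one new sample; return a class label once the window
        is full, otherwise None."""
        self.buffer.append(sample)
        if len(self.buffer) < self.buffer.maxlen:
            return None
        x = self.process(np.asarray(self.buffer))
        return self.classify(self.extract(x))
```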
3.2.1. Analysis of discrimination rate
Figure 8 shows a comparison between the discrimination rate of 5 different arm positions and the different data lengths used to calculate each feature (200 points, 400 points, 600 points). Figure 9 shows the same comparison for the discrimination of different grips.
Figure 12 shows a comparison of the highest discrimination rate between subjects.
From figure 12, it can be noted that the highest discrimination rates were obtained with different features and data lengths for each subject. This indicates that, by calculating different features, it is possible to accommodate individual differences in muscle activities, improving the recognition rate across different people.
3.2.2. Analysis of incorrect discrimination
Although good discrimination rates were obtained in the experiment, a position or a grip was sometimes classified incorrectly. Figure 13 shows how many times an arm position was incorrectly classified as another position; from these data it is possible to identify signal similarities between different arm positions. Figure 14 shows the corresponding cases for grip discrimination.
For example, in figure 13, for Subject A, p2 was discriminated incorrectly as p1, p4, or p5. For Subject B, p2 had a low discrimination rate, being discriminated incorrectly as p4 and p5. For Subject C, p2 was discriminated incorrectly as p4 and p5, and p4 was discriminated incorrectly as p2. Across all subjects, p1 was discriminated incorrectly 36 times, p2 31 times, p3 17 times, p4 21 times, and p5 36 times. These results show that p2 and p5 are difficult to recognize.
Similarly, figure 14 shows how many times a grip was incorrectly classified as another grip. For Subject A, g2 was discriminated incorrectly as g1, and g1 as g3. For Subject B, g1 was most often mistaken for g2, and for Subject C, g1 was discriminated incorrectly as g2. Across all subjects, g1 was discriminated incorrectly 86 times, g2 110 times, and g3 34 times. These results show that g1 and g2 were the most difficult grips to identify.
These mistakes are mainly due to similarities in the recorded muscle activities for each task, but we also explored how the location of the sensors affected the discrimination. The contribution of each sensor to the discrimination, and the combinations of grips and positions that were difficult to discriminate, were investigated using the Tukey-Kramer test. Tables 1, 2 and 3 show the results of the analysis of variance for the different positions. The squares marked "O" denote data that showed a statistically significant difference between two positions (pi/pj), and the squares marked "X" denote data where there was no significant difference. In addition, sensors that showed more than 7 significant differences between positions are highlighted in yellow, indicating which locations contributed most to the discrimination.
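A post-hoc test of this kind can be run per sensor as sketched below, using the Tukey HSD implementation in statsmodels (which handles unequal group sizes, i.e. the Tukey-Kramer case). The feature values and labels are placeholders; the significance level of 0.05 is an assumption, as the paper does not state it.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def significant_pairs(feature_values, position_labels, alpha=0.05):
    """Tukey-Kramer post-hoc test on one sensor's feature values
    grouped by arm position. Returns the position pairs whose means
    differ significantly -- the 'O' entries of tables 1-3."""
    res = pairwise_tukeyhsd(np.asarray(feature_values),
                            np.asarray(position_labels), alpha=alpha)
    rows = res.summary().data[1:]  # skip the header row
    # last column of each row is the reject-null flag
    return [(row[0], row[1]) for row in rows if row[-1]]
```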
It can be noted from table 1 that, when comparing p2 and p4, only 9 sensors showed a significant difference in their data, and when comparing p2 and p3, only 11 sensors did.
Table 2 shows the results obtained for Subject B. In this case, the comparisons between p2 and p3, p2 and p5, and p3 and p4 showed a statistically significant difference in only 9 sensors.
Additionally, table 3 shows the results obtained for Subject C. When comparing p1 and p2, or p2 and p4, only 10 sensors showed a significant difference in their data.
These results support the findings shown in figure 13: p2 is the position most likely to be discriminated incorrectly for all subjects. Moreover, most of the contribution to the discrimination came from the accelerometers, and it was difficult to differentiate between positions using only the EMG sensors. Furthermore, no EMG sensor common to all subjects showed more than 7 significant differences between two positions, which highlights how individual differences affect the EMG signals.
Table 4 shows the analysis of variance comparing the different grips for Subject A. When comparing g1 and g3, only 7 sensors showed a statistically significant difference. Moreover, the EMG data of sensor 1 did not show any statistical difference for any of the comparisons.
Table 5 shows the results obtained from Subject B's data. In this case, when comparing g1 and g2, or g1 and g3, only 7 sensors showed a significant difference. The EMG data from sensors 1, 3 and 4 did not show any significant difference for any of the comparisons.
Finally, table 6 shows the results for Subject C. When comparing the data obtained for g1 and g2, only 6 sensors showed significant differences. The X-axis data from accelerometers 1, 2, 3 and 4 did not show any statistical difference in any of the cases.
Overall, for all subjects, g1 was the grip most likely to be mistaken, which agrees with the results shown in figure 14. EMG sensor 2 contributed greatly to the discrimination of the grips for all subjects, as did the information acquired from the Z axis of accelerometers 1 and 2.
To verify the effect on the discrimination process of the sensors that did not show any statistical difference, we removed the data from these sensors and applied the new data set to the off-line neural network. The sensor data removed were: EMG sensor 1, the Y axis of accelerometers 3 and 4, and the X axis of accelerometer 4 for Subject A; EMG sensors 1, 3 and 4, and the X axis of accelerometer 4 for Subject B; and the X axis of accelerometers 1, 2, 3 and 4, and the Y axis of accelerometer 3 for Subject C. Figure 15 shows a comparison between the discrimination results obtained using the data from all sensors (17 sensors) and the reduced data set for each subject (13 sensors for Subject A, 13 for Subject B, and 12 for Subject C). By removing the sensors that did not contribute to the discrimination, it is possible to improve the discrimination rate for all grips.
4. Discussion
In order to take full advantage of the many degrees of freedom in current light-weight dexterous prosthetic hands, research efforts are required to improve the motion-intention deduction algorithms available today. The new algorithms have to take whole-body dynamics into account, to help the amputee achieve natural and intuitive manipulation of the prosthesis. To date, different data mining methods have been used to extract and predict information from EMG sensors, among which neural networks are the most commonly used. M. Ahsan et al. reviewed the different types of classifiers used up to 2009 for EMG-based human-computer interaction applications; they concluded that neural networks dominate these applications, but pointed out the advantages of other methods. Moreover, a statistical approach was used by Hu X. et al. to model arm motions: a multivariate analysis of the EMG information was applied in order to exploit the correlation of EMG activities during arm movements. The results of these studies show that it is possible to extract arm dynamics information from the activities of the proximal muscles. Although these results are encouraging, all of them processed the data off-line, so the real-time performance of the classifiers was not taken into account.
The results of this study show that it is possible to recognize different arm positions and grips in a dynamical way using an on-line classifier. As seen from the results, the discrimination rate for most of the motions was high, but the lowest rate was observed for reaching position p2, i.e. reaching for an object at the center. In this case there are no distinctive components, as there are in the other motions (for example reaching up or to the sides), making the discrimination very difficult. Also, as expected, inter-subject variability was very high; this may be due to sensor placement or to individual differences in muscle strength. Despite these differences, by inspecting different features, different positions and grips can be correctly discriminated. Certainly, more subjects need to be tested to determine the performance of the system statistically. Another point to consider is that the NN was trained only off-line, which may leave out many important features; an on-line training scheme is therefore currently being developed to improve the detection rates in spite of individual differences. Finally, the recognition delay, currently about 1 s, has to be reduced further, as it is too slow for any real-life application.
These preliminary findings are important for achieving a dynamical coupling between the person and the machine because, by taking whole-body dynamics into account, they will allow more natural and intuitive manipulation of the robot hand.
5. Conclusion
This study shows that it is possible to distinguish, dynamically, different grips and arm positions from around-shoulder muscle activities alone using an on-line classifier, and that combining EMG and accelerometer data greatly improves this discrimination despite individual differences. In the near future the recognition delay has to be reduced, and the system has to be tested with actual amputees.
References
Otsuka, A.; Tsuji, T.; Fukuda, O. & Sakawa, M. (2002). Research on Upper Extremity Prosthesis based on Human Motion Analysis: Development of an Internally Powered Functional-cosmetic Prosthetic Hand.
Nishikawa, D.; Yu, W.; Yokoi, H. & Kakazu, Y. (2002). On-Line Supervising Mechanism for Learning Data in Surface Electromyogram Motion Classifiers. Systems and Computers in Japan, 33, pp. 1-11.
Nishikawa, D.; Yu, W.; Yokoi, H. & Kakazu, Y. (1999). On-Line Learning Methods for EMG Prosthetic Hand Controlling. Institute of Electronics, Information and Communication Engineers, 9, pp. 1510-1519.
Tsukamoto, M.; Kondo, T. & Ito, K. A Prosthetic Hand Control by Non-stationary EMG at the Beginning of Motion. The Institute of Electronics, Information and Communication Engineers.
Tsuji, T.; Yoshihiro, T. & Shima, K. (2007). An MMG-based Control Method of Prosthetic Manipulators Using Acceleration Sensors.
Lacquaniti, F. & Soechting, J. F. (1982). Coordination of arm and wrist motion during a reaching task.
Soechting, J. F. & Flanders, M. (1993). Parallel, interdependent channels for location and orientation in sensorimotor transformations for reaching and grasping.
Desmurget, M.; Prablanc, C.; Rossetti, Y.; Arzi, M.; Paulignan, Y.; Urquizar, C. & Mignot, J. C. (1995). Postural and synergic control for three-dimensional movements of reaching and grasping.
Cothros, N.; Wong, J. D. & Gribble, P. L. (2006). Are there distinct neural representations of object and limb dynamics?
Thiel, V. E. & Steenbergen, B. (2001). Shoulder and hand displacements during hitting, reaching and grasping movements in hemiparetic cerebral palsy.
Wilson, F. R. (1998). The Hand: How Its Use Shapes the Brain, Language and Human Culture.
Martelloni, C.; Carpaneto, J. & Micera, S. (2008). Classification of Upper Arm EMG Signals during Object-specific Grasp. 30th Annual International IEEE EMBS Conference.
Nakamura, R. & Saito, H. (1992). Basic Kinesiology, 4th Ed. ISHIYAKU Publication.
Ahsan, M.; Ibrahimy, M. & Khalifa, O. (2009). EMG signal classification for human computer interaction: a review.
Hu, X. & Nenov, V. (2004). Multivariate AR modeling of electromyography for the classification of upper arm movements.