
Estimation of Targeted-Reaching-Positions by Around-Shoulder Muscle Activities and Images from an Action Camera for Trans-Humeral Prosthesis Control

Written By

Yohei Muraguchi and Wenwei Yu

Submitted: 05 October 2018 Reviewed: 07 February 2019 Published: 04 March 2019

DOI: 10.5772/intechopen.85026

From the Edited Volume

Prosthesis

Edited by Ramana Vinjamuri


Abstract

For trans-humeral amputees, daily living tasks that require bimanual coordination, such as lifting up a box, are among the most difficult, and hence the most urgent for a trans-humeral prosthesis to fulfill. However, studies reported on trans-humeral prosthetic control have not taken into account the states of the target objects, such as their size, relative pose, and position, which are important for any real reaching and manipulation task. In our previous study, for a box lifting-up task, we investigated the possibility of using around-shoulder EMG (electromyogram) to identify target-reaching-positions for boxes with different configurations (relative pose and position). However, with the around-shoulder EMG alone, the system cannot guide the prosthesis to hold or grasp target objects precisely and sufficiently fast. The purpose of this study is to explore the possibility of using both image information from an action camera and around-shoulder EMG, to identify targeted-reaching-positions for various box configurations more accurately and more rapidly. Multinomial logistic regression was employed to realize both the integration of the two information sources and the identification of target-reaching-positions. A set of experiments was conducted, and an average classification rate of 75.1% was achieved over various box configurations.

Keywords

  • trans-humeral prosthesis
  • bimanual coordination
  • reaching motion
  • target objects information
  • logistic regression

1. Introduction

Fore-arm prostheses [1, 2] controlled by users’ bio-signals have been the main focus so far, while fewer studies have been reported on prostheses for higher-level amputees [3], because fewer residual upper-limb functions remain while more DoFs (degrees of freedom) have to be controlled.

To solve this problem, several different approaches have been proposed. The iEEG (intracranial electroencephalogram), obtained from intracranial electrodes implanted in the brain, was used to control trans-humeral prostheses [4]. In [5], Kuiken et al. reported their efforts to control trans-humeral prostheses using EMG through TMR (targeted muscle reinnervation) technology. With the above-mentioned methods, an intuitive user-prosthesis interface can be achieved, since the bio-signals carry more direct information about the intended motions. However, the problems are clear: these methods are invasive and require surgery, which is costly and may impose physical and mental burdens on patients.

In [6, 7], the EMG (electromyogram) signals from the around-shoulder area (ASA), and in [8], the EMG from the ASA together with additional motion-related EEG, were used, and machine learning methods were employed to exploit the limited information.

Bimanual coordination between one’s healthy arm and its prosthetic counterpart, in bimanual tasks such as holding a bottle with one hand while opening its lid with the other, operating a car’s steering wheel, and lifting up a box, has been proposed as one solution [9, 10, 11, 12, 13, 14, 15]. First, the need for trans-humeral prostheses arises mostly from bimanual coordination: daily living involves many tasks that require the coordination of both limbs [16], while most amputees can use their healthy (normal) side to complete most tasks that do not need bimanual coordination. Second, more information for controlling a trans-humeral prosthesis can be acquired from both coordinating sides, since the required behavior of the prosthesis can be estimated not only from the residual stump, but also from the motion and motor behavior of the normal side.

However, the studies of bimanual coordination mentioned above did not take into consideration the states of the target objects, such as their relative position, size, and pose, which are important for any real manipulation and reaching task. In a typical bimanual coordination task, lifting up a box with two hands, the target-reaching-position that a trans-humeral prosthesis has to reach varies depending on the state of the box. For this reason, the states of target objects must be taken into account when identifying the target-reaching-position of the healthy arm, in order to realize bimanual coordination for users of trans-humeral prostheses.

Bimanual coordination has also been addressed in robotics [17]. In a study on bimanual box grasping by a humanoid robot, the concept of grasping stability was used to deal with the different states of the box [18].

In our previous study, we explored the possibility of identifying target-reaching-positions for various box configurations (box size and relative pose) and investigated features that generalize well to unknown data, i.e., features that enable the classifiers to be trained with fewer box configurations. However, it became clear that, with the ASA EMG alone, the system cannot guide the prosthesis to hold or grasp target objects precisely and rapidly enough for daily living activities.

This study has two related purposes, pursued through experiments and analyses of the bimanual box-lifting task. The first is to explore the possibility of identifying the target-reaching-positions for various box configurations using two signal sources: bio-signals detected from the around-shoulder area and images from an action camera. Here, a box configuration specifies the pose and the position of a box relative to the user. The reason for using bio-signals only from the around-shoulder area is that sensors at distal sites are more likely to be affected by external perturbation; moreover, an around-shoulder sensor configuration can also be applied to the amputated side. The reason for attaching the camera near the shoulder is that a camera there does not limit the use of both arms in practical use, even in a wearable setting, and its positional relation with the trans-humeral prosthesis is straightforward. Classifiers were trained to identify the intended target-reaching-positions for different box configurations.

The second is to explore the optimal way to integrate the information from the two signal sources, to realize fast and accurate target-reaching-position estimation. Only with fast and accurate estimation is there sufficient time to control the trans-humeral prosthesis to match the healthy upper limb.


2. Feature selection and classification for target-reaching-positions

2.1 Feature selection

Three hundred ninety-eight features (8 EMG sensors × 10 time steps + 28 WL ratios × 10 time steps + 8 total sums of WL + 28 ratios of total sums of WL + box pose + box position) were calculated from the measured data. Using all of these features for classification may cause training problems, such as flattening or over-fitting. In this study, the Akaike information criterion (AIC) [19] was used for feature selection.

$$\mathrm{AIC} = 2k - 2\ln L \tag{E1}$$

Here, k is the number of parameters in the statistical model, and L is the maximized value of the likelihood function for it.

The method of incrementally adding and removing representative variables described in [20] was used to select features: if the AIC does not decrease when the next feature is added, the feature selection ends. To decide the initial feature for the selection, the ratio between inter-class variance and intra-class variance described in [21] was used: the feature with the largest ratio is adopted as the starting point.
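The sketch below is a minimal Python rendering of this forward stepwise procedure, under stated assumptions: it uses scikit-learn (whose `LogisticRegression` applies mild L2 regularization by default, a slight deviation from a pure maximum-likelihood logit), counts all fitted coefficients and intercepts as the k parameters in Eq. (E1), and implements only the incremental-addition half of the method in [20]. The function names (`aic_of`, `variance_ratio`, `stepwise_aic`) are illustrative, not from the original study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def aic_of(X, y):
    """AIC = 2k - 2 ln L for a fitted multinomial logistic model (Eq. E1)."""
    model = LogisticRegression(max_iter=1000).fit(X, y)
    log_lik = -log_loss(y, model.predict_proba(X), normalize=False)
    k = model.coef_.size + model.intercept_.size  # number of fitted parameters
    return 2 * k - 2 * log_lik

def variance_ratio(x, y):
    """Inter-class to intra-class variance ratio of one feature [21]."""
    classes = np.unique(y)
    between = sum((y == c).sum() * (x[y == c].mean() - x.mean()) ** 2 for c in classes)
    within = sum(((x[y == c] - x[y == c].mean()) ** 2).sum() for c in classes)
    return between / within

def stepwise_aic(X, y):
    """Forward selection: start from the largest variance-ratio feature,
    keep adding the candidate that lowers the AIC most, and stop as soon
    as the AIC no longer decreases."""
    remaining = list(range(X.shape[1]))
    selected = [max(remaining, key=lambda j: variance_ratio(X[:, j], y))]
    remaining.remove(selected[0])
    best = aic_of(X[:, selected], y)
    while remaining:
        cand = min(remaining, key=lambda j: aic_of(X[:, selected + [j]], y))
        cand_aic = aic_of(X[:, selected + [cand]], y)
        if cand_aic >= best:  # AIC did not decrease: stop the selection
            break
        selected.append(cand)
        remaining.remove(cand)
        best = cand_aic
    return selected
```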

2.2 Evaluation of the features and classification of the target-reaching-positions in the multinomial logistic regression

Multinomial logistic regression analysis was employed as the classifier. The method, called the multinomial logit model, is one of several natural extensions of the binary logit model. It models the relative probability of being in one category versus a reference category, k, using a linear combination of predictor variables. Consequently, the probability of each outcome is expressed as a nonlinear function of p predictor variables [22].

The multinomial logit model can be expressed as the following equations:

$$\ln\left(\frac{\pi_j}{\pi_k}\right) = \alpha_j + \beta_{j1}X_1 + \beta_{j2}X_2 + \cdots + \beta_{jp}X_p, \qquad j = 1, 2, \ldots, k-1 \tag{E2}$$

where πj = P(y = j) (j = 1, 2, …, k) is the probability of an outcome being in category j, k is the number of response categories, and p is the number of predictor variables. A total of k − 1 equations are solved simultaneously to estimate the coefficients. The coefficients in the model express the effects of the predictor variables on the relative risk, or the log odds, of being in category j versus the reference category k [22]. When used for classification, the probability of each label is obtained from the above equations and the features obtained by measurement; the label with the highest probability is the classification result.
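As a worked illustration of Eq. (E2), the following sketch recovers the category probabilities from fitted intercepts and coefficients. The function name and the array layout are assumptions for illustration only.

```python
import numpy as np

def multinomial_probs(alphas, betas, x):
    """alphas: (k-1,) intercepts; betas: (k-1, p) coefficients; x: (p,) features.
    Inverts Eq. (E2): each logit is ln(pi_j / pi_k) relative to reference category k."""
    odds = np.exp(alphas + betas @ x)   # pi_j / pi_k for j = 1..k-1
    pi_k = 1.0 / (1.0 + odds.sum())     # probabilities must sum to one
    return np.append(odds * pi_k, pi_k) # (pi_1, ..., pi_{k-1}, pi_k)

# The predicted label is simply the category with the highest probability:
# label = np.argmax(multinomial_probs(alphas, betas, x)) + 1
```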

In the feature selection by AIC, a feature is selected according to its compatibility with the previously selected features; therefore, the features selected earlier are not guaranteed to be the best ones. On the other hand, the coefficient of a feature in the logistic regression, and the p value of that coefficient, represent how the feature affects the classification: the feature with the smallest p value affects the classification the most. The reason for using AIC for feature selection is that the logistic regression equation cannot directly handle a large number of predictor variables, i.e., features. For these reasons, we performed feature selection using AIC, and feature evaluation using logistic regression.

Regarding classification methods, SVMs [23] and neural networks [1, 2] are widely used for bio-signals. However, this research requires not only classification but also information integration based on feature selection and evaluation, which is difficult for both SVMs and neural networks. In contrast, multinomial logistic regression can play the dual role of classification and feature evaluation. In addition, since its classification results come with probabilities, the ambiguity of a classification can also be evaluated. Furthermore, multinomial logistic regression uses only k − 1 (k: number of categories) weighted sums for classification, so its computational cost is lower than that of SVMs and neural networks.

The difficulty of this research lies in the fact that reaching motions to the same relative position of boxes with different configurations (relative pose and position) should be classified as the same class, while in some cases, as the box position changes, the label of the target-reaching-position must be completely different even though the position actually reached in space is almost the same. For example, the back plane of a box placed at a certain position and the front plane of another box displaced by the box width occupy the same spatial position. With EMG alone, reaching motions to both planes would be identified as the same, although they should be classified differently. Therefore, it is necessary to introduce the box configuration information in some form, and to investigate how to integrate the two types of signals.

We compared two datasets: dataset 1 used EMG only; dataset 2 used EMG and the box configuration (relative pose and position). The classification was performed in two steps. In step 1, the upper side of the box (RP1, 2, 3) and the bottom side of the box (RP4, 5, 6) were distinguished. In step 2, if a trial was classified as the upper side in step 1, it was classified among RP1, RP2, and RP3; if it was classified as the bottom side, it was classified among RP4, RP5, and RP6. A trial was counted as correct only when the classification results of both steps were correct. The classification rate was calculated by leave-one-out cross-validation.
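A minimal sketch of this two-step scheme under leave-one-out cross-validation is given below, assuming scikit-learn and numpy arrays `X` (features) and `y` (labels 1–6); the function name `two_step_rate` and the use of `LogisticRegression` as a stand-in for the paper's multinomial logit are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

def two_step_rate(X, y):
    """y in {1..6}: RP1-3 = upper side, RP4-6 = bottom side.
    A trial counts as correct only if both steps classify it correctly."""
    correct = 0
    for train, test in LeaveOneOut().split(X):
        Xtr, ytr = X[train], y[train]
        xte, yte = X[test], y[test][0]
        # Step 1: upper side (0) versus bottom side (1)
        step1 = LogisticRegression(max_iter=1000).fit(Xtr, (ytr >= 4).astype(int))
        # Step 2: one classifier per side
        upper = LogisticRegression(max_iter=1000).fit(Xtr[ytr <= 3], ytr[ytr <= 3])
        bottom = LogisticRegression(max_iter=1000).fit(Xtr[ytr >= 4], ytr[ytr >= 4])
        side = step1.predict(xte)[0]
        rp = (bottom if side == 1 else upper).predict(xte)[0]
        correct += int(side == int(yte >= 4) and rp == yte)
    return correct / len(y)
```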

Feature extraction and feature selection were performed every 0.1 s from the start of the motions. Feature selection was performed for each subject and each classifier (upper side versus bottom side of the box; RP1, RP2, and RP3; RP4, RP5, and RP6), so the selected features were not unified among subjects. After that, the multinomial logistic regression was constructed using the features selected from the data up to a specified elapsed time step, and the change of the classification rate was investigated for each dataset.


3. Measurement experiment

3.1 Subjects

Three healthy male subjects, aged 23, with no known history of neurological abnormalities or musculoskeletal disorders, participated in the experiments. They were informed about the experimental procedures and asked to provide signed consent.

3.2 Experiment procedure

The subjects were required to stand comfortably in front of a table. Before starting a new trial, they were asked to rest their dominant hand with the palm naturally open. After pushing a button to denote a new trial, they were instructed to move their dominant hand towards one of six target-reaching-positions on the sides of a box, for the purpose of lifting it up (Figure 1).

Figure 1.

Reaching position (RP: reaching position).

The size of the box used during the experiment was 260 × 310 × 165 mm (length × width × height). The box was placed in one of four different poses and three different positions relative to the subject, as denoted in Figures 2 and 3, respectively. The subjects were asked to reach five times for each box configuration, giving a total of 360 trials (6 reaching positions × 4 box poses × 3 box positions × 5 repetitions). The subjects were required to complete the reaching motion within 1.0 s, following the tempo of a metronome, and could rest for a few seconds between trials. Muscle activity and skin-surface undulation during the motions were recorded with the sensors and devices described in the next subsection.

Figure 2.

Box pose (P: pose).

Figure 3.

Box position (L: position).

3.3 Devices

In the experiment, eight EMG sensors (Trigno, Delsys) were used to measure muscle activity. The sensor signals were recorded using a PowerLab 16/35 (ADInstruments) at a sampling frequency of 400 Hz. Generally, the sampling frequency used for muscle activity recording is 1 kHz or more, but because no frequency-domain features are used in the classification, as shown in the next subsection, the sampling frequency was lowered.

The eight EMG sensors were placed on the skin surface over eight muscles around the shoulder, selected according to shoulder anatomy [24]: latissimus dorsi, middle deltoid, anterior deltoid, posterior deltoid, triceps brachii, middle trapezius, descending (upper) trapezius, and pectoralis major, as shown in Figure 4.

Figure 4.

The position of the sensors.

The action camera was attached near the shoulder, and images during the reaching motion measurement were acquired. However, in this study, no information was obtained by image processing: although the action camera and the algorithms to process its images have been determined, because it is the integration of information from different signal sources that is to be investigated here, the relative pose and position of the box were used directly.

3.4 Feature extraction

The EMG signals were processed by a 1 Hz high-pass filter.
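One way to implement such a filter is sketched below, assuming SciPy; the filter order and the zero-phase (forward-backward) filtering are assumptions not specified in the original text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 400  # sampling frequency [Hz]

def highpass_1hz(emg, fs=FS, order=4):
    """Zero-phase 1 Hz Butterworth high-pass filter to suppress baseline drift.
    emg: array of shape (n_samples, n_channels)."""
    b, a = butter(order, 1.0, btype="highpass", fs=fs)
    return filtfilt(b, a, emg, axis=0)
```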

The features were based on the waveform length (WL) of the filtered raw signals. WL is a measure of the complexity of the EMG signal, defined as the cumulative length of the waveform over the time segment [25]. The following features were calculated (a sketch of this computation follows the list below).

  1. WL in each segment delimited by every time step (0.1 s), and the ratio of WL between each pair of EMG channels in that segment

  2. The total sum of WL up to a specified elapsed time, and the ratio of the total sum of WL between each pair of channels over that interval
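A minimal sketch of the WL feature computation, assuming numpy and 8-channel EMG sampled at 400 Hz; the 28 pairwise ratios correspond to all channel pairs C(8, 2). The function names are illustrative.

```python
import numpy as np
from itertools import combinations

FS = 400             # sampling frequency [Hz]
SEG = int(0.1 * FS)  # one 0.1 s segment = 40 samples

def wl(x):
    """Waveform length: cumulative absolute first difference of the signal [25]."""
    return np.abs(np.diff(x)).sum()

def wl_features(emg, t):
    """emg: (n_samples, 8) filtered EMG; t: elapsed time step index (1, 2, ...).
    Returns per-channel segment WL, pairwise segment ratios (28 values),
    per-channel cumulative WL, and pairwise cumulative ratios (28 values)."""
    n_ch = emg.shape[1]
    seg = emg[(t - 1) * SEG : t * SEG]
    d_wl = np.array([wl(seg[:, c]) for c in range(n_ch)])          # feature type 1
    t_wl = np.array([wl(emg[: t * SEG, c]) for c in range(n_ch)])  # feature type 2
    pairs = list(combinations(range(n_ch), 2))
    d_ratio = np.array([d_wl[i] / d_wl[j] for i, j in pairs])
    t_ratio = np.array([t_wl[i] / t_wl[j] for i, j in pairs])
    return d_wl, d_ratio, t_wl, t_ratio
```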

As the relative pose and position information of the box, the angle of the reaching side (as shown in Figure 2) and the distance between the subject and the box (as shown in Figure 3) were used, respectively. To simulate the error possibly caused by image processing, and to investigate the tolerance of the classification to configuration deviation, a simulated error was added to the configuration information.

In the analyses of the classification rate (Sections 4.1 and 4.2) and of the evaluation of features (Section 4.3), random values (−0.5 to +0.5) generated with MATLAB were added to the box pose and box position. Here, 0.5 means 50% of the angular interval between different poses (45° × 0.5 = 22.5°; see Figure 2), or of the distance interval between different relative positions (15 cm × 0.5 = 7.5 cm; see Figure 3). In the analysis of the effect of the simulated error (Section 4.4), four levels of simulated error were added: 0, ±0.5, ±0.75, and ±1.0. That is, the maximal actual distance (position) error is ±15 cm (±1.0), and the maximal actual angle (pose) error is ±45° (±1.0); the maximum error given equals the interval between adjacent box configurations.
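A minimal sketch of this error injection, assuming numpy in place of MATLAB's random number generator; the function name and the fixed seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded generator; the original used MATLAB's RNG

def noisy_configuration(pose_code, position_code, level):
    """Add a uniform simulated error of the given level to the configuration codes.
    One unit corresponds to 45 degrees between poses and 15 cm between positions,
    so level 0.5 means up to +/-22.5 degrees and +/-7.5 cm."""
    return (pose_code + rng.uniform(-level, level),
            position_code + rng.uniform(-level, level))
```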

The box pose and box position information were introduced as categorical variables: poses P1 and P2 were coded as 1, and P3 and P4 as 2 (Figure 2); positions L1, L2, and L3 (Figure 3) were coded as 1, 2, and 3, respectively. All features were standardized using the z-score for the evaluation of the selected features. For a random variable X with mean μ and standard deviation σ, the z-score of a value x is

$$z = \frac{x - \mu}{\sigma} \tag{E3}$$
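In code, applying Eq. (E3) column-wise to a feature matrix is a one-liner; the function name below is illustrative.

```python
import numpy as np

def standardize(features):
    """Column-wise z-score (Eq. E3): z = (x - mu) / sigma.
    features: array of shape (n_trials, n_features)."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0)
    return (features - mu) / sigma
```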

4. Results and discussion

4.1 Comparison using classification rates

Figures 5–7 show the classification rate at each elapsed time step of the reaching motion for each subject. RPi (i = 1–6) in each figure represents a reaching position; the meaning of the digit i can be found in Figure 1. At the end of the reaching motion, the average classification rate over all subjects was 60.0% with EMG only and 75.1% with EMG + box configuration information. It is clear that the classification rate was greatly improved by integrating the box configuration information with the ASA muscle activities.

Figure 5.

Classification rate at each elapsed time step of reaching motion (subject A, (a) uses only EMG for the feature, (b) uses EMG and box configurations (pose, position) for the feature, RP: reaching position, see Figure 1).

Figure 6.

Classification rate at each elapsed time step of reaching motion (subject B, (a) uses only EMG for the feature, (b) uses EMG and box configurations (pose, position) for the feature, RP: reaching position, see Figure 1).

Figure 7.

Classification rate at each elapsed time step of reaching motion (subject C, (a) uses only EMG for the feature, (b) uses EMG and box configurations (pose, position) for the feature, RP: reaching position, see Figure 1).

In Figures 5–7, the legend markers RP 123, RP 456, and RP upper_and_bottom represent the results of classifying relative positions 1, 2, and 3; relative positions 4, 5, and 6; and the upper versus bottom rows, respectively.

As seen from Figures 5(a), 6(a), and 7(a), when using only EMG as the features, at the elapsed time step of 0.5 s the classification rates of RP 123, RP 456, and RP upper and bottom were 55.4, 59.6, and 84.3% on average over all subjects, respectively. At the end of the reaching motion, they were 68.9, 62.2, and 91.5%, respectively.

In contrast, when using the EMG and the box configuration (relative pose and position) information as the features, at the elapsed time step of 0.5 s the classification rates of RP 123, RP 456, and RP upper and bottom were 76.9, 74.4, and 84.5% on average over all subjects, respectively; at the end of the reaching motion they were 83.5, 82.2, and 90.9%, respectively.

These results show that no clear increase in classification rate was observed from introducing the state of the box into the classification of box top versus bottom. On the other hand, the box configuration is effective for identifying the depth of the reaching motion, for which an increase of about 20 percentage points was observed.

Although the classification rates are not as high as those in studies recognizing the motions of hands and fingers [1, 2, 4], the results are acceptable considering the difficulties posed by boxes with different configurations and the limited EMG measurement sites. Moreover, the results are comparable to those of research on complex motions [26, 27], where the reported classification rates were also around 70%. Therefore, in the following analysis, 70% is used as the threshold for investigating the real-time characteristics.

Table 1 shows the timing at which the classification rate exceeded 70%, and the classification rate at 0.5 s, for each subject. As seen from the table, when using only EMG as the features, the classification rate did not exceed 70% for any subject. When using the EMG and box configuration as the features, subjects A, B, and C exceeded 70% at 0.4, 0.8, and 0.9 s, respectively. When using only EMG, the classification rate at 0.5 s was 51.7, 46.4, and 46.1% for subjects A, B, and C, respectively; with the EMG and box configuration, it was 71.7, 53.6, and 65.8%, respectively. Thus, introducing the box configuration information as features increased the classification rate of subjects A, B, and C by 20.0, 7.2, and 19.7 percentage points, respectively. From these results, it is clear that the box configuration information enables more accurate and faster classification.

| Subject | Timing exceeding 70% [s], only EMG | Timing exceeding 70% [s], EMG + box configuration | Rate at 0.5 s [%], only EMG | Rate at 0.5 s [%], EMG + box configuration |
| --- | --- | --- | --- | --- |
| A | Not exceeded | 0.4 | 51.7 | 71.7 |
| B | Not exceeded | 0.8 | 46.4 | 53.6 |
| C | Not exceeded | 0.9 | 46.1 | 65.8 |

Table 1.

The timing at which the classification rate exceeded 70%, and the classification rate at 0.5 s, for each dataset.

4.2 Comparison using classification probabilities

Figure 8 shows the probabilities obtained from the logistic regression at the end of the reaching motion for subject A. In the figure, (a) shows the case using only EMG as the features, and (b) shows the case using both EMG and box configuration (pose, position) information. The reaching position with the highest resulting probability was counted as the classification result.

Figure 8.

The probability obtained by the logistic regression equation (subject A, at the end of the reaching motion, (a) uses only EMG for the feature, (b) uses EMG and box configurations (pose, position) for the feature, the reaching position with the highest probability is the identification result).

From Figure 8(1, 2), it can be seen that, in the classification of box upper versus bottom, high probabilities were achieved even when only EMG was used as the features. From Figure 8(3–8), when only EMG was used, the probabilities were low even when the classification results were correct (a); when both EMG and box configuration were used, the probabilities showed a clear separation between classes, meaning that the ambiguity decreases when the box configuration information is introduced as features.

4.3 Evaluation of selected features

Table 2 shows the features selected using AIC, the coefficient of each feature in the logistic regression, and the p value of each coefficient, for the classification of the upper- and bottom-side reaching positions. Table 3 has a similar layout, showing the same quantities for the classification of RP1/2/3 and RP4/5/6, respectively. In both tables, the selected features are arranged in the order in which they were selected.

| Selected feature | Coefficient | p value | Selected feature | Coefficient | p value |
| --- | --- | --- | --- | --- | --- |
| Intercept | 0.03 | 0.874 | 34, rdWL | 1.76 | 3.85E-5 |
| 137, vdWL | 1.23 | 0.001 | 127, vdWL | 1.55 | 0.003 |
| 89, rdWL | −0.88 | 0.024 | 93, rdWL | −2.91 | 1.23E-7 |
| 99, rdWL | 2.21 | 1.90E-8 | 53, rdWL | −0.84 | 0.004 |
| 152, rtWL | 1.15 | 0.001 | 87, rdWL | 1.12 | 0.001 |
| 117, vdWL | 1.38 | 9.48E-6 | 180, vtWL | 1.78 | 4.16E-4 |
| 84, rdWL | 1.10 | 0.001 | 176, vtWL | −1.26 | 2.69E-4 |
| 32, rdWL | −1.55 | 5.64E-7 | 21, rdWL | 0.89 | 0.006 |
| 65, rdWL | −1.58 | 6.11E-5 | 61, rdWL | 0.70 | 0.002 |
| 85, rdWL | −2.11 | 2.70E-6 | 111, rdWL | 0.99 | 0.18 |
| 102, rdWL | −1.80 | 1.79E-4 |  |  |  |

Table 2.

Features selected using AIC, coefficients of the logistic regression equation, and p values, for the classification of RP upper versus bottom side (dataset: EMG and box configuration).

The meaning of the symbols: r, the ratio of WL; v, the value of WL; t, total sum over the whole period of the reaching motion; d, segmentation delimited by every time step (0.1 s). The ID number and type of features are expressed as follows: rdWL (1–112), the ratio of WL in the segmentation delimited by every time step (0.1 s); vdWL (113–144), WL in the segmentation delimited by every time step (0.1 s); rtWL (145–172), the ratio of the total sum of WL until a specified elapsed time; vtWL (173–180), the total sum of WL until a specified elapsed time; BP (181), the box pose; BL (182), the box position. The p value represents the statistical significance of the coefficient.

(a) In classification of RP1/2/3

| Selected feature | Coefficient (π1 vs. π3) | p value (π1 vs. π3) | Coefficient (π2 vs. π3) | p value (π2 vs. π3) |
| --- | --- | --- | --- | --- |
| Intercept | −2.69 | 0.011 | 3.29 | 1.05E-6 |
| 154, rtWL | 10.94 | 2.30E-10 | 2.24 | 0.046 |
| 182, BL | 14.50 | 3.34E-17 | 6.20 | 6.49E-8 |
| 175, vtWL | 7.94 | 0.002 | 4.78 | 0.017 |
| 181, BP | 7.50 | 1.67E-14 | 2.74 | 2.02E-6 |
| 177, vtWL | 7.72 | 2.69E-8 | 2.29 | 0.017 |
| 169, rtWL | −1.69 | 0.005 | −1.12 | 0.006 |
| 115, vdWL | −6.98 | 8.06E-6 | −1.07 | 0.287 |
| 152, rtWL | −14.31 | 4.23E-10 | −2.48 | 2.81E-4 |

(b) In classification of RP4/5/6

| Selected feature | Coefficient (π4 vs. π6) | p value (π4 vs. π6) | Coefficient (π5 vs. π6) | p value (π5 vs. π6) |
| --- | --- | --- | --- | --- |
| Intercept | 1.65 | 0.134 | 3.56 | 4.61E-4 |
| 145, vdWL | −2.89 | 0.009 | −1.04 | 0.161 |
| 182, BL | 8.17 | 1.98E-9 | 5.43 | 3.83E-6 |
| 131, vdWL | 3.88 | 0.004 | 3.24 | 0.009 |
| 181, BP | 5.17 | 1.33E-8 | 3.37 | 1.56E-5 |
| 174, vtWL | 8.61 | 4.88E-5 | 5.82 | 0.003 |
| 8, rdWL | −3.36 | 8.58E-4 | −2.21 | 0.007 |
| 17, rdWL | −2.15 | 0.087 | −0.96 | 0.420 |

Table 3.

Features selected using AIC, coefficients of the logistic regression equation, and p values, for the classification of RP1/2/3 (a) and RP4/5/6 (b) (dataset: EMG and box configuration).

The meaning of the symbols: r, the ratio of WL; v, the value of WL; t, total sum over the whole period of the reaching motion; d, segmentation delimited by every time step (0.1 s). The ID number and type of features are expressed as follows: rdWL (1–112), the ratio of WL in the segmentation delimited by every time step (0.1 s); vdWL (113–144), WL in the segmentation delimited by every time step (0.1 s); rtWL (145–172), the ratio of the total sum of WL until a specified elapsed time; vtWL (173–180), the total sum of WL until a specified elapsed time; BP (181), the box pose; BL (182), the box position. The p value represents the statistical significance of the coefficient.

From Table 2, it is clear that, for the classification of upper- versus bottom-side reaching positions, the box configuration information (both the pose and the position) was not selected by the AIC selection process. As can be seen from Table 3, for the classification of RP1/2/3 and RP4/5/6, the box pose and position were selected. Moreover, the p value of the box position is the smallest, meaning that the box position is the feature contributing most to the classification.

4.4 Influence of the simulated errors for box configuration information

If the box configuration information is calculated by image processing from the action camera, errors occur due to noise, measurement error, and other system errors. Therefore, in this research, the tolerable range of error was investigated by adding simulated errors to the true box configuration.

Figure 9 shows the influence of the simulated error level of the box configuration information on the classification rate of each subject. As seen from the figure, the classification rate decreased as the error level increased, for all subjects. For subject A, even at the highest error level (±1.0), a classification rate exceeding 70% was obtained. For subject B, the classification rate fell below 70% at error levels of ±0.75 or more; for subject C, it fell below 70% at the error level of ±1.0. From these results, the error should be controlled to a level of 0.5 or less (position: 7.5 cm or less; pose: 22.5° or less).

Figure 9.

Influence of box configuration information on classification rate due to error (±1.0: Corresponding to 15 cm for position, and 45° for pose). (a) Subject A, (b) subject B, and (c) subject C.


5. Conclusion

In this research, we employed multinomial logistic regression to realize both the integration of information from two signal sources, images and around-shoulder EMG, and the identification of target-reaching-positions for 12 box configurations (4 poses × 3 positions).

A higher classification rate was achieved using both information sources. It was found that the box configuration information contributes to classifying the depth of the reaching motion. Moreover, since the timing at which the classification rate exceeds 70% differs greatly between subjects, the optimal classification timing is likely individual-dependent. Furthermore, the classification rate decreased as the error level increased, for all subjects.

In the experiment, we changed the box position only in the depth direction relative to the subject; lateral changes of the box position will be investigated in the near future. Moreover, the effect of box configuration information calculated from real images captured by the action camera should be studied and compared with the results of this study, since the error caused by image acquisition and processing, as well as the real computational cost, will affect the information integration. Finally, the system should be validated with data from amputee subjects.


Acknowledgments

This work was supported by JSPS Grant-in-Aid for Scientific Research (B) 17H02129.

References

  1. Nishikawa D, Yu W, Yokoi H, Kakazu Y. On-line learning method for EMG prosthetic hand controlling. Electronics and Communications in Japan. 2001;84(10):35-46
  2. Nishikawa D, Yu W, Yokoi H, Kakazu Y. On-line supervising mechanism for learning data in surface electromyogram motion classifiers. Systems and Computers in Japan. 2002;33(14):1-11
  3. DEKA Research & Development Corp. LUKE Arm Details [Online]. Available from: http://www.mobiusbionics.com/luke-arm/ [Accessed: December 4, 2018]
  4. Fifer MS et al. Simultaneous neural control of simple reaching and grasping with the modular prosthetic limb using intracranial EEG. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2014;22(3):695-705
  5. Kuiken TA et al. Targeted muscle reinnervation for real-time myoelectric control of multifunction artificial arms. JAMA: The Journal of the American Medical Association. 2009;301(6):619-628
  6. Horiuchi Y, Kishi T, Gonzalez J, Yu W. A study on classification of upper limb motions from around-shoulder muscle activities. In: 2009 IEEE International Conference on Rehabilitation Robotics (ICORR 2009); 2009. pp. 311-315
  7. Gonzalez J. Classification of upper limb motions from around-shoulder muscle activities: Hand biofeedback. The Open Medical Informatics Journal. 2010;4(2):74-80
  8. Jacobo FV, Yee CL, Kahori K, Yu W. 3D continuous hand motion reconstruction from dual EEG and EMG recordings. In: ICIIBMS 2015—International Conference on Intelligent Informatics and Biomedical Sciences; November 2016. pp. 101-108
  9. Mahira T, Imamogh N, Gomez-Tames J, Kita K, Yu W. Modeling bimanual coordination using back propagation neural network and radial basis function network. In: IEEE International Conference on Robotics and Biomimetics; 2014. pp. 1356-1361
  10. Uoi T, Inohira E, Yokoi H. Generation of movements used in daily life using a bimanual coordinated motion generation system for myoelectric upper limb prostheses. Transactions of Japanese Society for Medical and Biological Engineering. 2012;50:337-344
  11. Vasan G, Pilarski PM. Learning from demonstration: Teaching a myoelectric prosthesis with an intact limb via reinforcement learning. In: IEEE International Conference on Rehabilitation Robotics (ICORR); 2017. pp. 1457-1464
  12. Choi H et al. Improved prediction of bimanual movements by a two-staged (effector-then-trajectory) decoder with epidural ECoG in nonhuman primates. Journal of Neural Engineering. 2018;15(1):1-10. Article 016011
  13. Strazzulla I, Nowak M, Controzzi M, Cipriani C, Castellini C. Online bimanual manipulation using surface electromyography and incremental learning. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2017;25(3):227-234
  14. Toyokura M, Muro I, Komiya T, Obara M. Activation in the frontomesial cortex during bimanually coordinated movement: Analysis using functional magnetic resonance imaging. The Japanese Journal of Rehabilitation Medicine. 2000;37(10):662-668
  15. Tazoe T. The effectiveness of bimanual coordination in stroke hemiplegia. Uehara Memorial Life Science Foundation Study Reports. 2012;26:1-4
  16. Kamakura N. Shapes of Hands, Movements of Hands. Tokyo: Medical, Dental and Pharmacological Publication Co.; 1989
  17. Vahrenkamp N, Boge C, Welke K, Asfour T, Walterl J, Dillmann R. Visual servoing for dual arm motions on a humanoid robot. In: IEEE-RAS International Conference on Humanoid Robots; 2009. pp. 208-214
  18. Huebner K, Welke K, Przybylski M, Vahrenkamp N, Asfour T, Kragic D. Grasping known objects with humanoid robots: A box-based approach. In: IEEE International Conference on Advanced Robotics; 2009
  19. Akaike H. A new look at the statistical model identification. IEEE Transactions on Automatic Control. 1974;19(6):716-723
  20. Sato Y. Classification of Multivariate Data—Discriminant Analysis, Cluster Analysis. Tokyo: Asakura Publishing Co., Ltd.; 2009
  21. Han M, Liu X. Feature selection techniques with class separability for multivariate time series. Neurocomputing. 2013;110:29-34
  22. The MathWorks, Inc. Multinomial Models for Nominal Responses [Online]. Available from: https://jp.mathworks.com/help/stats/multinomial-models-for-nominal-responses.html?lang=en [Accessed: January 2019]
  23. Subasi A. Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders. Computers in Biology and Medicine. 2013;43(5):576-586
  24. Nakamura R, Saito H. Basic Kinesiology. 6th ed. Tokyo: Medical, Dental and Pharmacological Publication Co.; 2003
  25. Phinyomark A, Phukpattaranont P, Limsakul C. Feature reduction and selection for EMG signal classification. Expert Systems with Applications. 2012;39(8):7420-7431
  26. Itou T, Terao M, Nagata J, Yoshida M. Mouse cursor control system using electrooculogram signals. In: IEEE Engineering in Medicine and Biology Society; 2001. pp. 1368-1369
  27. Jung KK, Kim JW, Lee HK, Chung SB, Eom KH. EMG pattern classification using spectral estimation and neural network. In: Proceedings of the SICE Annual Conference; 2007. pp. 1108-1111
