Open access peer-reviewed chapter

Perspective Chapter: Classification of Grasping Gestures for Robotic Hand Prostheses Using Deep Neural Networks

Written By

Ruthber Rodríguez Serrezuela, Enrique Marañón Reyes, Roberto Sagaró Zamora and Alexander Alexeis Suarez Leon

Submitted: 25 July 2022 Reviewed: 23 August 2022 Published: 25 January 2023

DOI: 10.5772/intechopen.107344

From the Edited Volume

Human-Robot Interaction - Perspectives and Applications

Edited by Ramana Vinjamuri


Abstract

This research compares the classification accuracy obtained with classical classification techniques and with the proposed convolutional neural network for the recognition of hand gestures used in robotic prostheses for transradial amputees, using surface electromyography (sEMG) signals. The first two classifiers are the most used in the literature: support vector machines (SVM) and artificial neural networks (ANN). A new convolutional neural network (CNN) architecture based on the AtzoriNet network is proposed to assess performance according to amputation-related variables. The results show that a convolutional neural network with a very simple architecture can produce accurate results comparable to the average of the classical classification methods, and its performance is compared with CNNs proposed by other authors. The performance of the CNN is evaluated with different metrics, providing good results compared to those reported by other authors in the literature.

Keywords

  • electromyography
  • convolutional neural networks
  • support vector machine
  • artificial neural network
  • underactuated hand prosthesis

1. Introduction

Upper limb amputations are injuries that substantially limit a person’s quality of life by drastically reducing the number of activities of daily living (ADL) they can perform independently. Current myoelectric prostheses are electronically controlled by the user’s voluntary muscle contractions. A general scheme of how these and other devices that use biosignals work is presented in Figure 1. In this sense, the higher-performance prostheses for amputees follow this common pattern of development. There is a wide variety of very sophisticated myoelectric prostheses commercially available that use sEMG signals [1, 2, 3, 4, 5].

Figure 1.

Most common configuration of human–machine interaction [6].

A relevant limitation in the development of pattern recognition methods for myoelectric control is that their tests are mainly performed offline. It is now established that high offline precision does not necessarily translate into accurate functional control of a physical prosthesis. In this sense, several recent studies have shown the discrepancy between online and offline performance metrics [7, 8, 9, 10, 11]. However, very few studies have published online validation results of pattern recognition, and even fewer in clinical settings, relating the variability of the signal and the performance of the classifiers to parameters related to the amputation (disability index, length of the remaining limb, time since amputation, phantom limb sensation, etc.) [12, 13, 14, 15].

On the other hand, the number of features extracted also depends on the number of EMG sensors and on the feature extraction strategy for each sensor. Alternatively, many investigations have implemented dimensionality reduction, which has been shown to be an effective feature projection method [16]. Among the most used methods are principal component analysis (PCA) [17, 18, 19], linear-nonlinear PCA composite analysis, self-organizing feature maps [16] and supervised discretization together with PCA [20, 21].

Convolutional neural networks have been applied for myoelectric control with interest in inter-sessions/subjects and inter-session performance, in addition to many other applications in biomedical signal processing [22, 23, 24]. Some authors have commented on the advantages of these deep neural networks and their ability to assimilate the recognition of hand gestures corresponding to groups of sEMG signals. Although the results obtained come from a small number of investigations, their employment possibilities are promising [25, 26, 27].

However, most of the research has been carried out on healthy subjects. In recent decades, different authors [28, 29, 30, 31] have shown that the variation of the signal over time in amputee patients is even greater than in healthy subjects. The EMG signal is weaker due to the amputation of certain muscle groups, and as time since amputation elapses, the muscles become more atrophied and weak. There are also few databases of amputees, a situation that constitutes a significant obstacle for this research and for gesture recognition at the international level [29, 30]. Additionally, amputees’ performance was found to be proportional to residual limb size, indicating that an anthropomorphic model might be beneficial [28, 29, 30, 31]. The previous findings motivated the study of the variance of results between amputee patients and able-bodied populations under disturbances of dynamic factors such as the length of the remaining limb, age and level of atrophy, among others. That is why the results obtained in amputee patients are far from those reported for healthy subjects.


2. Materials and methods

2.1 Databases and subjects

The review of these databases reveals the characteristics of the population involved and the signal capture protocols. The literature review showed that there are few databases with sEMG recordings from a significant number of patients, with subjects without known prior deficiencies and with heterogeneous data, so the most used is the NINAPRO database [32, 33, 34], which contains electromyography recorded using a system of eight sEMG sensors (Thalmic Labs MYO). The data in this repository are free to use and are intended for developing hand gesture movement classifiers [22]. The NINAPRO database, in its DB3 section, establishes the parameters with which the sEMG data of 11 subjects with transradial amputation were recorded [35].

In the DB3 dataset, as explained above, the transradial amputee wears two MYO cuffs side by side. The upper MYO cuff is placed closest to the elbow, with the first electrode at the radio-humeral joint, following the NINAPRO electrode configuration. The lower MYO cuff is placed just below the first one, closer to the amputation region (Table 1).

Patient | Hand | Laterality | Age | Remained forearm (%) | Years since amputation | Amputation cause | DASH score | Time wearing prostheses (years)
1 | Right | Right handed | 32 | 50 | 13 | Traumatic injury | 1.67 | 13
2 | Left | Right handed | 35 | 70 | 6 | Traumatic injury | 15.18 | 6
3 | Right | Right handed | 50 | 30 | 5 | Traumatic injury | 22.50 | 8
4 | Right | Right handed | 34 | 40 | 1 | Traumatic injury | 86.67 | 0
5 | Left | Left handed | 67 | 90 | 1 | Traumatic injury | 11.67 | 0
6 | Left | Right handed | 32 | 40 | 13 | Traumatic injury | 37.50 | 0
7 | Right | Right handed | 35 | 0 | 7 | Traumatic injury | 31.67 | 0
8 | Right | Right handed | 33 | 50 | 5 | Traumatic injury | 33.33 | 0
9 | Right | Right handed | 44 | 90 | 14 | Traumatic injury | 3.33 | 0
10 | Right | Right handed | 59 | 50 | 2 | Traumatic injury | 11.67 | 0
11 | Right | Right handed | 45 | 90 | 5 | Cancer | 12.50 | 0

Table 1.

Clinical characteristics of subjects with amputation. NinaPro DB3.

In order to build our own database, the subjects invited to participate in this stage were amputee subjects without neurological deficiencies. Invited subjects followed the population parameters used in the NINAPRO database [36, 37]. Ten male and female amputees ranging in age from 24 to 65 years participated in the experiments. The procedures were performed in accordance with the Declaration of Helsinki and were approved by the ethics committee of the Universidad del Tolima (approval number: N – 20,160,021). All subjects participated voluntarily, providing their written informed consent before the experimental procedures. Any amputee who has experience in the use of hand prostheses will be included in the study, registering in advance their experience in the use of passive or myoelectric prostheses.

Aspects and demographic data to be recorded for each subject: age, sex, education level and, related to the amputation, the dominant hand, amputated side, year of amputation, cause, type of prosthesis used (if any) and level of amputation.

Inclusion criteria: adults in an age range of 20–65 years, no history of neurological and/or psychiatric diseases, voluntary participation in the study and acceptance by the medical staff. Only the transradial level of amputation will be considered; amputations above the elbow or beyond the wrist will not be admitted to the study. Any non-compliance with these parameters becomes a criterion for exclusion from the study. Table 2 shows the characteristics of the amputee patients who participated in the trials.

Patient | Age (years) | Gender | Remained forearm length (below elbow) | Amputated hand | Years since amputation | DASH score
P01 | 36 | M | 10 cm | Dominant hand | 1 | 45
P02 | 51 | M | Wrist disarticulation | Dominant hand | 30 | 19
P03 | 62 | M | Wrist disarticulation | Dominant hand | 36 | 39.16
P04 | 26 | M | 10 cm | Dominant hand | 12 | 20
P05 | 60 | M | Wrist disarticulation | Not dominant hand | 41 | 26.6
P06 | 55 | M | Wrist disarticulation | Dominant hand | 5 | 16.66
P07 | 28 | M | 10 cm | Dominant hand | 9 | 24.16
P08 | 48 | F | Wrist disarticulation | Dominant hand | 22 | 20.83
P09 | 65 | M | Wrist disarticulation | Dominant hand | 29 | 42.5
P10 | 35 | M | 10 cm | Dominant hand | 2 | 47.5

Table 2.

Clinical characteristics of subjects with amputation.

2.2 Sensor EMG MYO armband

Data were recorded using the commercial MYO armband (MYO). The MYO is a portable EMG sensor developed by Thalmic Labs and has eight dry electrode channels with a sampling rate of 200 Hz. It is a low-cost, consumer-grade device with a nine-axis inertial measurement unit (IMU) [22] that connects wirelessly with the computer via Bluetooth. It is a non-invasive device, easier to use than conventional electrodes [38, 39]. Despite the low sampling frequency, its performance has been shown to be similar to that of full-band EMG recordings using conventional electrodes [22, 40], and the technology has been used in many studies [29, 35, 38] (Figure 2).

Figure 2.

Signal acquisition through the application developed in Matlab 2020b (source: the authors).

Six movements were identified with the MYO sensor to achieve grip improvement: power grip (AP), palm inward (PI), palm outward (PO), open hand (MO), pincer grip (AT) and rest (RE) (Figure 3).

Figure 3.

Gestures to identify with the MYO device.

sEMG recording: prior to carrying out the tests, the patients will be instructed on the experimental procedure and, as a first step, the sensor operation will be calibrated for both limbs. To make the recordings for each gesture, the subjects will be seated comfortably in front of the computer with both elbows flexed at 90 degrees and will be instructed to perform the gestures shown on the monitor; amputee patients perform them with the contralateral limb and with the amputated limb (Figures 4 and 5).

Figure 4.

(a) Amputee patient in front of the computer with a graphic signal to perform the movements. (b) MYO device arrangement.

Figure 5.

User interface that indicates the imaginary movement to be performed and includes completion and rest time.

The graphic interface provides the patient with the times for performing the tests and the rest periods (Figure 5). Amputee recordings were performed in repeated sessions over 1 week.

The procedure carried out to capture the myoelectric signals is as follows: for each grip or gesture of the hand, 200 samples are taken during an interval of 30 seconds. Transitions between each of the six proposed gestures last 1 minute, as recommended in [41]. sEMG signals were captured during several sessions and on different days of the week. The data of these myoelectric signals are stored in a dataset for later offline processing.

2.3 Signal pre-processing

The segmentation and overlapping methods used in this work improved training efficiency by increasing the number of training samples, following recent work such as [17, 20, 42].

2.4 Feature extraction

Each captured sEMG signal is subdivided into 200 ms windows. The signal captured by the MYO is sampled at a frequency of 200 Hz [11, 21]. In order to be analyzed, it is divided every 200 ms, leaving a total of 40 samples in each sub-window. Each sub-window has a 50% overlap with the immediately previous window, which increases the number of samples and thus expands the database obtained. The extraction described here is applied to each of the MYO channels. The data obtained for each of the channels [19] are concatenated horizontally, producing a database with 10 features for each channel and a column with the information on the grasping gesture that is performed.
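As a minimal sketch of this segmentation step (assuming the eight-channel recording is stored as a NumPy array of shape (num_samples, 8); the function and variable names are illustrative, not the authors’ code), the 200 ms windows with 50% overlap can be generated as follows:

```python
import numpy as np

def segment_windows(emg, window_len=40, overlap=0.5):
    """Split a multichannel sEMG recording into overlapping windows.

    emg: array of shape (num_samples, num_channels), e.g. (N, 8) for the MYO.
    window_len: 40 samples = 200 ms at the 200 Hz MYO sampling rate.
    overlap: fraction of overlap between consecutive windows (0.5 = 50%).
    """
    step = int(window_len * (1 - overlap))           # 20-sample hop for 50% overlap
    windows = []
    for start in range(0, emg.shape[0] - window_len + 1, step):
        windows.append(emg[start:start + window_len, :])
    return np.stack(windows)                          # shape: (num_windows, 40, 8)

# Example with a synthetic 30 s recording (6000 samples, 8 channels)
emg = np.random.randn(6000, 8)
print(segment_windows(emg).shape)                     # (299, 40, 8)
```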

Different kinds of features are extracted by different researchers, such as mean absolute value (MAV), root mean square (RMS), autoregression coefficients (AC), variance (VAR), standard deviation (SD), zero crossings (CC), waveform length (LO), Willison amplitude (AW) and slope of the mean absolute value (PVAM). Features in the time domain were treated in [42]. These extracted features are used in the SVM and ANN classifiers, while the raw signals are used for the CNN classifier.
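A minimal sketch of a few of these time-domain features (MAV, RMS, variance, waveform length and zero crossings), computed per channel for one window, is shown below; the exact feature set, the threshold and the implementation details are assumptions for illustration only:

```python
import numpy as np

def time_domain_features(window, zc_threshold=0.0):
    """Compute a few common time-domain features per channel.

    window: array of shape (window_len, num_channels).
    Returns the features of all channels concatenated horizontally,
    as described in the text.
    """
    mav = np.mean(np.abs(window), axis=0)                       # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=0))                 # root mean square
    var = np.var(window, axis=0)                                # variance
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)        # waveform length
    signs = np.sign(window)
    zc = np.sum(np.abs(np.diff(signs, axis=0)) > zc_threshold, axis=0)  # zero crossings
    return np.concatenate([mav, rms, var, wl, zc])

window = np.random.randn(40, 8)                                 # one 200 ms window
print(time_domain_features(window).shape)                       # (40,) -> 5 features x 8 channels
```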

2.5 Classifiers

Artificial neural networks (ANN) are a nonlinear classifier that simulates brain information processing through a series of weighted nodes, called neurons. Neurons are organized in layers and interconnect with each other to create a network. ANNs use a nonlinear function of a linear combination of the inputs, where the coefficients of the linear combination are adaptive parameters. The basic model of an ANN can be described as a series of functional transformations. First, M linear combinations of the input variables x1, x2, …, xD are constructed in the form:

a_j = \sum_{i=1}^{D} w_{ji}^{(1)} x_i + w_{j0}^{(1)} \qquad (E1)

where j = 1, …, M, and the superscript (1) indicates that the corresponding parameters belong to the first layer of the network. The parameters w_{ji}^{(1)} are weights and the parameters w_{j0}^{(1)} are biases. The magnitude a_j is called the activation, and each activation is transformed using a nonlinear and differentiable activation function [22].
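For illustration only (not the authors’ implementation), Eq. (E1) followed by a nonlinear activation can be written in a few lines of NumPy; the tanh activation and the layer width M are arbitrary choices here:

```python
import numpy as np

D, M = 40, 16                        # input dimension and number of hidden units (arbitrary)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(M, D))         # weights w_ji^(1)
b1 = rng.normal(size=M)              # biases  w_j0^(1)

x = rng.normal(size=D)               # one feature vector x_1, ..., x_D
a = W1 @ x + b1                      # activations a_j from Eq. (E1)
z = np.tanh(a)                       # nonlinear, differentiable activation function
print(z.shape)                       # (16,)
```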

For the ANN classifier (Table 3), the chosen hyperparameters and the variations proposed for each of them are listed. Among the hyperparameters that were varied are the weight optimizer, the activation function, the maximum number of iterations and the learning rate type.

Parameters | Options
Optimizer | SGD, ADAM
Regularization | 0.0001–0.00001
Activation function | Tanh, ReLU, Logistic
Maximum number of iterations | 10, 30, 50, 100
Hidden layers | 3, 4, 5 and 6
Learning rate | Constant, Adaptive

Table 3.

Hyper-parameters. ANN.
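A minimal sketch of how the search space of Table 3 could be explored with scikit-learn’s MLPClassifier is shown below; the hidden-layer width of eight neurons follows the configuration reported in Section 3.1, while the data arrays are placeholders and the grid search itself is an assumption about tooling, not the authors’ exact procedure:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Placeholder data: one row per window (features concatenated over channels), labels 0-5
X = np.random.randn(500, 80)
y = np.random.randint(0, 6, size=500)

param_grid = {
    "solver": ["sgd", "adam"],                               # weight optimizer
    "alpha": [1e-4, 1e-5],                                   # L2 regularization
    "activation": ["tanh", "relu", "logistic"],              # activation function
    "max_iter": [10, 30, 50, 100],                           # maximum number of iterations
    "hidden_layer_sizes": [(8,) * n for n in (3, 4, 5, 6)],  # 3 to 6 hidden layers of 8 neurons
    "learning_rate": ["constant", "adaptive"],               # learning rate schedule (used by SGD)
}

search = GridSearchCV(MLPClassifier(random_state=0), param_grid, cv=3, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Note that this grid is exhaustive (every combination in Table 3 is fitted), so on real data it can take a while to run.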

2.6 Support vector machine

Support vector machines are a very powerful method used to solve classification problems and are highly efficient on multidimensional data sets. The method consists of defining a hyperplane or decision boundary that separates the samples into two groups, where those above the decision boundary are classified as positive and those below it are classified as negative. The fundamental objective is to maximize the margin M, or distance between the separating hyperplane and the neighboring samples called support vectors (Figure 6) [43].

Figure 6.

Optimal separation hyperplane, for linearly separable classes.

For the SVM classifier, the following kernels were used: linear, polynomial, Gaussian and sigmoid, as shown in Table 4. In the linear and Gaussian kernels, the gamma parameter was set to 0.5. In the polynomial and sigmoid kernels, the degree parameter was varied between 0.5 and 3, respectively. For all kernels, the regularization constant C was used with values of 0.1, 1, 10 and 100, as recommended in the literature [44, 45].

Parameters | Options
Kernel | Linear, Gaussian, Polynomial, Sigmoid
Gamma | 0.5
Degree | 0.5–3
C | 0.1, 1, 10 and 100

Table 4.

SVM classifier parameters.
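The kernel and regularization options of Table 4 map naturally onto scikit-learn’s SVC, as in the illustrative sketch below; the data are placeholders, and integer polynomial degrees are used here although the authors report varying the degree between 0.5 and 3:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X = np.random.randn(500, 80)                  # placeholder feature matrix
y = np.random.randint(0, 6, size=500)         # placeholder gesture labels

param_grid = [
    {"kernel": ["linear", "rbf"], "gamma": [0.5], "C": [0.1, 1, 10, 100]},
    {"kernel": ["poly", "sigmoid"], "degree": [1, 2, 3], "C": [0.1, 1, 10, 100]},
]

search = GridSearchCV(SVC(), param_grid, cv=3, scoring="accuracy", n_jobs=-1)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```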

2.7 Convolutional neural networks

A CNN is a deep learning algorithm able to take an input matrix of size M × N and assign weights and biases in parallel under the constraints of a predictive problem [46], resulting in specific features. A convolutional layer performs a dot product between two arrays, where one array is the input and the other, known as the kernel, contains the learnable parameters, producing an activation map, as shown below:

G[m, n] = (f * h)[m, n] = \sum_{j}^{M} \sum_{k}^{N} h[j, k] \, f[m - j, n - k] \qquad (E2)

Where:

The input matrix is f and the kernel is denoted as h.

m is the number of rows in the input matrix

n is the number of columns in the input matrix

j is the index for the offset in the rows of the input

k is the index for the offset in the columns of the input (Figure 7)

Figure 7.

Architecture of the convolutional neural network used.
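A direct, deliberately naive NumPy implementation of the discrete convolution in Eq. (E2) is sketched below (deep learning frameworks implement the same operation far more efficiently); the 8 × 40 input is only an assumed example window:

```python
import numpy as np

def conv2d(f, h):
    """Naive 2D convolution G[m, n] = sum_j sum_k h[j, k] * f[m - j, n - k].

    f: input matrix, h: kernel. Output has the 'full' convolution size.
    """
    M, N = f.shape
    J, K = h.shape
    G = np.zeros((M + J - 1, N + K - 1))
    for m in range(G.shape[0]):
        for n in range(G.shape[1]):
            for j in range(J):
                for k in range(K):
                    if 0 <= m - j < M and 0 <= n - k < N:
                        G[m, n] += h[j, k] * f[m - j, n - k]
    return G

f = np.random.randn(8, 40)       # e.g. one window: 8 channels x 40 samples
h = np.random.randn(3, 3)        # a 3 x 3 kernel of learnable weights
print(conv2d(f, h).shape)        # (10, 42)
```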

Table 5 shows the parameters used in the CNN selected for the experiment carried out. The hyperparameters varied were the batch size and the number of epochs that yielded the best results. The batch size was tested with values of 64, 128 and 256. The CNN architecture has four hidden convolutional layers and one output layer (Figure 8).

Figure 8.

Proposed model of CNN.

The first two hidden layers consist of 32 filters of size 8 × 1 and 3 × 3, respectively. The third consists of 64 filters of size 5 × 5. The fourth layer contains 64 filters of size 5 × 1, while the last one is a convolutional layer with six possible outputs and 1 × 1 filters, corresponding to the six gestures to classify. Each convolutional layer is followed by rectified linear units (ReLU) and a dropout layer with a probability of 0.15 of setting the output of a hidden unit to zero. In addition, a subsampling layer performs max pooling over a 3 × 3 window after the second and third layers. Finally, the last convolutional layer is followed by a softmax activation function (Table 5).
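A hedged Keras sketch of this architecture is given below. The input window shape (8 channels × 40 samples), the “same” padding and the global average pooling before the softmax are assumptions made so the sketch runs end to end; the loss and optimizer follow Table 5:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(8, 40, 1), n_classes=6, drop=0.15):
    """Sketch of the described CNN: four hidden conv layers plus a 1x1 conv output."""
    m = models.Sequential([
        layers.Input(shape=input_shape),                        # 8 channels x 40 samples
        layers.Conv2D(32, (8, 1), padding="same", activation="relu"),
        layers.Dropout(drop),
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((3, 3), padding="same"),            # pooling after the 2nd layer
        layers.Dropout(drop),
        layers.Conv2D(64, (5, 5), padding="same", activation="relu"),
        layers.MaxPooling2D((3, 3), padding="same"),            # pooling after the 3rd layer
        layers.Dropout(drop),
        layers.Conv2D(64, (5, 1), padding="same", activation="relu"),
        layers.Dropout(drop),
        layers.Conv2D(n_classes, (1, 1)),                       # six 1x1 output filters
        layers.GlobalAveragePooling2D(),                        # collapse to one score per class
        layers.Activation("softmax"),
    ])
    m.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])
    return m

model = build_cnn()
model.summary()
```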

Parameters | Selected choice
Loss function | Categorical cross entropy
Optimizer | ADAM (lr = 0.001)
Batch size | 64, 128, 256
Epochs | 100, 400, 500, 1000
Validation rate | 30%

Table 5.

Hyperparameter tuning.

2.8 Metrics

The confusion matrix is used to calculate many common classification metrics. The diagonal represents correct predictions and the other positions of the matrix indicate incorrect predictions. If the sample is positive and is classified as positive, i.e., a correctly classified positive sample, it is considered a true positive (TP); if it is classified as negative, it is considered a false negative (FN). If the sample is negative and is classified as negative, it is considered a true negative (TN); if it is classified as positive, it is counted as a false positive (FP), a false alarm. The most common metrics are sensitivity (Se) and specificity (Sp), which indicate the ability of the CNN to identify hand gestures. Accuracy (ACC) is used to assess overall detection performance and precision (Pr) is used to measure model quality in posture classification tasks. Likewise, the F1 score (F1) is a measure of the accuracy of a test; it is the harmonic mean of the precision and sensitivity of the classification. It has a maximum score of 1 (perfect precision and sensitivity) and a minimum of 0. Overall, it is a measure of the accuracy and robustness of the model and indicates how much of the relevant data the model is able to identify. In this work, five commonly used evaluation metrics were used to evaluate the performance of the CNN: accuracy, precision, sensitivity, specificity and F1:

\text{Accuracy: } ACC = \frac{TP + TN}{TP + TN + FP + FN} \qquad (E3)

\text{Precision: } Pr = \frac{TP}{TP + FP} \qquad (E4)

\text{Sensitivity: } Se = \frac{TP}{TP + FN} \qquad (E5)

\text{Specificity: } Sp = \frac{TN}{TN + FP} \qquad (E6)

\text{F1 score: } F1 = \frac{2TP}{2TP + FP + FN} \qquad (E7)
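These per-class quantities can be read directly off the confusion matrix; the sketch below (illustrative only, with placeholder labels) treats each gesture in turn as the positive class and applies Eqs. (E3)–(E7):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_metrics(y_true, y_pred, n_classes=6):
    """Compute accuracy, precision, sensitivity, specificity and F1 per class (Eqs. E3-E7)."""
    cm = confusion_matrix(y_true, y_pred, labels=range(n_classes))
    results = {}
    for c in range(n_classes):
        tp = cm[c, c]
        fn = cm[c, :].sum() - tp            # class-c samples predicted as another class
        fp = cm[:, c].sum() - tp            # other-class samples predicted as class c
        tn = cm.sum() - tp - fn - fp
        results[c] = {
            "accuracy":    (tp + tn) / (tp + tn + fp + fn),
            "precision":   tp / (tp + fp) if tp + fp else 0.0,
            "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
            "specificity": tn / (tn + fp) if tn + fp else 0.0,
            "f1":          2 * tp / (2 * tp + fp + fn) if 2 * tp + fp + fn else 0.0,
        }
    return results

y_true = np.random.randint(0, 6, 200)     # placeholder labels
y_pred = np.random.randint(0, 6, 200)     # placeholder predictions
print(per_class_metrics(y_true, y_pred)[0])
```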

3. Results

The analysis in amputee patients is conditioned by the great variability of the sEMG signal. Figure 9 shows this behavior of the sEMG signals for the power grip gesture in amputee subjects. It is observed that the data are very scattered and do not have the same symmetry, mean or standard deviation across patients.

Figure 9.

Box and whisker plot of the sEMG for the power grip gesture in amputees.

3.1 ANN classifier

The results obtained with this classifier for the specific patients are shown in Table 6. Our own database has been used as the test data set with the following configuration: optimizer: ADAM, activation function: ReLU, L2 regularization with a constant of 0.0001, a constant learning rate of 0.001, three hidden layers and all layers with eight neurons.

ANN
Patients | Accuracy | Loss
P01 | 85.71 | 0.1000
P02 | 80.22 | 0.1700
P03 | 56.04 | 0.0280
P04 | 82.00 | 0.8600
P05 | 76.92 | 0.4700
P06 | 79.12 | 0.1886
P07 | 85.71 | 0.1200
P08 | 72.53 | 0.1920
P09 | 67.03 | 0.2111
P10 | 81.32 | 0.1239

Table 6.

Accuracy results with the ANN classifier on the test data set at 100 epochs.

On both data sets, that is, the test and trial data sets, the ANN classifier showed an increase in the accuracy metric, especially when increasing the number of training epochs, reaching an average of 76.66% at 100 epochs, with a minimum of 56.04% and a maximum of 85.71%. The average accuracy was consistent with the results reported by other authors for this classifier [35, 44, 45, 47].

In Table 6, superior results can also be seen for subjects P01, P02, P04, P05, P07 and P10, which is outstanding, since their accuracy was above 80% with this classifier, with a standard deviation of 9.23. This low standard deviation indicates that most of the accuracies obtained tend to be clustered close to their mean.

3.2 SVM classifier

As mentioned above, different kernels were used: linear, polynomial, Gaussian and sigmoid. Likewise, the regularization constant C was varied with values of 0.1, 1, 10 and 100. On both data sets with the SVM classifier, the best results in the accuracy metric were obtained with the RBF and sigmoid kernels, reaching values of up to 80%. Table 7 shows the results obtained with all the kernels.

Patient | Kernel | C = 0.1: Acc / F1 / Se / Pr | C = 1: Acc / F1 / Se / Pr | C = 10: Acc / F1 / Se / Pr | C = 100: Acc / F1 / Se / Pr
P01 | Polynomial | 47.5 / 41.25 / 51.39 / 48.23 | 47.5 / 51.62 / 58.06 / 51.64 | 25.00 / 23.94 / 28.94 / 29.12 | 25.00 / 23.94 / 28.94 / 69.12
P01 | Linear | 62.5 / 59.82 / 64.63 / 60.89 | 62.5 / 61.17 / 65.32 / 62.74 | 25.00 / 17.29 / 30.00 / 25.00 | 25.00 / 20.21 / 29.86 / 27.50
P01 | RBF | 12.5 / 2.78 / 16.67 / 2.08 | 22.5 / 19.36 / 27.08 / 52.38 | 65.00 / 62.66 / 67.55 / 66.44 | 65.00 / 59.82 / 64.63 / 60.89
P01 | Sigmoid | 12.5 / 2.78 / 16.67 / 12.5 | 12.5 / 18.21 / 33.33 / 27.5 | 60.00 / 59.07 / 63.24 / 61.55 | 60.00 / 62.39 / 65.09 / 64.60
P02 | Polynomial | 20.00 / 9.20 / 25 / 7.78 | 20 / 30.97 / 32.16 / 30.86 | 20.00 / 15.87 / 23.21 / 43.98 | 20.00 / 15.87 / 23.21 / 43.98
P02 | Linear | 52.5 / 52.78 / 52.46 / 59.76 | 52.5 / 49.77 / 51.03 / 57.13 | 2.50 / 1.74 / 3.33 / 2.50 | 2.50 / 2.15 / 6.11 / 5.00
P02 | RBF | 12.5 / 2.78 / 16.67 / 2.08 | 10 / 2.7 / 13.33 / 2.08 | 42.50 / 35.85 / 47.16 / 38.57 | 42.50 / 36.37 / 47.16 / 47.62
P02 | Sigmoid | 12.5 / 2.78 / 16.67 / 12.5 | 12.5 / 2.78 / 16.67 / 12.5 | 35.00 / 41.42 / 43.27 / 42.26 | 35.00 / 30.44 / 38.27 / 31.15
P03 | Polynomial | 32.50 / 24.25 / 36.30 / 33.65 | 32.5 / 37.23 / 42.22 / 34.72 | 20.00 / 13.21 / 25.00 / 19.12 | 20.00 / 13.21 / 25.00 / 19.12
P03 | Linear | 55 / 49.47 / 55.6 / 46.89 | 55 / 49.47 / 55.6 / 46.89 | 5.00 / 3.13 / 6.67 / 5.00 | 5.00 / 10.3 / 10.2 / 11.5
P03 | RBF | 12.50 / 2.78 / 16.67 / 2.08 | 20 / 13.21 / 25 / 19.12 | 55.00 / 50.33 / 55.60 / 48.28 | 55.00 / 47.09 / 49.49 / 44.44
P03 | Sigmoid | 12.50 / 2.78 / 16.67 / 12.5 | 12.5 / 2.56 / 13.33 / 10 | 50.00 / 44.78 / 49.63 / 45.69 | 50.00 / 46.61 / 52.27 / 43.22
P04 | Polynomial | 47.5 / 42.18 / 47.22 / 39.58 | 47.5 / 64.15 / 65.65 / 65.93 | 20.00 / 15.99 / 24.31 / 52.25 | 20.00 / 15.99 / 24.31 / 52.25
P04 | Linear | 65 / 64.36 / 64.07 / 69.83 | 65 / 64.36 / 64.07 / 68.83 | 2.50 / 3.91 / 2.78 / 2.50 | 2.50 / 2.80 / 2.70 / 2.30
P04 | RBF | 12.5 / 2.79 / 16.67 / 2.08 | 17.5 / 11.48 / 22.22 / 35.53 | 65.00 / 64.15 / 64.07 / 68.75 | 67.50 / 67.24 / 68.10 / 70.24
P04 | Sigmoid | 12.5 / 2.78 / 16.67 / 12.5 | 12.5 / 2.78 / 16.67 / 12.5 | 67.50 / 66.82 / 67.41 / 71.53 | 65.00 / 68.25 / 66.85 / 70.83
P05 | Polynomial | 17.5 / 7.06 / 22.22 / 5.65 | 17.5 / 27.37 / 38.33 / 28.84 | 20.00 / 14.75 / 25.00 / 33.16 | 20.00 / 14.75 / 25.00 / 33.16
P05 | Linear | 52.5 / 50.76 / 56.62 / 53.89 | 52.5 / 53.28 / 59.95 / 56.83 | 12.50 / 3.29 / 16.67 / 12.50 | 12.50 / 3.97 / 5.56 / 5.00
P05 | RBF | 12.5 / 2.78 / 16.67 / 2.08 | 20 / 14.66 / 25 / 33.08 | 55.00 / 53.74 / 59.95 / 57.45 | 55.00 / 50.80 / 57.87 / 54.99
P05 | Sigmoid | 12.5 / 2.78 / 16.67 / 12.5 | 12.5 / 2.78 / 16.67 / 12.5 | 40.00 / 37.70 / 44.35 / 40.08 | 40.00 / 51.51 / 56.62 / 55.40
P06 | Polynomial | 32.5 / 25.44 / 36.3 / 36.31 | 32.5 / 41.29 / 50.37 / 40.1 | 25.00 / 19.41 / 30.56 / 35.71 | 25.00 / 19.41 / 30.56 / 35.71
P06 | Linear | 65 / 65.85 / 65.32 / 74.63 | 65 / 65.85 / 65.32 / 74.63 | 2.50 / 4.17 / 3.33 / 2.50 | 2.50 / 3.10 / 4.20 / 3.70
P06 | RBF | 12.5 / 2.78 / 16.67 / 2.08 | 17.5 / 10.41 / 22.22 / 18.86 | 57.50 / 51.28 / 60.56 / 49.81 | 57.50 / 57.21 / 61.16 / 62.82
P06 | Sigmoid | 12.5 / 2.78 / 16.67 / 12.5 | 12.5 / 2.78 / 16.67 / 12.5 | 65.00 / 65.85 / 65.32 / 74.63 | 65.00 / 68.26 / 68.10 / 75.32
P07 | Polynomial | 75 / 70.18 / 71.51 / 63.07 | 75 / 79.18 / 76.51 / 79.54 | 27.50 / 31.06 / 29.64 / 54.44 | 27.50 / 31.06 / 29.64 / 54.44
P07 | Linear | 80 / 79.76 / 77.76 / 79.4 | 80 / 79.76 / 77.76 / 79.4 | 15.00 / 8.94 / 15.48 / 15.00 | 15.00 / 9.37 / 15.48 / 15.00
P07 | RBF | 12.5 / 2.78 / 16.67 / 2.08 | 17.5 / 14.5 / 20.87 / 35.19 | 80.00 / 65.77 / 64.56 / 66.05 | 80.00 / 66.04 / 63.87 / 69.44
P07 | Sigmoid | 12.5 / 2.78 / 16.67 / 12.5 | 12.5 / 19.36 / 30.95 / 27.5 | 62.50 / 63.13 / 60.40 / 68.06 | 62.50 / 63.35 / 60.54 / 63.33
P08 | Polynomial | 45 / 35.28 / 51.67 / 34.47 | 45 / 44.13 / 55.37 / 52.15 | 25.00 / 20.71 / 30.56 / 35.78 | 25.00 / 20.71 / 30.56 / 35.78
P08 | Linear | 45 / 46.43 / 46.71 / 50.89 | 45 / 49.32 / 49.49 / 55.51 | 17.50 / 10.06 / 21.67 / 17.50 | 17.50 / 9.47 / 18.33 / 15.00
P08 | RBF | 12.50 / 2.78 / 16.67 / 2.08 | 17.50 / 10.48 / 22.22 / 18.92 | 57.50 / 52.28 / 59.44 / 50.67 | 57.50 / 49.79 / 58.15 / 48.96
P08 | Sigmoid | 12.50 / 2.78 / 16.67 / 12.50 | 12.50 / 12.98 / 25 / 20 | 47.50 / 45.76 / 49.49 / 49.74 | 47.50 / 40.89 / 47.41 / 39.98
P09 | Polynomial | 40 / 38.25 / 43.33 / 48.57 | 40 / 41.75 / 42.78 / 44.7 | 17.50 / 11.19 / 22.22 / 25.00 | 17.50 / 11.19 / 22.22 / 25.00
P09 | Linear | 45 / 42.38 / 45 / 40.37 | 45 / 44 / 45 / 43.94 | 10.01 / 9.27 / 7.35 / 8.12 | 12.0 / 9.27 / 8.48 / 12.86
P09 | RBF | 12.5 / 2.78 / 16.67 / 2.08 | 15 / 6.81 / 19.44 / 8.24 | 45.00 / 42.72 / 45.00 / 40.42 | 45.00 / 47.82 / 53.33 / 46.52
P09 | Sigmoid | 12.5 / 2.78 / 16.67 / 12.50 | 12.50 / 3.01 / 10.00 / 7.50 | 47.50 / 43.71 / 47.22 / 41.94 | 47.50 / 38.56 / 38.89 / 36.55
P10 | Polynomial | 35 / 26.76 / 40.56 / 30.11 | 35 / 77.21 / 77.59 / 78.02 | 22.50 / 19.57 / 26.16 / 52.38 | 22.50 / 19.57 / 26.16 / 52.38
P10 | Linear | 72.5 / 72.9 / 71.25 / 72.02 | 72.5 / 73.07 / 71.25 / 72.32 | 12.50 / 7.86 / 14.44 / 12.50 | 12.50 / 9.79 / 14.44 / 12.50
P10 | RBF | 12.5 / 2.78 / 16.67 / 2.08 | 20 / 15.05 / 24.07 / 35.65 | 75.00 / 75.21 / 74.03 / 74.40 | 75.00 / 70.70 / 69.40 / 70.06
P10 | Sigmoid | 12.5 / 2.78 / 16.67 / 12.5 | 12.5 / 12.44 / 21.67 / 17.5 | 75.00 / 75.05 / 73.10 / 73.21 | 75.00 / 69.68 / 69.40 / 70.00

Table 7.

Results of the metrics with the SVM classifier in the specific patients.

3.3 CNN Classifier

Table 8 shows the comparative results of the CNN classifier evaluated in different patients using regularization techniques such as early stopping, dropout and batch normalization. Early stopping is a technique that applies certain rules to determine when it is time to stop training, so that the model neither overfits nor underfits the input data. The average time required to train each convolutional neural network was 1 hour and 25 minutes. The average time needed to test the network was 15.2 s using an NVIDIA Titan-V GPU with 12 GB of HBM2 RAM and 640 tensor cores.
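A minimal sketch of how early stopping could be combined with such a training run using Keras callbacks is shown below; it assumes the build_cnn function sketched in Section 2.7 is available, the training arrays are placeholders, and the patience value is an arbitrary choice:

```python
import numpy as np
import tensorflow as tf

# Placeholder training data: windows of 8 channels x 40 samples and one-hot gesture labels
X = np.random.randn(1000, 8, 40, 1).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 6, 1000), num_classes=6)

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=20, restore_best_weights=True)  # stop before overfitting

model = build_cnn()                     # model sketched in Section 2.7 (assumed available)
history = model.fit(X, y,
                    batch_size=128,        # one of the batch sizes in Table 5
                    epochs=1000,           # upper bound; early stopping may end sooner
                    validation_split=0.3,  # 30% validation rate from Table 5
                    callbacks=[early_stop])
```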

CNN — Accuracy (%) by number of training epochs
Patients | 100 | 200 | 300 | 400 | 500 | 600 | 700 | 800 | 900 | 1000
P01 | 61.53 | 64.87 | 67.38 | 69.26 | 71.61 | 73.86 | 76.08 | 78.11 | 80.26 | 80.51
P02 | 54.15 | 58.23 | 60.16 | 60.80 | 63.65 | 67.76 | 69.57 | 72.98 | 76.13 | 76.52
P03 | 55.32 | 60.41 | 60.94 | 62.84 | 64.54 | 68.71 | 70.97 | 73.64 | 75.95 | 76.34
P04 | 67.28 | 70.46 | 72.20 | 72.91 | 74.09 | 76.24 | 79.39 | 81.54 | 83.12 | 83.69
P05 | 60.07 | 62.99 | 65.62 | 67.08 | 68.38 | 71.64 | 73.91 | 76.28 | 77.95 | 78.58
P06 | 55.53 | 59.51 | 62.64 | 62.51 | 67.37 | 69.61 | 71.89 | 74.32 | 76.54 | 78.85
P07 | 70.44 | 73.25 | 74.61 | 76.54 | 77.12 | 78.88 | 80.65 | 82.51 | 84.62 | 86.63
P08 | 56.51 | 60.80 | 63.30 | 64.81 | 65.87 | 67.97 | 71.11 | 74.25 | 77.39 | 80.53
P09 | 56.72 | 63.53 | 63.20 | 66.44 | 69.83 | 71.37 | 73.86 | 76.38 | 78.54 | 81.20
P10 | 62.04 | 67.71 | 70.01 | 71.39 | 73.27 | 74.85 | 76.58 | 78.31 | 80.14 | 81.77

Table 8.

Comparative results of the CNN classifier of the different patients at different epochs.

Table 9 shows a summary of the different classifiers by means of the accuracy metric in the different patients. In patients P01 and P02, the best classifier is the ANN, although the accuracy shown by the CNN is acceptable. These patients have DASH indices of 45 and 19, respectively. They also present different levels of amputation: one at 10 cm from the elbow and the other at the level of the wrist. Likewise, they present amputation times of 1 and 30 years, correspondingly. These clinical factors affect the performance of the classifiers.

Patients | SVM (Accuracy %) | ANN (Accuracy %) | CNN (Accuracy %)
P01 | 65.00 | 85.71 | 80.51
P02 | 52.20 | 80.22 | 76.52
P03 | 55.00 | 56.04 | 76.34
P04 | 67.50 | 82.00 | 83.69
P05 | 55.00 | 76.92 | 78.58
P06 | 65.00 | 79.12 | 78.85
P07 | 80.00 | 90.11 | 86.63
P08 | 57.50 | 84.62 | 80.53
P09 | 47.50 | 67.03 | 81.20
P10 | 75.00 | 81.32 | 81.77
Average | 61.97 | 78.30 | 80.46

Table 9.

Accuracy metric comparison between all classifiers. The values in bold represent the highest accuracy values recorded by each patient in the classifiers.


4. Discussion

From the results obtained, the following points are analyzed. Figure 10 presents the confusion matrix of each of the classifiers (SVM, ANN and CNN) corresponding to patient P09. It is observed that both the SVM and the ANN show a low number of hits for the different gestures in this patient, whose cause of amputation is due to congenital factors that further affect the variability of the signal. Even though only one case is analyzed in this work, this type of behavior has been reported by other research works in patients of this type.

Figure 10.

Confusion matrices of the different classifiers for patient P09: (a) SVM, (b) ANN, (c) CNN, using the accuracy metric on the data corresponding to training (session two).

Once again, the performance of the SVM and the ANN is significantly affected. The results of the present work show a significant accuracy rate for the classification of various classes in amputee subjects in comparison with other studies carried out (Table 10).

Methods | Accuracy | Authors
AtzoriNet, CNN classifier, healthy subjects and amputees | 70.48 ± 1.52% | Tsinganos et al., 2018 [39]
Time domain, regressive models, Bayesian network, ANN, CNN, AD, SVM, healthy subjects | 80.4 ± 2.6% | Ramírez-Martínez et al., 2019 [40]
CNN, multiclass classifier, input TWC, amputee subjects | 68.98 ± 29.33% | Côté-Allard et al., 2019 [22]
WeiNet, CNN classifier, NinaPro dataset, healthy subjects | 99.1% | Wei et al., 2020 [27]
AD classifier, ACP, MD, multiclass classifier, amputee patients | 77.3 ± 17.5% | Rabin et al., 2020 [38]
SVM classifier, LDA and TWD, healthy subjects | 94.73% | Chen et al., 2020 [48]
CNN classifier, TWD, healthy subjects | 98.82% | Tsinganos et al., 2020 [49]
Multiclass classifiers, SVM, ANN, CNN, amputee patients | 80.46% | This research

Table 10.

Studies conducted using CNN as an EMG-based hand prosthesis movement classifier in healthy subjects and amputated subjects.

Advertisement

5. Conclusions

Over the past decade, deep learning and convolutional neural networks have revolutionized several fields of machine learning, including speech recognition and computer vision. Their use is therefore promising for obtaining better classification indexes for sEMG signals if their great variability is considered together with the clinical variables of the amputation, all of which would contribute to closing the gap between the prosthesis market (which requires fast and robust control methods) and the results of recent scientific research in disability support technologies.

The protocol for obtaining sEMG measurements in amputee patients was applied, as well as the extraction and classification of the signal, all of which is consistent with the proposal for the integrated design of the prosthesis. A database of 10 amputee patients covering the 6 defined hand gestures was constructed. The data are publicly available in the repository of the Huila-Corhuila University Corporation (CORHUILA).

The classification accuracy obtained with the CNN using the proposed architecture is 80.46%, but the most significant result is its ability to obtain higher performance in the classification across subjects in relation to parameters such as the length of the remaining limb, years since amputation or disability index, compared with the results obtained by conventional classifiers such as the support vector machine and artificial neural networks.

References

  1. Fukaya N, Asfour T, Dillmann R, Toyama S. Development of a five-finger dexterous hand without feedback control: The TUAT/Karlsruhe humanoid hand. In: Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on. IEEE. November 2013. pp. 4533-4540
  2. Diftler MA, Mehling JS, Abdallah ME, Radford NA, Bridgwater LB, Sanders AM, et al. Robonaut 2-the first humanoid robot in space. In: Robotics and Automation (ICRA), 2011 IEEE International Conference on 2011, May. IEEE. pp. 2178-2183
  3. Chen Z, Lii NY, Wimböck T, Fan S, Liu H. Experimental evaluation of Cartesian and joint impedance control with adaptive friction compensation for the dexterous robot hand DLR-HIT II. International Journal of Humanoid Robotics. 2011;8(04):649-671
  4. Sun W, Kong J, Wang X, Liu H. Innovative design method of the metamorphic hand. International Journal of Advanced Robotic Systems. 2018;15(1):1729881417754154
  5. Available from: http://es.bebionic.com/ [May 1, 2018]
  6. Azorin José M, et al. La Interacción de Personas con Discapacidad con el Computador: Experiencias y Posibilidades en Iberoamérica. Programa Iberoamericano de Ciencia y Tecnología para el Desarrollo (CYTED). 2013. ISBN-10: 84-15413-22-X
  7. Song Y, Mao J, Zhang Z, Huang H, Yuan W, Chen Y. A novel multi-objective shielding optimization method: DNN-PCA-NSGA-II. Annals of Nuclear Energy. 2021;161:108461
  8. Al-Fawa'reh M, Al-Fayoumi M, Nashwan S, Fraihat S. Cyber threat intelligence using PCA-DNN model to detect abnormal network behavior. Egyptian Informatics Journal. 2022;23(2):173-185
  9. Jiang N, Vujaklija I, Rehbaum H, Graimann B, Farina D. Is accurate mapping of EMG signals on kinematics needed for precise online myoelectric control? IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2014;22(3):549-558
  10. Roche AD. Clinical perspectives in upper limb prostheses: An update. Current Surgery Reports. 2019;7:5. DOI: 10.1007/s40137-019-0227-z
  11. Hahne JM, Markovic M, Farina D. User adaptation in myoelectric man-machine interfaces. Scientific Reports. 2017;7(1):4437
  12. Hargrove LJ, Lock BA, Simon AM. Pattern recognition control outperforms conventional myoelectric control in upper limb patients with targeted muscle reinnervation. In: Proceedings of 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE. 2013. pp. 1599–1602
  13. Wurth SM, Hargrove LJ. A real-time comparison between direct control, sequential pattern recognition control and simultaneous pattern recognition control using a Fitts’ law style assessment procedure. Journal of Neuroengineering and Rehabilitation. 2014;11(1):91
  14. Kuiken TA, Miller LA, Turner K, Hargrove LJ. A comparison of pattern recognition control and direct control of a multiple degree-of-freedom transradial prosthesis. IEEE Journal of Translational Engineering in Health and Medicine. 2016;4:1-8
  15. Chu JU, Moon YJ, Lee SK, Kim SK, Mun MS. A supervised feature-projection-based real-time EMG pattern recognition for multifunction myoelectric hand control. IEEE Transaction on Mechatronics. 2007;12(3):282-290
  16. Chu J-U, Moon I, Mun M-S. A real-time EMG pattern recognition system based on linear-nonlinear feature projection for a multifunction myoelectric hand. IEEE Transactions on Biomedical Engineering. 2006;53:2232-2239
  17. Guler NF, Kocer S. Classification of EMG signals using PCA and FFT. Journal of Medical Systems. 2005;29(3):29241-29250
  18. Smith RJ, Tenore F, Huberdeau D, Etienne-Cummings R, Thakor NV. Continuous decoding of finger position from surface EMG signals for the control of powered prostheses. In: Proceedings of 30th Annual International Conference of the IEEE EMBS. Vancouver, British Columbia. August 20–25, 2008
  19. Wang JZ, Wang RC, Li F, Jiang MW, Jin DW. EMG signal classification for myoelectric teleoperating a dexterous robot hand. In: Proceedings of 27th Annual International conference of the IEEE EMBS; Shanghai, China. January 17–18, 2006
  20. Kiatpanichagij K, Afzulpurkar N. Use of supervised discretization with PCA in wavelet packet transformation-based surface electromyogram classification. Biomedical Signal Processing and Control. 2009;4(2):127-138
  21. Hargrove L, Guangline L, Englehart K, Hudgins B. Principal Component’s analysis preprocessing for improved classification accuracies in pattern-recognition-based myoelectric control. IEEE Transactions on Biomedical Engineering. 2019;56(5):1407-1414
  22. Côté-Allard U, Fall CL, Drouin A, Campeau-Lecours A, Gosselin C, Glette K, et al. Deep learning for electromyographic hand gesture signal classification using transfer learning. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2019;27(4):760-771
  23. Amamcherla N, Turlapaty A, Gokaraju B. A machine learning system for classification of emg signals to assist exoskeleton performance. In 2018 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). IEEE. October, 2018. pp. 1-4
  24. Boostani R, Moradi MH. Evaluation of the forearm EMG signal features for the control of a prosthetic hand. Physiological Measurement. 2003;24:309-319
  25. Côté-Allard U, et al. Transfer learning for sEMG hand gestures recognition using convolutional neural networks. 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC) Banff Center, Banff, Canada. October 5-8, 2017
  26. Li C, et al. PCA and deep learning based myoelectric grasping control of a prosthetic hand. Biomedical Engineering Online. 2018;17:107. DOI: 10.1186/s12938-018-0539-8
  27. Wei W, Wong Y, Du Y, Hu Y, Kankanhalli M, Geng W. A multi-stream convolutional neural network for sEMG-based gesture recognition in muscle-computer interface. Pattern Recognition Letters. 2019;119:131-138
  28. Franti E, et al. Methods of acquisition and signal processing for myoelectric control of artificial arms. Romanian Journal of Information Science and Technology. 2012;15(2):91-105
  29. Cognolato M, Atzori M, Marchesin C, Marangon S, Faccio D, Tiengo C, et al. Multifunction control and evaluation of a 3D printed hand prosthesis with the Myo armband by hand amputees. BioRxiv. 2018:445-460
  30. Díaz-Amador R, Mendoza-Reyes MA, Cárdenas-Barreras JL. Reducing the effects of muscle fatigue on upper limb myoelectric control using adaptive LDA. Ingeniería Electrónica, Automática y Comunicaciones. 2019;40(2):10-21
  31. Campbell E, Phinyomark A, Al-Timemy AH, Khushaba RN, Petri G, Scheme E. Differences in EMG feature space between able-bodied and amputee subjects for myoelectric control. In 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER) IEEE. 2019. pp. 33-36
  32. Yang Z, Jiang D, Sun Y, Tao B, Tong X, Jiang G, et al. Dynamic gesture recognition using surface EMG signals based on multi-stream residual network. Frontiers in Bioengineering and Biotechnology. 2021;9
  33. Bao T, Zaidi SAR, Xie S, Yang P, Zhang ZQ. A CNN-LSTM hybrid model for wrist kinematics estimation using surface electromyography. IEEE Transactions on Instrumentation and Measurement. 2020;70:1-9
  34. Liu J, Chen W, Li M, Kang X. Continuous recognition of multifunctional finger and wrist movements in amputee subjects based on sEMG and accelerometry. The Open Biomedical Engineering Journal. 2016;10:101
  35. Atzori M, Gijsberts A, Castellini C, Caputo B, Hager AGM, Elsig S, et al. Electromyography data for non-invasive naturally-controlled robotic hand prostheses. Scientific Data. 2014;1(1):1-13
  36. Bird JJ, Kobylarz J, Faria DR, Ekárt A, Ribeiro EP. Cross-domain MLP and CNN transfer learning for biological signal processing: EEG and EMG. IEEE Access. 2020;8:54789-54801
  37. Akhlaghi N, Dhawan A, Khan AA, Mukherjee B, Diao G, Truong C, et al. Sparsity analysis of a sonomyographic muscle–computer interface. IEEE Transactions on Biomedical Engineering. 2019;67(3):688-696
  38. Rabin N, Kahlon M, Malayev S, Ratnovsky A. Classification of human hand movements based on EMG signals using nonlinear dimensionality reduction and data fusion techniques. Expert Systems with Applications. 2020;149:113281
  39. Tsinganos P, Cornelis B, Cornelis J, Jansen B, Skodras A. Deep learning in EMG-based gesture recognition. In: PhyCS. 2018. pp. 107-114
  40. Ramírez-Martínez D, Alfaro-Ponce M, Pogrebnyak O, Aldape-Pérez M, Argüelles-Cruz AJ. Hand movement classification using burg reflection coefficients. Sensors. 2019;19(3):475
  41. Dirgantara GP, Basari B. Optimized circuit and control for prosthetic arm based on myoelectric pattern recognition via power spectral density analysis. In AIP Conference Proceedings. AIP Publishing LLC. 2019;2092(1):020013
  42. Benatti S, Milosevic B, Farella E, Gruppioni E, Benini L. A prosthetic hand body area controller based on efficient pattern recognition control strategies. Sensors. 2017;17(4):869
  43. Ortiz-Catalan M, Rouhani F, Branemark R, Hakansson B. Offline accuracy: A potentially misleading metric in myoelectric pattern recognition for prosthetic control. In: Proceedings of 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE. 2015. pp. 1140–1143
  44. Wang S, Chen B. Split-stack 2D-CNN for hand gestures recognition based on surface EMG decoding. In: 2020 Chinese Automation Congress (CAC). IEEE. November 2020. pp. 7084-7088
  45. Côté-Allard U, Gagnon-Turcotte G, Laviolette F, Gosselin B. A low-cost, wireless, 3-D-printed custom armband for sEMG hand gesture recognition. Sensors. 2019;19(12):2811
  46. Hassan HF, Abou-Loukh SJ, Ibraheem IK. Teleoperated robotic arm movement using electromyography signal with wearable Myo armband. Journal of King Saud University-Engineering Sciences. 2020;32(6):378-387
  47. Ozdemir MA, Kisa DH, Guren O, Onan A, Akan A. EMG based hand gesture recognition using deep learning. In: 2020 Medical Technologies Congress (TIPTEKNO). IEEE. 2020, November. pp. 1-4
  48. Chen L, Fu J, Wu Y, Li H, Zheng B. Hand gesture recognition using compact CNN via surface electromyography signals. Sensors. 2020;20(3):672
  49. Tsinganos P, Cornelis B, Cornelis J, Jansen B, Skodras A. Data augmentation of surface electromyography for hand gesture recognition. Sensors. 2020;20(17):4892
