Open access peer-reviewed chapter

Classifying and Predicting Respiratory Function Based on Gait Analysis

By Yu Sheng Chan, Wen Te Liu and Ching Te Chiu

Submitted: December 8th 2015 · Reviewed: April 25th 2016 · Published: July 21st 2016

DOI: 10.5772/63917


Abstract

Human walking behaviour expresses physiological information about the body, and gait analysis methods can be used to assess a person's condition. In addition, the respiratory parameters from a pulmonary spirometer are the standard for assessing a subject's condition. We therefore want to show the correlation between a gait analysis method and the respiratory parameters. We propose a vision sensor-based gait analysis method that requires no wearable sensors. The method extracts features such as D̄p, V̄p and γ_V and uses them in classification and prediction experiments to demonstrate the correlation. In our experiment, the subjects are divided into three levels depending on a respiratory index. We run classification and prediction experiments with the extracted features V̄p and γ_V. In the classification experiment, the accuracy is 75%. In the prediction experiment, the correlations when predicting forced expiratory volume in 1 s (FEV1) and forced vital capacity (FVC) are 0.69 and 0.67, respectively. There is therefore a correlation between the pulmonary spirometer and our method. The radar system is a tool that uses impulses to record the movement of the subjects' chests. Combining the radar system's features with ours improves the classification accuracy from 75 to 81%. In predicting FEV1/FVC, the correlation also improves from 25 to 42%. Cooperating with the radar system therefore improves the correlation.

Keywords

  • gait analysis
  • classification
  • prediction
  • pattern recognition
  • feature extraction

1. Introduction

Walking behaviour can express information about the human body, such as pathological symptoms. For example, Parkinson's disease patients are characterized by a special pace rhythm [1].

People increase their respiratory ventilation when they walk or exercise. However, people who suffer from chronic obstructive airway disease (COAD) cannot increase their respiratory ventilation quickly enough to sustain the exercise. Consequently, they change their behaviour, for example by walking more slowly, so that they can maintain their respiratory ventilation. We can perform gait analysis on COAD patients because, owing to their impaired respiratory function, their walking behaviour during exercise differs from that of normal people.

Chronic obstructive pulmonary disease (COPD) is one condition of COAD. Many chronic diseases affect daily life nowadays, and COPD is one of them. COPD is a chronic airway disease characterized by a progressive decline of breathing function [2]. One characteristic of COPD is a decrease in forced expiratory volume in 1 s (FEV1) caused by the obstructed airway [2]. Depending on disease severity, patients show different walking behaviours. Therefore, gait analysis can be used to judge a COAD patient's airway condition by observing walking behaviour. However, it is difficult to collect data from COAD patients without medical staff, and without clinical data we cannot verify the correctness of our gait analysis algorithm.

By cooperating with Shuang-Ho Hospital in New Taipei, Taiwan, we set up an experiment. We film the side view of the subjects while they perform a 6-min brisk walking test. With gait analysis, we can extract features from the walking behaviour such as pace distance and walking speed variation. Conventionally, obtaining such physiological information requires attaching sensors or markers to the subjects; our method requires no sensors at all.

In gait analysis, it is common to wear markers or sensors to record walking behaviour. In one experiment [1], the subjects had to wear a recorder on the ankle to record the stride interval. In another experiment [3], the subjects had to place a specially designed insole with 12 sensors into their own shoes.

However, markers and sensors have several drawbacks. Firstly, attaching them to the body is inconvenient and uncomfortable and may affect normal walking. Secondly, some sensors are heavy or hard for the elderly to use. Thirdly, some sensors produce electromagnetic interference that might affect or harm the body. Apart from the sensor problem, it is hard to tell from a single experiment whether a subject suffers from COAD; without a complete examination, the diagnosis cannot be made. Consequently, we decide to assess respiratory function instead. With a pulmonary spirometer, we can obtain the tested subjects' respiratory data; its parameters are currently the standard for assessing respiratory function.

2. Related work

2.1. Chronic obstructive pulmonary disease

COPD is one condition of COAD, and we introduce some studies of the disease here. Research by the World Health Organization (WHO) and the Global Initiative for Chronic Obstructive Lung Disease revealed that COPD is now the fourth leading cause of death in the United States. Worldwide, COPD can be considered the fifth cause of death [4]. In another study, COPD was considered the fourth leading cause of death in the world [5] and is projected to rank fifth by 2020 as a worldwide burden of disease [6].

Generally speaking, chronic disease is gradually rising because of the ageing population and changing habits. The treatment for COPD patients is pulmonary rehabilitation programmes, including after patients are discharged home [7]. Economic analyses have shown that over 70% of COPD-related health-care expenditures result from emergency room visits and hospital care for exacerbations; this translates to >$10 billion annually in the United States [8]. COPD patients have a lower physical activity level than healthy peers [9], and the reduced level of physical activity is also related to an increased risk [10]. Liao et al. [11] proposed a review describing wearable devices for measuring physical activity level in COPD patients. In [12], the authors evaluated a method for detecting the onset of an exacerbation in COPD patients; they used data collected through a pulse oximeter, providing an easy measurement route for a cohort of elderly people affected by COPD. The study [13] provided a portable system that offers an effective platform for satisfying clinical and patient needs in the early diagnosis of the patient's health status, aiming at effective management of the health of patients suffering from COPD.

2.2. Gait analysis

Gait analysis plays an important role in assessing human walking behaviour, and it aims to extract biomechanical information. The most evident lower-limb disease is Parkinson's disorder. In [14], the authors provided a feasible image-marker method to measure gait with little skin movement and then performed quantitative analysis to extract gait parameters. According to their analysis, the joint angle, rotation angle of the lower limbs, stride velocity and stride length show significant differences between Parkinson's disease patients and non-diseased subjects. The study [1] demonstrated that gait variability, in terms of statistical parameters of the stride interval such as STCγ, is increased in Parkinson's disorder. In addition to Parkinson's disease, Alzheimer's disease (AD) can also be detected by gait analysis. The study [15] presents an inertial-sensor-based wearable device and its associated stride detection algorithm to analyse gait information for patients with Alzheimer's disease.

The above methods all use markers or sensors to obtain points of interest before performing gait analysis. With the advances in smartphones, Jiang et al. [16] used a smartphone with an accelerometer and a gyroscope to collect human walking gait data in daily life. However, one smartphone records data at only one location; to acquire data at many points of interest, testers would need to strap many smartphones to the body, which would be inconvenient for walking.

In this chapter, we propose a gait analysis algorithm that needs no wearable marker or sensor. In addition, we collect clinical COAD and control data to verify the correlation between the pulmonary spirometer and our gait analysis method.

3. Vision sensor-based gait analysis

To avoid wearing sensors on the tested subjects, we propose a vision sensor-based gait analysis method. This method is composed of four parts: input video frame decomposition, pre-processing, feature extraction and gait analysis. The input video shows the side view of the subjects while they perform a 6-min brisk walking test. The input video is then decomposed into individual frames for further processing.

In the pre-processing part, three components extract the silhouette of the tested subject: background subtraction, shadow removal and connected component labelling (CCL). Background subtraction subtracts the background image from the current video frame to capture the moving object. Because the human shadow cannot be removed by background subtraction, we adopt Jamie and Richard's method [17] to solve the shadow problem. Some noise still remains afterwards, so we use connected component labelling to reduce the noise and keep the largest object.

In the feature extraction part, two steps obtain the desired features: segmentation and feature extraction. In the segmentation step, we first find the centre of gravity (COG) and use the COG point to perform body segmentation and extract the subject's legs. We can then obtain features such as pace distance and pace velocity in the feature extraction step.

In the gait analysis part, we divide the subjects into two groups, Bad and Good, according to the proposed respiratory index formula. We use a support vector machine (SVM) for classification and an adaptive network-based fuzzy inference system (ANFIS) for prediction.

3.1. Pre-processing

3.1.1. Background subtraction

We take the first frame of the input video as the background image. The background subtraction method is shown in Eq. (1). The components x and y are the pixel location and t is the current frame number. I represents the RGB value of the pixel located at (x, y), B is the background image and F is the subtraction result. In our experiment, we set the threshold Th to 15.

F = |I(x, y, t) − B(x, y, 1)| > Th    (1)
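As a concrete illustration, Eq. (1) can be sketched in a few lines of Python with NumPy. This is a minimal sketch: the function name and the H × W × 3 array layout are our own choices, not part of the chapter.

```python
import numpy as np

def background_subtraction(frame, background, th=15):
    """Per-pixel foreground mask: |I(x, y, t) - B(x, y, 1)| > Th.

    frame, background: H x W x 3 uint8 RGB arrays; th follows the
    chapter's threshold of 15.
    """
    # Cast to signed int so the absolute difference does not wrap around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    # A pixel is foreground if any RGB channel differs by more than Th.
    return (diff > th).any(axis=2)
```

Applying the mask frame by frame yields the raw moving-object silhouette that the shadow-removal step then cleans up.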

3.1.2. Shadow removal

After subtracting the background image, some interference still exists. The result of background subtraction is shown in Figure 1. The human shadow is treated as foreground, and we need to remove it. We follow Jamie and Richard's method [17] to solve the shadow problem. The method is divided into two parts: (1) brightness distortion and (2) chromatic distortion.

Figure 1.

(a) Input frame and (b) background subtraction result.

I_i is the ith pixel of the input frame, which can be represented in RGB space by the vector I_i = [I_R(i), I_G(i), I_B(i)], as shown in Figure 2. E_i is the ith pixel of the background image, represented as E_i = [E_R(i), E_G(i), E_B(i)]. The lengths of these vectors are the intensities of the ith pixel. The projection of I_i onto E_i is denoted α_i E_i, and we call α_i the brightness distortion. We can solve for α_i by Eq. (2)

Figure 2.

Colour representation in RGB space.

α_i = argmin_{α_i} ‖I_i − α_i E_i‖²    (2)
α_i < τ_BD : Foreground    (3)

There is a threshold τ_BD. We take those pixels whose α_i values are smaller than τ_BD as foreground, as Eq. (3) expresses. In our experiment, we set the threshold τ_BD to 0.7.

In the chromatic distortion part, we calculate the distance in RGB space between I_i and its projection α_i E_i. Figure 2 shows this as the line CD_i, and we can solve for CD_i by Eq. (4)

CD_i = ‖I_i − α_i E_i‖    (4)

In the same way, we set a threshold τ_CD to determine whether a pixel is background or foreground, as Eqs. (5) and (6) express. Pixels whose CD_i values are greater than τ_CD are viewed as foreground, and pixels with smaller values as background. In our experiment, we set the threshold τ_CD to 10.

CD_i > τ_CD : Foreground    (5)
Otherwise : Background    (6)

After finishing these two parts, we combine the two resulting images with the background subtraction result to obtain a result without shadow.
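The two distortion tests above can be sketched together as follows. The closed-form α_i = (I_i·E_i)/(E_i·E_i) is the standard minimiser of Eq. (2), and the thresholds follow the chapter's values; the function name and vectorised layout are our own.

```python
import numpy as np

def shadow_mask(frame, background, tau_bd=0.7, tau_cd=10.0):
    """Foreground mask after the brightness/chromatic distortion tests
    of Eqs. (2)-(6); pixels failing both tests are background/shadow."""
    I = frame.astype(np.float64).reshape(-1, 3)
    E = background.astype(np.float64).reshape(-1, 3)
    # Brightness distortion: projection coefficient of I onto E,
    # alpha = (I . E) / (E . E), the closed-form minimiser of ||I - aE||^2.
    denom = (E * E).sum(axis=1)
    denom[denom == 0] = 1e-9  # guard against all-black background pixels
    alpha = (I * E).sum(axis=1) / denom
    # Chromatic distortion: distance between I and its projection alpha*E.
    cd = np.linalg.norm(I - alpha[:, None] * E, axis=1)
    # Eq. (3): dark pixels (alpha < tau_BD) are foreground;
    # Eqs. (5)/(6): colour-shifted pixels (CD > tau_CD) are foreground.
    fg = (alpha < tau_bd) | (cd > tau_cd)
    return fg.reshape(frame.shape[:2])
```

A moderately darkened pixel (α between τ_BD and 1, small CD) is classified as background, which is exactly how shadows are discarded.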

3.1.3. Connected component labelling

Connected component labelling detects connected components and labels them; each component receives its own label number. Figure 3 shows an example: (a) shows three different regions and (b) shows their labels. We keep the largest group as our result; in this case, the region with label 3 is retained and the other regions are discarded.

Figure 3.

(a) Three disconnected components and (b) labelling image.

Although we now have an image without shadow, some noise still exists. We perform connected component labelling to obtain a noise-free image. Figure 4 shows the flow of pre-processing.

Figure 4.

The flow of pre-processing.
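A minimal keep-the-largest-blob sketch of this step is shown below, using a plain 4-connected flood fill. A real pipeline would typically call a library routine (e.g. OpenCV's connectedComponents); this standalone version only illustrates the idea.

```python
import numpy as np

def largest_component(mask):
    """4-connected labelling via flood fill; keeps only the largest blob."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    sizes = {}
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                current += 1                      # new component found
                stack = [(sy, sx)]
                labels[sy, sx] = current
                size = 0
                while stack:                      # flood-fill this component
                    y, x = stack.pop()
                    size += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            stack.append((ny, nx))
                sizes[current] = size
    if not sizes:
        return np.zeros_like(mask, dtype=bool)
    best = max(sizes, key=sizes.get)              # label of the largest blob
    return labels == best
```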

3.2. Feature extraction

After pre-processing, we obtain the complete target silhouette. In this section, we separate the legs from the extracted silhouette by segmentation. Then, we find the gait features such as pace distance and pace velocity in the feature extraction part.

3.2.1. Segmentation

In the segmentation part, we first need to find the centre of gravity. Then, we use the extracted silhouette to get the contour by edge detection in order to build the distance map (DM). We can separate the legs from the human silhouette by means of the DM.

We find the centre of gravity of the extracted human silhouette by Eq. (7). After finding the COG of the whole body, (COG_x, COG_y), we apply edge detection to the silhouette to extract the human contour, as in Figure 5. Then, we draw a DM by computing the Euclidean distance between (COG_x, COG_y) and the extracted contour, shown in Figure 6(b). We compute the distance map by Eq. (8), where (x_i, y_i) is the location of the ith pixel of the extracted contour.

(COG_x, COG_y) = ( (1/N) Σ_{i=1}^{N_Body} x_i , (1/N) Σ_{i=1}^{N_Body} y_i )    (7)
DM_i = √((COG_x − x_i)² + (COG_y − y_i)²)    (8)
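Eqs. (7) and (8) amount to a few lines of NumPy. The function names are ours, and the contour is assumed to be given as a list of (x, y) points.

```python
import numpy as np

def centre_of_gravity(silhouette):
    """Eq. (7): mean (x, y) over all silhouette pixels (boolean H x W mask)."""
    ys, xs = np.nonzero(silhouette)
    return xs.mean(), ys.mean()

def distance_map(cog, contour_points):
    """Eq. (8): Euclidean distance from the COG to each contour pixel,
    indexed along the contour."""
    cx, cy = cog
    pts = np.asarray(contour_points, dtype=float)
    return np.sqrt((cx - pts[:, 0]) ** 2 + (cy - pts[:, 1]) ** 2)
```

Peaks in this distance map correspond to the extremities (head, hands, feet), which is how N_legs and the two hip points are located in the next step.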

Figure 5.

(a) Silhouette image and (b) the contour image of (a).

We find the three points N_legs, N_b2(l) and N_b2(r) from the DM, as shown in Figure 6. Connecting N_b2(l) and N_b2(r) divides the human body into upper and lower parts. Connecting (COG_x, COG_y) and N_legs separates the legs into leg(l) and leg(r), as shown in Figure 7.

Figure 6.

(a) Finding COG(x, y) in silhouette. (b) Distance map of COG(x, y).

Figure 7.

Separated legs.

3.2.2. Gait features

In this part, we extract gait features such as pace distance, pace time and pace velocity. Figure 8 shows a pace cycle model from which we can extract the pace distance (Dp), pace time (Tp) and pace velocity (Vp) via Eq. (9). The distance value of the pace model comes from the horizontal distance between leg(l) and leg(r), such as D1; if the feet are close together, we measure the distance of the closed feet, such as D2. D1 and D2 are the longest and shortest distance values, respectively. T1 and T2 are the start and end frame numbers of the step, so we multiply their difference by the frame period to get the actual Tp; the frame period is 1/30 s per frame in our experiment. Vp then comes from dividing Dp by Tp.

Dp = D1 − D2,  Tp = (T2 − T1) × frame period,  Vp = Dp / Tp    (9)

Figure 8.

The pace cycle model.
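Eq. (9) for a single step can be sketched as follows (the function name is ours; distances are in pixels and times in frame numbers, as in the chapter).

```python
def pace_features(d1, d2, t1, t2, frame_period=1 / 30):
    """Eq. (9): pace distance, time and velocity for one step.

    d1, d2: widest and narrowest horizontal foot separation (pixels);
    t1, t2: start and end frame numbers of the step;
    frame_period: 1/30 s per frame in the chapter's experiment.
    """
    dp = d1 - d2                      # pace distance
    tp = (t2 - t1) * frame_period     # pace time in seconds
    return dp, tp, dp / tp            # (Dp, Tp, Vp)
```

For instance, a step widening from 10 to 40 pixels over 15 frames gives Dp = 30 pixels, Tp = 0.5 s and Vp = 60 pixels/s.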

3.3. Gait analysis

To verify that a correlation exists between the pulmonary spirometer and our system, we perform classification and prediction experiments. The subjects fall into two groups, Bad and Good, classified by the parameters from the pulmonary spirometer, which serve as our classification standard. We use a support vector machine for classification and an adaptive network-based fuzzy inference system for prediction. Here, we introduce the two tools, SVM and ANFIS.

3.3.1. Support vector machine

SVM comes from Vapnik's statistical learning theory [18]; it is a machine-learning method and a powerful tool for learning from data and solving classification problems [18]. In a two-group classification problem such as ours (Bad/Good), the target is to find the hyperplane between the two data groups. SVM finds the hyperplane by looking for the maximum margin between the two groups. The main idea of SVM is to transform the data into a higher-dimensional space and then construct a hyperplane between the two classes in the transformed space. The data vectors nearest to the constructed boundary in the transformed space are called support vectors; they contain the information that defines the hyperplane. Figure 9 shows the concept of the SVM.

Figure 9.

Example of a two-group problem showing the optimal hyperplane (dotted line).
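A minimal sketch of the Bad/Good classification with a linear SVM using scikit-learn is given below. The feature values are invented for illustration only and are not the study's data; the chapter's actual inputs are the features defined in Section 4.

```python
import numpy as np
from sklearn.svm import SVC

# Each row is one hypothetical subject: (mean pace velocity, variance ratio).
X = np.array([[117.0, 9.5],
              [125.0, 10.1],
              [150.0, 15.2],
              [155.0, 16.0]])
y = np.array([0, 0, 1, 1])  # 0 = Bad group, 1 = Good group

# Fit a maximum-margin linear separator between the two groups.
clf = SVC(kernel="linear").fit(X, y)
```

New subjects are then assigned to a group with `clf.predict`, exactly as the 60 clinical subjects are classified in Section 6.1.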

3.3.2. Adaptive network-based fuzzy inference system

ANFIS was presented by Jang in 1993 [19]. An adaptive network-based fuzzy inference system can construct an input-output mapping based on human knowledge via a hybrid-learning algorithm; a fuzzy inference system is implemented with an adaptive network. ANFIS contains a five-layer feed-forward neural network to construct the inference system.

Figure 10.

ANFIS structure with two inputs and four rules.

The input space is mapped through given membership functions (MF). A membership function converts an input into a degree between 0 and 1; with different membership functions, and different numbers of them, the results differ. Figure 10 shows the ANFIS structure with two inputs and four rules.

The study [20] explained the function of each layer. In Layer 1, the outputs are the membership degrees of the inputs, given by Eqs. (10) and (11)

O_{1,i} = μ_{A_i}(x), i = 1, 2    (10)
O_{1,i} = μ_{B_{i−2}}(y), i = 3, 4    (11)

where x and y are the inputs to node i.

Layer 2 involves fuzzy operations. ANFIS fuzzifies the inputs by using the AND operation; the label Π means that each node performs a simple multiplication. Equations (12) and (13) show the output of Layer 2

O_{2,i} = w_i = μ_{A_1}(x) × μ_{B_i}(y), i = 1, 2    (12)
O_{2,i} = w_i = μ_{A_2}(x) × μ_{B_{i−2}}(y), i = 3, 4    (13)

In Layer 3, the label N indicates normalization. This layer can be represented by Eq. (14)

O_{3,i} = w̄_i = w_i / (w_1 + w_2 + w_3 + w_4), i = 1, 2, 3, 4    (14)

Layer 4 produces the weighted rule outputs from the normalized firing strengths, as in Eq. (15). The parameters p_i, q_i and r_i are determined during the training process

O_{4,i} = w̄_i f_i = w̄_i (p_i x + q_i y + r_i), i = 1, 2, 3, 4    (15)

Layer 5 sums all its inputs, as in Eq. (16)

O_5 = Σ_i w̄_i f_i = (Σ_i w_i f_i) / (Σ_i w_i)    (16)
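The five layers above can be sketched as a single forward pass. The function names are ours, Gaussian membership functions are chosen purely for the example, and the consequent parameters (p_i, q_i, r_i) would come from training.

```python
import numpy as np

def gaussmf(v, c, s):
    """Gaussian membership function with centre c and width s."""
    return np.exp(-((v - c) ** 2) / (2 * s ** 2))

def anfis_forward(x, y, mfs_x, mfs_y, consequents):
    """Forward pass of the five-layer ANFIS of Eqs. (10)-(16) for two
    inputs and four rules (every pairing of the two MFs per input).

    mfs_x, mfs_y: two (centre, sigma) pairs each;
    consequents: four (p, q, r) triples, learned during training.
    """
    # Layer 1: membership degrees, Eqs. (10)-(11).
    mu_a = [gaussmf(x, c, s) for c, s in mfs_x]
    mu_b = [gaussmf(y, c, s) for c, s in mfs_y]
    # Layer 2: rule firing strengths w_i by product (AND), Eqs. (12)-(13).
    w = np.array([mu_a[0] * mu_b[0], mu_a[0] * mu_b[1],
                  mu_a[1] * mu_b[0], mu_a[1] * mu_b[1]])
    # Layer 3: normalisation, Eq. (14).
    w_bar = w / w.sum()
    # Layer 4: weighted rule outputs f_i = p*x + q*y + r, Eq. (15).
    f = np.array([p * x + q * y + r for p, q, r in consequents])
    # Layer 5: overall output, Eq. (16).
    return float((w_bar * f).sum())
```

If all four rules share the same consequent, the normalised weights cancel and the output reduces to that single linear function, which is a handy sanity check.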

4. Proposed gait features

We propose several gait features derived from Dp and Vp, calling the ith step's values Dp_i and Vp_i. The mean distance and mean velocity are denoted D̄p and V̄p, respectively. In addition, we divide all steps into S sections according to the step counts, in order to reveal the variation of the subject's movement during the 6-min brisk walking test. Figure 11 shows an example in which Dp of all steps is divided into six sections, each with its own μ_D^i and σ_D^i.

Figure 11.

The six sections of Dp.

The mean distance for section i is denoted μ_D^i and the mean velocity for section i is μ_V^i. The distance variance for section i is σ_D^i and the velocity variance for section i is σ_V^i. These parameters are listed in Eq. (17)

N : total step count, n = N/S
D̄p = (1/N) Σ_{i=1}^{N} Dp_i,  V̄p = (1/N) Σ_{i=1}^{N} Vp_i
μ_D^i = (1/n) Σ_{j=(i−1)n+1}^{i·n} Dp_j,  μ_V^i = (1/n) Σ_{j=(i−1)n+1}^{i·n} Vp_j
σ_D^i = (1/n) Σ_{j=(i−1)n+1}^{i·n} (Dp_j − μ_D^i)²,  σ_V^i = (1/n) Σ_{j=(i−1)n+1}^{i·n} (Vp_j − μ_V^i)²    (17)

From σ_D^i and σ_V^i, we can calculate the mean distance variance and the mean velocity variance over all sections, denoted σ̄_D(1,S) and σ̄_V(1,S). The mean distance and velocity variances over the first two sections are denoted σ̄_D(1,2) and σ̄_V(1,2), and those over the last two sections σ̄_D(S−1,S) and σ̄_V(S−1,S), respectively. In addition, we calculate the distance variance ratio and velocity variance ratio, denoted γ_D and γ_V: the distance variance ratio is σ̄_D(1,S) multiplied by the result of dividing σ̄_D(S−1,S) by σ̄_D(1,2), and the velocity variance ratio is defined analogously. Figure 12 shows the regions of these parameters, which are listed in Eq. (18)

σ̄_D(1,S) = (1/S) Σ_{i=1}^{S} σ_D^i,  σ̄_V(1,S) = (1/S) Σ_{i=1}^{S} σ_V^i
σ̄_D(1,2) = (1/2) Σ_{i=1}^{2} σ_D^i,  σ̄_V(1,2) = (1/2) Σ_{i=1}^{2} σ_V^i
σ̄_D(S−1,S) = (1/2) Σ_{i=S−1}^{S} σ_D^i,  σ̄_V(S−1,S) = (1/2) Σ_{i=S−1}^{S} σ_V^i
γ_D = σ̄_D(1,S) × σ̄_D(S−1,S) / σ̄_D(1,2),  γ_V = σ̄_V(1,S) × σ̄_V(S−1,S) / σ̄_V(1,2)    (18)

Figure 12.

The variance mean of whole sections (purple region) and first two sections (orange region) and last two sections (green region).
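Eqs. (17) and (18) for one feature sequence can be sketched as follows (function name ours; shown for the velocity sequence and identically applicable to distance).

```python
import numpy as np

def gait_feature_stats(vp, s=6):
    """Eqs. (17)-(18) for one per-step sequence vp: section variances
    sigma^i and the ratio feature gamma (gamma_V when vp is velocity)."""
    vp = np.asarray(vp, dtype=float)
    n = len(vp) // s                      # steps per section, n = N/S
    sections = vp[:n * s].reshape(s, n)   # drop any remainder steps
    sigma = sections.var(axis=1)          # sigma^i, Eq. (17) (population variance)
    sigma_all = sigma.mean()              # mean variance over all S sections
    sigma_first = sigma[:2].mean()        # first two sections
    sigma_last = sigma[-2:].mean()        # last two sections
    gamma = sigma_all * sigma_last / sigma_first   # Eq. (18)
    return sigma, gamma
```

For subjects with poor respiratory function, the last-two-sections variance shrinks relative to the first two, so γ_V comes out smaller, which is exactly the trend Table 5 reports.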

5. Clinical experiment environment

5.1. Experiment set-up and flow

The experiments are run at Shuang-Ho Hospital in New Taipei, Taiwan. We film the side view of the subjects while they perform the 6-min brisk walking test. We set up a green curtain to exclude interference, such as the movement of other people, from the scene. We film the walking subjects with a Nikon P330 digital camera.

Firstly, the therapists record the subjects' profile, including height, weight and age. Secondly, using the pulmonary spirometer, we obtain respiratory parameters such as FEV1 and FVC. Thirdly, before the experiment starts, the therapist helps the subject wear a pulse oximeter on the index finger to measure oxygen saturation and pulse. Fourthly, the subject takes a 2-min break so that the pulse oximeter can record oxygen and pulse at rest. Fifthly, when the walking test begins, the subject walks along the trail as fast as possible while we film the side view. Sixthly, after the 6-min walking test, the subject uses the pulmonary spirometer again to measure the respiratory parameters after exercising.

5.2. Data collection

We ran the experiments from September 2014 to July 2015. There are 60 subjects, aged between 24 and 91 years: 48 men and 12 women.

There are two rooms: the subjects walk from the right room to the left one, then turn around and walk back into the right room. When the subjects reach the other side of the trail, they need to turn around and continue walking. They slow down near the border so that they can turn around easily. To avoid recording those slowdown steps, we discard them and keep only the normal steps. Taking Figure 13 as an example, there are six steps in the walking trail; we consider only steps 1 and 6 as normal steps and discard steps 2-5.

Figure 13.

(a) Walking trail before turnaround. (b) Walking trail after turnaround.

Depending on the respiratory index from Eq. (19), the subjects are divided into three levels: level 1 (the worst respiratory function), level 2 (poor respiratory function) and level 3 (normal respiratory function). Table 1 shows the respiratory index thresholds used to define the three levels. We call the respiratory index REX; a smaller REX represents worse respiratory function

REX = postFEV1 × (postFEV1 / preFEV1) × (postIC / preIC) × (postFVC / preFVC) × post(FEV1/FVC)    (19)

Group   Level 1   Level 2        Level 3
REX     < 0.7     [0.7, 1.65]    > 1.65

Table 1.

The respiratory index (REX) used to classify the three levels.

The main term of the REX formula is postFEV1; the other terms adjust it. The three ratios postFEV1/preFEV1, postIC/preIC and postFVC/preFVC are greater than one in subjects with normal respiratory function but smaller than one in subjects with poor respiratory function. The value of post FEV1/FVC is lower than 0.75 in subjects with poor respiratory function. Figure 14 shows the lung capacity changes of the respiratory factors.
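Eq. (19) together with the Table 1 thresholds can be sketched as follows (function names and the dict-based input layout are ours).

```python
def respiratory_index(pre, post):
    """Eq. (19): REX from pre- and post-exercise spirometer readings.

    pre: dict with keys 'FEV1', 'IC', 'FVC';
    post: dict with keys 'FEV1', 'IC', 'FVC', 'FEV1/FVC'.
    """
    return (post['FEV1']
            * (post['FEV1'] / pre['FEV1'])   # adjustment ratios: each > 1
            * (post['IC'] / pre['IC'])       # for normal function,
            * (post['FVC'] / pre['FVC'])     # < 1 for poor function
            * post['FEV1/FVC'])

def rex_level(rex):
    """Table 1: level 1 (worst) below 0.7, level 3 (normal) above 1.65."""
    if rex < 0.7:
        return 1
    if rex <= 1.65:
        return 2
    return 3
```

For example, a subject whose readings are unchanged by exercise (all ratios equal to 1), with postFEV1 = 2.0 L and post FEV1/FVC = 0.8, scores REX = 1.6 and falls into level 2.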

Figure 14.

Lung capacity changes [21].

FEV1 is the volume that has been exhaled at the end of the first second of forced expiration. FVC is the forced vital capacity, used to determine the vital capacity from a maximally forced expiratory effort. IC is the inspiratory capacity, the sum of the inspiratory reserve volume and the tidal volume. FEV1/FVC is the ratio used for the diagnosis of obstructive and restrictive lung disease.

6. Experimental results

In this section, we use the features obtained in Section 4 to perform our experiments. There are three experiments: classification with a support vector machine, prediction with an adaptive neural fuzzy inference system and cooperation with the radar system.

In the first experiment, we choose one feature from D̄p and V̄p, and the other feature is γ_V, giving two combinations. Among these two combinations, we find the better one according to the classification results, and it becomes the input of the prediction experiment. In the second experiment, we use the features found in the first experiment as the ANFIS inputs and calculate the correlation, MSE and regression slope under different membership functions. In the third experiment, we combine the features of the radar system with our features to perform classification and prediction again.

6.1. Classification with support vector machine

Support vector machine is one of the most widely used machine-learning algorithms for classification problems [22].

We group the subjects of levels 1 and 2 into the Bad group, and the subjects of level 3 into the Good group, giving 32 subjects in the Bad group and 28 in the Good group. In the figures, subjects in the Good group are marked with blue triangles and subjects in the Bad group with red circles.

S-value     1      2      3      4      5      6      7
Accuracy    0.55   0.55   0.55   0.56   0.56   0.61   0.58

Table 2.

The SVM accuracy for different S-values with σ̄_D(1,S) and σ̄_V(1,S).

As described above, we divide all steps into S sections. To find a suitable S-value, we run SVM classification with different S-values, using σ̄_D(1,S) and σ̄_V(1,S) as inputs because these two parameters are affected by the S-value. Table 2 lists the SVM accuracy for the different S-values; the highest accuracy is 0.61 when S equals six, so S is six in our experiment. In this article, the bold values in the tables indicate the best result in each experiment.

Level      D̄p      V̄p
Level 1    47.72   117.4
Level 2    50.49   129.2
Level 3    56.55   152.3

Table 3.

The D̄p and V̄p values in the three levels.

Figure 15.

The D̄p and V̄p values of all subjects.

From Table 3, the subjects of level 3 have larger D̄p and V̄p than those of levels 1 and 2; the values for each level are the means of D̄p and V̄p over the subjects belonging to that level. Therefore, D̄p and V̄p become our candidate input features. Figure 15 shows D̄p and V̄p for all subjects.

Level      σ̄_D(1,6)   σ̄_V(1,6)
Level 1    3.16        11.2
Level 2    3.48        11.9
Level 3    3.69        14.3

Table 4.

The σ̄_D(1,6) and σ̄_V(1,6) values in the three levels.

Figure 16.

The σ̄_D(1,6) and σ̄_V(1,6) values of all subjects.

Subjects with better respiratory function have larger σ̄_D(1,6) and σ̄_V(1,6) than those with poor respiratory function. In Table 4, the values of level 3 are higher than those of levels 1 and 2; the values for each level are the means of σ̄_D(1,6) and σ̄_V(1,6) over the subjects belonging to that level. Figure 16 shows σ̄_D(1,6) and σ̄_V(1,6) for all subjects.

In addition, people show lower variance after exercising. Figure 17 shows the pace distance of one person in three different conditions: normal walking, walking after a short run and walking after a long run. The variation when walking after a long run is smaller than in the other conditions.

Figure 17.

Pace distance of a person in three conditions.

σ̄_V(5,6) is the mean velocity variance at the end of the test and σ̄_V(1,2) is the mean velocity variance at the beginning. For subjects with poor respiratory function, σ̄_V(5,6) is smaller than σ̄_V(1,2) because, by the end of the test, they feel as if they were walking after a long run. On the other hand, subjects with better respiratory function feel as if they were walking normally at the end of the test.

Therefore, σ̄_V(5,6)/σ̄_V(1,2) should be smaller for subjects with poor respiratory function than for subjects with better respiratory function. From Table 5, the σ̄_V(5,6)/σ̄_V(1,2) value of level 3 is greater than those of levels 1 and 2; the values for each level are the means over the subjects belonging to that level. Figure 18 shows σ̄_V(5,6)/σ̄_V(1,2) for all subjects.

Level      σ̄_V(5,6)/σ̄_V(1,2)
Level 1    0.93
Level 2    0.96
Level 3    1.04

Table 5.

The σ̄_V(5,6)/σ̄_V(1,2) values in the three levels.

Figure 18.

The σ̄_V(5,6)/σ̄_V(1,2) values of all subjects.

Figure 19.

The γV of all subjects.

Input features    (D̄p, γ_V)    (V̄p, γ_V)
Accuracy          0.66          0.75

Table 6.

The accuracy of SVM with different features.

γ_V combines the mean velocity variance and the velocity variance ratio (γ_V = σ̄_V(1,6) × σ̄_V(5,6)/σ̄_V(1,2)), so γ_V becomes one of our input features. Figure 19 shows the γ_V values of all subjects, and Table 6 lists the SVM results with the different feature combinations.

Figure 20.

SVM classification result with V̄p and γ_V.

The accuracy with the input features (V̄p, γ_V) is better than with (D̄p, γ_V). Therefore, we use (V̄p, γ_V) as the best inputs of the classification experiment. Figures 20 and 21 show the SVM results with the inputs (V̄p, γ_V) and (D̄p, γ_V), respectively. In Figure 20, the subjects of the Good group have higher V̄p and γ_V than those in the Bad group.

Figure 21.

SVM classification result with D̄p and γ_V.

6.2. Prediction with adaptive neural fuzzy inference systems

We utilize the adaptive neural fuzzy inference system to predict the parameters of the pulmonary spirometer. The ANFIS implementation comes from the Matlab toolbox.

Because we have collected only about 60 cases so far, the data are not enough for ANFIS to perform prediction directly. To make the most of the training samples, we adopt the leave-one-out cross-validation method, which is used when analysing small datasets: one sample serves as the validation set and the remaining samples as the training set, and this is repeated for every sample. This method mitigates the insufficient-data problem.
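Leave-one-out splitting can be sketched as follows (function name ours).

```python
def leave_one_out_splits(samples):
    """Leave-one-out cross-validation: each sample in turn is the
    validation set, and the remaining samples form the training set."""
    for i in range(len(samples)):
        train = samples[:i] + samples[i + 1:]   # all but the i-th sample
        yield train, samples[i]                 # (training set, held-out sample)
```

With 60 subjects this yields 60 train/validate rounds, each training on 59 subjects, so every subject contributes one out-of-sample prediction.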

In ANFIS, it is important to choose an appropriate membership function, and we also need to choose the number of membership functions per input. In the experiment, we use six different membership functions: trapmf, gbellmf, gaussmf, gauss2mf, pimf and dsigmf. Figure 22 shows the membership functions used in our prediction experiment. The inputs of the experiment are V̄p and γ_V. In our results, we report the correlation, normalized mean square error (MSE_N) and regression slope under the different membership functions. The formula of MSE_N is shown in Eq. (20), where Target_i are the measured values from the pulmonary spirometer and Predict_i are the values produced by ANFIS.

MSE_N = (1/n) Σ_{i=1}^{n} ((Target_i − Predict_i) / Mean(Target))²    (20)
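Eq. (20) in NumPy (function name ours).

```python
import numpy as np

def normalized_mse(target, predict):
    """Eq. (20): mean squared prediction error, with each error
    normalized by the mean of the measured target values."""
    target = np.asarray(target, dtype=float)
    predict = np.asarray(predict, dtype=float)
    return float(np.mean(((target - predict) / target.mean()) ** 2))
```

Dividing by the target mean makes the error scale-free, so MSE_N values are comparable across FEV1, FVC and the FEV1/FVC ratio despite their different units and ranges.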

We try to predict three different parameters from the pulmonary spirometer: post FEV1/FVC, post FEV1 and post FVC, where 'post' denotes the value after the 6-min brisk walking test. In the following, for convenience, we call post FEV1/FVC, post FEV1 and post FVC simply FEV1/FVC, FEV1 and FVC, respectively. FEV1/FVC and FEV1 are used to assess respiratory function, so we choose these two parameters as our prediction targets.

Figure 22.

Different membership functions.

MF          Correlation   MSE_N   Regression slope
Trapmf      0.226         0.040   0.139
Gbellmf     0.203         0.041   0.127
Gaussmf     0.251         0.040   0.167
Gauss2mf    0.195         0.040   0.112
Pimf        0.215         0.039   0.125
Dsigmf      0.210         0.039   0.121

Table 7.

The results of predicting FEV1/FVC.

FEV1/FVC is an index used to assess the severity of airway obstruction; a lower value means more severe obstruction. Table 7 shows our prediction results with [2 2] input membership functions (two per input). V̄p and γ_V are the experiment inputs. Figure 23 shows the prediction results and regression slope under the different membership functions.

FEV1 is also a parameter used to assess respiratory function; a higher FEV1 means the subject has better respiratory function. Consequently, we also predict the FEV1 value. Table 8 shows the prediction results, using [3 2] as the input partition: 3 membership functions for the first input and 2 for the second in the ANFIS system. The features V′p and γV are the experiment inputs. Figure 24 shows the prediction results and regression slopes under the different membership functions.

Figure 23.

The results of predicting FEV1/FVC with different membership functions: (a) Trap MF, (b) Gbell MF, (c) Gauss MF, (d) Gauss2 MF, (e) Pi MF and (f) Dsig MF.

Membership function   Correlation   MSEN    Regression slope
Trapmf                0.694         0.140   0.746
Gbellmf               0.668         0.186   0.827
Gaussmf               0.640         0.193   0.774
Gauss2mf              0.386         1.968   1.264
Pimf                  0.560         0.365   0.880
Dsigmf                0.352         2.692   1.321

Table 8.

The result of predicting FEV1.

The correlation and regression slope of predicting FEV1/FVC do not perform well under any membership function, whereas predicting FEV1 performs well under trapmf. From these two results, we infer that our system is good at predicting a parameter measured directly by the pulmonary spirometer (FEV1) but struggles with a computed ratio (FEV1/FVC). Consequently, we also predict the FVC value. Table 9 shows the prediction results, again using the [3 2] input partition (3 membership functions for the first input and 2 for the second in the ANFIS system). V′p and γV are the experiment inputs. Figure 25 shows the prediction results and regression slopes under the different membership functions.

Figure 24.

The results of predicting FEV1 with different membership functions: (a) Trap MF, (b) Gbell MF, (c) Gauss MF, (d) Gauss2 MF, (e) Pi MF and (f) Dsig MF.

Membership function   Correlation   MSEN    Regression slope
Trapmf                0.660         0.079   0.621
Gbellmf               0.647         0.094   0.693
Gaussmf               0.678         0.076   0.646
Gauss2mf              0.432         0.249   0.654
Pimf                  0.192         0.458   0.354
Dsigmf                0.156         1.151   0.469

Table 9.

The result of predicting FVC.

Table 10 shows the best results of predicting FEV1/FVC, FEV1 and FVC. The correlations for FEV1 and FVC are both good and close to each other, and the regression slope of FEV1 is better than that of FVC. The correlation and regression slope for FEV1/FVC, however, do not perform well, although its MSEN is smaller than for the other two parameters. Our system is good at predicting the directly measured spirometer parameters but performs poorly on the computed ratio. Nevertheless, the high correlations for FEV1 and FVC verify that there is a correlation between the pulmonary spirometer and our gait analysis system.

Figure 25.

The results of predicting FVC with different membership functions: (a) Trap MF, (b) Gbell MF, (c) Gauss MF, (d) Gauss2 MF, (e) Pi MF and (f) Dsig MF.

Predicting target   Correlation   MSEN    Regression slope
FEV1/FVC            0.251         0.040   0.167
FEV1                0.694         0.140   0.746
FVC                 0.678         0.076   0.646

Table 10.

The best results of predicting FEV1/FVC, FEV1 and FVC.

6.3. Cooperating with radar system

In this section, we combine the features of the radar system with our features V′p and γV. The radar system uses impulse signals to record the movement of the subjects' chest and to analyse respiration features. It uses ΔAmp and Δβratio to analyse respiration; these two features are defined in Eq. (21). Amp is the subject's respiratory intensity, and β1 and β2 are the inspiratory and expiratory speeds, respectively. The prefixes 'post' and 'pre' denote parameters measured after and before the 6-min brisk walk, respectively. We use these two features together with V′p and γV to perform the SVM classification and ANFIS prediction experiments.

In the SVM experiment, there are still 32 subjects in the Bad group and 28 subjects in the Good group. Because we use (V′p, γV, ΔAmp, Δβratio) as the SVM inputs, we can no longer draw a two-dimensional (2D) figure. The accuracy with (V′p, γV, ΔAmp, Δβratio) is 81.6%, higher than the 75% accuracy obtained with (V′p, γV) alone.
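A four-feature classification of this kind can be sketched with scikit-learn's SVC (an assumption: the chapter does not name its SVM implementation, and the synthetic samples below merely stand in for the clinical (V′p, γV, ΔAmp, Δβratio) data, which is not public):

```python
import numpy as np
from sklearn.svm import SVC  # assumes scikit-learn is installed

rng = np.random.default_rng(0)
# Synthetic stand-ins for (V'p, gamma_V, dAmp, dBeta_ratio): 32 "Bad"
# and 28 "Good" subjects drawn around separated cluster centres.
bad = rng.normal(loc=[0.8, 0.3, -5.0, -2.0], scale=0.2, size=(32, 4))
good = rng.normal(loc=[1.2, 0.7, 5.0, 2.0], scale=0.2, size=(28, 4))
X = np.vstack([bad, good])
y = np.array([0] * 32 + [1] * 28)   # 0 = Bad group, 1 = Good group

clf = SVC(kernel="rbf").fit(X, y)   # RBF-kernel support vector machine
acc = clf.score(X, y)               # fraction correctly classified
```

In the actual experiment, accuracy would of course be measured on held-out subjects (e.g. via the leave-one-out scheme of Section 6.2), not on the training set as in this toy sketch.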

ΔAmp = (postAmp − preAmp) / preAmp × 100
Δβratio = (postβratio − preβratio) / preβratio × 100, where βratio = β1/β2  (21)
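The percent-change features of Eq. (21) are straightforward to compute; a small sketch (the numeric example is illustrative, not measured data):

```python
def radar_features(pre_amp, post_amp, pre_b1, pre_b2, post_b1, post_b2):
    """Radar-derived respiration features of Eq. (21): percent change in
    respiratory intensity (Amp) and in the inspiratory/expiratory speed
    ratio (beta1/beta2) across the 6-min brisk walk."""
    d_amp = (post_amp - pre_amp) / pre_amp * 100
    pre_ratio = pre_b1 / pre_b2
    post_ratio = post_b1 / post_b2
    d_beta_ratio = (post_ratio - pre_ratio) / pre_ratio * 100
    return d_amp, d_beta_ratio

# Illustrative values: intensity rises from 10 to 12 (+20%), while the
# speed ratio drops from 2/1 = 2.0 to 3/2 = 1.5 (-25%).
d_amp, d_beta = radar_features(10.0, 12.0, 2.0, 1.0, 3.0, 2.0)
```

These two values, paired with V′p and γV, form the four-dimensional input vector used in the combined experiments.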

Target, (features)                         Correlation   MSEN    Regression slope
FEV1/FVC, (V′p, γV)                        0.251         0.040   0.167
FEV1/FVC, (ΔAmp, Δβratio)                  0.525         0.071   0.84
FEV1/FVC, (V′p, γV, ΔAmp, Δβratio)         0.428         0.054   0.534
FEV1, (V′p, γV)                            0.694         0.140   0.746
FEV1, (ΔAmp, Δβratio)                      0.474         0.190   0.39
FEV1, (V′p, γV, ΔAmp, Δβratio)             0.675         0.020   0.864
FVC, (V′p, γV)                             0.678         0.076   0.646
FVC, (ΔAmp, Δβratio)                       0.129         0.217   0.13
FVC, (V′p, γV, ΔAmp, Δβratio)              0.517         0.190   0.719

Table 11.

The best results of predicting FEV1/FVC, FEV1 and FVC.

In the ANFIS experiment, we again predict the FEV1/FVC and FEV1 parameters, using (V′p, γV, ΔAmp, Δβratio) as the inputs. The input partition is [3 3 3 2], meaning the numbers of membership functions for the four inputs are 3, 3, 3 and 2, respectively. Table 11 shows the results of predicting FEV1/FVC, FEV1 and FVC with (V′p, γV), (ΔAmp, Δβratio) and (V′p, γV, ΔAmp, Δβratio); we list only the best result among the six membership functions. Figure 26 shows the corresponding best predictions. In predicting FEV1/FVC, the correlation and regression slope improve strongly with (V′p, γV, ΔAmp, Δβratio), although the MSEN increases slightly. In predicting FEV1 and FVC, using (V′p, γV, ΔAmp, Δβratio) does not improve the correlation or regression slope. Therefore, the radar-system features cannot improve the prediction of FEV1 and FVC.

The radar system improves our analysis results in both the SVM classification and the prediction of the FEV1/FVC parameter. With the radar system's help, the combined system shows a higher correlation and accuracy against the pulmonary spirometer.

Figure 26.

(a) Predicting FEV1/FVC with (V′p, γV); (b) predicting FEV1/FVC with (ΔAmp, Δβratio); (c) predicting FEV1/FVC with (V′p, γV, ΔAmp, Δβratio); (d) predicting FEV1 with (V′p, γV); (e) predicting FEV1 with (ΔAmp, Δβratio); (f) predicting FEV1 with (V′p, γV, ΔAmp, Δβratio); (g) predicting FVC with (V′p, γV); (h) predicting FVC with (ΔAmp, Δβratio); (i) predicting FVC with (V′p, γV, ΔAmp, Δβratio).

7. Conclusion

We propose a vision sensor-based gait analysis method that requires no sensors worn on the human body. In our approach, the proposed gait features are used to analyse the subjects' respiratory function. We also perform a clinical experiment on COAD patients and normal subjects with our vision sensor-based gait analysis method. With the extracted features V′p and γV, the classification result is close to the classification obtained from the pulmonary spirometer parameters; the SVM accuracy is 75%. In the ANFIS experiment, the correlations of predicting FEV1 and FVC reach 0.694 and 0.678, respectively.

In addition, combining the radar-system features (ΔAmp and Δβratio) with our features (V′p and γV) strongly improves both the SVM accuracy and the prediction of the ratio FEV1/FVC. The SVM accuracy rises from 75 to 81.6%, and the correlation of the ANFIS prediction of FEV1/FVC rises from 0.251 to 0.428. From the experiments above, we verify that there exists a correlation between the pulmonary spirometer and the gait analysis system.

© 2016 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

How to cite and reference


Yu Sheng Chan, Wen Te Liu and Ching Te Chiu (July 21st 2016). Classifying and Predicting Respiratory Function Based on Gait Analysis, Advanced Biosignal Processing and Diagnostic Methods, Christoph Hintermüller, IntechOpen, DOI: 10.5772/63917.

