Physiological features extracted in standing and walking mode of subject #1
1. Introduction
In the present chapter the authors explain the process of recognizing the physiological and behavioural traits of humangait and humanface images, where a trait signifies a character or a feature of the human subject. Recognizing physiological and behavioural traits is a knowledge-intensive process, which must take into account all available information about humangait and humanface patterns. Here the trained data consists of a vast corpus of humangait and humanface images of subjects of varying ages. Recognition must be done in parallel with both the test and trained data sets. The process of recognition of physiological and behavioural traits involves two basic processes:
2. Modelling of AHGM and AHFM
2.1. Illustration of an Artificial HumanGait Model (AHGM)
Humangait analysis is the systematic study of human walking. It is used for diagnosis and treatment planning in people with medical conditions that affect the way they walk. Biometrics such as automatic face and voice recognition continue to be a subject of great interest. Humangait is a newer biometric that aims to recognize subjects by the way they walk (Cunado et al. 1997). However, functional assessment, or outcome measurement, is one small role that quantitative humangait analysis can play in the science of rehabilitation. If humangait analysis is expanded, the complex relationships between normal and abnormal humangait can be more easily understood (Huang et al. 1999). The use of quantitative humangait analysis in the rehabilitation setting has increased only in recent times. For the past five decades, work has been carried out on the treatment of humangait abnormalities. Many medical practitioners, with the help of scientists and engineers (Scholhorn et al. 2002), have carried out experimental work in this area. It has been found from the literature that two major factors:
Figure 1 gives an outline of the process of formation of the AHGM. In this process a known humangait image has to be fed as input. Then it has to be preprocessed for enhancement and segmentation. The enhancement is done to filter any noise present in the image. Later on it is segmented using the connected-component method (Yang 1989; Lumia 1983). The Discrete Cosine Transform (DCT) is employed for compression with minimal loss of information, because it has a strong energy-compaction property. Another advantage of the DCT is that it operates on real values and provides a good approximation of an image with few coefficients. Segmentation is carried out to detect the boundaries of the objects present in the image and also to detect the connected components between pixels. Hence the Region of Interest (ROI) is detected and the relevant humangait features are extracted. The relevant features to be selected and extracted in the present chapter are based on the physical characteristics of the humangait of the subject. The physical characteristics to be extracted are: footangle, steplength, knee-to-ankle (KA) distance, footlength and shankwidth. These features are calculated using Euclidean distance measures. The speed of the humangait can be calculated using Manhattan distance measures. Based on these features, relevant parameters have to be extracted. The relevant parameters based on the aforesaid geometrical features are: mean, median, standard deviation, range (lower- and upper-bound parameters), power spectral density (psd), autocorrelation, discrete wavelet transform (DWT) coefficients, eigenvectors and eigenvalues. In this chapter, the above parameters have been experimentally extracted after analyzing 10 frames of humangait images of 100 different subjects of varying age groups. As the subject walks, the configuration of the motion repeats periodically.
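The distance computations described above can be sketched as follows. This is a minimal illustration, not the authors' code: the landmark names and pixel coordinates are hypothetical, standing in for points detected in the segmented ROI.

```python
import math

def euclidean(p, q):
    """Euclidean distance between two landmark points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def manhattan(p, q):
    """Manhattan (city-block) distance between two landmark points."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def gait_features(knee, ankle, heel, toe, other_heel):
    """Distance-based gait features from one segmented frame.
    Landmark coordinates are assumed to come from the ROI of the
    segmented humangait image."""
    return {
        "knee_to_ankle": euclidean(knee, ankle),   # Euclidean feature
        "foot_length": euclidean(heel, toe),       # Euclidean feature
        "step_length": manhattan(heel, other_heel) # Manhattan measure
    }

# Hypothetical pixel coordinates for one frame
f = gait_features(knee=(50, 120), ankle=(52, 180),
                  heel=(40, 200), toe=(75, 200), other_heel=(110, 200))
```

The per-frame feature dictionary would then feed the parameter extraction (mean, standard deviation, and so on) over the 10 frames per subject.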
For this reason, images in a humangait sequence tend to be similar to other images in the sequence when separated in time by the period of the humangait. With a cyclic motion such as humangait, the self-similarity image has a repeating texture, and the frequency of the humangait determines the rate at which the texture repeats. Initially the subject stands at a standstill position. During this instance the features that have to be extracted are the footlength, symmetrical measures of the knee length, curvature measurement of the shank, maximum shankwidth and minimum shankwidth. Through the measurement of the footlength of both legs of the subject, any difference in the length of the two feet can be detected. From the symmetrical measurement of the kneelength, any disparity in the length of the legs can be measured. Through curvature measurement of the shank, any departure from normal posture can be detected. Measurement of the shankwidth helps in predicting probable anomalies of the subject and will also show any history of injury or illness in the past. The relevant feature-based parameters that have to be extracted are fed as input to an Artificial Neuron (AN), as depicted in figure 2. Each neuron has input and output characteristics and performs a computation of the form given in equation (1):
where X = (x_{1},x_{2},x_{3},…,x_{m}) is the vector input to the neuron and W is the weight matrix, with w_{ij} being the weight (connection strength) of the connection between the j^{th} element of the input vector and the i^{th} neuron. W^{T} denotes the transpose of the weight matrix, f(·) is an activation or nonlinear function (usually a sigmoid), O_{i} is the output of the i^{th} neuron and S_{i} is the weighted sum of the inputs.
A single artificial neuron, as shown in figure 2, is by itself not a very useful tool for AHGM formation. The real power comes when single neurons are combined into a multilayer structure called an artificial neural network. A neuron has a set of nodes, called synapses, that connect it to the inputs, the output or other neurons. A linear combiner is a function that takes all the inputs and produces a single value. Let the input sequence be {X_{1},X_{2},…,X_{N}} and the synaptic weights be {W_{1},W_{2},W_{3},…,W_{N}}; the output of the linear combiner, Y, is then given by equation (2),
An activation function takes any input from minus infinity to plus infinity and squeezes it into the range –1 to +1 or the interval 0 to 1. Usually the activation function is a sigmoid function, as given in equation (3) below:
The threshold defines the internal activity of the neuron and is fixed at –1. In general, for the neuron to fire or activate, the weighted sum should be greater than the threshold value.
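The typeset equations are not reproduced in this text; from the definitions above, equations (1)–(3) presumably take the standard forms (a hedged reconstruction, keyed to the equation numbers the text cites):

```latex
S_i = \sum_{j=1}^{m} w_{ij}\, x_j = \left(W^{T} X\right)_i,
\qquad O_i = f(S_i) \tag{1}

Y = \sum_{i=1}^{N} W_i X_i \tag{2}

f(S) = \frac{1}{1 + e^{-S}} \tag{3}
```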
In the present chapter, a feedforward network is used as the topology and back-propagation as the learning rule for the formation of a corpus, or knowledge-based model, called the AHGM. This model has to be optimized for the best match of features using a genetic algorithm. The matching has to be done for the recognition of the behavioural features of the subject, not only in openair space but also in underwater space. The reason for adopting a genetic algorithm is that it is a robust search algorithm based on the mechanics of natural selection, mutation, crossover, and reproduction. Genetic algorithms combine survival of the fittest with randomized information exchange. In every generation, new sets of artificial features are created and tried against a new measure after best-fit matching. In other words, genetic algorithms are theoretically and computationally simple, operating on fitness values. The crossover operation combines the information of the selected chromosomes (humangait features) and generates offspring. The mutation and reproduction operations modify the offspring values after selection and crossover to reach an optimal solution. In the present chapter, the AHGM signifies the population of genes, that is, the humangait parameters.
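The selection, crossover and mutation cycle described above can be sketched as follows. This is a minimal sketch, not the authors' implementation: the population encoding (feature vectors as lists of floats), the toy fitness function and the rates are all illustrative assumptions.

```python
import random

def evolve(population, fitness, generations=50,
           crossover_rate=0.8, mutation_rate=0.05):
    """Minimal genetic-algorithm loop over feature vectors.
    Selection is fitness-proportional; crossover is single-point;
    mutation perturbs one gene with Gaussian noise."""
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        new_pop = []
        while len(new_pop) < len(population):
            a, b = random.choices(population, weights=weights, k=2)
            child = list(a)
            if random.random() < crossover_rate:
                cut = random.randrange(1, len(a))
                child = list(a[:cut]) + list(b[cut:])
            if random.random() < mutation_rate:
                idx = random.randrange(len(child))
                child[idx] += random.gauss(0.0, 0.1)
            new_pop.append(child)
        population = new_pop
    return max(population, key=fitness)

# Toy fitness: closeness of a feature vector to a stored template
template = [0.6, 1.2, 0.9]
fit = lambda ind: 1.0 / (1.0 + sum((x - t) ** 2 for x, t in zip(ind, template)))
best = evolve([[random.random() * 2 for _ in range(3)] for _ in range(30)], fit)
```

In the chapter's setting, the fitness function would be the best-fit matching score against the AHGM corpus rather than this toy distance.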
2.1.1. Mathematical formulation for extraction of physiological traits from humangait
Assume that the original image is corrupted by additive noise. To compute the approximate shape of a wavelet (that is, any real-valued function of time possessing a specific structure) in a noisy image, and also to estimate its time of occurrence, two methods are generally used: the first is the simple-structural-analysis method and the second is the template-matching method. Mathematically, for the detection of wavelets in a noisy image, assume a class of wavelets S_{i}(t), i = 0,…,N−1, all possessing certain common structural features. Since the noise is additive, the corrupted image has to be modeled by the equation,
where i(m,n) is the clean image, d(m,n) is the noise and G is a signal-to-noise-ratio control term. Windowing the image and assuming G = 1, equation (4) becomes:
Taking the Fourier transform of both sides of equation (5) yields:
where X_{w}(e^{jω1},e^{jω2}), I_{w}(e^{jω1},e^{jω2}) and D_{w}(e^{jω1},e^{jω2}) are the Fourier transforms of the windowed noisy image, the original image and the noise respectively.
To denoise this image, the wavelet transform has to be applied. Let the mother wavelet or basic wavelet be ψ(t), which yields,
Further, as per the definition of the Continuous Wavelet Transform, CWT(a,τ), the relation becomes,
The parameters obtained in equation (8) have to be discretized using the Discrete Parameter Wavelet Transform (DPWT).
This DPWT(m, n) is obtained by substituting a = a_{0}^{m} and τ = nτ_{0}a_{0}^{m},
where ‘m’ and ‘n’ are integers, a_{0} and τ_{0} are the sampling intervals for ‘a’ and ‘τ’, and x(k,l) is the enhanced image. The wavelet coefficients have to be computed from equation (9) by substituting a_{0} = 2 and τ_{0} = 1.
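The equations themselves are not reproduced here. Following the standard wavelet definitions that the surrounding text describes, equations (7)–(9) presumably read as below; this is a reconstruction, and the two-dimensional indexing of x(k,l) in (9) is an assumption:

```latex
\psi_{a,\tau}(t) = \frac{1}{\sqrt{a}}\,
\psi\!\left(\frac{t-\tau}{a}\right) \tag{7}

\mathrm{CWT}(a,\tau) = \frac{1}{\sqrt{a}}
\int_{-\infty}^{\infty} x(t)\,
\psi^{*}\!\left(\frac{t-\tau}{a}\right)\,dt \tag{8}

\mathrm{DPWT}(m,n) = a_{0}^{-m/2}
\sum_{k}\sum_{l} x(k,l)\,
\psi^{*}\!\left(a_{0}^{-m}k - n\tau_{0},\; a_{0}^{-m}l - n\tau_{0}\right) \tag{9}
```

With a_{0} = 2 and τ_{0} = 1, equation (9) reduces to the usual dyadic sampling of the scale–translation plane.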
Further, the enhanced image has to be sampled at a regular interval ‘T’ to produce a sample sequence {i(mT, nT)}, for m = 0,1,2,…,M−1 and n = 0,1,2,…,N−1, of an M × N image. Employing the Discrete Fourier Transform (DFT) method yields an equation of the form,
for u = 0,1,2,…,M−1 and v = 0,1,2,…,N−1
In order to compute the magnitude and power spectra along with the phase angle, a conversion from the spatial domain to the frequency domain has to be done. Mathematically, let R(u,v) and A(u,v) represent the real and imaginary components of I(u,v) respectively.
The Fourier or magnitude spectrum is then,
The phase angle of the transform is defined as,
The power spectrum is defined as the square of the magnitude spectrum. Thus squaring equation (11) yields,
Due to the squaring, the dynamic range of the values in the spectrum becomes very large. To normalize this, a logarithmic transformation has to be applied to equation (11). This yields,
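Collecting the definitions just given, equations (11)–(14) presumably take the usual forms (a reconstruction keyed to the equation numbers referenced in the text):

```latex
\left|I(u,v)\right| = \sqrt{R^{2}(u,v) + A^{2}(u,v)} \tag{11}

\phi(u,v) = \tan^{-1}\!\frac{A(u,v)}{R(u,v)} \tag{12}

P(u,v) = \left|I(u,v)\right|^{2} = R^{2}(u,v) + A^{2}(u,v) \tag{13}

D(u,v) = \log\bigl(1 + \left|I(u,v)\right|\bigr) \tag{14}
```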
The expectation value of the enhanced image has to be computed, which yields the relation,
where ‘E’ denotes expectation. The variance of the enhanced image has to be computed using the relation given in equation (16),
The autocovariance of the enhanced image has to be computed using the relation given in equation (17),
The power spectral density also has to be computed, from the autocovariance of equation (17),
where C_{xx}(m,n) is the autocovariance function with samples ‘m’ and ‘n’, and W(m,n) is the Blackman window function with samples ‘m’ and ‘n’.
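A Blackman-windowed (Blackman–Tukey style) PSD estimate of the kind equations (15)–(18) describe can be sketched in one dimension as follows. Treating a single image row as the signal, and the particular test sinusoid, are simplifying assumptions for illustration only.

```python
import math

def autocovariance(x):
    """Biased autocovariance C_xx(m) of a 1-D signal (e.g. one row of
    the enhanced image), after removing the expectation value."""
    n = len(x)
    mu = sum(x) / n
    x = [v - mu for v in x]
    return [sum(x[i] * x[i + m] for i in range(n - m)) / n for m in range(n)]

def blackman(m, n):
    """Symmetric Blackman window evaluated at lag m, 0 <= m < n."""
    t = math.pi * m / (n - 1)
    return 0.42 + 0.5 * math.cos(t) + 0.08 * math.cos(2 * t)

def blackman_tukey_psd(x, nfft=256):
    """PSD estimate: DFT magnitude of the Blackman-windowed
    autocovariance sequence."""
    n = len(x)
    acov = autocovariance(x)
    c = [acov[m] * blackman(m, n) for m in range(n)]
    spectrum = []
    for k in range(nfft // 2 + 1):
        re = sum(c[m] * math.cos(2 * math.pi * k * m / nfft) for m in range(n))
        im = sum(c[m] * math.sin(2 * math.pi * k * m / nfft) for m in range(n))
        spectrum.append(math.hypot(re, im))
    return spectrum

# A sinusoid at 0.1 cycles/sample should peak near bin 0.1 * 256 = 25.6
spectrum = blackman_tukey_psd([math.sin(2 * math.pi * 0.1 * t) for t in range(128)])
```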
The data compression has to be performed using the Discrete Cosine Transform (DCT); equation (19) is used for the data compression.
Further, for the computation of the principal components (that is, the eigenvalues and the corresponding eigenvectors), a pattern vector
where
and
Taking the covariance of equation (20) yields the corresponding eigenvector, given in equation (21),
and thus
where ‘λ_{i}’ are the corresponding eigenvalues.
Segmentation of an image has to be performed using the connected-component method. For the mathematical formulation, let a pixel ‘pix’ at coordinates (x,y) have two horizontal and two vertical neighbours, whose coordinates are (x+1,y), (x−1,y), (x,y+1) and (x,y−1). These form the set of 4-neighbours of ‘pix’, denoted N_{4}(pix). The four diagonal neighbours of ‘pix’ have coordinates (x+1,y+1), (x+1,y−1), (x−1,y+1) and (x−1,y−1), denoted N_{D}(pix). The union of N_{4}(pix) and N_{D}(pix) yields the 8-neighbours of ‘pix’. Thus,
A path between pixels ‘pix_{1}’ and ‘pix_{n}’ is a sequence of pixels pix_{1}, pix_{2}, pix_{3},…,pix_{n−1}, pix_{n}, such that pix_{k} is adjacent to pix_{k+1}, for 1 ≤ k < n. A connected component is thus defined: it is obtained from the path defined over a set of pixels, which in turn depends upon the adjacency of the pixels in that path.
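The N_{4} and N_{8} adjacency definitions above translate directly into a component-labelling sketch. This is an illustration only; the function name and the toy binary image are not from the chapter.

```python
from collections import deque

def label_components(img, neighbours=8):
    """Label connected components of a binary image (list of lists of
    0/1) by breadth-first search over N4 or N8 adjacency."""
    if neighbours == 4:
        offs = [(1, 0), (-1, 0), (0, 1), (0, -1)]           # N4(pix)
    else:
        offs = [(1, 0), (-1, 0), (0, 1), (0, -1),
                (1, 1), (1, -1), (-1, 1), (-1, -1)]          # N4 union ND
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] and not labels[y][x]:
                current += 1
                q = deque([(y, x)])
                labels[y][x] = current
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in offs:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current

# Diagonal line: one component under N8, three under N4
img = [[1, 0, 0],
       [0, 1, 0],
       [0, 0, 1]]
_, n8 = label_components(img, neighbours=8)
_, n4 = label_components(img, neighbours=4)
```

The diagonal example shows why the choice of adjacency matters when detecting the ROI boundary.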
From this, the speed of walking has to be calculated. Mathematically, let the source be ‘S’ and the destination ‘D’. Also assume that normally this distance is covered in ‘T’ steps, so ‘T’ frames or samples of images are required.
Consider the first frame, with the left foot (F_{L}) at the back and the right foot (F_{R}) at the front, with coordinates F_{L}(x_{1},y_{1}) and F_{R}(x_{2},y_{2}). Applying the Manhattan distance measure, the steplength has to be computed as,
Let T_{act} steps normally be required to reach the destination. From equation (24), T_{1} has to be calculated for the first frame. Similarly, for the n-th frame, T_{n} has to be calculated. The total steps are thus,
Thus the walking speed or walking rate has to be calculated as,
2.1.2. Mathematical formulation for extraction of behavioral traits from humangait
Next, to compute the net input to the output units, the delta rule for pattern association has to be employed, which yields the relation,
where ‘y_{inj}’ is the net input to the j^{th} output unit for the input pattern ‘x_{i}’, with j = 1 to n.
Thus the weight matrix for the heteroassociative memory neural network has to be calculated from equation (27). For this, the activation of the output units has to be made conditional.
The output vector ‘y’ gives the pattern associated with the input vector ‘x’. Other activation functions may also be used in the case where the target response of the net is binary. Thus a suitable activation function is proposed as,
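The heteroassociative association described around equations (27)–(29) can be sketched as below. This is a minimal sketch using the Hebb-rule variant of heteroassociative training (the chapter's delta rule iterates toward the same association); the bipolar training pairs are hypothetical.

```python
def train_hetero(pairs):
    """Hebb-rule weight matrix W[i][j] = sum over pairs of s_i * t_j
    for a heteroassociative memory over bipolar patterns."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    w = [[0] * m for _ in range(n)]
    for s, t in pairs:
        for i in range(n):
            for j in range(m):
                w[i][j] += s[i] * t[j]
    return w

def recall(w, x):
    """Net input y_in_j = sum_i x_i * w_ij, followed by a bipolar step
    activation (the conditional activation of the output units)."""
    m = len(w[0])
    y_in = [sum(x[i] * w[i][j] for i in range(len(x))) for j in range(m)]
    return [1 if v > 0 else (-1 if v < 0 else 0) for v in y_in]

# Hypothetical bipolar training pairs: input pattern -> class code
pairs = [([1, -1, 1, -1], [1, -1]),
         ([-1, 1, -1, 1], [-1, 1])]
W = train_hetero(pairs)
```

Recalling with a stored input pattern reproduces its associated output code, which is the associativity property the text relies on.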
Two measures, accuracy and precision, have been derived to assess the performance of the system; they may be formulated as,
where TPR = True positive recognition and FPR = False positive recognition.
Further analysis has to be done for the recognition of behavioural traits with two target classes (normal and abnormal). The AHGM has various states, each of which corresponds to a segmental feature vector. In one state, the segmental feature vector is characterized by eleven parameters. Considering only three parameters based on step_length (the distance, the mean, and the standard deviation), the AHGM is composed of the following parameters,
where AHGM_{1} denotes the artificial humangait model of the first feature vector, D_{s1} the distance, μ_{s1} the mean and σ_{s1} the standard deviation based on step_length. Let w_{norm} and w_{abnorm} be the two target classes representing the normal foot and the abnormal foot respectively. The clusters of features have been estimated by taking the probability distribution of these features, employing Bayes decision theory. Let P(w_{i}) be the prior probabilities of the classes, i = 1,2,…,M, and let p(β/w_{i}) be the class-conditional probability density. Assume an unknown gait image represented by the feature vector β. The conditional probability p(w_{j}/β) that β belongs to the j^{th} class is given by Bayes rule as,
So, for the class j = 1 to 2, the probability density function p(β), yields,
Equation (33) gives the a posteriori probability in terms of the a priori probability P(w_{j}). Hence it is logical to classify the signal β as follows:
If P(w_{norm}/β) > P(w_{abnorm}/β), then the decision is β Є w_{norm}, meaning ‘normal behaviour’; otherwise the decision is β Є w_{abnorm}, meaning ‘abnormal behaviour’. If P(w_{norm}/β) = P(w_{abnorm}/β), then the case remains undecided, that is, there is a 50% chance of the decision being right. The solution methodology, with the developed algorithm, is given below for the complete analysis through humangait made so far in this chapter.
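The two-class Bayes decision just described can be sketched as follows. All numbers are illustrative assumptions (a scalar step-length feature with Gaussian class-conditionals), not measured values from the chapter's experiments.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density used as the class-conditional p(beta|w)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def classify(beta, priors, params):
    """Bayes decision between w_norm and w_abnorm for a scalar feature beta.
    priors: P(w) per class; params: (mu, sigma) per class."""
    evidence = sum(gaussian_pdf(beta, *params[c]) * priors[c] for c in priors)
    post = {c: gaussian_pdf(beta, *params[c]) * priors[c] / evidence for c in priors}
    if post["w_norm"] > post["w_abnorm"]:
        return "normal behaviour"
    if post["w_norm"] < post["w_abnorm"]:
        return "abnormal behaviour"
    return "undecided"

# Hypothetical class statistics for a step-length feature (in pixels)
priors = {"w_norm": 0.5, "w_abnorm": 0.5}
params = {"w_norm": (60.0, 5.0), "w_abnorm": (40.0, 8.0)}
decision = classify(58.0, priors, params)
```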
Algorithm 1.
NGBBCR {NeuroGenetic Based Behavioural Characteristics Recognition}
The formation of an artificial humanface model (AHFM) is illustrated thoroughly in the next section of this chapter.
2.2. Illustration of an Artificial HumanFace Model (AHFM)
In recent times, the frontal portion of humanface images has been used for biometric authentication. The present section incorporates frontal humanface images only for the formation of the corpus. For the recognition of the physiological and behavioural traits of the subject (the human being), however, the sideview of the humanface has to be analysed using a hybrid approach, meaning the combination of an artificial neural network (ANN) and a genetic algorithm (GA). The work has to be carried out in two stages. In the first stage, the AHFM is formed as a corpus using frontal humanface images of different subjects. In the second stage, the model or corpus is utilized at the back end for the recognition of the physiological and behavioural traits of the subject. An algorithm has been developed that performs this objective using the neuro-genetic approach; it will also be helpful for biometric authentication. The algorithm is called HABBCR (Hybrid Approach Based Behavioural Characteristics Recognition). The recognition process has to be carried out with the help of a test image of the humanface captured at an angle of ninety degrees, such that the humanface is parallel to the surface of the image. Relevant geometrical features, with the orientation of the image reduced from ninety degrees downward in five-degree steps, have to be matched with the features stored in the database. The classification process of acceptance and rejection has to be done after the best-fit matching process. The developed algorithm has been tested with 100 subjects of varying age groups. The results have been found very satisfactory with the data sets and will be helpful in bridging the gap between the computer and the authorized subject for greater system security. More illustrations through the humanface are given in the next subsection of this chapter.
2.2.1. Mathematical formulation for extraction of physiological traits from humanface
The relevant physiological traits have to be extracted from the frontal humanface images, and template matching has to be employed for the recognition of the behavioural traits of the subject. Little work has been done in the area of humanface recognition by extracting features from the sideview of the humanface. When frontal images are tested for recognition with even a minimal orientation of the face or the image boundaries, the performance of the recognition system degrades. In the present chapter, the sideview of the face has to be considered with a 90-degree orientation. After enhancement and segmentation of the image, the relevant physiological features have to be extracted. These features have to be matched using an evolutionary algorithm, the genetic algorithm. Zhao and Chellappa (2000) proposed a shape-from-shading (SFS) method for the preprocessing of 2D images. In the same year, Hu et al. modified this work by proposing a 3D-model approach and creating synthetic images under different poses. Also in 2000, Lee et al. proposed a similar idea, a method in which an edge model and colour regions are combined for face recognition after synthetic images are created by a deformable 3D model. In 2004, Xu et al. proposed a surface-based approach that uses Gaussian moments. A strategy with two zones of the frontal face has been proposed (Chua et al. 1997; Chua et al. 2000): the forehead portion, and the nose and eyes portion. In the present work, the training of the system has to be carried out using the frontal portion of the face, considering four zones of the humanface for the recognition of physiological characteristics or traits. They are:
first, the head portion; second, the forehead portion; third, the eyes and nose portion; and fourth, the mouth and chin portion. From the literature survey it has been observed that there is still scope in face recognition using ANN and GA (the hybrid approach). For the above discussion, the mathematical formulation is the same as that done for the humangait analysis.
2.2.2. Mathematical formulation for extraction of behavioral traits from humanface
In order to compute the orientation using the reducing strategy, the phase angle must first be calculated for the original image, using the connected-component and path definitions given earlier. Hence, starting from equation (12), the mathematical modelling proceeds as follows.
Let I_{k} be the sideview image with orientation ‘k’. If k = 90, then I_{90} is the image with the actual sideview. Let the real and imaginary components of this oriented image be R_{k} and A_{k}. For a k = 90 degree orientation,
For k = 90
Thus the phase angle of the image with k = 90 orientation is
If k = k−5 (applying the reducing strategy), equation (37) yields,
Between equations (37) and (38) there will be a lot of variation in the output. Hence it has to be normalized by applying the logarithm to both equations (37) and (38).
Taking the covariance of equation (39) yields the perfect orientation between the two sideview images, that is, I
The distances between the connected components have to be computed using the Euclidean distance method. A perfect matching with the corpus has to be done with best-fit measures using the genetic algorithm. If the matching fails, then the orientation is reduced further by 5°, that is, k = k−5, and the process is repeated until k = 45°.
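The reducing-strategy loop just described can be sketched as follows. This is an illustration under stated assumptions: the feature vectors, the corpus entries and the acceptance threshold are hypothetical, and Euclidean matching stands in for the genetic-algorithm best-fit step.

```python
def recognize_sideview(test_features_by_angle, corpus, threshold=5.0):
    """Try the 90-degree sideview first, then reduce the orientation
    in 5-degree steps down to 45 degrees, matching features against
    the corpus by Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    k = 90
    while k >= 45:
        feats = test_features_by_angle.get(k)
        if feats is not None:
            best_id, best_feats = min(corpus.items(),
                                      key=lambda kv: dist(feats, kv[1]))
            if dist(feats, best_feats) <= threshold:
                return best_id, k   # matched subject and orientation used
        k -= 5                      # reducing strategy: k = k - 5
    return None, None

# Hypothetical corpus (AHFM) and test features per orientation
corpus = {"subject_01": [10.0, 22.0, 31.0], "subject_02": [14.0, 19.0, 40.0]}
test = {90: [25.0, 5.0, 2.0], 85: [10.5, 21.5, 31.5]}
match, angle = recognize_sideview(test, corpus)
```

Here the 90-degree view fails the threshold, and the match succeeds at 85 degrees, which mirrors the fallback behaviour the text describes.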
The developed algorithm for the recognition of behavioural traits through the humanface is given below.
Algorithm 2.
HABBCR {Hybrid Approach Based Behavioural Characteristics Recognition}
The understanding of the two formed models with mathematical analysis has been illustrated in the subsequent sections of this chapter of the book.
2.3. Understanding of AHGM and AHFM with mathematical analysis
For the recognition of the physiological and behavioural traits of the subject (the human being), a test image has to be fed as input. This is shown in figure 3 below.
From figure 3, the test image is first captured and then filtered using the DCT, after proper conversion of the original image to a grayscale image. Later on it is segmented for further processing and then normalized. Relevant physiological and behavioural features are extracted, and proper matching is done using the developed algorithms NGBBCR and HABBCR. In the present chapter two environments have been considered: the openair space and the underwater space.
2.3.1. How has the openair space environment been considered for recognition?
A test image in openair space (either a humangait or a humanface) has to be captured. It has to be converted into a grayscale image and filtered using the DCT, then normalized and localized to the regions of interest (ROI) with the objects of interest (OOI); the segmentation process is thus completed. A sufficient number of physiological and behavioural traits or features has to be extracted from the test image, and a template (say OATEMPLATE) has to be formed. Using Euclidean distance measures, the differences from the features stored in the corpus (AHGM and AHFM) have to be calculated. This has to be carried out using the lifting scheme of the wavelet transform (LSWT). The details are explained through figure 4, shown below.
Figure 4 shows that the enhanced and segmented part of the test image, x(m, n), has to be split into two components: a detail component and a coarser component. From both of the obtained components, additional parameters have to be computed using mathematical approximations. These parameters (referred to as the prediction (P) and update (U) coefficients) have to be used for an optimal and robust recognition process.
With reference to figure 4, the enhanced and segmented part of the test image x(m, n) first has to be separated into disjoint subsets of samples, say xe(m, n) and xo(m, n). From these, the detail value d(m, n) has to be generated with a prediction operator P. Similarly, the coarser value c(m, n) has to be generated with an update operator U, which is applied to the detail signal and added to the even components of the enhanced and segmented test image. These lifting-scheme parameters have to be computed using the polar method, or Box-Muller method.
Let X(m, n) be the digitized image after enhancement. Split this signal into two disjoint subsets of samples, that is, the even and odd components X_{e}(m, n) and X_{o}(m, n) respectively, which simplifies to X_{e}(m, n) = X(2m, 2n) and X_{o}(m, n) = X(2m+1, 2n+1). From this, two new values have to be generated, called the detail value d(m, n) and the coarser value c(m, n).
The detail value d(m, n) has to be generated using the prediction operator P, as depicted in figure 4. Thus it yields,
Similarly, the coarser value c(m, n) has to be generated by applying the update operator U to d(m, n) and adding the result to X_{e}(m, n), which yields,
After substituting equation (41) in equation (42), it yields,
The lifting-scheme parameters, P and U, were initially computed using a simple iterative method of numerical computation, but this took a lot of time to produce the result. To overcome this difficulty, the polar or Box-Muller method has to be applied. Algorithm 3 below depicts these computations.
Algorithm 3.
Computation of U and P values
As per the polar or Box-Muller method, unstandardized random variables can be standardized using the mean and variance of the test image. More clearly: let Q be an unstandardized variable or parameter and Q′ its standardized form. The relation Q′ = (Q − μ)/σ rescales the unstandardized parameter to standardized form, and conversely Q = σQ′ + μ, where μ is the mean and σ is the standard deviation.
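The polar (Marsaglia) variant of the Box-Muller method, together with the rescaling Q = σQ′ + μ just described, can be sketched as below; the particular μ, σ and sample count are illustrative assumptions.

```python
import math, random

def polar_gaussian(mu=0.0, sigma=1.0):
    """Polar (Marsaglia) variant of the Box-Muller method: draw a
    standard normal deviate Q' and rescale it with Q = sigma*Q' + mu."""
    while True:
        u = 2.0 * random.random() - 1.0
        v = 2.0 * random.random() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:                       # accept points inside the unit circle
            q_std = u * math.sqrt(-2.0 * math.log(s) / s)
            return sigma * q_std + mu

random.seed(7)
samples = [polar_gaussian(mu=5.0, sigma=2.0) for _ in range(20000)]
mean = sum(samples) / len(samples)
```

The sample mean and variance should approach μ = 5 and σ² = 4, which is the standardization property the lifting-scheme parameter computation relies on.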
The computed values of U and P then have to be utilized in the inverse mechanism of the lifting scheme of the wavelet transform. For cross-checking the computed values of the lifting-scheme parameters, this inverse mechanism (IALSWT) has to be employed, as depicted in figure 5.
With reference to figure 5, the coarser and detail values have to be utilized along with the values of the lifting-scheme parameters. The resultant forms are then merged and the output, y(m, n), is obtained. A comparison is then made between the output y(m, n) and the enhanced and segmented part of the image, x(m, n). If they do not match, the lifting-scheme parameters are recalibrated using a feedback mechanism, and the whole procedure is repeated until the values of x(m, n) and y(m, n) match or close-to-matching scores are generated.
From figure 5, further analysis has to be carried out by merging or adding the two signals c”(m, n) and d”(m, n) for the output signal Y(m, n). Thus it yields,
where c”(m, n) is the inverse of the coarser value c(m, n), and d”(m, n) is the inverse of the detail value d(m, n). From figure 5, it follows that:
and
On substituting equation (45) in equation (46), it gives:
On adding equations (46) and (47), it yields,
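The predict/update steps and their inverses described around equations (41)–(48) can be sketched as a one-dimensional lifting step. This is a minimal sketch: constant P and U operators are an assumption (the chapter computes P and U by the polar method), and P = 1, U = 0.5 corresponds to the Haar case.

```python
def lifting_forward(x, P=1.0, U=0.5):
    """One lifting step on a 1-D signal: split into even/odd samples,
    predict the odd from the even (d = xo - P*xe), then update the
    even with the detail (c = xe + U*d)."""
    xe, xo = x[0::2], x[1::2]
    d = [o - P * e for e, o in zip(xe, xo)]
    c = [e + U * dd for e, dd in zip(xe, d)]
    return c, d

def lifting_inverse(c, d, P=1.0, U=0.5):
    """Inverse mechanism: undo the update, undo the prediction, merge."""
    xe = [cc - U * dd for cc, dd in zip(c, d)]
    xo = [dd + P * e for dd, e in zip(d, xe)]
    y = [0.0] * (2 * len(c))
    y[0::2], y[1::2] = xe, xo
    return y

x = [4.0, 6.0, 8.0, 4.0, 2.0, 2.0]
c, d = lifting_forward(x)
y = lifting_inverse(c, d)
```

Exact reconstruction of x from (c, d) is the cross-check that figure 5's feedback mechanism tests: with correctly calibrated P and U, y(m, n) matches x(m, n).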
To form a robust pattern-matching model, assume the inputs to the node are the values x_{1}, x_{2},…,x_{n}, which typically take the values –1, 0, 1 or real values within the range (−1, 1). The weights w_{1}, w_{2},…,w_{n} correspond to the synaptic strengths of the neuron; they serve to increase or decrease the effects of the corresponding input values ‘x_{i}’. The sum of the products x_{i} * w_{i}, i = 1 to n, serves as the total combined input to the node. To compute the weights, let the training input vector be ‘G_{i}’ and the testing vector ‘H_{i}’ for i = 1 to n. The weights of the network have to be recalculated iteratively, comparing the training and testing data sets, so that the error is minimized.
If zero error results, a robust pattern-matching model is formed. The process for the formation of this model is given in algorithm 4.
Algorithm 4.
Robust pattern matching model
Depending upon the distances, the best test-matching scores are mapped using the unidirectional temporary associated memory (UTAM) of the artificial neural network. The term unidirectional is used because each input component is mapped to an output component, forming a one-to-one relationship. Each component has to be designated with a unique codeword; the set of codewords is called a codebook. The concept of UTAM has to be employed in the present work as a mapping function for two different cases:
Distortion measure between unknown and known images
Locating codeword between unknown and known image feature
To illustrate these cases mathematically, let K_{in} = {I_{1},I_{2},…,I_{n}} and K_{out} = {O_{1},O_{2},…,O_{m}} consist of ‘n’ input and ‘m’ output codewords respectively. The values of ‘n’ and ‘m’ are the maximum size of the corpus. In the recognition stage, a test image, represented by a sequence of feature vectors U = {U_{1},U_{2},…,U_{u}}, has to be compared with a trained image stored in the form of a model (AHGM or AHFM), represented by a sequence of feature vectors K_{database} = {K_{1},K_{2},…,K_{q}}. To satisfy the unidirectional associativity condition, that is, K_{out} = K_{in}, the AHGM and AHFM have to be utilized for proper matching of features. The matching of features has to be performed by computing the distortion measure, and the value with the lowest distortion has to be chosen. This yields the relation,
The distortion measure has to be computed by taking the average of the Euclidean distance
where
Dividing equation (50) by equation (51), it yields,
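The lowest-distortion codeword search that equations (50)–(52) describe can be sketched as follows; the codebook entries and test feature sequence below are hypothetical.

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def average_distortion(test_seq, codeword):
    """Distortion measure: average Euclidean distance between the test
    feature sequence U = {U1..Uu} and one stored codeword."""
    return sum(euclidean(u, codeword) for u in test_seq) / len(test_seq)

def best_codeword(test_seq, codebook):
    """Choose the codeword with the lowest distortion measure."""
    return min(codebook, key=lambda cw: average_distortion(test_seq, codebook[cw]))

# Hypothetical codebook (from the AHGM/AHFM corpus) and test sequence
codebook = {"K1": [1.0, 2.0], "K2": [4.0, 6.0]}
U = [[3.8, 5.9], [4.1, 6.2]]
winner = best_codeword(U, codebook)
```

The winning codeword is the one mapped through the UTAM one-to-one relationship back to its input component.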
The procedure for computing the weights is depicted in algorithm 5 below:
Algorithm 5.
Procedure to compute weight (S)
Next, for locating the codeword, the hybrid approach of soft computing has to be applied in a well-defined way. The hybrid approach utilizes some concepts from forward-backward dynamic programming and some from neural networks. From the literature it has been observed that, for an optimal solution, the genetic algorithm is the best search algorithm based on the mechanics of natural selection, crossover, mutation and reproduction. It combines survival of the fittest among string structures with a structured yet randomized information exchange. In every generation, new sets of artificial strings are created and tried against a new measure; the algorithm efficiently exploits historical information to speculate on new search points with expected improved performance. In other words, genetic algorithms are theoretically and computationally simple and thus provide robust, optimized search methods in complex spaces. The selection operation selects the physiological and behavioural features of the humangait and humanface images as chromosomes from the population, with respect to some probability distribution based on fitness values. The crossover operation combines the information of the selected chromosomes (humangait and humanface images) and generates offspring. The mutation operation modifies the offspring values after selection and crossover for the optimal solution. In the present chapter, a robust pattern-matching model signifies the population of genes, that is, the physiological and behavioural features. Using the neuro-genetic approach, similar work has been done by Tilendra Shishir Sinha et al. (2010) for the recognition of anomalies in the foot using a proposed algorithm, NGBAFR (neuro-genetic based abnormal foot recognition).
The methodology adopted there differed in the classification and recognition process; the work has since been further modified by the authors and is highlighted in the present part of the book using the soft-computing techniques of genetic algorithms and artificial neural networks. Hence the classification and decision processes are carried out as per the algorithm discussed earlier in this chapter.
2.3.2. How is an underwater environment considered for recognition?
Simply, a test image of a subject (either walking or swimming in water) has to be captured, keeping the camera in an open-air space at a ninety-degree angle to the surface of the water. The image has to be converted into a grayscale image and then filtered using DCT. It is then normalized and localized for the regions of interest (ROI) with the objects of interest (OOI). Next, segmentation is carried out using the structural analysis method. A sufficient number of physiological and behavioural traits or features has to be extracted from the test image, and a template (say UWTEMPLATE) has to be formed. Using Euclidean distance measures, the differences between the test dataset and the trained dataset are calculated. Depending upon the distances obtained, the best test-matching scores are generated using the genetic algorithm for an optimal classification process, and finally the decision process is carried out.
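The Euclidean matching step above can be sketched as below. This is a hypothetical illustration in which UWTEMPLATE and the trained templates are plain feature vectors; the chapter does not specify the exact feature layout.

```python
import numpy as np

def euclidean_score(uw_template, trained_templates):
    """Distance of a test template (UWTEMPLATE) to each trained template.

    Returns the index of the closest trained template together with all
    distances, so the classifier can accept or reject against a threshold.
    """
    test = np.asarray(uw_template, dtype=float)
    dists = [float(np.linalg.norm(test - np.asarray(t, dtype=float)))
             for t in trained_templates]
    best = int(np.argmin(dists))
    return best, dists
```

For example, a test template of [1.0, 2.0] matched against trained templates [0.0, 0.0] and [1.0, 2.1] selects the second, at a distance of 0.1.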
2.4. Experimental results and discussions through case studies
In the present chapter, humangait and humanface images have been captured in an open-air space. The testing of the physiological and behavioural characteristics or features or traits of the subject is done not only in an open-air space but also in underwater space.
2.4.1. Case study: In an open-air space
In an open-air space, first a humangait image has to be captured and fed as input. Next it has to be enhanced. It is then compressed for distortion removal with lossless information. Next it has to be segmented for contour detection, and the relevant physiological features have to be extracted. All the features of the humangait image are stored in a corpus called AHGM. Similarly, in an open-air space, a humanface image also has to be captured and fed as input. Next it is enhanced, compressed and segmented, and the relevant physiological features are extracted. The extracted physiological features are stored in a corpus called AHFM. For the recognition of physiological and behavioural traits, test images of humangait and humanface (from the side view) are fed as input. Both images are enhanced and compressed for distortion removal. Then both are segmented for the extraction of relevant physiological and behavioural features. Using the computer algorithms (depicted in algorithm 2 and algorithm 3), the extracted features have to be compared with those stored in the corpus (AHGM and AHFM). Depending upon the result of the comparison, the classification has to be made. Relevant testing with the necessary test data, considering 10 frames of 100 different subjects of varying ages, validates the developed algorithm. The efficiency of recognizing the physiological and behavioural traits has been kept at a threshold range of 90% to 100% and verified against the trained dataset. Using the genetic algorithm, the best-fit scores have been achieved. Figure 6 shows the original image of one subject along with the segmented portion of the humangait in standing mode.
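The chapter does not give the DCT compression step in code; the sketch below is one common interpretation, assuming compression is performed by retaining only the low-frequency block of an orthonormal 2-D DCT. Note that such retention is lossy in general, while keeping all coefficients reconstructs the image exactly.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix of size n x n.
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0, :] = np.sqrt(1.0 / n)  # DC row gets the 1/sqrt(n) scale
    return c

def compress_dct(gray, keep=32):
    # 2-D DCT of a grayscale frame; keep only the top-left keep x keep
    # low-frequency coefficients (where most image energy sits) and
    # reconstruct by the inverse transform.
    h, w = gray.shape
    ch, cw = dct_matrix(h), dct_matrix(w)
    coeffs = ch @ gray @ cw.T
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0
    return ch.T @ (coeffs * mask) @ cw  # inverse DCT of retained block
```

Because the basis matrix is orthonormal, calling `compress_dct` with `keep` equal to the full frame size returns the input unchanged, which is a convenient correctness check.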
After segmentation of the humangait image in walking mode, extraction of physiological features using relevant mathematical analysis has to be done. Some of the distance measures of subject #1 with the right leg at the front are shown in figure 8. Similarly, distance measures of subject #1 with the left leg at the front are shown in figure 9.
The relevant physiological features, that is, the step length and the knee-to-ankle distance, have also been extracted and are shown in figure 11.
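Measures such as the step length and knee-to-ankle distance reduce to Euclidean distances between landmark points on the segmented contour. A minimal sketch, assuming hypothetical (x, y) pixel coordinates for the ankle and knee landmarks:

```python
import math

def distance(p, q):
    # Euclidean distance between two landmark points (x, y) in pixels.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def gait_measures(left_ankle, right_ankle, knee, ankle):
    # Step length: distance between the two ankle landmarks in one frame;
    # knee-to-ankle distance: measured along the same leg.
    return {
        "step_length": distance(left_ankle, right_ankle),
        "knee_to_ankle": distance(knee, ankle),
    }
```

The landmark names and coordinates here are illustrative only; in the chapter these points come from the segmented humangait contour.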
As per the developed algorithm called NGBBCR, 'NORMAL BEHAVIOUR' has been achieved for most of the test cases; only very few test cases yielded 'ABNORMAL BEHAVIOUR'. Table 1 below describes the physiological features extracted in standing and walking mode of subject #1.

From table 1, it has been observed that only minimal variation occurs from one frame to the other. This has been plotted in figure 11 below for graphical analysis. The extracted parameters with respect to the physiological features, verified for best-fit scores using NGBBCR, are shown in table 2a and table 2b. The graphical representation of table 2a and table 2b is depicted in figure 12.

Table 1. Physiological features extracted in standing and walking mode of subject #1.

IMG1    Standing (Left Leg facing towards Camera)    10.152    269.408   2620.94   156.513   30050
IMG2    Standing (Right Leg facing towards Camera)   10.2679   242.651   2584.98   156.12    28665.2
IMG3    Walking (Left Leg Movement)                  9.10686   290.723   2637.0    151.059   36575.5
IMG4    Walking (Right Leg Movement)                 9.04764   430.713   2548.37   148.452   41360
IMG5    Walking (Left Leg Movement)                  9.2831    412.108   2658.3    152.113   43500.5
IMG6    Walking (Right Leg Movement)                 9.07875   365.896   2650.96   150.842   41360
IMG7    Walking (Left Leg Movement)                  9.67294   384.685   2544.82   149.573   36796.7
IMG8    Walking (Right Leg Movement)                 9.67004   376.443   2612.81   152.036   39045
IMG9    Walking (Left Leg Movement)                  9.83117   423.315   2702.1    155.715   41125.5
IMG10   Walking (Right Leg Movement)                 9.8643    349.463   2486.73   147.951   35262.5
Table 2. Extracted parameters with respect to the physiological features of subject #1, verified for best-fit scores using NGBBCR.

IMG1    Standing (Left Leg facing towards Camera)    6.1983e+008   28.974    3.99699   0.000427322   130.005
IMG2    Standing (Right Leg facing towards Camera)   6.1983e+008   15.9459   9.62817   0.00041897    127.639
IMG3    Walking (Left Leg Movement)                  6.1983e+008   14.5305   3.19103   0.000178086   69.244
IMG4    Walking (Right Leg Movement)                 6.1983e+008   25.63     8.18713   0.000171731   62.8614
IMG5    Walking (Left Leg Movement)                  6.1983e+008   14.3655   3.43852   0.000154456   64.3655
IMG6    Walking (Right Leg Movement)                 6.1983e+008   24.8326   8.01451   0.000174672   64.7545
IMG7    Walking (Left Leg Movement)                  6.1983e+008   27.8642   9.02633   0.000184038   67.9991
IMG8    Walking (Right Leg Movement)                 6.1983e+008   26.4634   8.32745   0.000172107   67.8427
IMG9    Walking (Left Leg Movement)                  6.1983e+008   14.376    3.38278   0.000160546   69.2694
IMG10   Walking (Right Leg Movement)                 6.1983e+008   13.9372   3.50317   0.000197328   67.6806
From figure 12, it has been observed that the energy values (on the Y-axis) lie between −10 and +15 for all the parameters. The parameters power spectral density (psd) and standard deviation (SD) have been found constant for any frame of the subject. The eigenvectors and eigenvalues also satisfy their mathematical properties. For the rest of the extracted parameters, minimal variations have been observed.
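As an illustration of how such per-frame parameters can be computed, the sketch below assumes the frame is a 2-D grayscale array. The chapter does not specify the exact definitions (for instance, which matrix the eigenvalues are taken from), so the frame's covariance matrix is used here as one plausible choice.

```python
import numpy as np

def frame_parameters(gray):
    """Statistical parameters of one segmented frame.

    Returns the energy, the power spectral density (psd, via the 2-D FFT),
    the standard deviation, and the eigenvalues of the frame's covariance
    matrix (a symmetric matrix, so eigvalsh applies).
    """
    x = np.asarray(gray, dtype=float)
    energy = float(np.sum(x ** 2))              # sum of squared intensities
    psd = np.abs(np.fft.fft2(x)) ** 2 / x.size  # periodogram estimate
    sd = float(np.std(x))
    eigvals = np.linalg.eigvalsh(np.cov(x))
    return energy, psd, sd, eigvals
```

By Parseval's theorem, the psd defined this way sums to the frame energy, which gives a quick consistency check on the implementation.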
Next, the frontal part of the humanface has been captured, with four zones, as depicted in figure 13 below.
The relevant physiological features measured from the side view are shown in table 3 below.

2.4.2. Case study: In underwater space
In underwater space, first a humangait image has to be captured and fed as input. Next it has to be enhanced. It is then compressed for distortion removal with lossless information. Next it has to be segmented for contour detection, and the relevant physiological features have to be extracted. All the features of the humangait image are stored in a corpus called UWHGM (underwater humangait model). Similarly, in underwater space, a humanface image also has to be captured and fed as input. Next it is enhanced, compressed and segmented, and the relevant physiological features are extracted. The extracted physiological features are stored in a corpus called UWHFM (underwater humanface model). For the recognition of physiological and behavioural traits, test images of humangait and humanface are fed as input. Both images are enhanced and compressed for distortion removal. Then both are segmented for the extraction of relevant physiological and behavioural features. Using the computer algorithms (depicted in algorithm 2 and algorithm 3), the extracted features have to be compared with those stored in the corpus (UWHGM and UWHFM). Depending upon the result of the comparison, the classification has to be made. Relevant testing with the necessary test data, considering 10 frames of 100 different subjects of varying ages, validates the developed algorithm. The efficiency of recognizing the physiological and behavioural traits has been kept at a threshold range of 90% to 100% and verified against the trained dataset. Using the genetic algorithm, the best-fit scores have been achieved. Figure 14 shows the original image of one subject along with the segmented portion of the humangait in walking mode in underwater space.
3. Conclusion and further scope
This chapter includes an in-depth discussion of the algorithms developed for the formation of noise-free AHGM and AHFM corpora using the relevant physiological and behavioural features or traits or characteristics of the subject, extracted from humangait and humanface images. The algorithms have been named NGBBCR and HABBCR. It may be noted that the algorithms have been tested on a vast amount of data and have been found subject-independent and environment-independent. The algorithms have been tested not only in an open-air space but also in underwater space, and a thorough case study has also been done. The trained dataset is matched with the test dataset for the best fit, and this involves the application of artificial neural networks, fuzzy set rules and the genetic algorithm (GA). At every step in the chapter, thorough mathematical formulations and derivations have been explained.
Humangait analysis may be of immense use in the medical field for the recognition of anomalies and also for tracking prosodic features such as the mood, age and gender of the subject. It may also be used to track the recovery of a patient from injury or operation. Further, it will prove handy for spotting and tracking individuals in a crowd and hence help investigation departments.
Humanface analysis will also help the medical field to treat various conditions such as squint, facial distortions and other problems that show their symptoms through facial anomalies. It will be of great help to plastic surgeons for rectifying features and enhancing the beauty of subjects.
Underwater object recognition itself is a vast and challenging area and is proving to be of great help in the fields of environmental biology, geology, defence, oceanography and agriculture. Understanding the life and possible dangers of deep-water animals, and tracking submarines and destructive materials such as explosives and harmful wastes, are some areas of interest in this field.
References
1. Cunado, D., Nixon, M. S., Carter, J. N. (1997). Using gait as a biometric, via phase-weighted magnitude spectra. In Proceedings of the First International Conference on Audio- and Video-based Biometric Person Authentication (AVBPA'97), Crans-Montana, Switzerland, 95-102.
2. Huang, P. S., Harris, C. J., Nixon, M. S. (1999). Recognising humans by gait via parametric canonical space. Artificial Intelligence in Engineering, 13(4), 359-366.
3. Huang, P. S., Harris, C. J., Nixon, M. S. (1999). Human gait recognition in canonical space using temporal templates. IEE Proceedings – Vision, Image and Signal Processing, 146(2), 93-100.
4. Scholhorn, W. I., Nigg, B. M., Stephanshyn, D. J., Liu, W. (2002). Identification of individual walking patterns using time discrete and time continuous data sets. Gait and Posture, 15, 180-186.
5. Garrett, M., Luckwill, E. G. (1983). Role of reflex responses of knee musculature during the swing phase of walking in man. European Journal of Applied Physiology and Occupational Physiology, 52(1), 36-41.
6. Berger, W., Dietz, V., Quintern, J. (1984). Corrective reactions to stumbling in man: neuronal coordination of bilateral leg muscle activity during gait.
7. Yang, J. F., Winter, D. A., Wells, R. P. (1990). Postural dynamics of walking in humans. Biological Cybernetics, 62(4), 321-330.
8. Grabiner, M. D., Davis, B. L. (1993). Footwear and balance in older men.
9. Eng, J. J., Winter, D. A., Patla, A. E. (1994). Strategies for recovery from a trip in early and late swing during human walking. Experimental Brain Research, 2, 339-349.
10. Schillings, A. M., Van Wezel, B. M., Duysens, J. (1996). Mechanically induced stumbling during human treadmill walking. Journal of Neuroscience Methods, 67(1), 11-17.
11. Schillings, A. M., van Wezel, B. M., Mulder, T., Duysens, J. (1999). Widespread short-latency stretch reflexes and their modulation during stumbling over obstacles.
12. Smeesters, C., Hayes, W. C., McMahon, T. A. (2001). The threshold trip duration for which recovery is no longer possible is associated with strength and reaction time.
13. Yang, X. D. (1989). An improved algorithm for labeling connected components in a binary image. TR89-981.
14. Lumia, R. (1983). A new three-dimensional connected components algorithm.
15. Harris, R. I., Beath, T. (1948). Etiology of peroneal spastic flat foot. The Journal of Bone and Joint Surgery, 30B(4), 624-634.
16. Kover, T., Vigh, D., Vamossy, Z. (2006). MYRA face detection and face recognition system. In Proceedings of the Fourth International Symposium on Applied Machine Intelligence (SAMI 2006), Herlany, Slovakia, 255-265.
17. Hsu, R.-L., Abdel-Mottaleb, M., Jain, A. K. (2002). Face detection in color images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5), 696-706.
18. Zhang, J., Yan, Y., Lades, M. (1997). Face recognition: eigenface, elastic matching, and neural nets. Proceedings of the IEEE, 85(9), 1423-1435.
19. Turk, M. A., Pentland, A. P. (1991). Face recognition using eigenfaces. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 586-591.
20. Zhao, W. Y., Chellappa, R. (2000). SFS based view synthesis for robust face recognition. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition.
21. Hu, Y., Jiang, D., Yan, S., Zhang, L., Zhang, H. (2000). Automatic 3D reconstruction for face recognition. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition.
22. Lee, C. H., Park, S. W., Chang, W., Park, J. W. (2000). Improving the performance of multi-class SVMs in face recognition with nearest neighbour rule. In Proceedings of the IEEE International Conference on Tools with Artificial Intelligence.
23. Xu, C., Wang, Y., Tan, T., Quan, L. (2004). Automatic 3D face recognition combining global geometric features with local shape variation information. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition.
24. Chua, C. S., Jarvis, R. (1997). Point signature: a new representation for 3D object recognition. International Journal of Computer Vision, 25.
25. Chua, C. S., Han, F., Ho, Y. K. (2000). 3D human face recognition using point signature. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition.
26. Sinha, T. S., Patra, R. K., Raja, R. (2011). A comprehensive analysis for abnormal foot recognition of humangait using neuro-genetic approach.