
Efficiency of Recognition Methods for Single Sample per Person Based Face Recognition

Written By

Milos Oravec, Jarmila Pavlovicova, Jan Mazanec, Lubos Omelina, Matej Feder and Jozef Ban

Submitted: 27 October 2010 Published: 27 July 2011

DOI: 10.5772/18432

From the Edited Volume

Reviews, Refinements and New Ideas in Face Recognition

Edited by Peter M. Corcoran


1. Introduction

Even with present-day computer technology, biometric recognition of the human face remains a difficult task and a continually evolving area of biometric recognition. The area of face recognition is well described today in many papers and books, e.g. (Delac et al., 2008), (Li & Jain, 2005), (Oravec et al., 2010). It is generally accepted that two-dimensional still-image face recognition in a controlled environment is already a solved task, and several benchmarks have evaluated recognition results in this area (e.g. Face Recognition Vendor Tests, FRVT 2000, 2002, 2006, http://www.frvt.org/). Nevertheless, many tasks remain to be solved, such as recognition in an unconstrained environment, recognition of non-frontal images, the single sample per person problem, etc.

This chapter deals with single sample per person face recognition (also called the one sample per person problem). This topic is related to the small sample size problem in pattern recognition. Although a single sample also has advantages (fast and easy creation of a face database and modest storage requirements), face recognition methods usually fail to work if only one training sample per person is available.

In this chapter, we concentrate on the following items:

  • Mapping the state-of-the-art of single sample face recognition approaches after 2006 (the period up to 2006 is covered by the detailed survey (Tan et al., 2006)).

  • Generating new face patterns in order to enlarge the database containing single samples per subject only.

Such approaches include modifications of the original face samples using e.g. noise, mean filtering, or a suitable image transform (forward transform, neglecting some coefficients, and image reconstruction by the inverse transform), as well as generating synthetic samples by ASM (active shape models) and AAM (active appearance models).

  • Comparing recognition efficiency using single and multiple samples per subject.

We illustrate the influence of the number of training samples per subject on recognition efficiency for selected methods. We use PCA (principal component analysis), MLP (multilayer perceptron), RBF (radial basis function) network, kernel methods and LBP (local binary patterns). We compare results using single and multiple training samples per person for a large image set selected from the FERET database.

  • Highlighting other important facts related to single sample recognition.

We analyze some relevant facts that can influence further development in this area. We also outline possible directions for further research.


2. Face recognition based on a single sample per person

2.1. General remarks

Generally, we can divide the face recognition methods into three groups (Tan et al., 2006): holistic methods, local methods and hybrid methods.

Holistic methods like PCA (eigenfaces), LDA (fisherfaces) or SVM in principle need more image samples per person in the training phase, so they have to be adapted in order to deal with the one sample problem.

Local methods can be divided into two groups:

  • Local feature based methods, which mostly work with some type of graph spread over the face, with nodes at important facial features; face recognition is then formulated as a problem of graph matching. These methods deal with the one sample problem better than the typical holistic methods (Tan et al., 2006). EBGM (Elastic Bunch Graph Matching) and DCP (directional corner points) are examples of this type of method.

  • Local appearance-based methods, which extract information from defined local regions. The features are extracted by known methods for texture classification (Gabor wavelets, LBP, etc.) and the feature space is reduced by known methods like PCA or LDA.

An excellent introduction to the single sample problem and a survey of related methods mapping the state-of-the-art up to 2006 can be found in (Tan et al., 2006).

2.2. State-of-the-art in single sample per person face recognition since 2006

After 2006, new approaches were proposed, based mainly on enhancements of various conventional methods.

Principal component analysis (PCA) is still one of the most popular methods used to deal with the one sample problem. Despite its popularity, calculating a representative covariance matrix from one sample is a very difficult task. In contrast to the conventional application of PCA, 2DPCA (Yang et al., 2004) is based on two-dimensional matrices, so the image does not need to be transformed into a 1D vector beforehand.

In (Que et al., 2008), a new face recognition algorithm, Modular Weighted (2D)2PCA (MW(2D)2PCA), was proposed, based on the study of (2D)2PCA. The weighting method (W) emphasizes the different influence of different eigenvectors, and the image blocking method (M) extracts detailed information from the face image more effectively. Modularization of the image into several blocks according to face elements provides more detailed information about the face and places this approach among local appearance methods rather than holistic ones. The best recognition rate achieved by this method was 74.14%.

A similar approach, which deals with the single sample problem from the human perception point of view, was proposed in (Zhan et al., 2009), where the modularized image was processed by 2D DCT to extract features instead of (2D)2PCA. Gabor filters can also be applied to an image divided into several areas to reduce the impact of illumination, as shown in (Nguyen & Bai, 2009).

A standard way to solve the single sample problem is to use local facial representations. The conventional procedure in local methods is partitioning the face image into several segments. In (Akbari et al., 2010), an algorithm based on a single image per person was proposed, with input images segmented into 7 partitions. Moment feature vectors of a definite order are extracted for all images and a distance measure is used to recognize the person.

Another way to improve recognition results is fusion of multiple biometrics. In (Ma et al., 2009), a new multi-modal biometrics fusion approach was presented. The authors used face and palmprint biometrics and combined the normalized Gaborface and Gaborpalm images at the pixel level. They presented a kernel PCA plus RBF classifier (KPRC) to classify the fused images. Using both face and palmprint samples, the average recognition results improved from 42.60% and 52.36% (single-modal biometrics) to 87.01% (multi-modal biometrics).

In (Xie & Lam, 2006), a novel Gabor-based kernel principal component analysis with doubly nonlinear mapping for human face recognition was proposed. The algorithm was evaluated on 4 databases: Yale, AR, ORL and YaleB. The best of the proposed variants of the algorithm, GW+DKPCA, gets very good results even under varying lighting, expression and pose conditions.

(Kanan & Faez, 2010) presents a new approach to face representation and recognition based on an Adaptively Weighted Sub-Gabor Array (AWSGA). The proposed algorithm utilizes a local Gabor array to represent faces partitioned into sub-patterns. It employs an adaptive weighting scheme that weights the Sub-Gabor features extracted from local areas based on the importance of the information they contain and their similarity to the corresponding local areas in the general face image. Experiments on the AR and Yale databases show that the proposed method significantly outperforms eigenfaces and modular eigenfaces in most benchmark scenarios, under both ideal conditions and varying expression and lighting conditions, and that it achieves better results under partial occlusion than the local probabilistic approach.

A novel feature extraction method named uniform pursuit (UP) was proposed in (Deng et al., 2010). A standardized procedure on the large-scale FERET and FRGC databases was applied to evaluate the one sample problem. Experimental results show that the robustness, accuracy and efficiency of the proposed UP method compete successfully with state-of-the-art one sample based methods.

In (Qiao et al., 2010), a new graph-based semi-supervised dimensionality reduction algorithm called sparsity preserving discriminant analysis (SPDA), based on SDA, was developed. Experiments on the AR, PIE and YaleB databases show that the proposed method outperforms SDA.

A solution to the single sample problem based on the Fisherface method on a generic dataset was presented in (Majumdar & Ward, 2008). The method was also extended to multiscale transform domains like wavelet, curvelet and contourlet. Results on the Faces94 and AT&T databases show that this approach outperforms the SPCA and Eigenface Selection methods. The best results came from the Pseudo-fisherface method in the wavelet domain.

In (Gao et al., 2008), a method based on singular value decomposition (SVD) was used to estimate the within-class scatter matrix so that FLDA could be applied to face recognition with only one sample image in the training set. Experiments on the FERET, UMIST, ORL and Yale databases show that the proposed method outperforms other state-of-the-art methods like E(PC)2A, SVD perturbation and different FLDA implementations.

A novel local appearance feature extraction method based on the multi-resolution Dual Tree Complex Wavelet Transform (DT-CWT) was presented in (Priya & Rajesh, 2010). Experiments with the ORL and Yale databases show that this method and its block-based modification achieve very good results under illumination, pose and expression variations compared to PCA and global DT-CWT, while keeping low computational complexity.

In (Tan & Triggs, 2010), the original LBP method used for face recognition was extended. More efficient preprocessing was proposed to eliminate illumination variations using LTP (local ternary patterns), a generalization and enhancement of the original LBP texture descriptor. By replacing the local histogram with a distance transform based similarity metric, the performance of LBP/LTP face recognition was further improved. Experiments under difficult lighting conditions with the Face Recognition Grand Challenge, Extended Yale-B and CMU PIE databases give results comparable to up-to-date methods.

Another extension of the LBP algorithm was presented in (Lei et al., 2008). The face image is first decomposed by multi-scale and multi-orientation Gabor filters, and local binary pattern analysis is then applied to the derived Gabor magnitude responses. Using the FERET database with 1 image per person in the gallery, the method achieved results outperforming LBP, PCA and FLDA.

To improve recognition accuracy, it helps to add synthetic samples of a subject to the learning process. Standard procedures for creating synthetic samples are the parallel deformation method (generating novel views of a single face image under different poses) (Tan et al., 2006), or modification of the original images by noise or filtering. In (Xu & Yang, 2009), a feature extraction technique called Local Graph Embedding Discriminant Analysis (LGEDA) was proposed, where the imitated images were generated using a mean filter.

In (Su et al., 2010), an Adaptive Generic Learning (AGL) method was described. To better distinguish persons with a single face sample, a generic discriminant model was adopted. As a specific implementation of the AGL, a Coupled Linear Representation (CLR) algorithm was proposed to infer, based on the generic training set, the within-class scatter matrix and the class mean of each person given its single enrolled sample. Thus, the traditional Fisher's Linear Discriminant (FLD) can be applied to the one sample problem. Experiments were carried out on images from the FERET, XM2VTS and CAS-PEAL databases and a private passport database. The results show that the Adaptive Gabor-FLD outperforms other methods like E(PC)2A, LBP and other FLD implementations. The proposed method is related to methods using virtual sample generation, although it does not explicitly generate any virtual sample.


3. Face recognition methods

We use various methods in order to explore in depth the behavior of face recognition methods for the single sample problem and to compare the methods using multiple face samples, both real-world samples and virtually generated samples. The methods used are briefly introduced in this subchapter.

3.1. Methods based on principal component analysis - PCA (PCA, 2D PCA and KPCA)

3.1.1. Principal component analysis - PCA

One of the most successful techniques used in face recognition is principal component analysis (PCA). The method based on PCA is named eigenfaces and was pioneered by Turk and Pentland (Turk & Pentland, 1991). In this method, each input image is transformed into a one-dimensional image vector, and the set of these vectors forms the input matrix. The main idea behind PCA is that each n-dimensional face image can be represented as a linearly weighted sum of a set of orthonormal basis vectors.

This standard statistical method can be used for feature extraction. Principal component analysis reduces the dimension of input data by a linear projection that maximizes the scatter of all projected samples (Bishop, 1995).

For classification of the projected samples, Euclidean distance or other metrics can be used. Mahalanobis Cosine (MahCosine) is defined as the cosine of the angle between the image vectors that were projected into the PCA feature space and further normalized by the variance estimates (Beveridge et al., 2003).
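A minimal eigenfaces sketch in Python/NumPy follows. It is our own illustration, not the chapter's implementation; the function names and the variable X (rows are vectorized training images) are assumptions.

```python
import numpy as np

def train_pca(X, n_components):
    """Eigenfaces: rows of X are vectorized training face images."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Turk & Pentland trick: eigendecompose the small M x M matrix Xc Xc^T
    L = Xc @ Xc.T / X.shape[0]
    vals, vecs = np.linalg.eigh(L)
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    W = Xc.T @ vecs                       # map eigenvectors back to pixel space
    W /= np.linalg.norm(W, axis=0)        # unit-length basis vectors (eigenfaces)
    return mean, W, vals

def project(X, mean, W):
    """Linear projection of face images into the PCA feature space."""
    return (X - mean) @ W

def mahcosine(a, b, vals):
    """Mahalanobis cosine: cosine angle after normalizing by variance estimates."""
    u, v = a / np.sqrt(vals), b / np.sqrt(vals)
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
```

Recognition then assigns a probe image to the gallery image with the highest MahCosine similarity (nearest neighbor in the projected space).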

3.1.2. Two-dimensional PCA – 2D PCA

PCA is a well-known feature extraction method, mostly used as a baseline for comparison purposes. Several extensions of PCA have been proposed. A major problem of PCA lies in the computation of the covariance matrix, which is computationally expensive. This computation can be significantly reduced by computing PCA features for columns (or rows) without the previous matrix-to-vector conversion. This approach is called two-dimensional PCA (Yang et al., 2004). The main idea behind 2D PCA is the projection of image columns (rows) onto a covariance matrix computed as the average of the covariance matrices of each column over all training images. Let $A_k$ be an $m \times n$ image matrix and let the average image be $\bar{A} = \frac{1}{M}\sum_{k} A_k$, where $M$ is the number of training images. Then the covariance matrix can be calculated as

$$G = \frac{1}{M} \sum_{k=1}^{M} \sum_{i=1}^{m} \left( A_k^{(i)} - \bar{A}^{(i)} \right)^{T} \left( A_k^{(i)} - \bar{A}^{(i)} \right) \qquad (1)$$

where $A_k^{(i)}$ and $\bar{A}^{(i)}$ denote the $i$-th rows of $A_k$ and $\bar{A}$, respectively.

Equation (1) reveals that the image covariance matrix can be obtained from the outer product of column (row) vectors of images, assuming the training images have zero mean.

For that reason, we say that the original 2D PCA works in the column direction of the images. The result of feature extraction is then a matrix instead of a vector; the feature matrix has the same number of columns (rows) as the width (height) of the face image.

The extraction of image features is computationally more efficient using 2D PCA than PCA, since the image covariance matrix is quite small compared to the covariance matrix in PCA (with the Turk & Pentland optimization, the size of the PCA covariance matrix depends on the number of training images). 2D PCA is not only more efficient than PCA, it can also reach even higher recognition accuracy (Yang et al., 2004).

Despite its better efficiency, 2D PCA also has one disadvantage: it needs more coefficients to represent an image than PCA. On the other hand, because the size of the image covariance matrix for 2D PCA is equal to the width of the images, which is quite small compared to the size of the covariance matrix in PCA, 2D PCA evaluates the image covariance matrix more accurately and computes the corresponding eigenvectors more efficiently than PCA.
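A sketch of 2D PCA along the lines of Eq. (1) could look as follows (our illustration; the einsum accumulates the per-image products of row differences into the small n x n matrix G):

```python
import numpy as np

def train_2dpca(images, n_components):
    """2D PCA (Yang et al., 2004): images has shape (M, m, n)."""
    A = np.asarray(images, dtype=float)
    A_mean = A.mean(axis=0)
    D = A - A_mean
    # Image covariance matrix G is only n x n (n = image width), Eq. (1)
    G = np.einsum('kij,kil->jl', D, D) / A.shape[0]
    vals, vecs = np.linalg.eigh(G)
    X = vecs[:, np.argsort(vals)[::-1][:n_components]]  # projection axes
    return A_mean, X

def features_2dpca(image, A_mean, X):
    """Feature matrix Y = (A - mean) X, of size m x n_components."""
    return (image - A_mean) @ X
```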

3.1.3. Kernel PCA – KPCA

PCA is a linear algorithm that is not able to capture nonlinear structure in the data. Kernel PCA (Müller et al., 2001) is a method computing a nonlinear form of PCA. Instead of directly performing nonlinear PCA, it implicitly computes linear PCA in a high-dimensional feature space that is nonlinearly related to the input space.
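Using the scikit-learn library, a kernel PCA feature extractor might be configured as follows; the RBF kernel, the gamma value and the variables X_train/X_test are illustrative assumptions, not the chapter's settings:

```python
from sklearn.decomposition import KernelPCA

# X_train / X_test: rows are vectorized face images (assumed variables)
kpca = KernelPCA(n_components=100, kernel='rbf', gamma=1e-7)
F_train = kpca.fit_transform(X_train)   # implicit linear PCA in feature space
F_test = kpca.transform(X_test)         # match e.g. by nearest neighbor
```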

3.2. Support vector machine - SVM

Support vector machines (SVM) (Asano, 2006; Hsu et al., 2003; Müller et al., 2001; Boser et al., 1992) are based on the concept of decision planes that define optimal boundaries. The fundamental idea is very simple: the boundary is placed so as to achieve the largest possible distance from the vectors of the different classes. An example is shown in Fig. 1, which illustrates a linearly separable problem. In the case of a linearly nonseparable problem, kernel methods are used. The concept of a kernel method is a transformation of the vector space into a higher-dimensional space.

Figure 1.

Optimal boundary of support vector machine

The kernel function is defined as follows:

$$K(\mathbf{x}, \mathbf{x}') = \Phi(\mathbf{x})^{T} \Phi(\mathbf{x}') \qquad (2)$$

The kernel function is equivalent to the inner product of x and x' measured in the higher-dimensional space given by the nonlinear mapping Φ.
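As a sketch, a multiclass SVM with an RBF kernel (corresponding to the mapping Φ of Eq. (2)) can be trained with scikit-learn. The grid values are illustrative, and the grid search assumes more than one sample per class:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# X_train: vectorized face images, y_train: subject identities (assumed variables)
param_grid = {'C': [1, 10, 100], 'gamma': [1e-4, 1e-3, 1e-2]}
svm = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=3)
svm.fit(X_train, y_train)
print(svm.score(X_test, y_test))        # recognition accuracy on the test set
```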

3.3. Methods based on neural networks (MLP, RBF network)

A neural network (Bishop, 1995; Haykin, 1994; Oravec et al., 1998) is a massively parallel processor inspired by biological nervous systems. A neural network is able to learn and to adapt its free parameters (the connections between neurons, known as synaptic weights, are adjusted during the learning process).

3.3.1. Multilayer perceptron

Multilayer perceptron (MLP) (Bishop, 1995; Haykin, 1994; Oravec et al., 1998) is a layered feedforward network consisting of input, hidden and output layers.

A multilayer perceptron operates with function and error signals. The function signal propagates forward, starting at the network input and ending at the network output as the output signal. The error signal originates at the output neurons during learning and propagates backward. The MLP is trained by the backpropagation algorithm.

The MLP represents a nested sigmoidal scheme (Haykin, 1994); its form for a single output neuron is

$$F(\mathbf{x}, \mathbf{w}) = \rho\left( \sum_{j} w_{oj}\, \rho\left( \sum_{k} w_{jk}\, \rho\left( \cdots \rho\left( \sum_{i} w_{li} x_i \right) \cdots \right) \right) \right) \qquad (3)$$

where $\rho(\cdot)$ is a sigmoidal activation function, $w_{oj}$ is the synaptic weight from neuron $j$ in the last hidden layer to the single output neuron $o$, and so on for the other synaptic weights; $x_i$ is the $i$-th element of the input vector $\mathbf{x}$. The weight vector $\mathbf{w}$ denotes the entire set of synaptic weights, ordered by layer, then by neuron within a layer, and then by synapse within a neuron.
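A backpropagation-trained MLP matching the sigmoidal scheme of Eq. (3) can be sketched with scikit-learn; the layer size, learning rate and iteration count are illustrative assumptions:

```python
from sklearn.neural_network import MLPClassifier

# X_train: face features or pixels, y_train: subject identities (assumed variables)
mlp = MLPClassifier(hidden_layer_sizes=(200,), activation='logistic',
                    solver='sgd', learning_rate_init=0.01, max_iter=500)
mlp.fit(X_train, y_train)               # backpropagation adjusts the weights w
print(mlp.score(X_test, y_test))
```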

3.3.2. Radial basis function network

A radial basis function (RBF) network (Oravec et al., 1998; Hlaváčková & Neruda, 1993) is a feedforward network consisting of an input layer, one hidden layer and an output layer. The input layer distributes the input vectors into the network, the hidden layer implements the radial basis functions hi, and the linear output neurons compute linear combinations of their inputs. The RBF network topology is shown in Fig. 2.

Figure 2.

RBF network topology

RBF network is trained in three steps:

  1. Determination of centers of the hidden neurons

  2. Computation of additional parameters of RBFs

  3. Computation of output layer weights.

The RBF network from Fig. 2 can be described as follows (Orr, 1996):

$$f(\mathbf{x}) = w_0 + \sum_{i=1}^{m} w_i h_i(\mathbf{x}) \qquad (4)$$

where x is the input to the radial basis activation functions hi and wi are the weights. The output of the network is a linear combination of the RBFs.
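A minimal NumPy/SciPy sketch of the three training steps and of Eq. (4) follows; the k-means initialization and the shared-width heuristic are our assumptions, not necessarily the variant used in the experiments:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def rbf_layer(X, centers, sigma):
    """Gaussian RBF responses h_i(x), with a leading column of ones for w0."""
    D = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    H = np.exp(-D**2 / (2 * sigma**2))
    return np.hstack([np.ones((X.shape[0], 1)), H])

def train_rbf(X, T, n_hidden):
    """Step 1: centers by k-means; step 2: a shared width; step 3: output weights."""
    centers, _ = kmeans2(X, n_hidden, minit='points')
    d_max = max(np.linalg.norm(a - b) for a in centers for b in centers)
    sigma = d_max / np.sqrt(2 * n_hidden)          # common width heuristic
    W, *_ = np.linalg.lstsq(rbf_layer(X, centers, sigma), T, rcond=None)
    return centers, sigma, W

def predict_rbf(X, centers, sigma, W):
    return rbf_layer(X, centers, sigma) @ W        # linear output layer, Eq. (4)
```

Here T would hold one-hot class targets; the predicted subject is the output neuron with the largest value.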

3.4. Local binary patterns – LBP

Local binary patterns (LBP) were first described in (Ojala et al., 1996). LBP is a computationally efficient descriptor that captures micro-structural properties and was originally proposed for texture classification. The operator labels the pixels of an image by thresholding the 3x3 neighbourhood of each pixel with the center value and interpreting the result as a binary number. Later, the LBP operator was extended to circular neighborhoods of different sizes, with the pixel values bilinearly interpolated (Fig. 3).

Figure 3.

The extended LBP operator with circular neighborhood

Another extension uses only uniform patterns. A local binary pattern is called uniform if it contains at most two bitwise transitions from 0 to 1 or vice versa when the binary string is considered circular. For example, 00000000, 00011110 and 10000011 are uniform patterns. Such patterns represent important features of the image, like corners or edges. Uniform patterns account for most of the patterns in images (Ojala et al., 1996).

A system using LBP for face recognition is proposed in (Ahonen et al., 2004, 2006). The image is divided into non-overlapping regions. In each region, a histogram of uniform LBP patterns is computed, and the histograms are concatenated into one histogram (see Fig. 4 for illustration), which represents the features extracted from the image at 3 levels (pixel, region and whole image).

Figure 4.

Description of face using concatenated LBP histogram (image taken from (Marcel et al., 2007))

The χ2 metric is used as the distance metric for comparing the histograms:

$$\chi^2(S, M) = \sum_{i} \frac{(S_i - M_i)^2}{S_i + M_i} \qquad (5)$$

where S and M are the histograms to be compared and i indexes the histogram bins.
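The region histograms and the χ² matching of Eq. (5) can be sketched with scikit-image; the grid size and LBP radius are our assumed parameter values, and the function names are ours:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(image, grid=(7, 7), P=8, R=2):
    """Concatenated per-region histograms of uniform LBP codes."""
    lbp = local_binary_pattern(image, P, R, method='uniform')
    n_bins = P + 2                        # P+1 uniform labels + 1 non-uniform bin
    h, w = lbp.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                        j * w // grid[1]:(j + 1) * w // grid[1]]
            bh, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            hists.append(bh / max(bh.sum(), 1))
    return np.concatenate(hists)

def chi2_distance(S, M, eps=1e-10):
    """Chi-square distance of Eq. (5); the nearest gallery histogram wins."""
    return np.sum((S - M) ** 2 / (S + M + eps))
```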

Advertisement

4. Face database

We used images selected from the FERET image database (Phillips et al., 1998). The FERET face image database is the de facto standard database in face recognition research. It is a complex and large database containing 14,126 images of 1,199 subjects, each of dimensions 256 x 384 pixels. The images differ in head position, lighting conditions, beard, glasses, hairstyle, expression and age of the subjects.

We worked with grayscale images from Gray FERET (FERET Database, 2001). We selected an image set containing a total of 665 images of 82 subjects. It consists of all subjects from the whole FERET database that have more than 4 frontal images with corresponding eye coordinates available (i.e. the largest possible set from the FERET database fulfilling these conditions was chosen). The used image sets are visualized in Fig. 5.

Figure 5.

Visualization of subset of images from FERET used in our experiments

The images were preprocessed. Our preprocessing consists of the following steps (a code sketch follows the list):

  • geometric normalization (aligning according to eye coordinates)

  • histogram equalization

  • masking (cropping an ellipse around the face)

  • resizing to 65 x 75 pixels
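A simplified sketch of this pipeline with scikit-image follows. It uses rotation-only alignment; a full implementation would also scale and translate according to the eye distance, so treat the geometry as an assumption:

```python
import numpy as np
from skimage import exposure, transform

def preprocess(img, left_eye, right_eye, out_shape=(75, 65)):
    """Eye-based alignment, histogram equalization, elliptical mask, resize.
    Eye coordinates are (row, col); out_shape is (height, width) = 75 x 65."""
    dy, dx = np.subtract(right_eye, left_eye)
    angle = np.degrees(np.arctan2(dy, dx))           # make the eye line horizontal
    img = transform.rotate(img, angle, center=left_eye[::-1])
    img = exposure.equalize_hist(img)
    rr, cc = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    cy, cx = np.array(img.shape) / 2
    mask = ((rr - cy) / cy) ** 2 + ((cc - cx) / cx) ** 2 <= 1.0
    return transform.resize(img * mask, out_shape)   # mask ellipse, then resize
```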

Fig. 6 shows an example of the original image and the image after preprocessing.

Figure 6.

Original images and corresponding images after preprocessing

Advertisement

5. Simulation results for 1 - 4 original training images per subject

In our experiments with original training images, we compared the efficiency of several algorithms in scenarios with 1 (single sample problem), 2, 3 and 4 images per subject. We selected algorithms generally considered to play a major role in today's face recognition research; standard PCA was also included for comparison purposes. All these methods are briefly reviewed in subchapter 3 (Face recognition methods).

Table 1.

Results for different training sets (dependence of face recognition accuracy in % with regard to number of samples per subject in the training set)

In each test with a different number of images in the training set, we made 4 runs with different selections of images for the training set: the original one, choosing the first images alphabetically by name, and 3 additional training/testing collections with randomly shuffled images. The final test results are the average of these 4 values.

Our results are summarized in Table 1 and Fig. 7. All figures and tables in this chapter report recognition accuracy in % achieved on the test sets. The notation n_train means n images (samples) per subject in the training set.

Figure 7.

Graphic comparison of the results for different training sets (dependence of face recognition accuracy in % with regard to number of samples per subject in the training set)

Presented results are summarized as follows:

Neural networks and SVM

For single sample per person training sets, the methods based on neural networks (RBF network and MLP) and also SVM achieved less favorable results (below 70%). Extending the training sets by a second sample per person slightly increased the face recognition test results for the MLP and SVM methods; for the RBF network, the second sample improved the result to above 85%. Adding a third sample per person to the training sets caused a significant improvement of the test results (above 90% accuracy was achieved for RBF and SVM). Adding more than four samples per person to the training sets has only a minimal effect on the recognition results and has a negative impact on computational and time complexity. In general, the larger the training sets, the better the recognition results.

PCA-based methods

PCA with the Euclidean distance metric, as a reference method, shows that more images per subject in the training set lead to more accurate recognition, improving from 68% with 1 img./subj. to 89% with 4 img./subj. Although it has been reported that 2D PCA can reach higher accuracy, PCA slightly outperformed 2D PCA in our experiments. However, 2D PCA still has a big advantage over PCA: faster training due to the smaller covariance matrix. As shown in (Li-wei et al., 2005), 2D PCA is equivalent to block-based PCA, which means that it uses only several parts of the covariance matrix used in PCA; in other words, we lose the information from the rest of the covariance matrix, which can lead to worse recognition rates. KPCA achieved slightly better results than 2D PCA (KPCA is included for comparison purposes here and will not be used further within this chapter).

PCA+SVM

PCA+SVM is a two-stage setup including both feature extraction and classification. Features are first efficiently extracted by PCA, with optimal truncation of the vectors of the transform matrix; the parameters for the selection of the transformation vectors are based on our previous research (Oravec et al., 2010). The classification stage is performed by SVM; the SVM model is created with the best parameters found by cross-validation on the training set. PCA+SVM has a very good recognition rate even with 1 img./subj., and with 3 and 4 img./subj. it outperforms all other methods in our tests, reaching a 97% recognition rate with 4 img./subj.
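A sketch of such a two-stage pipeline with scikit-learn follows; the component count and the grid values are illustrative, not the tuned parameters from (Oravec et al., 2010):

```python
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# X_train: vectorized face images, y_train: identities (assumed variables)
pipe = Pipeline([('pca', PCA(n_components=150)), ('svm', SVC(kernel='rbf'))])
grid = {'svm__C': [1, 10, 100], 'svm__gamma': [1e-4, 1e-3, 1e-2]}
model = GridSearchCV(pipe, grid, cv=3)   # cross-validated parameter search
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```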

LBP

In our experiments, we used the local binary patterns method for face recognition in 3 different modifications. The image is divided into 5x5 or 7x7 blocks, from which the concatenated histogram is computed. The "LBP 7x7w" modification additionally weights the histogram with different weights according to the corresponding image regions; this weighting was proposed in (Ahonen et al., 2004).

The LBP methods give the best results in our tests and were outperformed only slightly by the PCA+SVM method with 3 and 4 img./subj. The main characteristic of LBP is that the recognition results are very good even for 1 img./subj. From the graph in Fig. 7 we see that the recognition rates for the three LBP methods run parallel to each other. LBP starts at 83% and reaches 94% accuracy with 4 img./subj.; LBP 7x7 is approximately 1.5% better than the 5x5 modification, and LBP 7x7w is more than 2% better, reaching almost 97% accuracy with 4 img./subj.

Within this chapter, we work with images of size 65x75 pixels after preprocessing. In Table 2, results for image size 130x150 pixels (the FERET default standard) are shown for illustration. Generally, a larger image size can yield slightly better recognition rates.

Table 2.

Recognition rates for different LBP modifications

Advertisement

6. Simulation results for training sets enlarged by generating new samples

In the previous subchapter, we presented recognition results for methods trained with 1 img./subj. in the training sets. We also presented a comparison to results for 2, 3 and 4 img./subj. in the training sets, where the 2nd, 3rd and 4th images were original images, i.e. real images taken from the face database.

Here we consider a different situation: only 1 original sample is available and we try to enhance recognition accuracy by adding artificially generated samples to the training sets. Thus, we try to enlarge the training sets by generating new (virtual, artificial) samples. We propose to generate new samples by modifying the single available original image in different ways; this is why we use the term image modification (or modified image). A natural continuation of such an approach leads to generating synthetic face images.

In our tests, we use different modifications of the available single image per person: adding noise, applying a wavelet transform and performing a geometric transformation.

6.1. Modifications of face images by adding Gaussian noise

Noise in face images can seriously affect the performance of face recognition systems (Oravec et al., 2010). Every image capture generates digital or analog noise of diverse intensity; noise is also generated while transmitting and copying analog images, and noise generation is a natural property of image scanning systems. Here we use noise to generate modified samples of the original image. In our modifications, we use Gaussian noise (Truax, 1999).

Gaussian noise was generated using the Gaussian probability density function

$$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \qquad (6)$$

where μ is the mean value of the required distribution and σ² is the variance (Truax, 1999; Chiodo, 2006).

Figure 8.

Examples of images modified by Gaussian noise

Gaussian noise with zero mean was applied to each image, with the variance drawn at random from one of two intervals. Examples of images degraded by Gaussian noise can be seen in Fig. 8. The labels 03-06noise and 08-16noise mean that the variance of the Gaussian noise is chosen at random between 0.03 - 0.06 and 0.08 - 0.16, respectively. The same notation is used in the presented graphs and tables (Tables 3 and 4, Figs. 9 and 10). The noise parameter settings for our simulations were determined empirically. The training sets were created by adding noise-modified samples to the original one (1+1noise, 1+2noise and 1+3noise), as in the sketch below.
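A sketch of the noise modification follows (images are assumed to be floats in [0, 1]; the function name is ours):

```python
import numpy as np

rng = np.random.default_rng()

def add_gaussian_noise(img, var_low, var_high):
    """Zero-mean Gaussian noise, variance drawn at random from [var_low, var_high]."""
    sigma = np.sqrt(rng.uniform(var_low, var_high))
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

# e.g. a 1+3noise training set for the 03-06noise setting:
# [orig] + [add_gaussian_noise(orig, 0.03, 0.06) for _ in range(3)]
```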

The results for the noise modifications, shown in Tables 3 and 4 and Figs. 9 and 10, are summarized as follows:

Neural networks and SVM

The improvement for RBF, MLP and SVM is clearly visible. For both noise modifications (03-06noise and 08-16noise), the most significant increase in test accuracy is achieved by the RBF network (about 80% for the 1+3 training sets). Similarly to the tests in subchapter 5, adding more samples to the training sets has a consistent effect on the recognition results.

Table 3.

Results for generating new face samples (modifications of original face samples by Gaussian noise – lower variance)

Table 4.

Results for generating new face samples (modifications of original face samples by Gaussian noise – higher variance)

PCA-based methods

The results of the PCA and 2D PCA methods are only slightly affected by adding images with different amounts of noise to the training set. The results with the noise images added are approximately 1% worse than the original recognition rate with 1 img./subj. The reason for this effect probably lies in the fact that a transformation matrix computed from training samples with added noise represents the variance in the space worse than one computed from the original images only. Adding samples to the training set is also very uneconomical from the point of view of the PCA methods, since the time needed to compute the transform matrix grows.

Figure 9.

Graphic comparison of the results for generating new face samples (modifications of original face samples by Gaussian noise – lower variance)

Figure 10.

Graphic comparison of the results for generating new face samples (modifications of original face samples by Gaussian noise – higher variance)

PCA+SVM

The effect observed with PCA can also be observed with the PCA+SVM method. Adding the noise images to the training set leads to results about 1% worse than with the original training set in every scenario. The SVM classification model is influenced by the features extracted from the noisy samples, but this accuracy drop is not dramatic.

LBP

The results of the LBP methods are not influenced by the noisy samples at all. There are two reasons for this:

  • In the LBP method, no model or transformation is calculated from the training images, so there cannot be a global effect on the recognition results as with PCA or SVM.

  • The histograms of LBP patterns in noisy images change substantially, so the distance between a noisy image and the original image of the same person is higher than the distance between two original images of different persons. As a consequence, the minimal distances between the testing and training images do not change, and the results are the same as without the noisy images in the training set. See Table 5 for an illustration of the distances between original and noisy images.

Table 5.

Distances between the LBP 7x7 histograms for original and noisy training images, compared for the same and a different subject (see Fig. 11 for illustration of the images compared)

Figure 11.

Illustration of the images used in the comparison in Table 5

Advertisement

6.2. Modifications of face images based on wavelets

The discrete wavelet transform, DWT (Puyati et al., 2006; Sluciak & Vargic, 2008) (the notation wavelets is used in our tables and charts), is defined as follows:

$$\mathrm{DWT}(j,k) = \int f(x)\, \psi\!\left(2^{j} x - k\right) dx \qquad (7)$$

where j is the power of the binary scaling, k is the shift constant of the filter, ψ is the basic (mother) wavelet, and f(x) is the function to be transformed.

Our modifications of face images were done in three steps (a code sketch is given after the description of the wavelets used):

  1. Forward transform of the image by the DWT

  2. Adjustment of the horizontal, diagonal and vertical detail coefficients in the transform domain

  3. Image reconstruction by the inverse DWT

We used two types of wavelets: Reverse biorthogonal 2.4 (Vargic & Procháska, 2005) and Symlet 4 (Puyati et al., 2006) (Fig. 12). These wavelets were chosen empirically; our aim was to produce a slight change in the expression of a face. The training sets were created similarly to those with the noise modification (1+1, 1+2 and 1+3), see subchapter 6.1. An example of the new samples is shown in Fig. 13.
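A sketch of the three steps using PyWavelets follows ('rbio2.4' and 'sym4' are the library names of the wavelets above; the attenuation factor is an illustrative assumption):

```python
import pywt

def wavelet_modify(img, wavelet='sym4', detail_gain=0.5):
    """Forward DWT, adjust the detail subbands, inverse DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)                         # step 1
    cH, cV, cD = detail_gain * cH, detail_gain * cV, detail_gain * cD  # step 2
    out = pywt.idwt2((cA, (cH, cV, cD)), wavelet)                      # step 3
    return out[:img.shape[0], :img.shape[1]]  # idwt2 may pad odd-sized images
```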

Figure 12.

Wavelet function ψ: Reverse biorthogonal 2.4, Symlets 4

Figure 13.

Original image and three types of images modified by wavelet transform

Table 6.

Results for generating new face samples (modifications of original face samples by wavelet transform)

The results for the wavelet modifications (shown in Table 6 and Fig. 14) are summarized as follows:

Neural networks and SVM

The experiment with the wavelet transform demonstrated an improvement of one sample per person face recognition for the neural network methods, the RBF network and MLP. These methods showed an increase in recognition rate when the training sets were extended with images modified by the wavelet transform. An improvement above 10% was achieved for the RBF network when adding three samples per person (1+3_train) to the training sets. On the other hand, the SVM method achieved very low face recognition accuracy.

Figure 14.

Graphic comparison of the results for generating new face samples (modifications of original face samples by wavelet transform)

PCA-based methods

Experiments with extending the training set with images modified by the wavelet transform show only a small influence on the results of the PCA and 2D PCA methods. The accuracy increases when adding images with a stronger wavelet modification, but it is only 1% higher than the original 1 img./subj. result. The modified images do not cause any significant change in recognition, so there is almost no gain from adding the new samples.

PCA+SVM

In contrast to PCA, a decrease in accuracy can be seen when SVM is also involved. When 3 images modified by wavelets are added to the training set, the recognition result is almost 30% worse than using the original image only; in this case, only 50% accuracy is obtained.

LBP

The effect of the wavelet modifications on the LBP histogram is similar to that of the noise images, so the LBP results stay the same as with the original training set.

6.3. Modifications of face images based on geometry

One of the most successful approaches to sample generation is based on geometric transformations. The idea is to learn suitable manifolds and extend the training set with new synthetic poses or expressions based on the original image (Wen et al., 2003). Because the generation of new samples is based on facial features and their positions in the face, these features need to be localized first.

After all facial features are properly localized and represented by contour and middle points, the next step is to generate the target expressions. Because a change of expression involves moving the detected feature points, the texture information needs to change as well. Real expressions and the directions of movement during an expression depend on the strength of the muscle contractions; we divided each face image into triangles according to the directions of these contractions. The face feature localization process and the division into triangles (also called triangulation) are fully automated (unlike the usual manual method described in (Yang & Chiang, 2007)), using active shape models (Milborrow, 2008). Active shape models produce very precise positions of the facial features and facial boundaries. The result of the triangulation is a facial graph containing only triangles among the detected points determining the facial features.

Using a rule-based system similar to the one described in (Yang & Chiang, 2007), we generated different expressions from each training sample by moving the locations of points in the facial graph. The texture in each triangle containing moved points is then interpolated from the original according to the new coordinates. This procedure with different rules creates new "smile" and "sad" expressions (Fig. 15) and represents a more sophisticated approach to generating additional training samples; a sketch of such a warp is given after Fig. 15.

Figure 15.

Example of image modified by geometric transformation: a) example of triangular division of face, b) original face image, c) synthetic smile expression, d) synthetic sad expression
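A piecewise-affine warp of this kind can be sketched with scikit-image; the landmark arrays and their rule-based displacements (e.g. ASM output shifted toward a SMILE expression) are assumptions, not the chapter's exact procedure:

```python
from skimage.transform import PiecewiseAffineTransform, warp

def change_expression(img, src_points, dst_points):
    """Move facial landmarks from src_points to dst_points ((col, row) arrays);
    triangulation and per-triangle texture interpolation happen internally."""
    tform = PiecewiseAffineTransform()
    tform.estimate(dst_points, src_points)  # warp needs the output -> input map
    return warp(img, tform)
```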

Table 7.

Results for generating new face samples (modifications of original face samples by geometric transformation)

Simulation results for the geometric modifications are summarized in Table 7 and Fig. 16. Only the results for the SMILE expression were included in the graph, since this expression helps to improve recognition; this agrees with the fact that the face database contains more smiling faces than sad faces. In this way it is also possible to present results consistent with the other graphs (1, 2 and 3 samples per face).

The results are summarized as follows:

Neural networks and SVM

Both the RBF network and MLP achieved better recognition accuracy using the SMILE face expression images (an increase of about 10% compared with one sample per person). Tests extending the training set with SMILE+SAD face expression images were most effective for the MLP method (75.61%). For the SVM method, these new samples caused a drop in recognition rate of about 25%, similar to the wavelet transform.

Figure 16.

Graphic comparison of the results for generating new face samples (modifications of original face samples by geometric transformation)

PCA-based methods

For PCA and 2D PCA, the geometric transformation shows an influence comparable to that of the wavelet modifications. The accuracy increases when adding samples with the SMILE expression, but it is only 1% higher than the original 1 img./subj. result. The modified images do not cause any significant change in recognition. An improvement could be expected when more face expressions are taken into account.

PCA+SVM

Adding one image modified by geometry to the training set (either the SAD or the SMILE modification) improved the recognition rate by only about 0.2-0.3% (adding the SMILE transformation helps slightly more). Surprisingly, when both transformed images were added to the training set, the recognition rate drops by almost 5%.

LBP

As expected, adding transformed images with an artificial change of expression (SAD and SMILE emotions) to the training set improves recognition. The LBP method reaches better results because the system becomes more resistant to changes in expression. Better results are reached when both transformed images (SAD+SMILE) are used. When the images in the test set are also transformed (for every sample, distances for the SAD and SMILE transformations are computed as well), the results are even better, yielding 87.22% accuracy for the LBP 7x7w method with 1 img./subj.

6.4. Comments and summary for methods significantly influenced by enlarging the training sets with modified samples

This subchapter deals with the methods for which extending the training set by modified images influences the recognition results significantly (compared to recognition using multiple original images). The modifications of images described above (noise, wavelets and geometric transformations) may be most helpful to neural networks. The comparison of recognition results for the original and extended training sets for the RBF network and MLP is shown in Fig. 17. In Fig. 17 (and similarly in Fig. 18), the horizontal axis represents the number of images per person in the training sets: for the methods using original images it means 1, 2, 3 and 4 original images in the training set; for modified images it means 1 original image, or 1 original plus 1, 2 or 3 modified images. For the RBF network, an improvement above 10% using modified images was achieved. For MLP, the geometric transformation was the most successful modification of the face images (75.61%).

Figure 17.

Comparison of the results obtained using original and modified training images for RBF network and MLP (generated samples improve recognition)

Fig. 18 shows the negative effects of adding newly generated samples to the training sets. This effect is clearly visible for PCA+SVM and SVM when the training sets are extended by the wavelet transform and the geometric transformation.

Figure 18.

Comparison of the results obtained using original and modified training images for PCA+SVM and SVM (generated samples degrade recognition)

Advertisement

7. Conclusion

In this chapter, we considered relevant issues related to the one sample per person problem in the area of face recognition. We focused mainly on the recognition efficiency of several methods working with single and multiple samples per subject. We researched techniques for enlarging the training set with new (artificial, virtual or nearly synthetic) samples in order to improve recognition accuracy. Such samples can be generated in many ways; we concentrated on modifications of the original samples by noise, wavelets and geometric transformation. We proposed methods for modifying the expression of a subject by geometric transformation and by the wavelet transform. We examined the impact of these extensions on various methods (PCA, 2D PCA, SVM, PCA+SVM, MLP, RBF and LBP variants).

Methods such as PCA+SVM or LBP achieved recognition results above 80% for a single sample per person in the training set. For these methods, adding new samples (modified images) did not help significantly. On the other hand, the utilization of the extended training sets for neural networks (MLP and RBF network) always increased the face recognition rate. This confirms that an appropriate extension of the input data set enhances the learning process and the recognition accuracy. Adding more than three new samples per person to the training sets has almost no influence on the recognition rate and has a negative impact on computational and time complexity. The SVM method improved its recognition accuracy only when the training set was extended by noise-modified images.

Experimental results for PCA and 2D PCA show only negligible influence of adding modified samples. We can conclude that the use of modified samples for PCA and 2D PCA has no added value, especially when samples are modified by Gaussian noise only.

PCA+SVM (a two-stage method with PCA for feature extraction and SVM for classification) achieved very good results even for 1 img./subj. Adding modified images to the training set did not improve the recognition rates, but the results were still among the best of the compared methods.

Our experiments show that LBP is one of the most efficient state-of-the-art methods in face recognition. Adding noise and wavelet modified images to the training set does not have any effect on the recognition rates of LBP, unlike other methods that use the training samples to compute models or transformation matrices. This is caused by the nature of the method, where the histogram of LBP patterns of a noisy image differs too much from the original images. This can also be a disadvantage when the images in the test set are corrupted with noise. On the other hand, adding images with a transformed face expression helps, and the system is more resistant to expression changes in the images.

LBP for face recognition has obvious advantages, such as state-of-the-art recognition rates even with 1 img./subj. in the training set, no need to train models or transformation matrices, and good computational efficiency. But there is still potential to improve the results by modifications and optimizations that can be researched further: selection of LBP patterns, different preprocessing, or modifications of the LBP operator. The geometric transformation of images (emotional expression or head pose) and generating synthetic samples seem to be good ways to improve the results. Further research is needed, since a simple extension of the training set with modified images does not always help.

We are currently working on a more sophisticated geometric transformation to cover more facial expressions. Although the results in section 6.3 show only a small improvement (with the exception of MLP, where the improvement was significant), we suppose there is great potential in using samples with synthetic expressions. The triangular model of the face enables extending the generation algorithm with other possibilities, like the generation of samples with different poses and illumination conditions. In the future, we also plan to publish modules generating new samples (with different expressions, poses and illumination) for our universal biometric system BioSandbox (used in our experiments).

Modification of images using the wavelet transform also has large potential for generating new samples. One way to create new samples by the wavelet transform is a fusion of two face images, where a new image is generated by applying the wavelet transform to two original images, followed by suitable manipulations of the coefficients in the transformed space, and finally merging the images by the inverse transform.

Using a mean filter (Xu & Yang, 2009) is another simple way of creating modified images. Using mean filters with different kernels (2x2, 3x3, …, 15x15), we achieved results close to the modifications by the wavelet transform.
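A sketch of this enlargement with SciPy follows (the selection of kernel sizes and the variable orig are illustrative):

```python
from scipy.ndimage import uniform_filter

# orig: the single available training image (assumed variable)
variants = [uniform_filter(orig, size=k) for k in (2, 3, 5, 9, 15)]
training_set = [orig] + variants        # 1 original + mean-filtered copies
```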

Evaluating face recognition under single sample per subject conditions reflects the real-world scenario. Other effects, such as various occlusions or lighting variations, also need to be taken into account when trying to reflect real conditions. We also need to test our methods using face databases that contain samples with these variations; face databases such as ORL or AR could be used for this purpose.

For authentication and identification purposes, face recognition with 1 img./subj. only may not be enough, because its accuracy does not necessarily reach the required level. Therefore, face recognition methods can be combined with different biometrics to form a multimodal system with much better characteristics than each of the biometrics by itself (Ross & Jain, 2004).


Acknowledgments

The research described in this paper was done within grants 1/0214/10 and 1/0961/11 of the Slovak Grant Agency VEGA. Portions of the research in this paper use the FERET database of facial images collected under the FERET program, sponsored by the DOD Counterdrug Technology Development Program Office. We would like to thank our colleague Radoslav Vargic for valuable consultations regarding the practical use of wavelets. We also thank our student Ján Režnák for the preparation of the KPCA results.

References

  1. Ahonen, T., Hadid, A., Pietikäinen, M. (2006). Face Description with Local Binary Patterns: Application to Face Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(12), 2037-2041.
  2. Ahonen, T., Hadid, A., Pietikäinen, M. (2004). Face Recognition with Local Binary Patterns. Proceedings of Computer Vision - ECCV 2004, Prague, Czech Republic, May 2004.
  3. Akbari, R., Bahaghighat, M. K., Mohammadi, J. (2010). Legendre Moments for Face Identification Based on Single Image per Person. 2nd Int. Conference on Signal Processing Systems (ICSPS), Dalian, August 2010.
  4. Asano, A. (2006). Pattern Information Processing (lecture, 2006 autumn semester). Hiroshima University, Japan. Available from <http://laskin.mis.hiroshima-u.ac.jp/Kougi/06a/PIP/PIP12pr.pdf>
  5. Beveridge, R., Bolme, D., Teixeira, M., Draper, B. (2003). The CSU Face Identification Evaluation System User's Guide, Version 5.0. Technical Report, Colorado State University, May 2003. Available from <http://www.cs.colostate.edu/evalfacerec/algorithms/version5/faceIdUsersGuide.pdf>
  6. Bishop, C. M. (1995). Neural Networks for Pattern Recognition. Oxford University Press, New York.
  7. Boser, B., Guyon, I., Vapnik, V. (1992). A Training Algorithm for Optimal Margin Classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, Pittsburgh, PA, USA, July 1992.
  8. Delac, K., Grgic, M., Bartlett, M. S. (Eds.) (2008). Recent Advances in Face Recognition. IN-TECH, Vienna. Retrieved from <http://intechweb.org/book.php?id=101>
  9. Deng, W., Hu, J., Guo, J., Cai, W., Feng, D. (2010). Robust, Accurate and Efficient Face Recognition From a Single Training Image: A Uniform Pursuit Approach. Pattern Recognition, 43(5), 1748-1762.
  10. FERET Database (2001). NIST. Available from <http://www.itl.nist.gov/iad/humanid/feret/>
  11. Gao, Q. X., Zhang, L., Zhang, D. (2008). Face Recognition Using FLDA With Single Training Image Per Person. Applied Mathematics and Computation, 205(2), 726-734.
  12. Haykin, S. (1994). Neural Networks - A Comprehensive Foundation. Macmillan College Publishing Company, New York.
  13. Hlaváčková, K., Neruda, R. (1993). Radial Basis Function Networks. Neural Network World, 3(1), 93-102.
  14. Hsu, C. W., Chang, C. C., Lin, C. J. (2003). A Practical Guide to Support Vector Classification. Dept. of Computer Science and Information Engineering, National Taiwan University. Available from <http://www.csie.ntu.edu.tw/~cjlin>
  15. Chen, S. C., Lovell, B. C. (2004). Illumination and Expression Invariant Face Recognition with One Sample Image. 17th Int. Conference on Pattern Recognition (ICPR'04), UK, August 2004.
  16. Chen, S. C., Zhang, D. Q., Zhou, Z. H. (2004). Enhanced (PC)2A For Face Recognition With One Training Image Per Person. Pattern Recognition Letters, 25(10), 1173-1181.
  17. Chen, S. C., Liu, J., Zhou, Z. H. (2004). Making FLDA Applicable to Face Recognition with One Sample Per Person. Pattern Recognition, 37(7), 1553-1555.
  18. Chiodo, K. (2006). Normal Distribution. In: NIST/SEMATECH e-Handbook of Statistical Methods. Retrieved from <http://www.itl.nist.gov/div898/handbook/eda/section3/eda3661.htm>
  19. Kanan, H. R., Faez, K. (2010). Recognizing Faces Using Adaptively Weighted Sub-Gabor Array From a Single Sample Image Per Enrolled Subject. Image and Vision Computing, 28(1), 438-448.
  20. Lei, Z., Liao, S., He, R., Pietikäinen, M., Li, S. Z. (2008). Gabor Volume Based Local Binary Pattern for Face Representation and Recognition. 8th IEEE Int. Conference on Automatic Face & Gesture Recognition (FG '08), Amsterdam.
  21. Li, S. Z., Jain, A. K. (Eds.) (2005). Handbook of Face Recognition. Springer, New York.
  22. Li-wei, W., Xiao, W., Ming, C., Ju-fu, F. (2005). Is Two-dimensional PCA a New Technique? Acta Automatica Sinica, 31(5), 782-787.
  23. Ma, L., Li, W. J. S., Yao, Y. F., Lan, C., Gao, S. Q., Tang, H., Jing, X. Y. (2009). Multi-modal Biometrics Pixel Level Fusion and KPCA-RBF Feature Classification for Single Sample Recognition Problem.
  24. Majumdar, A., Ward, R. K. (2008). Pseudo-fisherface Method For Single Image Per Person Face Recognition. ICASSP 2008, Las Vegas, NV.
  25. Marcel, S., Rodriguez, Y., Heusch, G. (2007). On the Recent Use of Local Binary Patterns For Face Authentication. International Journal on Image and Video Processing, Special Issue on Facial Image Processing. IDIAP-RR 06-34.
  26. Orr, M. J. L. (1996). Introduction to Radial Basis Function Networks. Centre for Cognitive Science, University of Edinburgh, April 1996.
  27. Milborrow, S. (2008). Locating Facial Features With an Extended Active Shape Model. Proceedings of the 10th European Conference on Computer Vision (ECCV 2008), Marseille, France, Springer-Verlag. Retrieved from <http://www.springerlink.com/index/5t8hjm7j02qx6184.pdf>
  28. Müller, K. R., Mika, S., Rätsch, G., Tsuda, K., Schölkopf, B. (2001). An Introduction to Kernel-Based Learning Algorithms. IEEE Transactions on Neural Networks, 12(2), 181-201.
  29. Nguyen, H., Bai, L. (2009). Local Gabor Binary Pattern Whitened PCA: A Novel Approach for Face Recognition from Single Image Per Person. Advances in Biometrics, LNCS 5558, 269-278. Retrieved from <http://www.springerlink.com/index/t011q303142j7772.pdf>
  30. Ojala, T., Pietikäinen, M., Harwood, D. (1996). A Comparative Study of Texture Measures with Classification Based on Feature Distributions. Pattern Recognition, 29(1), 51-59.
  31. Oravec, M., Mazanec, J., Pavlovicova, J., Eiben, P., Lehocki, F. (2010). Face Recognition in Ideal and Noisy Conditions Using Support Vector Machines, PCA and LDA. In: Face Recognition, Milos Oravec (Ed.), INTECH. Available from <http://sciyo.com/articles/show/title/face-recognition-in-ideal-and-noisy-conditions-using-support-vector-machines-pca-and-lda>
  32. Oravec, M., Polec, J., Marchevský, S. (1998). Neural Networks for Digital Signal Processing (in Slovak). Bratislava, Slovakia.
  33. Phillips, P. J., Wechsler, H., Huang, J., Rauss, P. (1998). The FERET Database and Evaluation Procedure For Face Recognition Algorithms. Image and Vision Computing, 16(5), 295-306.
  34. Priya, K. J., Rajesh, R. S. (2010). Dual Tree Complex Wavelet Transform Based Face Recognition with Single View. Int. Conference on Computing, Communications and Information Technology Applications (CCITA-2010).
  35. Puyati, W., Walairacht, S., Walairacht, A. (2006). PCA in Wavelet Domain For Face Recognition. 8th International Conference on Advanced Communication Technology, Phoenix Park.
  36. Qiao, L., Chen, S., Tan, X. (2010). Sparsity Preserving Discriminant Analysis for Single Training Image Face Recognition. Pattern Recognition Letters, 31, 422-429.
  37. Que, D., Chen, B., Hu, J., Ax, Y. (2008). A Novel Single Training Sample Face Recognition Algorithm Based on Modular Weighted (2D)2PCA. 9th Int. Conference on Signal Processing (ICSP 2008), Beijing, 1552-1555.
  38. Sluciak, O., Vargic, R. (2008). An Audio Watermarking Method Based on Wavelet Patchwork Algorithm. Proceedings of IWSSIP 2008, Bratislava, Slovak Republic, June 2008, 117-120.
  39. Su, Y., Shan, S., Chen, X., Gao, W. (2010). Adaptive Generic Learning for Face Recognition from a Single Sample Per Person. Proceedings of the Int. Conference on Computer Vision and Pattern Recognition (CVPR 2010), 2699-2706.
  40. Tan, X., Triggs, B. (2010). Enhanced Local Texture Feature Sets for Face Recognition under Difficult Lighting Conditions. IEEE Transactions on Image Processing, 19(6), 1635-1650.
  41. Tan, X., Chen, S., Zhou, Z. H., Zhang, F. (2006). Face Recognition from a Single Image per Person: A Survey. Pattern Recognition, 39(9), 1725-1745.
  42. Truax, B. (Ed.) (1999). Gaussian Noise. In: Handbook for Acoustic Ecology. Cambridge Street Publishing. Available from <http://www.sfu.ca/sonic-studio/handbook/Gaussian_Noise.html>
  43. Turk, M., Pentland, A. (1991). Eigenfaces for Recognition. Journal of Cognitive Neuroscience, 3(1), 71-86.
  44. Vargic, R., Procháska, J. (2005). An Adaptation of Shape Adaptive Wavelet Transform for Image Coding. EURASIP 2005, Smolenice, Slovakia, June 2005.
  45. Wen, G., Shiguang, S., Xiujuan, C., Xiaowei, F. (2003). Virtual Face Image Generation for Illumination and Pose Insensitive Face Recognition. Proceedings of the IEEE Int. Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), IV-776-779.
  46. Wu, J., Zhou, Z. H. (2002). Face Recognition with One Training Image Per Person. Pattern Recognition Letters, 23(14), 1711-1719.
  47. Xie, X., Lam, K. M. (2006). Gabor-Based Kernel PCA With Doubly Nonlinear Mapping for Face Recognition With a Single Face Image. IEEE Transactions on Image Processing, 15(9).
  48. Xu, J., Yang, J. (2009). Local Graph Embedding Discriminant Analysis for Face Recognition. School of Computer Science & Technology, Nanjing University of Science & Technology, Nanjing, China.
  49. Yang, C. K., Chiang, W. T. (2007). An Interactive Facial Expression Generation System. Multimedia Tools and Applications, 40(1), 41-60.
  50. Yang, J., Zhang, D., Frangi, A. F., Yang, J. Y. (2004). Two-dimensional PCA: A New Approach to Appearance-Based Face Representation and Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(1), 131-137.
  51. Zhan, C., Li, W., Ogunbona, P. (2009). Face Recognition from Single Sample Based on Human Face Perception. 24th International Conference Image and Vision Computing New Zealand, Wellington, 56-61.
  52. Zhang, D., Chen, S., Zhou, Z. H. (2005). A New Face Recognition Method Based on SVD Perturbation for Single Example Image Per Person. Applied Mathematics and Computation, 163(2), 895-907.

Notes

  • BioSandbox project page – http://biosandbox.fei.stuba.sk
