Abstract
Face recognition has been widely applied in video surveillance, security systems, and smart home services in our daily lives. Over the past years, subspace projection methods such as principal component analysis (PCA) and linear discriminant analysis (LDA) have been the best-known algorithms for face recognition. More recently, linear regression classification (LRC) has become one of the most popular approaches based on subspace projection optimization. However, many problems remain unsolved under severe conditions across different environments and applications. In this chapter, the practical problems of partial occlusion, illumination variation, expression variation, pose variation, and low resolution are addressed and solved by several improved subspace projection methods, including robust linear regression classification (RLRC), ridge regression (RR), improved principal component regression (IPCR), unitary regression classification (URC), linear discriminant regression classification (LDRC), generalized linear regression classification (GLRC), and trimmed linear regression (TLR). Experimental results show that these methods perform well and possess high robustness against partial occlusion, illumination variation, expression variation, pose variation, and low resolution.
Keywords
- subspace projection
- principal component analysis
- linear discriminant analysis
- linear regression classification
- robust linear regression classification
- ridge regression
- improved principal component regression
- unitary regression classification
- linear discriminant regression classification
- generalized linear regression classification
- trimmed linear regression
1. Introduction
Since the tragic 9/11 attacks in 2001, more and more research has focused on security issues with computational intelligence. How to prevent such tragic events from happening again, and how to quickly identify terrorists and suspects before or after an incident occurs, are very important questions. Therefore, the effectiveness of security measures is being examined almost everywhere. Current security vulnerabilities should be discovered, and new methods constantly investigated, to improve security systems. Security measures with computational intelligence are used to improve the safety of our everyday lives.
A central security issue is recognizing wanted criminals and intruders from their biometric characteristics, such as the face, fingerprint, iris, and palm. Among these biometric signals, the face image is easier and more direct to capture with distant cameras than the others. For instance, the face of any suspect who walks through a hotel lobby will be recorded by cameras. Hence, computer vision technologies with cameras can be applied to realize intelligent video surveillance systems. Since many face images of criminals and terrorists are available in police departments, they can be used to determine whether unknown face images from distributed cameras match them. Thus, an efficient face recognition system could help improve security. Face recognition systems would not only be helpful in identifying criminals and terrorists, but could also be used to search for missing persons or detect when a vulnerable person is in trouble. Accordingly, face recognition systems with surveillance cameras have already been installed in many locations such as department stores, airports, and supermarkets. Besides, if a face recognition system installed at home can detect the user's facial expression in a timely manner, appropriate smart services can be introduced accordingly.
The goal of face recognition is to distinguish a specific identity from face images. However, in realistic situations such as video surveillance and access control, the face recognition task encounters great challenges, including different facial expressions, illumination variations, partial occlusions, and even low resolution, which degrade recognition performance and can result in severe security complications. For example, an image captured by a CCTV camera at a distance has very low resolution, which degrades the recognition performance significantly. Besides, in the testing phase, the captured face image is a factor beyond our control: the person may not be in a frontal pose, and the image may not be clean, that is, the person may be wearing glasses, a hat, or a mask, or the image may be affected by lighting and expression changes. Over the past years, subspace projection optimizations have been widely proposed to solve this problem with linear [1] and nonlinear [2] approaches. The principal component analysis (PCA) [3, 4] and linear discriminant analysis (LDA) [5] are two typical linear transform approaches that seek a low-dimensional subspace for dimensionality reduction. Nonlinear projection approaches have also been widely studied, e.g., the kernel PCA (KPCA) [6] and kernel LDA (KLDA) [7], which can uncover the underlying structure when the samples lie on a nonlinear manifold in the image space.
Recently, the linear regression classification (LRC), proposed in 2010 by Naseem et al. [8], has become one of the most popular approaches based on subspace projection optimization.
The rest of this chapter is organized as follows. With an overview of fundamentals and facial representation, several famous face recognition algorithms are first presented in Section 2. Section 3 is dedicated to presenting several advances of subspace projection optimization for robust face recognition, including the RLRC, RR, improved principal component regression (IPCR), unitary regression classification (URC), linear discriminant regression classification (LDRC), generalized linear regression classification (GLRC), and trimmed linear regression (TLR). The performances of the aforementioned projection methods are shown in Section 4. Finally, conclusions are drawn in Section 5.
2. Fundamentals of face recognition and representation
As shown in Figure 1, a typical face recognition system contains two major parts: face detection and face recognition. In this section, face detection methods are first briefly introduced. Then, the well-known subspace projection methods are reviewed. Finally, the similarity measures for image feature vectors are overviewed. Generally, an unknown data vector is projected into a certain subspace, and a similarity measure is then used to classify it. To narrow down the computation and increase the recognition accuracy, the first step of the recognition system, called face detection, detects and crops the face region from the image or video.
2.1. Face detection
The methods of face detection [17–20] can be grouped into neural network, feature-based, and color-based approaches. A neural network approach [21] trains on facial and non-facial classes so that a new image or video can be classified based on the prior training data. A well-known method is the AdaBoost learning algorithm [22, 23]. Feature-based approaches utilize facial features to detect the facial region. For example, the relative positions of the eyes, nose, and mouth are useful features; moreover, the shape of the face, which is approximately an ellipse, can be included. The rule-based algorithm [24] and elliptical edge detection [25] are two popular feature-based methods. Color-based approaches such as [26] use the variation of skin color to decide whether a region is a face. For example, the face region in grayscale should not change drastically, while the eyes, mouth, and hair should be darker than the rest of the face.
Once the face areas are detected by a selected face detection method, their face images in size of
2.2. Subspace projection methods
The famous subspace projection methods, such as PCA and LDA are reviewed in the following sections.
2.2.1. Principal component analysis (PCA)
The PCA method is widely used for dimensionality reduction in the computer vision field, especially for face recognition. In the PCA, the data are represented as a linear combination of an orthonormal set of vectors that maximizes the data scatter across all images. The first principal component captures as much of the data variability as possible, the second one captures the second most, and so on. The flow chart for finding the PCA transformation bases is shown in Figure 2. The main objective of the PCA is to reduce the dimension of the feature image
As shown in Figure 2, the derivations of the PCA transformation bases are stated in the following equations. First, the feature face image should remove the global mean to become:
where
After the computation of the feature face images, we can obtain
Based on the covariance matrix, the eigenvectors and eigenvalues can be retrieved by singular value decomposition (SVD) or eigen-decomposition as:
where
Finally, we can achieve the PCA features,
On the other hand, the testing image vector
And the similarity measure based on this feature data vector is calculated to determine the final result.
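The training and projection steps described above can be sketched in NumPy (a minimal illustration; the function names and the use of the SVD on the centered data matrix, whose left singular vectors equal the eigenvectors of the covariance, are our own choices, not from the original text):

```python
import numpy as np

def pca_train(X, k):
    """Learn a k-dimensional PCA subspace.
    X: (d, n) matrix whose columns are vectorized face images."""
    mu = X.mean(axis=1, keepdims=True)   # global mean face
    Xc = X - mu                          # remove the global mean
    # Left singular vectors = eigenvectors of the covariance matrix
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k], mu                  # top-k principal directions

def pca_project(W, mu, x):
    """Project an image vector x onto the learned PCA subspace."""
    return W.T @ (x - mu.ravel())
```

A probe image is projected with `pca_project` and then classified with one of the similarity measures of Section 2.3.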
2.2.2. Linear discriminant analysis
Fisher proposed the LDA for recognition, which is a statistical analysis method like the PCA. The difference is that the LDA can discriminate between different subjects even when their maximum-variance subspaces overlap, as shown in Figure 3. The goal of the LDA is that the projections onto a line be well separated for disparate classes and well concentrated within the same class.
Thus, the concept of the LDA is to seek the optimal projection by maximizing the ratio of between-class to within-class scatter. Fisher utilizes a criterion to optimize this problem as:
where
where
In the same way, the testing image vector is projected onto LDA subspace by
And the final result can be determined by using a similarity measure based on this feature vector.
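As a concrete sketch of the Fisher criterion, the discriminant directions can be obtained from the leading eigenvectors of S_w⁻¹S_b (an illustrative NumPy version with our own function name; the pseudo-inverse guards against a singular within-class scatter matrix):

```python
import numpy as np

def lda_train(X, y, k):
    """Fisher discriminant directions.
    X: (d, n) data matrix (columns are samples); y: (n,) class labels."""
    d = X.shape[0]
    mu = X.mean(axis=1, keepdims=True)   # global mean
    Sw = np.zeros((d, d))                # within-class scatter
    Sb = np.zeros((d, d))                # between-class scatter
    for c in np.unique(y):
        Xc = X[:, y == c]
        mc = Xc.mean(axis=1, keepdims=True)
        Sw += (Xc - mc) @ (Xc - mc).T
        Sb += Xc.shape[1] * (mc - mu) @ (mc - mu).T
    # Maximize the Fisher ratio: top-k eigenvectors of pinv(Sw) @ Sb
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)
    return vecs[:, order[:k]].real
```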
2.3. Similarity measures
There exist three distance measures [27–29] such as the city block distance (Taxicab geometry,
and
where
However, these vectors satisfy the Cauchy-Schwarz inequality as:
To ignore the amplitudes of two feature data vectors, the similarity measure can also be defined by a cosine criterion as:
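The three measures can be written directly as follows (a short NumPy sketch; `a` and `b` stand for the two feature vectors being compared):

```python
import numpy as np

def city_block(a, b):
    """L1 (taxicab) distance between two feature vectors."""
    return np.sum(np.abs(a - b))

def euclidean(a, b):
    """L2 (Euclidean) distance."""
    return np.linalg.norm(a - b)

def cosine_similarity(a, b):
    """Cosine criterion: ignores the amplitudes of the two vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
```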
3. Advances of subspace projection optimization
In this section, the advances of subspace projection optimization are presented for robust face recognition systems. Then, the well-known subspace projection methods, including LRC, RLRC, RR, IPCR, URC, LDRC, GLRC, and TLR, are introduced.
3.1. Linear regression classification (LRC)
To apply linear regression to estimate the class-specific model, all
where
In the testing phase, if an unknown column vector
where
The regression coefficients,
For each class
By substituting Equation (19) into Equation (20), the predicted response vector
Theoretically, we can treat Equation (21) as a class-specific projection as:
where
In the LRC approach, the minimum reconstruction error is adopted to determine the final result. In other words, the distance between the predicted response vector
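The LRC decision rule, least-squares regression onto each class-specific subspace followed by a minimum reconstruction error decision, can be sketched as (illustrative NumPy, with `np.linalg.lstsq` standing in for the explicit normal-equation solution):

```python
import numpy as np

def lrc_predict(class_matrices, y):
    """class_matrices: list of (d, n_i) training matrices, one per class.
    y: (d,) probe vector. Returns the index of the class whose
    least-squares reconstruction of y has the smallest error."""
    errors = []
    for Xi in class_matrices:
        beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)  # regression coefficients
        y_hat = Xi @ beta                              # predicted response vector
        errors.append(np.linalg.norm(y - y_hat))       # reconstruction error
    return int(np.argmin(errors))
```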
3.2. Robust linear regression classification (RLRC)
Although the classical statistical estimation in the LRC is often claimed to be robust, it is robust only when the data actually follow the assumed distribution. Once the data deviate from that assumption, the regression parameters under the original least-squares estimation can be inaccurate. In other words, the original least-squares estimation is inefficient and can be biased in the presence of outliers. There exist several approaches for robust estimation like
where
and
3.3. Ridge regression (RR)
The goal of the RR is to find and minimize the residual errors and their penalty as:
where
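Assuming the usual closed-form ridge solution, β = (XᵀX + λI)⁻¹Xᵀy, the penalized estimate can be sketched as (the parameter name `lam` for the penalty weight is our own choice):

```python
import numpy as np

def ridge_coefficients(X, y, lam):
    """Closed-form ridge regression: beta = (X^T X + lam * I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

With `lam = 0` this reduces to ordinary least squares; increasing `lam` shrinks the coefficient norm, which is exactly the penalty on the residual errors described above.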
3.4. Improved principal component regression (IPCR)
Multicollinearity denotes interrelations among the independent variables. In linear regression, the estimation can be imprecise because multicollinearity inflates the variances and covariances. To overcome the multicollinearity problem, various approaches have been proposed; the IPCR is one of the most powerful.
The IPCR is a two-step classification method. In the first step, the PCAZ is adopted to transform the observed variables into the new decorrelated components. Then, the first
The PCAZ features,
In order to apply LRC to estimate class specific model, feature vectors should be grouped according to the class-membership. Hence, for the
The regression parameter vectors can be rewritten as a matrix form as:
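A minimal sketch of the first (decorrelation) step, assuming it discards a few leading eigenvectors, which tend to be dominated by illumination, before keeping the next components as regression features (the function name and the `drop`/`keep` defaults are illustrative, not values from the chapter):

```python
import numpy as np

def ipcr_features(X, drop=2, keep=5):
    """Decorrelate training images with PCA, discard the `drop` leading
    (typically illumination-dominated) components, and keep the next
    `keep` components as regression features.
    X: (d, n) matrix of vectorized training images."""
    mu = X.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X - mu, full_matrices=False)
    W = U[:, drop:drop + keep]           # retained eigenvectors
    return W.T @ (X - mu), W, mu
```

The resulting features are then grouped by class and fed to the LRC step exactly as in Section 3.1.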
3.5. Unitary regression classification (URC)
The previously mentioned methods cannot take into account the total within-class projection error from all classes during classification, which degrades the recognition accuracy. The URC is proposed to minimize the total within-class projection error from all classes within the LRC framework, improving the robustness of pattern recognition.
Instead of original space, we hope to find a global unitary rotation
to achieve the total minimum projection error of all training data stated as:
where
By substituting
where
where
3.6. Linear discriminant regression classification (LDRC)
Although the previous methods, including LRC, RLRC, and IPCRC, perform well on face recognition, we cannot guarantee that the projection subspace in the LRC or IPCRC is the most discriminative. When the projection subspaces of different subjects overlap, the recognition result can be incorrect. To obtain an effective discriminant subspace for the LRC, the LRC with discriminant analysis is presented, which maximizes the ratio of the between-class reconstruction error (BCRE) to the within-class reconstruction error (WCRE) of the LRC.
Mathematically, all images are collected from
where
where
With some algebraic deduction, the form becomes:
where
and
is inter-class and intra-class reconstruction error, respectively. In other words, the objective function can be represented as:
For solving the optimization problem, Equation (43) can be reformulated as the following:
where ϑ is a constant. The projection matrix,
where
3.7. Generalized linear regression classification (GLRC)
In real-world recognition applications, the input images generally have multiple components, which can help overcome unexpected effects such as pose variations and limited image information. For color face recognition, the GLRC with membership grade (MG) criteria is proposed to defend against these unexpected effects.
Mathematically, each channel component is separately normalized and transformed to one column vector such that
for
For the test image, the
where
After solving the optimization problem, the regression vector can be expressed as:
In order to achieve optimal performance, the different components should not be treated as equally important. Thus, the absolute sum of the prediction residuals of the
where
where
where
The optimal total regression parameter vector,
The prediction,
For identity recognition, the minimum prediction error of the GLRC should be further designed to compute the similarity between the prediction vector
where
3.8. Trimmed linear regression (TLR)
For occlusion situations, the previous methods, including LRC, RLRC, IPCR, URC, LDRC, and GLRC, are not suitable because they treat all pixels as equally important. Conversely, if the outliers can be detected and trimmed from the testing image and the corresponding training samples, the mechanism can still work. The Hampel identifier [34, 35] for outlier detection is highly regarded by researchers because it can pick out extreme values easily. An advantage of the Hampel identifier is that it adopts the median absolute deviation (MAD), a powerful robust measure in statistics, to remove masked data. Mathematically, the Hampel identifier can be expressed as:
where Δ is a data set and median(Δ) denotes the median value of the data set Δ. The constant 0.6745 is the probable-error factor that rescales the MAD to estimate the standard deviation. When the ratio is larger than 2.24, the datum is discarded as an outlier. For example, consider the data set [2, 3, 3, 4, 4, 250]. The sample mean is 44.33, the sample standard deviation is 100.76, and the sample median is 3.5,
and
respectively. We can observe that the Hampel identifier excludes the outlier more easily than the mean-based measure does.
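The worked example above can be checked numerically with a small sketch of the identifier (the function name is our own):

```python
import numpy as np

def hampel_outliers(data, threshold=2.24):
    """Flag values whose MAD-based ratio exceeds the 2.24 threshold."""
    data = np.asarray(data, dtype=float)
    med = np.median(data)
    mad = np.median(np.abs(data - med))        # median absolute deviation
    ratio = np.abs(data - med) / (mad / 0.6745)
    return data[ratio > threshold]

print(hampel_outliers([2, 3, 3, 4, 4, 250]))  # -> [250.]
```

For this data set, the median is 3.5 and the MAD is 0.5, so the ratio for 250 is enormous while the ratio for 2 is only about 2.02, which is why only 250 is excluded.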
For the face recognition, the error of estimation can be presented as:
where the error follows a zero-mean distribution. In order to detect the occluded part, each pixel should satisfy the Hampel identifier criterion as:
where
The regression parameter vectors can be represented as:
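A sketch of the trimming idea, assuming the residuals of an initial per-class fit are screened with the Hampel identifier and the regression is refit on the kept pixels only (the function name and the normalization by the number of kept pixels are our own choices to make the per-class errors comparable):

```python
import numpy as np

def tlr_class_error(Xi, y, threshold=2.24):
    """Per-class trimmed regression error (a sketch of the TLR idea).
    Xi: (d, n_i) training matrix of one class; y: (d,) probe vector."""
    beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)   # initial least-squares fit
    r = y - Xi @ beta                               # per-pixel residuals
    med = np.median(r)
    mad = np.median(np.abs(r - med))                # robust residual scale
    # Keep pixels whose Hampel ratio is within the 2.24 threshold
    keep = np.abs(r - med) <= threshold * (mad / 0.6745 + 1e-12)
    # Refit on the trimmed (presumably non-occluded) pixels only
    beta, *_ = np.linalg.lstsq(Xi[keep], y[keep], rcond=None)
    return np.linalg.norm(y[keep] - Xi[keep] @ beta) / np.sqrt(keep.sum())
```

The probe is then assigned to the class with the smallest trimmed error, exactly as in the LRC decision rule.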
4. Experimental results
In order to verify the recognition accuracy, the well-known databases Yale B, AR, FERET, and FEI are utilized. In the experiments, we evaluate the mentioned methods against the low resolution problem coupled with facial expressions, illumination changes, pose variations, and partial occlusions.
4.1. Yale B database
The Yale B database contains 10 subjects [36, 37]. Each subject has 64 illumination images in each of 9 different poses. The Yale B can be divided into five subsets based on the angle of the light source direction, as shown in Figure 4. In the experiments, the first subset with the normal pose is used for training, and the remaining subsets (Subsets 2 to 5) with the normal pose are used for testing. All images are cropped and resized to 30×25 pixels. Table 1 reveals that the IPCRC performs better than traditional subspace projections like PCA and LDA. Moreover, the IPCRC also outperforms the LRC, RLRC, and RR. The reason is that the original subspace cannot represent the data distribution very well. Besides, the PCA subspace is very sensitive to illumination variations. However, the IPCRC not only transforms to the PCA subspace, but also defends against illumination variations by removing the top
Methods | Subset 2 | Subset 3 | Subset 4 | Subset 5 |
---|---|---|---|---|
PCA | 89.81 | 47.04 | 21.90 | 15.26 |
LDA | 95.14 | 75.14 | 34.76 | 10.00 |
LRC | 100.00 | 100.00 | 91.86 | 52.11 |
RLRC | 100.00 | 100.00 | 92.86 | 60.00 |
RR | 100.00 | 100.00 | 92.57 | 53.68 |
IPCRC | 100.00 | 100.00 | 95.00 | 64.21 |
LDRC | 100.00 | 100.00 | 97.14 | 56.84 |
URC | 100.00 | 100.00 | 90.71 | 53.68 |
4.2. FERET database
Furthermore, we experiment on the FERET face database [38, 39] to verify the performance of the different subspace projections. In the experiments, we select four facial images, including fa, fb, ql, and qr, from 300 subjects, as shown in Figure 5. All images are converted to grayscale, cropped, and downsampled to 30×25 pixels. As Figure 5 shows, the fa and fb samples have small pose and rotation changes; conversely, the ql and qr samples have major pose variations. In order to obtain a reliable result, a cross-validation experimental procedure is adopted. In other words, three images per person are used for training while the fourth image is used for testing. Table 2 shows that the average recognition accuracy (ARA) of the URC is outstanding. We can observe that the RLRC and IPCRC are highly sensitive to pose variations; nevertheless, these methods perform well on noisy and badly illuminated face images, respectively.
Image | PCA | LDA | LRC | RLRC | RR | IPCRC | URC |
---|---|---|---|---|---|---|---|
fa | 80.67 | 87.33 | 94.00 | 91.67 | 85.67 | 92.33 | 96.00 |
fb | 81.00 | 84.33 | 92.33 | 90.00 | 83.67 | 91.00 | 95.33 |
ql | 65.67 | 63.00 | 71.00 | 69.33 | 63.33 | 66.00 | 73.00 |
qr | 68.33 | 72.00 | 75.00 | 74.33 | 70.00 | 68.67 | 84.33 |
ARA | 73.92 | 76.67 | 83.08 | 81.33 | 75.67 | 79.50 | 87.17 |
4.3. AR database
The AR face database [40, 41] was created by Martinez and Benavente in 1998. This database contains 4000 mug shots of 126 subjects (70 males and 56 females) with different variations such as facial expressions, lighting changes, and partial occlusions. In the normal case, each subject has 26 images taken in two sessions. The first session (AR1–AR13), containing 13 photos, includes facial expressions, different lighting changes, and partial occlusions (sunglasses and scarf) with lighting changes. The second session (AR14–AR26) repeats the first session two weeks later, as shown in Figure 6. In the experiments, 100 subjects are selected and all images are cropped and resized to 30×25 pixels in grayscale. We classify the images into four expressions: neutral (AR4, AR14), happy (AR2, AR3), angry (AR1, AR17), and screaming (AR15, AR16). A single-expression training strategy is adopted to present the performance. For example, if the neutral expression images are used for training, the happy, angry, and screaming expressions are used as query images. Table 3 reveals that the LDRC achieves the best performance in all cases. Moreover, we can observe that training on the happy expression images yields higher performance than the others; conversely, the screaming expression images used for testing yield the lowest performance. On the other hand, the partial occlusion situations are discussed next. In these experiments, the expression variation images (AR1–AR4, AR14–AR17) are used as the training set, and the testing sets are separated into two cases: sunglasses (AR8, AR21) and scarf (AR11, AR24). All images are cropped and resized to 42×30 pixels in grayscale. From Table 4, we can observe two points. First, the TLRC performs better than the other methods under either sunglasses or scarf occlusion. Second, occlusion of the upper face (sunglasses) yields higher performance than occlusion of the lower face (scarf). In other words, the mouth features are more useful than the eye features.
Training Set | Testing Set | PCA | LDA | LRC | RLRC | RR | IPCRC | LDRC | GLRC |
---|---|---|---|---|---|---|---|---|---|
N | H,A,S | 88.75 | 86.53 | 89.03 | 87.50 | 84.17 | 88.19 | 91.39 | 89.00 |
H | N,A,S | 90.97 | 89.17 | 92.08 | 91.67 | 82.33 | 92.08 | 93.61 | 54.33 |
A | N,H,S | 90.00 | 85.69 | 90.00 | 89.17 | 83.33 | 89.44 | 90.42 | 89.83 |
S | N,H,A | 88.19 | 84.58 | 87.78 | 86.53 | 77.88 | 87.58 | 90.42 | 55.50 |
Training Set | Testing Set | PCA | LDA | SRC | LRC | RLRC | RR | IPCRC | GLRC | TLRC |
---|---|---|---|---|---|---|---|---|---|---|
AR1~AR4; AR14~AR17 | Sunglasses (AR8, AR21) | 42.5 | 20.5 | 87.0 | 65.5 | 47.5 | 59.0 | 44.5 | 90.5 | 100.0 |
AR1~AR4; AR14~AR17 | Scarf (AR11, AR24) | 7.0 | 33.5 | 59.5 | 12.5 | 9.5 | 10.5 | 6.0 | 35.5 | 94.5 |
4.4. FEI database
The FEI face database [42, 43] contains 200 subjects (100 males and 100 females). Each subject has 14 images with different pose variations (image 1~image 10), facial expressions (image 11~image 12), and illumination variations (image 13~image 14), as shown in Figure 7. In the experiments, all images are resized to 24×20 pixels in grayscale and the leave-one-out strategy is adopted. From Table 5, it can be seen that the IPCRC is more robust to severe lighting variation (image 14) and the URC is good at facial profiles (image 1, image 10). All in all, the ARA of the URC is the best.
Test Image | PCA | LDA | LRC | RLRC | RR | IPCRC | LDRC | URC | GLRC |
---|---|---|---|---|---|---|---|---|---|
1 | 91.0 | 91.5 | 92.0 | 89.0 | 89.0 | 86.0 | 95.0 | 95.0 | 97.0 |
2 | 99.5 | 99.0 | 100.0 | 100.0 | 100.0 | 99.0 | 99.5 | 100.0 | 100.0 |
3 | 97.0 | 99.0 | 99.5 | 99.5 | 99.5 | 99.5 | 99.0 | 100.0 | 100.0 |
4 | 98.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
5 | 97.5 | 99.5 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
6 | 96.5 | 99.5 | 99.0 | 99.0 | 99.0 | 99.0 | 99.5 | 100.0 | 99.5 |
7 | 99.5 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
8 | 99.0 | 99.5 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
9 | 97.0 | 99.5 | 99.5 | 99.5 | 99.5 | 99.5 | 100.0 | 100.0 | 100.0 |
10 | 79.5 | 83.0 | 83.5 | 78.0 | 78.0 | 72.0 | 86.0 | 91.5 | 91.5 |
11 | 98.0 | 100.0 | 99.5 | 99.5 | 99.5 | 99.5 | 100.0 | 99.5 | 100.0 |
12 | 97.0 | 99.5 | 99.0 | 99.0 | 99.0 | 97.5 | 99.0 | 98.5 | 99.5 |
13 | 47.0 | 87.5 | 97.0 | 97.5 | 97.5 | 99.0 | 98.5 | 99.5 | 94.5 |
14 | 23.5 | 39.5 | 79.5 | 91.0 | 91.0 | 92.5 | 83.0 | 88.5 | 77.5 |
ARA | 92.04 | 96.73 | 97.62 | 97.00 | 97.00 | 96.23 | 97.11 | 98.77 | 97.11 |
4.5. Discussions
From the experimental results, we can observe that the IPCRC performs well under illumination variations. The reason is that the first
5. Conclusions
In this chapter, we presented several subspace projection methods for robust face recognition to deal with different practical situations such as pose variations, lighting changes, facial expressions, and partial occlusions.
For the illumination variation task in face recognition, an improved principal component regression classification can be used to solve the multicollinearity problem and achieves better recognition accuracy than the original linear regression and the RR. For pose variations, the URC has been presented to minimize the total within-class projection error from all classes within the LRC, improving the robustness of pattern recognition. Moreover, the LDRC has been proposed to handle facial expressions by maximizing the ratio of the BCRE to the WCRE of the LRC. For partial occlusions, a trimmed regression classification is used to remove unreliable pixels with the Hampel identifier. Finally, experimental results have shown comparisons of the different subspace projection optimizations.
References
- 1.
Yang, J.; Zhang, D.; Frangi, A.F.; Yang, J-Y. Two-dimensional PCA: a new approach to appearance-based face representation and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2004; 26 (1):131–137. - 2.
Shawe-Taylor, J.; Cristianini, N., editors. Kernel methods for pattern analysis. Cambridge University Press Inc., 2004. ISBN: 0521813972. - 3.
Turk, M.; Pentland, A. Eigenfaces for recognition. Journal of Cognitive Neuroscience. 1991; 3 (1):71–86. - 4.
Belhumeur, P.N.; Hespanha, J.P.; Kriegman, D. Eigenfaces vs. fisherfaces: recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1997; 19 (7):711–720. - 5.
Martínez, A.M.; Kak, A.C. PCA versus LDA. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2001; 23 (2):228–233. - 6.
Schölkopf, B., Smola, A.; Müller, K.R. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation. 1998; 10 (5):1299–1319. - 7.
Yang, M.H. Kernel eigenfaces vs. kernel fisherfaces: face recognition using kernel methods. In: Fifth IEEE International Conference on Automatic Face and Gesture Recognition; 2002. p. 215. - 8.
Naseem, I.; Togneri, R.; Bennamoun, M. Linear regression for face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2010; 32 (11):2106–2112. - 9.
Naseem, I.; Togneri, R.; Bennamoun, M. Robust regression for face recognition. Pattern Recognition. 2012; 45 (1):104–118. - 10.
Xue, H.; Zhu, Y.; Chen, S. Local ridge regression for face recognition. Neurocomputing. 2009; 72 (4):1342–1346. - 11.
Huang, S.M.; Yang, J.F. Improved principal component regression for face recognition under illumination variations. IEEE Signal Processing Letters. 2012; 19 (4):179–182. - 12.
Huang, S.M.; Yang, J.F. Linear discriminant regression classification for face recognition. IEEE Signal Processing Letters. 2013; 20 (1):91–94. - 13.
Huang, S.M.; Yang, J.F. Unitary regression classification with total minimum projection error for face recognition. IEEE Signal Processing Letters. 2013; 20 (5):443–446. - 14.
Chou, Y.T.; Yang, J.F. Identity recognition based on generalised linear regression classification for multi-component images. IET Computer Vision. 2016; 10 (1):18–27. - 15.
Chou, Y.T.; Yang, J.F. Object recognition based on generalized linear regression classification in use of color information. In: 2014 IEEE Asia Pacific Conference on In-Circuits and Systems; 2014. 272–275. - 16.
Lai, J.; Jiang, X. Robust face recognition using trimmed linear regression. In: ICASSP; 2013. 2979–2983. - 17.
Yang, G.; Huang, T.S. Human face detection in a complex background. Pattern Recognition. 1994; 27 (1):53–63. - 18.
Hotta, K.; Kurita, T.; Mishima, T. Scale invariant face detection method using higher-order local autocorrelation features extracted from log-polar image. In: Third IEEE International Conference on Automatic Face and Gesture Recognition; 1998. 70–75. - 19.
Han, C.C.; Liao, H.Y.M.; Yu, G.J.; Chen, L.H. Fast face detection via morphology-based pre-processing. Pattern Recognition. 2000; 33 (10):1701–1712. - 20.
Fasel, B. Fast multi-scale face detection. IDIAP-Com-04-1998. 1998. - 21.
Rowley, H.A.; Baluja, S.; Kanade, T. Neural network-based face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1998; 20 (1):23–38. - 22.
Viola, P.; Jones, M.J. Robust real-time face detection. International Journal of Computer Vision. 2004; 57 (2):137–154. - 23.
Lienhart, R.; Maydt, J. An extended set of haar-like features for rapid object detection. In: 2002 International Conference on Image Processing; 2002. 1–900. - 24.
Kotropoulos, C.; Pitas, I. Rule-based face detection in frontal views. In: 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing; 1997. 2537–2540. - 25.
Birchfield, S. An elliptical head tracker. In: Conference Record of the Thirty-First Asilomar Conference on Signals, Systems & Computers; 1997. 1710–1714. - 26.
Wu, H.; Chen, Q.; Yachida, M. Face detection from color images using a fuzzy pattern matching method. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1999; 21 (6):557–563. - 27.
Orfanidis, S. SVD, PCA, KLT, CCA, and All That. Rutgers University Electrical and Computer Engineering Department, Optimum Signal Processing. 2007;1–77. - 28.
Golub, G.H.; Van Loan, C.F., editors. Matrix Computations. JHU Press. 2012. - 29.
Watkins, D.S., editors. Fundamentals of Matrix Computations. John Wiley & Sons Inc.. 2004. ISBN: 978-0-470-52833-4. - 30.
Huber, P.J., editors. Robust Statistics. Berlin Heidelberg: Springer. 2011. - 31.
Heyde, C.C., editors. Quasi-Likelihood and Its Application: A General Approach to Optimal Parameter Estimation. Springer-Verlag New York Berlin Heidelberg Inc. 1997. ISBN: 0-387-98225-6. - 32.
Fraiman, R.; Meloche, J.; García-Escudero, L.A.; Gordaliza, A.; He, X.; Maronna, R.; Yohai, V.J.; Sheather, S.J.; McKean, J.W.; Small, C.G.; Wood, A.; Fraiman, R.; Meloche, J., editors. Multivariate L-Estimation. Test. 1999; 8(2): 255-317. - 33.
Hampel, F.R.; Ronchetti, E.M.; Rousseeuw, P.J.; Stahel, W.A., editors. Robust Statistics: The Approach Based on Influence Functions. John Wiley & Sons Inc.. 2011. ISBN: 9781118186435. - 34.
Wilcox, R.R., editors. Applying Contemporary Statistical Techniques. Elsevier Inc.. Gulf Professional Publishing. 2003. ISBN: 978-0-12-751541-0. - 35.
Wilcox, R.R., editors. Introduction to Robust Estimation and Hypothesis Testing. Academic Press. 2012. - 36.
Georghiades, A.S.; Belhumeur, P.N.; Kriegman, D. From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2001; 23 (6):643–660. - 37.
Georghiades, A.S.; Belhumeur, P.N.; Kriegman, D. Extended Yale Face Database B. Available from: http://vision.ucsd.edu/~leekc/ExtYaleDatabase/ - 38.
Phillips, P.J.; Moon, H.; Rizvi, S.A.; Rauss, P.J. The FERET evaluation methodology for face-recognition algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2000; 22 (10):1090–1104. - 39.
Phillips, P.J.; Moon, H.; Rizvi, S.A.; Rauss, P.J. The FERET Database. Available from: http://www.itl.nist.gov/iad/humanid/feret/feret_master.html - 40.
Martinez, A.M. The AR face database. In: CVC Technical Report #24; 1998. - 41.
Martinez, A.M. The AR Face Database. Available from: http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html - 42.
Oliveira Jr., L.L.; Thomaz, C.E. Captura e alinhamento de imagens: Um banco de faces brasileiro [Image capture and alignment: A Brazilian face database]. Relatório de iniciação científica, Depto. Eng. Elétrica da FEI, São Bernardo do Campo, SP, 2006. 10: 1-10. - 43.
Oliveira Jr., L.L.; Thomaz, C.E. FEI Face Database. Available from: http://www.fei.edu.br/~cet/facedatabase.html