Open access peer-reviewed chapter

Face Recognition Based on Texture Descriptors

Written By

Jesus Olivares-Mercado, Karina Toscano-Medina, Gabriel Sanchez-Perez, Mariko Nakano Miyatake, Hector Perez-Meana and Luis Carlos Castro-Madrid

Submitted: 25 October 2017 Reviewed: 22 March 2018 Published: 05 November 2018

DOI: 10.5772/intechopen.76722

From the Edited Volume

From Natural to Artificial Intelligence - Algorithms and Applications

Edited by Ricardo Lopez-Ruiz



In this chapter, the performance of different texture descriptor algorithms used for face feature extraction is analyzed. These algorithms are commonly used to extract texture characteristics from images with quite good results, so they can also be expected to perform well when used to characterize the face in an image. The tests were performed using the AR face database, a standard database containing images of 120 people, with 70 images per person showing different facial expressions and 30 showing sunglasses, all of them under different illumination intensities. From one to seven images per person were used to train the recognition system. Several classifiers were evaluated, namely the Euclidean distance, the cosine distance, and the support vector machine (SVM), and the classification results obtained were higher than 98%, with good performance also in the verification task. The evaluated schemes were also compared with other approaches, showing the effectiveness of all of them.


  • face recognition
  • face verification
  • texture
  • LBP
  • WBP
  • SVM

1. Introduction

Nowadays, face recognition is a non-intrusive biometric method in which data acquisition is easy and can be carried out with or without the cooperation of the person under analysis. The face can be considered the easiest way to recognize a person, which increases the acceptance of these kinds of systems and their applications [1, 2, 3]. These systems perform two tasks: identity verification, where the system verifies whether the person is who he/she claims to be, and identification, where the system determines the identity of the person among all the people in a database. Thus, the recognition task covers both identification and verification [4, 5].

Several problems must be considered in the development of a face recognition system, such as illumination changes, facial expressions, and partial occlusions, because these kinds of changes can harm the accuracy of the system [6]. Changes in lighting conditions have received significant attention [6], and many systems have been proposed in recent years to reduce these problems [6]. Some systems proposed to this end are based on image processing techniques such as histogram equalization [6, 7] and contrast-limited adaptive histogram equalization (CLAHE) [8]. Another way to address illumination changes is the development of high-performance methods that are robust to them, such as the eigenphases approach [7, 8, 9, 10, 11]. Methods based on frequency transforms are also useful, such as the discrete cosine transform [12, 13, 14], the discrete Gabor transform [15, 16, 17], discrete wavelet transforms [18, 19, 20, 21], and the discrete Haar transform [22]. Additional methods that can be applied are the eigenfaces [23, 24], which use principal component analysis (PCA) [25, 26], modular PCA-based face recognition methods [27], the Fisherfaces approach [28], and the Laplacianfaces [29].

The local binary pattern (LBP) operator [30] has recently been used in several applications. The principal advantages of this algorithm are its good computational performance and its robustness to gray-level changes in the images. Because of this, the LBP can be applied for image characterization in several pattern recognition tasks [31]. In particular, it can be used for face characterization because face images contain many small patterns that the LBP captures well [30]. Several LBP variations have been proposed, such as the holistic LBP histogram (hLBPH) [30], the spatially enhanced LBP histogram (eLBPH) [32], the holistic LBP image algorithm (hLBPI) [32], and the decimated image window binary pattern (WBP) [33]. All of these algorithms are based on the original LBP algorithm, but the computational complexity of the hLBPI and WBP is lower than that of the others while still providing good performance, as shown in this chapter.

In recent years, interest in face recognition schemes has increased because of their potential implementation in mobile devices, which generally have limited computational power. Hence, this chapter presents a comparison of two texture descriptors, the hLBPI and the WBP. Several classification methods, namely the SVM, the Euclidean distance, and the cosine distance, are used to perform the recognition. These algorithms were evaluated under different illumination and facial expression changes.

The remainder of this chapter is organized as follows: Section 2 presents the description of the evaluated system. Section 3 presents the evaluation results. Finally, Section 4 provides the conclusions of this research.


2. Evaluated system

Figure 1 shows the block diagram of the evaluated face recognition system. First, the system receives the face image under analysis, which is then fed into an interpolation stage (any other resizing method can be applied). Next, the texture descriptor algorithm is applied to characterize the image. Finally, the feature matrix is fed into the classification stage.

Figure 1.

(a) Block diagram of the evaluated face recognition scheme, (b) illustration of the evaluated face recognition scheme.

2.1. Texture descriptors

Two texture descriptors were used in this chapter: the hLBPI and the WBP. The hLBPI algorithm is based on the original LBP method introduced by Ojala et al. [34]. This algorithm uses masks of 3 × 3 pixels. In each neighborhood, as shown in Figure 2a, all neighbors are compared with the central pixel: each pixel is labeled with a 0 if its value is smaller than that of the central pixel; otherwise, it is labeled as 1 (Figure 2b). Next, the label of each pixel is multiplied by 2^p, where p is the position of the pixel in the neighborhood, from 0 to 7 (Figure 2c). Finally, all values are added to obtain the label that replaces the central pixel, as shown in Figure 2d. This procedure yields one of 256 possible values for the central pixel. These steps are applied over the whole image to obtain the LBP matrix.

Figure 2.

Example of implementation of LBP in a neighborhood of 3 × 3 pixels.
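The steps above can be sketched in Python as follows; this is a minimal illustration of the basic LBP operator, with helper names chosen here for clarity rather than taken from the chapter:

```python
# Minimal sketch of the basic 3x3 LBP operator (illustrative helper names).

def lbp_label(window):
    """Compute the LBP label of a 3x3 window given as a list of 9 gray
    values in row-major order. Neighbors are visited clockwise from the
    top-left corner; the center is index 4."""
    center = window[4]
    order = [0, 1, 2, 5, 8, 7, 6, 3]  # clockwise walk around the center
    label = 0
    for p, idx in enumerate(order):
        bit = 1 if window[idx] >= center else 0  # threshold against g_c
        label += bit * (2 ** p)                  # weight the bit by 2^p
    return label

def lbp_matrix(img):
    """Apply the operator over every interior pixel of a 2-D gray image,
    producing the LBP matrix described above."""
    rows, cols = len(img), len(img[0])
    out = [[0] * (cols - 2) for _ in range(rows - 2)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = [img[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i - 1][j - 1] = lbp_label(window)
    return out

# Each 3x3 neighborhood collapses to a single 8-bit label in [0, 255].
print(lbp_label([6, 5, 2,
                 7, 6, 1,
                 9, 8, 7]))  # -> 241
```

Any fixed clockwise ordering of the eight neighbors works, as long as the same ordering is used for every pixel.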

After obtaining the LBP matrix, an L-dimensional feature vector can be estimated, where L is the total number of training images. For an image of N × M pixels, the LBP estimation requires 8NM additions and 8NM comparisons. The LBP matrix is then arranged in a column vector, X, of size NM, which is multiplied by the matrix Φ, of size L × NM, obtained from the PCA analysis. Thus, the estimation of the feature vector, given by Y = ΦX, requires NML additions and NML multiplications. In total, the feature vector estimation requires (24 + L)NM additions, (16 + L)NM multiplications, and 8NM comparisons. Assuming that all operations represent the same computational cost, the hLBPI algorithm requires approximately (48 + 2L)NM operations. On the other hand, the WBP is an algorithm that reduces the computational complexity of the original hLBPI without a large loss of recognition accuracy. First, the image is reduced by a factor of 9 using bicubic interpolation. Then the image is divided into l × m non-overlapping windows of 3 × 3 pixels, such that the original image size M × N can be represented as 3l × 3m. The WBP is then defined as follows:

WBP(j, k) = Σ_{p=0}^{P−1} s(I_{j,k}(x, y) − g_c) 2^p, E1

where x = 3j + r, y = 3k + q, r = 1, 2, …, 8, q = 1, 2, …, 8, j = 1, 2, …, M/3, k = 1, 2, …, N/3; I_{j,k}(x, y) represents the (j, k)-th block of 3 × 3 pixels of the down-sampled input image, g_c is the central pixel of the same block, and s(I_{j,k}(x, y) − g_c) = 0 if I_{j,k}(x, y) < g_c and 1 otherwise. The label of each pixel is multiplied by 2^p, where p is the position of the pixel in the neighborhood, from 0 to 7. Next, the feature matrix obtained from Eq. (1) is rearranged in a vector of size MN/9, which is fed into the classification stage. The main advantage of this algorithm is that the face image can be characterized using small non-overlapping windows of 3 × 3 pixels instead of the overlapping windows used by the original hLBPI.

A WBP example is shown in Figure 3. First, the original image is divided into windows of 3 × 3 pixels (Figure 3a). Figure 3b shows the result of the comparison of neighboring pixels, and Figure 3c shows its conversion to decimal values. The matrix resulting from the sum of the decimal values (the WBP image) is shown in Figure 3d; its size is 1/9 of the original image. This matrix is then fed into the classifier for training or recognition. The computational complexity of this algorithm consists of the estimation of the LBP coefficients using non-overlapping blocks of 3 × 3 pixels, which requires 8NM/81 additions and 8NM/81 comparisons. Thus, assuming that the operations have similar complexity, the algorithm requires 16NM/81 operations.

Figure 3.

WBP example.
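The WBP stage can be sketched as follows, assuming the input is an already down-sampled gray image whose dimensions are multiples of 3 (helper names are illustrative, not the chapter's code):

```python
# Sketch of the WBP: one LBP-style label per non-overlapping 3x3 block,
# producing a matrix 1/9 the size of the (down-sampled) input image.

def wbp(img):
    """Divide img into non-overlapping 3x3 blocks; for each block, compare
    the 8 neighbors against the central pixel g_c and weight by 2^p."""
    rows, cols = len(img), len(img[0])
    assert rows % 3 == 0 and cols % 3 == 0
    out = [[0] * (cols // 3) for _ in range(rows // 3)]
    # Clockwise walk over the 8 neighbors inside each block.
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    for j in range(rows // 3):
        for k in range(cols // 3):
            gc = img[3 * j + 1][3 * k + 1]  # central pixel of the block
            label = 0
            for p, (r, q) in enumerate(order):
                bit = 1 if img[3 * j + r][3 * k + q] >= gc else 0
                label += bit * (2 ** p)
            out[j][k] = label
    return out

# A single 3x3 image collapses to one WBP label; the rearranged matrix is
# the feature vector fed to the classification stage.
print(wbp([[6, 5, 2],
           [7, 6, 1],
           [9, 8, 7]]))  # -> [[241]]
```

Because the blocks do not overlap, each input pixel is visited once, which is the source of the complexity reduction discussed above.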

A comparison of the computational complexity of other recently proposed methods for feature vector estimation during the testing operation is shown in Figure 4.

Figure 4.

Computational complexity of different algorithms.

2.2. Classification stage

After obtaining the feature vectors, the next step is the classification stage, which performs the identification or verification task. In the training stage, k-means was used to obtain a template for each class by averaging its training images. During the identification task, three classifiers are used: the SVM, the Euclidean distance, and the cosine distance. The image under analysis is assigned to the class with the smallest distance, or with the largest probability in the SVM case. During the verification task, the system validates the identity of the person under analysis against a given threshold; these results were obtained only with the SVM. Two different distances were evaluated: the Euclidean distance, given by:

d_st = √((x_s − y_t)(x_s − y_t)^T), E2

and the cosine distance, given by:

d_st = 1 − (x_s y_t^T) / √((x_s x_s^T)(y_t y_t^T)), E3

where x_s is the estimated feature vector of the image under analysis and y_t is the center of the t-th class.
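A minimal sketch of this classification stage, using averaged class templates and the two distances above (function names are illustrative), could look like:

```python
# Sketch of the distance-based classification stage: class templates are
# the mean of each class's training feature vectors, and a test vector is
# assigned to the nearest template.
import math

def class_template(vectors):
    """Average the training feature vectors of one class."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def euclidean(x, y):
    """Eq. (2): Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def cosine_distance(x, y):
    """Eq. (3): one minus the cosine of the angle between x and y."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return 1.0 - dot / (nx * ny)

def identify(x, templates, dist=euclidean):
    """Return the index of the class whose template is closest to x."""
    return min(range(len(templates)), key=lambda t: dist(x, templates[t]))

# Toy 2-D feature vectors for two classes (made up for illustration).
templates = [class_template([[1.0, 0.0], [1.0, 0.2]]),   # class 0
             class_template([[0.0, 1.0], [0.2, 1.0]])]   # class 1
print(identify([0.9, 0.1], templates))                        # -> 0
print(identify([0.1, 0.9], templates, dist=cosine_distance))  # -> 1
```

An SVM classifier would replace `identify` with a trained model's prediction, keeping the same template-extraction pipeline.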


3. Evaluation results

The system was evaluated in both the identification and verification tasks. In identification, the system must determine the identity of the person under analysis by comparing the extracted facial characteristics with those stored in a database; in verification, the system must decide whether the identity corresponds to the person she/he claims to be [4]. In both tests, the results are compared with other algorithms, namely the eigenphases [11], Laplacianfaces [29], Fisherfaces, and eigenfaces [28], all of them classified using the SVM and k-means with the Euclidean and cosine distances. The AR Face Database [35] was used for all tests.

The AR database was expanded with four additional images for each one in the original AR database. These images are shown in Figure 5, where Figure 5a is the original image and Figure 5b–e show the resulting images of the illumination variations.

Figure 5.

Effects of illumination transformation applied to form the extended AR database.

After the expansion, the AR database has 12,000 face images in total: each of the 120 persons (65 males and 55 females) has 100 images. The database was divided into two sets: AR(A), which contains 70 images per person with illumination and expression changes, and AR(B), which contains 30 images per person with partial occlusion by sunglasses as well as illumination changes. Figure 6 shows some examples of these images.

Figure 6.

Examples of (a) images from the AR(A) set. (b) Images from the AR(B) set.

In real-world applications, the number of training images and the recognition accuracy are strongly related: as more images are used for training, the recognition accuracy improves. However, in real applications the number of training images is limited. Figure 7 shows the performance of the hLBPI (Figure 7a) and the WBP (Figure 7b) with different numbers of training images using three different classifiers.

Figure 7.

The identification rate obtained using different numbers of training images.

3.1. Identification

Figure 8 shows the recognition performance of the texture descriptors hLBPI and WBP compared with the other classical methods, all of them using the set AR(A) and seven training images for each person.

Figure 8.

Recognition performance of the evaluating approach using: SVM, Euclidean distance, and cosine distance in the classification stage. The performance of hLBPI, eigenphases, eigenfaces, Laplacianfaces, and Fisherfaces are also shown for comparison.

Another important evaluation is the identification ranking, where rank n denotes the probability that the correct identity is among the n classes with the highest probability. That is, a rank of 10 is the probability that the image belongs to one of the 10 most likely persons. Figures 9 and 10 present the ranking evaluation for the sets AR(A) and AR(B), respectively.

Figure 9.

Ranking evaluation of image set AR(A).

Figure 10.

Ranking evaluation of image set AR(B).

In all cases, the training was done using seven images per person from the AR(A) set, while the recognition system was tested with the images not used for training from the AR(A) and AR(B) sets, respectively.

3.2. Verification

In the case of verification, the error is divided into two types: false acceptance and false rejection. A false acceptance occurs when an individual claims to be a person he/she is not and is mistakenly accepted by the system. A false rejection occurs when an individual provides his/her true identity and the system erroneously rejects the claim.
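Given a set of genuine and impostor match scores, these two error rates can be measured at any threshold with a few lines of code (the scores below are made up for illustration):

```python
# Sketch of false acceptance / false rejection measurement at a threshold.

def far_frr(genuine_scores, impostor_scores, threshold):
    """A claim is accepted when its score is >= threshold.
    FAR: fraction of impostor claims wrongly accepted.
    FRR: fraction of genuine claims wrongly rejected."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

genuine = [0.9, 0.8, 0.85, 0.4, 0.95]   # scores of true-identity claims
impostor = [0.1, 0.3, 0.7, 0.2, 0.15]   # scores of false-identity claims

# Raising the threshold trades false acceptances for false rejections.
for th in (0.3, 0.5, 0.8):
    far, frr = far_frr(genuine, impostor, th)
    print(f"threshold={th}: FAR={far:.2f}, FRR={frr:.2f}")
```

Sweeping the threshold over the whole score range produces the ROC curves shown in the figures of this section.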

Figure 11 shows the receiver operating characteristic (ROC) curves obtained when the hLBPI and WBP algorithms are used for the verification task with the set AR(A). Figure 11 shows that both algorithms have similar performance, with very low false rejection and false acceptance rates.

Figure 11.

ROC curves of the hLBPI and WBP algorithms.

The performance of most face verification systems depends on a suitable selection of the threshold value used to decide whether the person is who she/he claims to be, since this threshold determines both the false acceptance and the false rejection rates. Experimentally, the relationship between the false acceptance rate and the threshold can be approximated by an exponential function, as shown in Figure 12, from which the point with the lowest values of both false acceptance and false rejection can be selected. Thus, the false acceptance probability can be assumed to be given by:

P_fa(Th) = e^(−αTh) E4

Figure 12.

Relationship between the false acceptance rate and the threshold value.


P_fr = e^(−βP_fa) E5

Then, from Eqs. (4) and (5), it follows that α = −Ln(P_fa(Th))/Th, where P_fa(Th) is the false acceptance percentage measured at a threshold Th. Hence, for a desired false acceptance probability P_o, the suitable value of Th is given by:

Th = −Ln(P_o)/α E6
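This threshold-selection rule can be sketched numerically, assuming the exponential decay P_fa(Th) = e^(−αTh) implied by Figure 12; the measured operating point and the target rate below are made-up numbers for illustration:

```python
# Sketch of the threshold-selection rule: estimate alpha from one measured
# operating point (Th, P_fa(Th)) and solve for the threshold that yields a
# desired false acceptance probability P_o.
import math

def estimate_alpha(th, p_fa_at_th):
    """From P_fa(Th) = exp(-alpha * Th): alpha = -ln(P_fa(Th)) / Th."""
    return -math.log(p_fa_at_th) / th

def threshold_for(p_o, alpha):
    """Eq. (6): Th = -ln(P_o) / alpha."""
    return -math.log(p_o) / alpha

alpha = estimate_alpha(th=0.5, p_fa_at_th=0.05)  # assumed measured point
th = threshold_for(p_o=0.01, alpha=alpha)        # target 1% false acceptance
print(round(alpha, 3), round(th, 3))
```

By construction, evaluating the exponential model at the returned threshold recovers the requested false acceptance probability.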

Figure 13 shows an exponential fit together with some experimental results for the relationship between the false acceptance rate and the false rejection rate.

Figure 13.

Relationship between the false acceptance rate and false rejection rate.


4. Conclusions

This chapter presented the application of two texture descriptors based on the LBP algorithm, the hLBPI and the WBP, to face feature extraction and recognition. The evaluation results demonstrate that these algorithms provide good recognition rates. In most situations, the recognition accuracy of the WBP is slightly lower than that of the hLBPI because the feature vector estimation of the WBP does not require PCA and uses non-overlapping blocks. This results in an important computational complexity reduction of approximately 2NML/9 relative to the hLBPI, where L is the feature vector size. A larger L produces a more accurate feature vector, although it also increases the computational complexity. The evaluation results were also compared with other methods, such as the eigenfaces, Laplacianfaces, and Fisherfaces. As in all recognition systems, the accuracy increases when a greater number of training images is used, so one possible direction is to generate additional training images from an original image; the system might then be able to recognize face images in other types of environments, such as strong lighting or partial face occlusions. The results obtained with the set AR(A), shown in Figure 8, indicate that the system performs well with images containing facial expression and lighting changes, as well as with images containing a partial occlusion of the face, such as sunglasses, as shown in Figure 9. In both cases, a recognition rate higher than 90% is obtained.

The evaluation results using the AR database demonstrate that these algorithms also provide good results when performing identity verification tasks, and a theoretical criterion was provided that allows selecting the threshold such that the system achieves a previously specified false acceptance or false rejection rate.



We thank the National Science and Technology Council of Mexico and the Instituto Politecnico Nacional for the financial support during the realization of this chapter.


  1. Kung SY, Mak M-W, Lin S-H. Biometric Authentication: A Machine Learning Approach. New York: Prentice Hall Professional Technical Reference; 2005
  2. El-Bakry HM, Mastorakis N. Personal identification through biometric technology. In: Proc. of the WSEAS International Conference on Applied Mathematics and Communications; 2009. pp. 325-340
  3. Li SZ, Jain AK. Handbook of Face Recognition. New York: Springer; 2011
  4. Chellappa R, Sinha P, Phillips PJ. Face recognition by computers and humans. Computer. 2010;43:46-55
  5. Gao Y, Leung MK. Face recognition using line edge map. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2002;24:764-779
  6. Ruiz-del-Solar J, Quinteros J. Illumination compensation and normalization in eigenspace-based face recognition: A comparative study of different pre-processing approaches. Pattern Recognition Letters. 2008;29:1966-1979
  7. Ramirez-Gutierrez K, Cruz-Perez D, Olivares-Mercado J, Nakano-Miyatake M, Perez-Meana H. A face recognition algorithm using eigenphases and histogram equalization. International Journal of Computers. 2011;5:34-41
  8. Benitez-Garcia G, Olivares-Mercado J, Aguilar-Torres G, Sanchez-Perez G, Perez-Meana H. Face identification based on contrast limited adaptive histogram equalization (CLAHE). In: Proc. of International Conference on Image Processing, Computer Vision and Pattern Recognition; 2011
  9. Zaeri N. Eigenphases for corrupted images. In: Proc. of the International Conference on Advances in Computational Tools for Engineering Applications; 2009. pp. 537-540
  10. Olivares-Mercado J, Hotta K, Takahashi H, Nakano-Miyatake M, Toscano-Medina K. Improving the eigenphase method for face recognition. IEICE Electronics Express. 2009;6:1112-1117
  11. Benitez-Garcia G, Olivares-Mercado J, Sanchez-Perez G, Nakano-Miyatake M, Perez-Meana H. A sub-block-based eigenphases algorithm with optimum sub-block size. Knowledge-Based Systems. 2012;37:415-426
  12. Sharkas M. Application of DCT blocks with principal component analysis for face recognition. In: Proc. of the WSEAS International Conference on Signal, Speech and Image Processing; 2005. pp. 107-111
  13. Dabbaghchian S, Ghaemmaghami MP, Aghagolzadeh A. Feature extraction using discrete cosine transform and discrimination power analysis with a face recognition technology. Pattern Recognition. 2010;43:1431-1440
  14. Ajit Krisshna NL, Deepak VK, Manikantan K, Ramachandran S. Face recognition using transform domain feature extraction and PSO-based feature selection. Applied Soft Computing Journal. 2014;22:141-161
  15. Aguilar-Torres G, Toscano-Medina K, Sanchez-Perez G, Nakano-Miyatake M, Perez-Meana H. Eigenface-Gabor algorithm for feature extraction in face recognition. International Journal of Computers. 2009;3:20-30
  16. Owusu E, Zhan Y, Mao RQ. An SVM-AdaBoost facial expression recognition system. Applied Intelligence. 2014;40:536-454
  17. Qin H, Qin L, Xue L, Yu C. Gabor-based weighted region covariance matrix for face recognition. Electronics Letters. 2012;48:992-993
  18. Hu H. Variable lighting face recognition using discrete wavelet transform. Pattern Recognition Letters. 2011;32:1526-1534
  19. Dai D-Q, Yan H. Wavelets and face recognition. In: Delac K, Grgic M, editors. Face Recognition. Viena: I-Tech; 2007. pp. 59-74
  20. Eleyan A, Özkaramanli H, Demirel H. Complex wavelet transform-based face recognition. EURASIP Journal on Advances in Signal Processing. 2008;2008:195
  21. Delac K, Grgic M, Grgic S. Face recognition in JPEG and JPEG2000 compressed domain. Image and Vision Computing. 2009;27:1108-1120
  22. Gautam K, Quadri N, Pareek A, Choudhary S. A face recognition system based on back propagation neural network using Haar wavelet transform and morphology. Lecture Notes in Electrical Engineering. 2014;298:87-94
  23. Jirawatanakul J, Watanapa S. Thai face cartoon detection and recognition using eigenface model. Advances in Materials Research. 2014;1341-1397
  24. Hou YF, Pei W, Yan-Wen Chong Y, Chun-Hou ZC. Eigenface-based sparse representation for face recognition. In: Intelligent Computing Theories and Technology. Vol. 7096. Berlin Heidelberg: Springer; 2013. pp. 457-465
  25. Shlens J. A Tutorial on Principal Component Analysis. arXiv preprint arXiv:1404.1100, 2014
  26. Zhang YX. Artificial neural networks based on principal component analysis input selection for clinical pattern recognition analysis. Talanta. 2007;73:68-75
  27. Gottumukkal R, Asari VK. An improved face recognition technique based on modular PCA approach. Pattern Recognition Letters. 2004;25:429-436
  28. Belhumeur PN, Hespanha JP, Kriegman DJ. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1997;19:711-720
  29. He X, Yan S, Hu Y, Niyogi P, Zhang H-J. Face recognition using laplacianfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2005;27:328-340
  30. Ahonen T, Hadid A, Pietikainen M. Face description with local binary patterns: Application to face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2006;28:2037-2041
  31. Xia W, Yin S, Ouyang P. A high precision feature based on LBP and Gabor theory for face recognition. Sensors. 2013;13:4499-4513
  32. Yang B, Chen S. A comparative study on local binary pattern (LBP) based face recognition: LBP histogram versus LBP image. Neurocomputing. 2013;120:365-379
  33. Benitez-Garcia G, Olivares-Mercado J, Toscano-Medina K, Sanchez-Perez G, Nakano-Miyatake M, Perez-Meana H. A low complexity face recognition scheme based on down sampled local binary patterns. International Arab Journal of Information Technology, Accepted for publication. 2016
  34. Ojala T, Pietikainen M, Harwood D. Performance evaluation of texture measures with classification based on Kullback discrimination of distributions. In: Proc. of the IAPR Int. Conference on Computer Vision and Image Processing; Vol. 1. 1994. pp. 582-585
  35. Martinez AM. AR face database. CVC Technical Report 24, 1998
