Face Recognition: Issues, Methods and Alternative Applications

Written By

Waldemar Wójcik, Konrad Gromaszek and Muhtar Junisbekov

Submitted: 03 February 2016 Reviewed: 09 March 2016 Published: 06 July 2016

DOI: 10.5772/62950

From the Edited Volume

Face Recognition - Semisupervised Classification, Subspace Projection and Evaluation Methods

Edited by S. Ramakrishnan

Abstract

Face recognition, as one of the most successful applications of image analysis, has recently gained significant attention, owing to the availability of feasible technologies, including mobile solutions. Research in automatic face recognition has been conducted since the 1960s, but the problem is still largely unsolved. The last decade has brought significant progress in this area owing to advances in face modelling and analysis techniques. Although systems have been developed for face detection and tracking, reliable face recognition still poses a great challenge to computer vision and pattern recognition researchers. There are several reasons for the recent increased interest in face recognition, including rising public concern for security, the need for identity verification in the digital world, and the role of face analysis and modelling techniques in multimedia data management and computer entertainment. In this chapter, we discuss face recognition processing, including major components such as face detection, tracking, alignment and feature extraction, and point out the technical challenges of building a face recognition system. We focus on the importance of the most successful solutions available so far. The final part of the chapter describes selected face recognition methods and applications and their potential use in areas not related to face recognition.

Keywords

  • face recognition
  • biometric identification
  • methods
  • applications
  • image processing

1. Introduction

Recent advances in automated face analysis, pattern recognition and machine learning have made it possible to develop automatic face recognition systems to address a wide range of applications. On the one hand, recognising a face is a natural process that people perform effortlessly, with little conscious effort. On the other hand, reproducing this process in computer vision remains a difficult problem. As a biometric technology, automated face recognition has many desirable properties, above all its key advantage of non-invasiveness. Biometric methods can be divided into physiological (fingerprint, DNA, face) and behavioural (keystroke, voice print) categories. The physiological characteristics are more stable and non-alterable, except by severe injury, whereas behavioural patterns are more sensitive to a person's overall condition, such as stress, illness or fatigue.

A brief analysis of face detection techniques that use effective statistical learning methods is therefore important, as such methods underpin practical and robust solutions.

Figure 1 presents the basic elements of a typical face recognition system.

Figure 1.

Crucial elements of a typical face recognition system.

Face detection performance is a key issue, so techniques for dealing with non-frontal face detection are discussed. Subspace modelling and learning-based dimension reduction methods are fundamental to many current face recognition techniques. Discovering such subspaces, so as to extract effective features and construct robust classifiers, remains another challenge in this area. Face recognition combines high accuracy with low intrusiveness, so it has drawn the attention of researchers in fields ranging from psychology and image processing to computer vision.

The first stage is face detection in the acquired image, regardless of scale and location. It often uses an advanced filtering procedure to single out locations that may represent faces and then screens them with accurate classifiers. Notably, all translation, scaling and rotational variations have to be dealt with in the face detection phase. For example, according to [1,2], facial expression and hairstyle changes, such as smiling or frowning, still constitute important variations at the pattern recognition stage.

In the next step, a system based on an anthropometric data set predicts the approximate locations of the principal features, such as the eyes, nose and mouth. The procedure is then repeated to predict subfeatures relative to the principal features, and the results are verified against collocation statistics to reject any mislocated features.

Dedicated anchor points are generated as the result of geometric combinations in the face image, and then the actual recognition process begins. It is carried out by finding a local representation of the facial appearance at each of the anchor points, with the representation scheme depending on the approach. To deal with such complications and to find true invariants for recognition, researchers have developed various recognition algorithms.
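
As a rough illustration of the detection and feature-localisation front end (the chapter does not prescribe a particular detector), the following minimal Python sketch uses OpenCV's stock Haar cascades; the input file name and the thresholds are hypothetical placeholders, and the cascade files are assumed to ship with the opencv-python distribution.

```python
# Minimal sketch of the detection front end, assuming opencv-python.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("subject.jpg")          # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Stage 1: locate candidate faces regardless of scale and position.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    roi = gray[y:y + h, x:x + w]
    # Stage 2: predict principal features (here only the eyes) inside the
    # face region; a full system would also locate the nose and mouth and
    # verify the configuration against anthropometric statistics.
    eyes = eye_cascade.detectMultiScale(roi)
    print(f"face at ({x},{y}) with {len(eyes)} eye candidates")
```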

Current face recognition technology has several limitations; an early benchmark, the FERET evaluation, was provided in [3,4]. Under ideal conditions performance is excellent, but under changing illumination, expression, resolution, distance or aging it decreases significantly. Face recognition systems are thus still not very robust to deviations from the ideal face image. Another problem is finding an effective way of storing, and granting access to, the facial code (or facial template), a set of features extracted from an image or video.

Even the rough outline above of the complex face recognition process reveals a number of limitations and imperfections, which call for clarification or replacement by new algorithms, methods or even technologies.

In this chapter, we discuss face recognition processing, including major components such as face detection, tracking, alignment and feature extraction, and point out the technical challenges of building a face recognition system. We focus on the importance of the most successful solutions available so far.

The final part of the chapter describes selected face recognition methods and applications and their potential use in areas not related to face recognition.

The need for this study is justified as an invitation to participate in the further development of a very interesting technology: face recognition.

Although performance continues to improve in several areas of face recognition technology, it is worth noting that current applications also impose new requirements for its further development.

2. Previous methods

2.1. Classical face recognition algorithms

Reliable face recognition algorithms have developed rapidly over the last decade. The traditional algorithms can be divided into two categories: holistic feature and local feature approaches. The holistic group can be further divided into linear and nonlinear projection methods.

Many applications have shown good results for linear projection appearance-based methods such as principal component analysis (PCA) [5], independent component analysis (ICA) [6], linear discriminant analysis (LDA) [7,8], 2DPCA [9] and the linear regression classifier (LRC) [10].

However, due to large variations in illumination conditions, facial expression and other factors, these methods may fail to adequately represent the faces. The main reason is that the face patterns lie on a complex nonlinear and non‐convex manifold in the high‐dimensional space.

To deal with such cases, nonlinear extensions have been proposed, such as kernel PCA (KPCA), kernel LDA (KLDA) [11] and locally linear embedding (LLE) [12]. Most nonlinear methods use kernel techniques, whose general idea is to map the input face images into a higher-dimensional space in which the manifold of the faces is linear and simplified, so that the traditional linear methods can be applied.

Although PCA, LDA and LRC are considered as linear subspace learning algorithms, it is notable that PCA and LDA methods focus on the global structure of the Euclidean space, whereas LRC approach focuses on local structure of the manifold.

These methods project each face onto a linear subspace spanned by the eigenface images. The distance from face space is the orthogonal distance to the plane, whereas the distance in face space is the distance along the plane from the mean image. Both of these distances can be turned into Mahalanobis distances and given probabilistic interpretations [13].
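
A minimal sketch of the eigenface projection and of these two distances follows, assuming the training faces are supplied as flattened grayscale rows of a matrix; the function names are illustrative, not from the cited works.

```python
import numpy as np

def fit_eigenfaces(X, k):
    """X: (n_samples, n_pixels) matrix of flattened training faces."""
    mean = X.mean(axis=0)
    # Eigenfaces are the top-k right singular vectors of the centred data.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k].T              # mean face and (n_pixels, k) basis W

def face_space_distances(x, mean, W):
    xc = x - mean
    y = W.T @ xc                       # coordinates of x in face space
    difs = np.linalg.norm(y)           # distance *in* face space (from the mean)
    dffs = np.linalg.norm(xc - W @ y)  # distance *from* face space (orthogonal)
    return difs, dffs
```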

Following these, KPCA [14], kernel ICA [15] and generalised linear discriminant analysis [16] have been developed.

Despite the strong theoretical foundation of kernel-based methods, their practical application to face recognition problems does not produce a significant improvement over linear methods.

Another family of nonlinear projection methods has also been introduced. These inherit the simplicity of the linear methods and the ability of the nonlinear ones to deal with complex data. Among them, LLE [17] and locality preserving projection (LPP) [18] are worth highlighting. They produce a projection scheme for the training data only, however, and their ability to project new data items is questionable.
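
For instance, a hedged sketch of LLE using scikit-learn; the random matrix merely stands in for flattened face images.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

X_train = np.random.rand(100, 1024)   # stand-in for 100 flattened 32x32 faces
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=20)
Y_train = lle.fit_transform(X_train)  # embedding is defined on training data
# transform() on unseen samples relies on a barycentric approximation,
# which reflects the out-of-sample limitation noted above.
Y_new = lle.transform(np.random.rand(5, 1024))
```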

In the second category, local appearance features have certain advantages over holistic features: they are more stable under local changes such as expression, occlusion and misalignment. The most common representative is the local binary pattern (LBP) [19,20]. LBP describes the neighbouring changes around a central pixel in a simple but effective way; it is invariant to monotonic intensity transformations and tolerates small illumination variations. Many LBP variants have been proposed to improve on the original, such as the histogram of Gabor phase patterns [21] and the local Gabor binary pattern histogram sequence [22,23]. Generally, LBP is used to model the neighbouring relationship jointly in the spatial, frequency and orientation domains [22], which allows discriminant and robust information in the pattern to be explored efficiently.
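
To make the basic operator concrete, here is a minimal sketch of the original 8-neighbour LBP (a straightforward textbook formulation, not any of the cited variants):

```python
import numpy as np

def lbp_image(gray):
    """Original 8-neighbour LBP: threshold each neighbour against the
    centre pixel and pack the results into an 8-bit code. Thresholding
    makes the code invariant to monotonic intensity transformations."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                       # centre pixels (border skipped)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (neigh >= c).astype(np.int32) << bit
    return code.astype(np.uint8)
```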

Further development of the subspace approaches mentioned above is represented by the discriminant common vectors (DCV) approach [24].

The DCV method collects the similarities among the elements of the same class and drops their dissimilarities. Thus, each class can be represented by a common vector computed from the within-class scatter matrix.

When an unknown face is tested, the corresponding feature vector is computed and assigned to the class with the nearest common vector. Kernel discriminative common vectors [25], and improved discriminative common vectors combined with a support vector machine (SVM) [26], have also been introduced for the face recognition task.

Similarly to the LLE method, neighbourhood preserving projection (NPP) and orthogonal NPP (ONPP) are introduced in [27,28]. These approaches preserve the local structure between samples. To reflect the intrinsic geometry of the local neighbourhoods, they use data-driven weights obtained by solving a least-squares problem. ONPP forces the mapping to be orthogonal and then solves an ordinary eigenvalue problem, whereas NPP imposes the orthogonality condition on the projected data and therefore requires solving a generalised eigenvalue problem.

A block diagram of the traditional face recognition approaches is presented in Figure 2.

Figure 2.

Traditional face recognition algorithms.

However, it is still unclear how to select the neighbourhood size and how to assign optimal values to the other hyper-parameters. Sparsity preserving projections [29,30] and regularised LPP [31] have also been applied to face recognition.

In [32], a multi-linear extension of the LDA method, called discriminant analysis with tensor representation, is proposed. Unlike the preserving projection methods, it implements discriminant analysis directly on the natural tensorial data to preserve the neighbourhood structure of the tensor feature space. Another method, supervised and unsupervised multi-linear NPP (MNPP) for face recognition, is presented in [33]. A survey of multi-linear methods can be found in [11]. These methods operate directly on tensorial data rather than vectors or matrices and solve problems of tensorial representation for multidimensional feature extraction and recognition. Multiple interrelated subspaces are obtained in the MNPP method by unfolding the tensor over different tensorial directions; the order of the tensor space determines the number of subspaces derived by MNPP [34,35].

2.2. Artificial Neural Networks in face recognition

In [11,36,37], artificial neural networks are used to solve the nonlinear recognition problem. To recognise human faces, a non-convergent chaotic neural network is suggested in [38].

A radial basis function neural network integrated with non-negative matrix factorisation for recognising faces is presented in [39]. For face and speech verification, [40] utilises a momentum back-propagation neural network. A non-negative sparse coding method for learning facial features, combined with different distance metrics and normalised cross-correlation, is applied to face recognition in [41].
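
The cited architectures are not reproduced here, but a generic sketch of a neural classifier over subspace features, using scikit-learn's Olivetti faces and an off-the-shelf multilayer perceptron (these choices are assumptions, not those of [39-41]), looks as follows:

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

faces = fetch_olivetti_faces()          # 400 images of 40 subjects (downloads)
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, stratify=faces.target, random_state=0)

pca = PCA(n_components=100, whiten=True).fit(X_tr)   # subspace features
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500,
                    random_state=0).fit(pca.transform(X_tr), y_tr)
print("test accuracy:", clf.score(pca.transform(X_te), y_te))
```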

A posterior union decision-based artificial neural network approach is proposed in [33,34]. It combines elements of neural networks and statistical approaches and complements methods for recognising face images with partial distortion and occlusion.

Unfortunately, this approach, like other statistically based methods, is inaccurate when modelling classes from only a single or a small number of training samples [42,43].

2.3. Gabor wavelet‐based solutions

Gabor wavelets have been widely used for face representation by face recognition researchers [44,45,46], and Gabor features are recognised as a better representation for face recognition in terms of (rank-1) recognition rate [47]. Moreover, they have been demonstrated to be discriminative and robust to illumination and expression variations [48]. When only one sample image per enrolled subject is available, [49] proposes an adaptively weighted sub-Gabor array for face representation and recognition.

Moreover, two strategies for capturing Gabor texture information, Gabor magnitude-based texture representation (GMTR) and Gabor phase-based texture representation (GPTR), are proposed in [50].

The GMTR approach is characterised by a Gamma density that models the Gabor magnitude distribution, whereas GPTR is characterised by a generalised Gaussian density that models the Gabor phase distribution. The estimated model parameters then serve as the texture representation of the face.

The Gabor wavelet applied at fixed positions, corresponding to the nodes of a square-meshed grid superimposed on the face image, is presented in [51]. Each subpattern of the partitioned face image is defined by the extracted Gabor features belonging to the same row of the square-meshed grid, which are then projected to a lower-dimensional space by the Karhunen-Loeve transform. The obtained features of each subpattern, weighted using a genetic algorithm (GA), are used to train a Parzen window classifier. Finally, matching is performed by combining the classifiers using a weighted sum rule.

A learning approach based on Gabor features and kernel supervised Laplacian faces for face recognition under a classifier fusion framework is introduced in [52]. The Gabor features obtained from each channel are treated as new samples of the same class in order to adopt the classifier fusion strategy, which improves recognition performance.

A histogram of Gabor phase features is proposed in [53]. In [54,55,56,57,58], patch-based histograms of local patterns are concatenated to form the representation of the face image via learned local Gabor patterns. In [59], the feature representation problem is addressed with a learning method instead of simple concatenation or histogram features. In [60], Gabor features were adopted for sparse representation (SR)-based classification, and a Gabor occlusion dictionary was learned under the well-known SR framework.

The main drawback of Gabor-based methods is that the dimensionality of the Gabor feature space is significantly high, since the face images are convolved with a whole bank of Gabor filters.

To overcome this problem, the AdaBoost algorithm [61] and entropy and genetic algorithms (GA) [62] are used to select the most discriminative Gabor features.

However, selecting the most useful of so many Gabor features is very time-consuming [61]. Furthermore, extracting the Gabor features is computationally intensive, so the features are currently impractical for real-time applications [63]. A simplified version of Gabor wavelets is introduced in [64]; unfortunately, the simplified Gabor features are more sensitive to lighting variations than the original ones.
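
The dimensionality problem is easy to see in code. The sketch below builds a conventional 5-scale by 8-orientation bank with OpenCV; the parameter values are illustrative assumptions, not those of any cited paper.

```python
import cv2
import numpy as np

def gabor_bank(ksize=31, scales=5, orientations=8):
    """Conventional bank of Gabor kernels; parameter values are illustrative."""
    kernels = []
    for s in range(scales):
        lambd = 4.0 * (2 ** (s / 2.0))            # wavelength per scale
        for o in range(orientations):
            theta = o * np.pi / orientations
            kernels.append(cv2.getGaborKernel(
                (ksize, ksize), sigma=lambd / 2.0,
                theta=theta, lambd=lambd, gamma=1.0))
    return kernels

img = np.random.rand(128, 128).astype(np.float32)   # stand-in face image
responses = [cv2.filter2D(img, -1, k) for k in gabor_bank()]
# 40 full-size response maps: the raw feature vector already has
# 128 * 128 * 40 = 655,360 dimensions, hence the need for feature selection.
print(len(responses), responses[0].shape)
```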

2.4. Face descriptor‐based methods

Local feature-based face image description ultimately yields a global description: local features are evaluated over neighbouring pixels and then aggregated to form the final global description [65,66]. Unlike global methods, in which the entire image is used to produce each feature, the first step here describes the face at the pixel level using the local neighbourhood of each pixel. The image is then divided into a number of subregions, and from each subregion a local description is formed as a histogram of the pixel-level descriptions computed in the previous step. Finally, the regional information is combined into the final descriptor by concatenating the partial histograms [67,68], as sketched below.
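
Continuing the earlier LBP sketch, the aggregation step might look as follows; the grid size and bin count are arbitrary choices, not prescribed by the cited works.

```python
import numpy as np

def regional_descriptor(code_img, grid=(8, 8), bins=256):
    """Split a pixel-level code image (e.g. LBP codes) into grid cells,
    histogram each cell and concatenate the normalised histograms into
    one global face descriptor."""
    h, w = code_img.shape
    gy, gx = grid
    hists = []
    for i in range(gy):
        for j in range(gx):
            cell = code_img[i * h // gy:(i + 1) * h // gy,
                            j * w // gx:(j + 1) * w // gx]
            hist, _ = np.histogram(cell, bins=bins, range=(0, bins))
            hists.append(hist / max(hist.sum(), 1))   # per-cell normalisation
    return np.concatenate(hists)                      # length gy * gx * bins
```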

Determining image descriptors that can improve the classification performance of multi-option recognition as well as pair matching of face images appears to be a complex problem [65,69,70].

The main idea of the approach is to learn the most discriminant local features, those that minimise the difference between features from images of the same individual and maximise it between images of different people, depending on the nature of these descriptors, which compute an image representation from local patch statistics.

Face verification using multiple local descriptors designed to capture statistics of local patch similarities, with accuracy ranked on the LFW benchmark, is discussed in [34]. Enhancing face recognition performance by introducing discriminative learning into the three steps of LBP-like feature extraction is presented in [71].

The discriminant image filters, the optimal soft sampling matrix and the dominant patterns are all learned from images. The general advantage of these methods is a compact, highly discriminative and easy-to-extract learning-based descriptor that is robust to illumination and expression changes.

2.5. 3D‐based face recognition

As the 3D capturing process is becoming cheaper and faster [72], it is commonly thought that the use of 3D sensing has the potential for greater recognition accuracy than 2D. The advantage of using 3D data is that depth information does not depend on pose and illumination; the representation of the object therefore does not change with these parameters, making the whole system more robust. 3D-based techniques can achieve better robustness to the pose variation problem than 2D-based ones. A comprehensive survey of 3D face recognition approaches is presented in [73].

A method for face recognition across variations in pose, which combines deformable 3D models with a computer graphics simulation of projection and illumination, can be found in [74]. In this method, faces are represented by model parameters for 3D shape and texture. Their 3D morphable models are combined with spherical harmonics illumination representation [75] to recognise faces under arbitrary unknown lighting.

Using facial symmetry to handle pose variation in 3D face recognition is presented in [76], where an automatic landmark detector is used. It helps to estimate the pose and to detect occluded areas in each facial scan. Subsequently, an annotated face model is registered and fitted to the scan; during fitting, facial symmetry is used to overcome the challenge of missing data [77].

A generic 3D elastic model for pose-invariant face recognition is proposed in [29]. It is constructed for each subject in the database using only a single 2D image by applying the 3D generic elastic model (3DGEM) approach. Each 3D model is subsequently rendered at different poses within a limited search space about the estimated pose, and the resulting images are matched against the test query. Finally, the distances between the synthesised images and the test query are computed using a simple normalised correlation matcher, demonstrating the effectiveness of the pose synthesis method on real-world data.

In [78], a geometric framework for analysing 3D faces, with the specific goals of comparing, matching and averaging their shapes, is proposed to represent facial surfaces by radial curves emanating from the nose tips.

A 3D face recognition approach based on local geometrical signatures, called the facial angular radial signature (ARS), which can approximate the semi-rigid region of the 3D face, is proposed in [79]. The authors employed KPCA to map the raw ARS facial features to mid-level features to improve their discriminating power. Finally, the resulting mid-level features are combined into a single feature vector and fed into an SVM to perform face recognition [80, 81, 82, 83, 84, 85, 86].
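
The final KPCA-to-SVM stage can be sketched generically with scikit-learn; the random arrays below merely stand in for the ARS features of [79], and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))       # placeholder for raw ARS-like features
y = rng.integers(0, 10, size=200)    # placeholder subject labels

model = make_pipeline(
    KernelPCA(n_components=30, kernel="rbf", gamma=0.05),  # raw -> mid-level
    SVC(kernel="linear", C=1.0))                           # final classifier
model.fit(X, y)
print(model.predict(X[:5]))
```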

The drawback of using 3D data in face recognition is that these approaches need all elements of the system to be well calibrated and synchronised in order to acquire accurate 3D data (texture and depth maps). The existing 3D face recognition approaches rely on surface registration or on complex feature (surface descriptor) extraction and matching techniques. They are, therefore, computationally expensive and unsuitable for practical applications. Moreover, they require the cooperation of the subject, making them unusable in uncontrolled or semi-controlled scenarios where the only input to the algorithm is a 2D intensity image acquired from a single camera.

2.6. Video‐based face recognition

The analysis of video streams of face images has received increasing attention in biometrics [87]. An immediate advantage of using video information is the possibility of employing the redundancy present in the video sequence to improve still-image systems. Although a significant amount of research has been done on matching still face images, the use of videos for face recognition is relatively less explored [88]. The first stage of video-based face recognition (VFR) is re-identification, where a collection of videos is cross-matched to locate all occurrences of the person of interest [89].

Generally, VFR approaches can be classified into two categories based on how they leverage the multitude of information available in a video sequence: (i) sequence-based and (ii) set-based. At a high level, what most distinguishes these two approaches is whether or not they utilise temporal information [90, 91].

The formulation of a probabilistic appearance-based face recognition approach, originally defined for recognition from a single still image as previously explained, is extended in [92] to work with multiple images and video sequences. In [93], the constrained subspace spanned by the face images of a clip is modelled as a convex hull, and the nearest distance between two convex hulls is calculated as the between-set similarity. Each test and training example is thus a set of images of a subject's face, not just a single image, so recognition decisions need to be based on comparisons of image sets.

In [94], the VFR task is converted into the problem of measuring the similarity of two image sets, where the examples from a video clip constitute one image set. The authors consider the face images from each clip as an ensemble and formulate VFR as a joint sparse representation (JSR) problem. In JSR, to adaptively learn the sparse representation of a probe clip, they simultaneously consider class-level and atom-level sparsity: the former structures the enrolled clips using a structured sparse regulariser, while the latter seeks a few related examples using a sparse regulariser.
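
As a much simpler stand-in for the convex hull and JSR formulations above (neither of which is reproduced here), a between-set similarity can be sketched as the smallest pairwise distance between per-frame feature vectors:

```python
import numpy as np

def between_set_distance(set_a, set_b):
    """Smallest pairwise Euclidean distance between the per-frame feature
    vectors of two clips (rows are features of individual frames)."""
    d = np.linalg.norm(set_a[:, None, :] - set_b[None, :, :], axis=2)
    return d.min()

probe = np.random.rand(30, 128)      # 30 frames x 128-D features (placeholder)
gallery = np.random.rand(45, 128)
print(between_set_distance(probe, gallery))
```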

To identify their most important advantages and imperfections, the methods discussed above are summarised in Table 1.

1. Classical face recognition algorithms

Advantages: These methods project the face onto a linear subspace spanned by the eigenface images; PCA and LDA focus on the global structure of the Euclidean space, whereas LRC focuses on the local structure of the manifold. The distance from face space is orthogonal to the plane through the mean image, so both distances may easily be turned into Mahalanobis distances with a probabilistic interpretation. LLE, LPP and LBP brought a simple and effective way of describing neighbouring changes in the face description; subspace approaches were applied in the DCV- and SVM-based methods; preserving the local structure between samples is the domain of the NPP and ONPP methods.

Disadvantages: These methods may fail to adequately represent faces when large variations in illumination, facial expression and other factors occur. According to [34], kernel-based nonlinear methods do not produce a significant improvement over linear methods, and it is still unclear how to select the neighbourhood size or assign optimal values to the other hyper-parameters.

2. Artificial neural networks

Advantages: A radial basis function neural network integrates naturally with non-negative matrix factorisation; other approaches simplify the process by exploiting the native linearisation capability of ANNs and their computational speed-up. An attractive solution, especially for recognising face images with partial distortion and occlusion.

Disadvantages: The main disadvantage is the requirement for a greater number of training samples (rather than one or a limited number); in this respect it is inaccurate in the same way as other statistically based methods.

3. Gabor wavelets

Advantages: Gabor wavelets exhibit the desirable characteristic of capturing salient visual properties such as spatial localisation, orientation selectivity and spatial frequency; various biometric applications favour this approach.

Disadvantages: The dimensionality of the Gabor feature space is significantly high, since the face image is convolved with a bank of Gabor filters. The approach is computationally intensive and impractical for real-time applications; additionally, simplified Gabor features are sensitive to lighting variations.

4. Face descriptor-based methods

Advantages: The main idea behind developing image descriptors is to learn the most discriminant local features that minimise the difference between images of the same individual and maximise that between images of other people. These methods are discriminative and robust to illumination and expression changes, and offer a compact, easy-to-extract and highly discriminative descriptor.

Disadvantages: Computationally intensive during the descriptor extraction stage, although with encouraging simplicity and performance for online applications.

5. 3D-based face recognition

Advantages: Extends the traditional 2D capturing process and has greater potential for accuracy; the depth information does not depend on pose and illumination, making the solution more robust.

Disadvantages: Requires all elements of the 3D face recognition system to be well calibrated and synchronised to acquire accurate 3D data; computationally expensive and not suitable for practical applications.

6. Video-based recognition

Advantages: The main advantage of the approach is the possibility of employing the redundancy present in video to improve still-image systems.

Disadvantages: Relatively poorly investigated; multiple problems remain with measuring the similarity of two (or more) image sets.

Table 1.

Face recognition methods overview.

The methods indicated in Table 1 illustrate the evolution of face recognition technology. The huge potential of face descriptor-based methods ought to be emphasised, given that the local descriptor idea has recently been recognised as the most crucial design framework for face identification and verification tasks [34].

3. Face recognition applications

Many published works mention numerous applications in which face recognition technology is already utilised, including entry to secured high-risk spaces such as border crossings, as well as access to restricted resources [95, 96, 97]. On the other hand, there are other application areas in which face recognition has not yet been used. The potential application areas of face recognition technology can be outlined as follows [34]:

  • Automated surveillance, where the objective is to recognise and track people [98].

  • Monitoring closed-circuit television (CCTV), where facial recognition capability can be embedded into existing CCTV networks to look for lost children or other missing persons, or to track known or suspected criminals.

  • Image database investigations, searching image databases of licensed drivers and benefit recipients, finding people in large news photograph and video collections [99, 100], as well as searching the Facebook social networking website [101].

  • Multimedia environments with adaptive human-computer interfaces (part of ubiquitous or context-aware systems, behaviour monitoring at childcare or elderly care centres, recognising customers and assessing their needs) [102].

  • Airplane boarding gates, where face recognition may be used in place of random checks merely to screen passengers for further investigation. Similarly, in casinos, strategic design of betting floors incorporating cameras at face height with good lighting could be used not only to scan faces for identification purposes, but possibly to capture images for a comprehensive gallery supporting future watch-list, identification and authentication tasks [103].

  • Sketch-based face reconstruction, where law enforcement agencies around the world rely on practical methods to help crime witnesses reconstruct likenesses of faces [104]. These methods range from sketch artistry to proprietary computerised composite systems [105, 106, 107].

  • Forensic applications, where a forensic artist often works with an eyewitness to draw a sketch that depicts the facial appearance of the culprit according to his/her verbal description. This forensic sketch is later matched against large facial image databases to identify the criminal [108, 109]. Yet there is no existing face recognition system that can be used for identification or verification in crime investigations, such as comparing images taken by CCTV with an available database of mugshots. Thus, utilising face recognition technology in forensic applications is a must, as discussed in [110, 111].

  • Face spoofing and anti-spoofing, where a photograph or video of an authorised person's face could be used to gain access to facilities or services. The spoofing attack consists in the use of forged biometric traits to gain illegitimate access to secured resources protected by a biometric authentication system [112, 113]. It is a direct attack on the sensory input of a biometric system, and the attacker does not need previous knowledge of the recognition algorithm. Research on face spoof detection has recently attracted increasing attention [114], introducing a number of face spoof detection techniques [115, 116, 117]. The development of mature anti-spoofing algorithms is still in its infancy, however, and further research is needed [118, 119].

Many applications have been envisaged for face recognition, but most commercial ones exploit the great potential of this technology only superficially. Most applications are notably limited in their ability to handle pose, lighting changes or aging.

With regard to access control, face verification for face-based PC logon has become feasible but remains very limited. Naturally, such PC verification systems could be extended in the future to authenticated single sign-on to multiple networked services, transaction authorisation or even access to encrypted files. The banking sector, for example, is rather conservative in deploying such biometrics: banks estimate a higher risk of losing customers disaffected by being falsely rejected than the gain they might achieve in fraud prevention. This is the reason for developing robust passive acquisition systems with low false rejection rates.

Most physical access control systems use face recognition in combination with other biometrics, for example speaker identification and lip motion [120].

One of the application domains of greatest interest for face recognition is surveillance. Given the rich information it contains, video is the medium of choice for surveillance, and for applications that require identification, face recognition is the best biometric for video data. The biggest advantage of this approach is the passive participation of the subject: the whole process of recognition and identification can be carried out without the person's knowledge.

Although the development of face recognition surveillance systems has already begun, the technology does not yet seem accurate enough. It also brings additional problems concerning the highly demanding data gathering and computing requirements of such complex solutions.

Another future domain where face recognition is expected to become important is pervasive or ubiquitous computing. Computing devices equipped with sensors, together with the networks connecting them, are becoming more widespread. This allows us to envisage a future in which most everyday objects have some computational power, allowing them to adapt their behaviour precisely to various factors, including time, user, user control or host.

This vision assumes easy information exchange, including images, between devices of different types.

Currently, most devices have a simple user interface, controlled only by active commands on the part of the user. Some devices are able to sense the environment and acquire information about the physical world and the people within their region of interest. A crucial part of human awareness in smart devices is knowing the identity of the users close to a device, something already implemented in several smartphones with varying results. Given the passive nature of face recognition, this capability is especially valuable when combined with other biometrics.

4. Conclusion

Face recognition is still a challenging problem in the field of computer vision. It has received a great deal of attention over the past years because of its many applications in various domains. Although there is a strong research effort in this area, face recognition systems are far from ideal and do not yet perform adequately in all real-world situations. This chapter presented a brief survey of issues, methods and applications in the area of face recognition. Much work remains to be done to develop methods that reflect how humans recognise faces and that make optimal use of the temporal evolution of facial appearance for recognition.

References

  1. 1. Lin, S.: ‘An introduction to Face Recognition Technology', Informing Science, 2000, 3, pp. 1–6.
  2. 2. An, L., Kafai, M., Bhanu, B.: ‘Dynamic Bayesian network for unconstrained face recognition in surveillance camera networks', IEEE J. Emerg. Sel. Top. Circuits Syst., 2013, 3, (2), pp. 155–164.
  3. 3. Phillips, P.J., Moon, H., Rauss, P., Rizvi, S.: ‘The FERET September 1996 Database and Evaluation Procedure', Audio‐ and Video‐based Biometric Person Authentication, Lecture Notes in Computer Science, vol. 1206, pp. 395–402, Springer, 1997.
  4. 4. Liao, S., Lei, Z., Yi, D., Li, S.: ‘A benchmark study of large‐scale unconstrained face recognition'. Int. Joint Conf. on Biometrics (IJCB 2014), Florida, USA, 2014, pp. 1–8.
  5. 5. Turk, M., Pentland, A.: ‘Eigenfaces for recognition', J. Cogn. Neurosci., 1991, 3, (1), pp. 71–86.
  6. 6. Bartlett, M.S., Movellan, J.R., Sejnowski, T.J.: ‘Face recognition by independent component analysis', IEEE Trans. Neural Netw., 2002, 13, (6), pp. 1450–1464.
  7. 7. Belhumeur, P.N., Hespanha, J.P., Kriegman, D.J.: ‘Eigenfaces vs. Fisherfaces: recognition using class specific linear projection', IEEE Trans. Pattern Anal. Mach. Intell., 1997, 19, (7), pp. 711–720.
  8. 8. Hu, H., Zhang, P., De la Torre, F.: ‘Face recognition using enhanced linear discriminant analysis', IET Comput. Vis., 2010, 4, (3), pp. 195–208.
  9. 9. Yang, J., Zhang, D., Frangi, A.F., Yang, J.‐Y.: ‘Two‐dimensional PCA: a new approach to appearance‐based face representation and recognition', IEEE Trans. Pattern Anal. Mach. Intell., 2004, 26, (1), pp. 131–137.
  10. 10. Naseem, I., Togneri, R., Bennamoun, M.: ‘Linear regression for face recognition', IEEE Trans. Pattern Anal. Mach. Intell., 2010, 32, (11), pp. 2106–2112.
  11. 11. Lu, J., Plataniotis, K.N., Venetsanopoulos, A.N.: ‘Face recognition using kernel direct discriminant analysis algorithms', IEEE Trans. Neural Netw., 2003, 14, (1), pp. 117–126.
  12. 12. He, X., Yan, S., Hu, Y., Niyogi, P., Zhang, H.‐J.: ‘Face recognition using Laplacian faces', IEEE Trans. Pattern Anal. Mach. Intell., 2005, 27, (3), pp. 328–340.
  13. 13. Perlibakas, V.: ‘Distance measures for PCA‐based face recognition', Pattern Recognit. Lett., 2004, 25, (6), pp. 711–724.
  14. 14. Kim, K.I., Jung, K., Kim, H.J.: ‘Face recognition using kernel principal component analysis', IEEE Signal Process. Lett., 2002, 9, (2), pp. 40–42.
  15. 15. Bach, F., Jordan, M.: ‘Kernel independent component analysis', J. Mach. Learn. Res., 2003, 3, pp. 1–48.
  16. 16. Ji, S., Ye, J.: ‘Generalized linear discriminant analysis: a unified framework and efficient model selection', IEEE Trans. Neural Netw., 2008, 19, (10), pp. 1768–1782.
  17. 17. Roweis, S., Saul, L.: ‘Nonlinear dimensionality reduction by locally linear embedding', Science, 2000, 290, (5500), pp. 2323–2326.
  18. 18. Xiaofei, H., Partha, N.: ‘Locality preserving projections'. Int. Conf. on Advances in Neural Information Processing Systems (NIPS'03), 2003, pp. 153–161.
  19. 19. Ahonen, T., Hadid, A., Pietikäinen, M.: ‘Face description with local binary patterns: application to face recognition', IEEE Trans. Pattern Anal. Mach. Intell., 2006, 28, (12), pp. 2037–2041.
  20. 20. Suruliandi, A., Meena, K., Reena, R.R.: ‘Local binary pattern and its derivatives for face recognition', IET Comput. Vis., 2012, 6, (5), pp. 480–488.
  21. 21. Zhang, B., Shan, S., Chen, X., Gao, W.: ‘Histogram of Gabor phase patterns (HGPP): a novel object representation approach for face recognition', IEEE Trans. Image Process., 2007, 16, (1), pp. 57–68.
  22. 22. Yang, B., Chen, S.: ‘A comparative study on local binary pattern (LBP) based face recognition: LBP histogram versus LBP image', Neurocomputing, 2013, 22, pp. 620–627.
  23. 23. Zhang, B., Gao, Y., Zhao, S., Liu, J.: ‘Local derivative pattern versus local binary pattern: face recognition with high‐order local pattern descriptor', IEEE Trans. Image Process., 2010, 19, (2), pp. 533–544.
  24. 24. Cevikalp, H., Neamtu, M., Wilkes, M., Barkana, A.: ‘Discriminative common vectors for face recognition', IEEE Trans. Pattern Anal. Mach. Intell., 2005, 27, (1), pp. 4–13.
  25. 25. Jing, X.‐Y., Yao, Y.‐F., Yang, J.‐Y., Zhang, D.: ‘A novel face recognition approach based on kernel discriminative common vectors (KDCV) feature extraction and RBF neural network', Neurocomputing, 2008, 71, pp. 3044–3048.
  26. 26. Wen, Y.: ‘An improved discriminative common vectors and support vector machine based face recognition approach', Expert Syst. Appl., 2012, 39, (4), pp. 4628–4632.
  27. 27. Kokiopoulou, E., Saad, Y.: ‘Orthogonal neighborhood preserving projections: a projection based dimensionality reduction technique', IEEE Trans. Pattern Anal. Mach. Intell., 2007, 29, (12), pp. 2143–2156.
  28. 28. Yanwei, P., Lei, Z., Zhengkai, L., Nenghai, Y., Houqiang, L.: ‘Neighborhood preserving projections (NPP): a novel linear dimension reduction method', Lect. Notes Comput. Sci., 2005, 3644, pp. 117–125.
  29. 29. Prabhu, U., Jingu, H., Marios, S.: ‘Unconstrained pose‐invariant face recognition using 3D generic elastic models', IEEE Trans. Pattern Anal. Mach. Intell., 2011, 33, (10), pp. 1952–1961.
  30. 30. Qiao, L., Chena, S., Tan, X.: ‘Sparsity preserving projections with applications to face recognition', Pattern Recognit., 2010, 43, (1), pp. 331–341.
  31. 31. Jiwen, L., Yap‐Peng, T.: ‘Regularized locality preserving projections and its extensions for face recognition', IEEE Trans. Syst. Man Cybern. B, Cybern., 2009, 40, (3), pp. 1083–4419.
  32. 32. Lu, J., Plataniotis, K.N., Venetsanopoulos, A.N., Stan, Z.L.: ‘Ensemble‐based discriminant learning with boosting for face recognition', IEEE Trans. Neural Netw., 2006, 17, (1), pp. 166–178.
  33. 33. Abeer, A.M., Woo, W.L., Dlay, S.S.: ‘Multi‐linear neighborhood preserving projection for face recognition', Pattern Recognit., 2014, 47, (2), pp. 544–555.
  34. 34. Hassaballah M., Aly S.: ‘Face recognition: challenges, achievements and future directions', IET Computer Vision, 2015, Vol. 9, Iss. 4, pp. 614–626.
  35. 35. Lu, H., Plataniotis, K.N., Venetsanopoulos, A.N.: ‘A survey of multilinear subspace learning for tensor data', Pattern Recognit., 2011, 44, (7), pp. 1540–1551.
  36. 36. Pang, S., Kim, D., Bang, S.Y.: ‘Face membership authentication using SVM classification tree generated by membership‐based LLE data partition', IEEE Trans. Neural Netw., 2005, 16, (2), pp. 436–446.
  37. 37. Zhang, B., Zhang, H., Ge, S.: ‘Face recognition by applying wavelet subband representation and kernel associative memory', IEEE Trans. Neural Netw., 2004, 15, pp. 166–177.
  38. 38. Li, G., Zhang, J., Wang, Y., Freeman, W.J.: ‘Face recognition using a neural network simulating olfactory systems', Lect. Notes Comput. Sci., 2006, 3972, pp. 93–97.
  39. 39. Zhou, W., Pu, X., Zheng, Z.: ‘Parts‐based holistic face recognition with RBF neural networks', Lect. Notes Comput. Sci., 2006, 3972, pp. 110–115.
  40. 40. Park, C., Ki, M., Namkung, J., Paik, J.K.: ‘Multimodal priority verification of face and speech using momentum back‐propagation neural network', Lect. Notes Comput. Sci., 2006, 3972, pp. 140–149.
  41. 41. Bhavin, J.S., Martin, D.L.: ‘Face recognition using localized features based on nonnegative sparse coding', Mach. Vis. Appl., 2007, 18, (2), pp. 107–122.
  42. 42. Jiwen, L., Yap‐Peng, T., Gang, W.: ‘Discriminative multimanifold analysis for face recognition from a single training sample per person', IEEE Trans. Pattern Anal. Mach. Intell., 2013, 35, (1), pp. 39–51.
  43. 43. Singh, R., Vatsa, M., Noore, A.: ‘Face recognition with disguise and single gallery images', Image Vis. Comput., 2009, 72, (3), pp. 245–257.
  44. 44. Gu, W., Xiang, C., Venkatesh, Y., Huang, D., Lin, H.: ‘Facial expression recognition using radial encoding of local Gabor features and classifier synthesis', Pattern Recognit., 2012, 45, (1), pp. 80–91.
  45. 45. Serrano, S., Diego, I., Conde, C., Cabello, E.: ‘Recent advances in face biometrics with Gabor wavelets: a review', Pattern Recognit. Lett., 2010, 31, (5), pp. 372–381.
  46. 46. Shen, L., Bai, L.: ‘A review on Gabor wavelets for face recognition', Pattern Anal. Appl., 2006, 9, (2–3), pp. 273–292.
  47. 47. Zhang, W.C., Shan, S.G., Chen, X.L., Gao, W.: ‘Are Gabor phases really useless for face recognition?', Int. J. Pattern Anal. Appl., 2009, 12, (3), pp. 301–307.
  48. 48. Zhao, S., Gao, Y., Zhang, B.: ‘Gabor feature constrained statistical model for efficient landmark localization and face recognition', Pattern Recognit. Lett., 2009, 30, (10), pp. 922–930.
  49. 49. Kanan, H., Faez, K.: ‘Recognizing faces using adaptively weighted sub‐Gabor array from a single sample image per enrolled subject', Image Vis. Comput., 2010, 28, (3), pp. 438–448.
  50. 50. Yu, L., He, Z., Cao, Q.: ‘Gabor texture representation method for face recognition using the Gamma and generalized Gaussian models', Image Vis. Comput., 2010, 28, (1), pp. 177–187.
  51. 51. Nanni, L., Maio, D.: ‘Weighted sub‐Gabor for face recognition', Pattern Recognit. Lett., 2007, 28, (4), pp. 487–492.
  52. 52. Zhao, Z.‐S., Zhang, L., Zhao, M., Hou, Z.‐G., Zhang, C.‐S.: ‘Gabor face recognition by multi‐channel classifier fusion of supervised kernel manifold learning', Neurocomputing, 2012, 97, pp. 398–404.
  53. 53. Zhang, B., Shan, S., Chen, X., Gao, W.: ‘Histogram of Gabor phase patterns: a novel object representation approach for face recognition', IEEE Trans. Image Process., 2007, 16, (1), pp. 57–68.
  54. 54. Xie, S., Shan, S., Chen, X., Chen, J.: ‘Fusing local patterns of Gabor magnitude and phase for face recognition', IEEE Trans. Image Process., 2010, 19, (5), pp. 1349–1361.
  55. 55. Xu, Y., Li, Z., Pan, J.‐S., Yang, J.‐Y.: ‘Face recognition based on fusion of multi‐resolution Gabor features', Neural Comput. Appl., 2013, 23, (5), pp. 1251–1256.
  56. 56. Chai, Z., Sun, Z., Méndez‐Vázquez, H., He, R., Tan, T.: ‘Gabor ordinal measures for face recognition', IEEE Trans. Inf. Forensics Sec., 2014, 9, (1), pp. 14–26.
  57. 57. Liu, C., Wechsler, H.: ‘Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition', IEEE Trans. Image Process., 2002, 11, (4), pp. 467–476.
  58. 58. Liu, C., Wechsler, H.: ‘Independent component analysis of Gabor features for face recognition', IEEE Trans. Neural Netw., 2003, 14, (4), pp. 919–928.
  59. 59. Ren, C.‐X., Dai, D.‐Q., Li, X., Lai, Z.‐R.: ‘Band‐reweighed Gabor kernel embedding for face image representation and recognition', IEEE Trans. Image Process., 2014, 32, (2), pp. 725–740.
  60. 60. Yang, M., Zhang, L., Shiu, S., Zhang, D.: ‘Gabor feature based robust representation and classi_cation for face recognition with Gabor occlusion dictionary', Pattern Recognit., 2013, 46, (7), pp. 1865–1878.
  61. 61. Serrano, A., de Diego, I., Conde, C., Cabello, E.: ‘Analysis of variance of Gabor filter banks parameters for optimal face recognition', Pattern Recognit. Lett., 2011, 32, (15), pp. 1998–2008.
  62. 62. Perez, C., Cament, L., Castillo, L.E.: ‘Methodological improvement on local Gabor face recognition based on feature selection and enhanced Borda count', Pattern Recognit., 2011, 44, (4), pp. 951–963.
  63. 63. Oh, J., Choi, S., Kim, C., Cho, J., Choi, C.: ‘Selective generation of Gabor features for fast face recognition on mobile devices', Pattern Recognit. Lett., 2013, 34, (13), pp. 1540–1547.
  64. 64. Choi, W.‐P., Tse, S.‐H., Wong, K.‐W., Lam, K.‐M.: ‘Simplified Gabor wavelets for human face recognition', Pattern Recognit., 2008, 41, (3), pp. 1186–1199.
  65. 65. Chen, J., Shan, S., He, C., et al.: ‘WLD: a robust local image descriptor', IEEE Trans. Pattern Anal. Mach. Intell., 2010, 32, (9), pp. 1705–1720.
  66. 66. Jabid, T., Kabir, M., Chae, O.: ‘Facial expression recognition using local directional pattern (LDP)'. IEEE Int. Conf. on Image Processing (ICIP), Hong Kong, China, 2010, pp. 1605–1608.
  67. 67. Bereta, M., Karczmarek, P., Pedrycz, W., Reformat, M.: ‘Local descriptors in application to the aging problem in face recognition', Pattern Recognit., 2013, 46, (10), pp. 2634–2646.
  68. 68. Bereta, M., Pedrycz, W., Reformat, M.: ‘Local descriptors and similarity measures for frontal face recognition: a comparative analysis', J. Vis. Commun. Image Represent., 2013, 24, (8), pp. 1213–1231.
  69. 69. Cao, Z., Yin, Q., Tang, X., Sun, J.: ‘Face recognition with learning‐based descriptor'. IEEE Conf. on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 2010, pp. 2707–2714.
  70. 70. Farajzadeh, N., Faez, K., Pan, G.: ‘Study on the performance of moments as invariant descriptors for practical face recognition systems', IET Comput. Vis., 2011, 4, (4), pp. 272–285.
  71. 71. Lei, Z., Pietikäinen, M., Stan, Z.L.: ‘Learning discriminant face descriptor', IEEE Trans. Pattern Anal. Mach. Intell., 2014, 36, (2), pp. 289–302.
  72. 72. Ira, K.‐S., Ronen, B.: ‘3D face reconstruction from a single image using a single reference face shape', IEEE Trans. Pattern Anal. Mach. Intell., 2011, 33, (2), pp. 394–405.
  73. 73. Bowyer, K., Chang, K.P., Flynn, P.: ‘A survey of approaches and challenges in 3D and multi‐modal 3D + 2D face recognition', Comput. Vis. Image Underst., 2006, 101, (1), pp. 1–15.
  74. 74. Blanz, V., Vetter, T.: ‘Face recognition based on fitting a 3D morphable model', IEEE Trans. Pattern Anal. Mach. Intell., 2003, 25, (9), pp. 1063–1074.
  75. 75. Zhang, L., Samaras, D.: ‘Face recognition from a single training image under arbitrary unknown lighting using spherical harmonics', IEEE Trans. Pattern Anal. Mach. Intell., 2006, 28, (3), pp. 351–363.
  76. 76. Passalis, G., Panagiotis, P., Theoharis, T., Kakadiaris, I.A.: ‘Using facial symmetry to handle pose variations in real‐world 3D face recognition', IEEE Trans. Pattern Anal. Mach. Intell., 2011, 33, (10), pp. 1938–1951.
  77. 77. Lei, Y., Bennamoun, M., El‐Sallam, A.: ‘An efficient 3D face recognition approach based on the fusion of novel local low‐level features', Pattern Recognit., 2013, 46, (1), pp. 24–37.
  78. 78. Drira, H., Ben Amor, B., Srivastava, A., Daoudi, M., Slama, R.: ‘3D face recognition under expressions, occlusions and pose variations', IEEE Trans. Pattern Anal. Mach. Intell., 2013, 35, (9), pp. 2270–2283.
  79. 79. Lei, Y., Bennamoun, M., Hayat, M., Guo, Y.: ‘An efficient 3D face recognition approach using local geometrical signatures', Pattern Recognit., 2014, 47, (2), pp. 509–524.
  80. 80. Andrea, F.A., Michele, N., Daniel, R., Gabriele, S.: ‘2D and 3D face recognition: a survey', Pattern Recognit. Lett., 2007, 28, (14), pp. 1885–1906.
  81. 81. Cai, L., Da, F.: ‘Estimating inter‐personal deformation with multi‐scale modelling between expression for three‐dimensional face recognition', IET Comput. Vis., 2012, 6, (5), pp. 468–479.
  82. 82. Chen, Q., Yao, J., Cham, W.K.: ‘3D model‐based pose invariant face recognition from multiple views', IET Comput. Vis., 2007, 1, (1), pp. 25–34.
  83. 83. Lu, X., Jain, A.K.: ‘Matching 2.5D face scans to 3D models', IEEE Trans. Pattern Anal. Mach. Intell., 2006, 28, (1), pp. 31–43.
  84. 84. Al‐Osaimi, F., Bennamoun, M., Mian, A.: ‘An expression deformation approach to non‐rigid 3D face recognition', Int. J. Comput. Vis., 2009, 81, (3), pp. 302–316.
  85. 85. Bronstein, A.M., Bronstein, M.M., Kimmel, R.: ‘Three‐dimensional face recognition', Int. J. Comput. Vis., 2005, 64, (1), pp. 5–30.
  86. 86. Chang, K., Bowyer, K., Flynn, P.: ‘An evaluation of multimodal 2D + 3D face biometrics', IEEE Trans. Pattern Anal. Mach. Intell., 2005, 27, (4), pp. 619–624.
  87. 87. Marin‐Jimenez, M., Zisserman, A., Eichner, M., Ferrari, V.: ‘Detecting people looking at each other in videos', Int. J. Comput. Vis., 2014, 106, (3), pp. 282–296.
  88. 88. O'Toole, A., Harms, J., Snow, S., Hurst, D., Pappas, M., Abdi, H.: ‘A video database of moving faces and people', IEEE Trans. Pattern Anal. Mach. Intell., 2005, 27, (5), pp. 812–816.
  89. 89. Poh, N., Chan, C.H., Kittler, J., et al.: ‘An evaluation of video‐to‐video face verification', IEEE Trans. Inf. Forensics Sec., 2010, 24, (8), pp. 781–801.
  90. 90. Best‐Rowden, L., Klare, B., Klontz, J., Jain, A.: ‘Video‐to‐video face matching: establishing a baseline for unconstrained face recognition'. Biometrics: Theory, Applications and Systems (BTAS), Washington DC, USA, 2013.
  91. 91. Barr, J., Boyer, K., Flynn, P., Biswas, S.: ‘Face recognition from video: a review', Int. J. Pattern Recognit. Artif. Intell., 2012, 26, (5), pp. 53–74.
  92. 92. Zhang, Y., Martinez, A.: ‘A weighted probabilistic approach to face recognition from multiple images and video sequences', Image Vis. Comput., 2006, 24, (6), pp. 626–638.
  93. 93. Cevikalp, H., Triggs, B.: ‘Face recognition based on image sets'. IEEE Int. Conf. Computer Vision and Pattern Recognition (CVPR'10), San Francisco, CA, USA, 2010, pp. 2567–2573.
  94. 94. Cui, Z., Chang, H., Shan, S., Ma, B., Chen, X.: ‘Joint sparse representation for video‐based face recognition', Neurocomputing, 2014, 135, (5), pp. 306–312.
  95. 95. Anil, K., Arun, A., Karthik, N.: ‘Introduction to biometrics' (Springer, New York, USA, 2011).
  96. 96. Phillips, P.J., Flynn, P.J., Scruggs, T., et al.: ‘Overview of the face recognition grand challenge'. IEEE Conf. Computer Vision and Pattern Recognition, San Diego, CA, USA, 2005, pp. 947–954.
  97. 97. Stan, Z.L., Jain, A.: ‘Handbook of face recognition’ (Springer, New York, USA, 2005).
  98. 98. Kamgar‐Parsi, B., Lawson, W., Kamgar‐Parsi, B.: ‘Toward development of a face recognition system for watchlist surveillance', IEEE Trans. Pattern Anal. Mach. Intell., 2011, 33, (10), pp. 1925–1937.
  99. 99. Ortiz, E., Becker, B.: ‘Face recognition for web‐scale datasets', Comput. Vis. Image Underst., 2013, 108, pp. 153–170.
  100. 100. Ozkan, D., Duygulu, P.: ‘Interesting faces: a graph‐based approach for finding people in news', Pattern Recognit., 2010, 43, (5), pp. 1717–1735.
  101. 101. Pinto, N., Stone, Z., Zickler, T., Cox, D.: ‘Scaling up biologically‐inspired computer vision: a case study in unconstrained face recognition on Facebook'. IEEE Computer Vision and Pattern Recognition, Workshop on Biologically Consistent Vision, Colorado Springs, USA, 2011, pp. 35–42.
  102. 102. Best‐Rowden, L., Han, H., Otto, C., Klare, B., Jain, A.: ‘Unconstrained face recognition: identifying a person of interest from a media collection'. Technical Report, Technical Report MSU‐CSE‐14‐1, Michigan State University, 2014.
  103. 103. Introna, L., Wood, D.: ‘Picturing algorithmic surveillance: the politics of facial recognition systems', Surveillance Soc., 2004, 2, (2/3), pp. 177–198.
  104. 104. Han, H., Klare, B., Bonnen, K., Jain, A.: ‘Matching composite sketches to face photos: a component‐based approach', IEEE Trans. Inf. Forensics Sec., 2013, 8, (1), pp. 191–204.
  105. 105. Gao, X., Zhong, J., Tian, C.: ‘Sketch synthesis algorithm based on E‐Hmm and selective ensemble', IEEE Trans. Circuits Syst. Video Technol., 2008, 18, (4), pp. 487–496.
  106. 106. Tang, X., Wang, X.: ‘Face photo‐sketch synthesis and recognition', IEEE Trans. Pattern Anal. Mach. Intell., 2009, 31, (11), pp. 1955–1967.
  107. 107. Tang, X., Wang, X.: ‘Face sketch recognition', IEEE Trans. Circuits Syst. Video Technol., 2004, 14, (1), pp. 50–57.
  108. 108. Brendan, F.K., Zhifeng, L., Anil, K.J.: ‘Matching forensic sketches to mug shot photos', IEEE Trans. Pattern Anal. Mach. Intell., 2011, 33, (3), pp. 639–646.
  109. 109. Klum, S., Han, H., Jain, A., Klare, B.: ‘Sketch based face recognition: forensic vs. composite sketches'. Sixth IAPR Int. Conf. Biometrics (ICB'13), Madrid, Spain, 2013, pp. 1–8.
  110. 110. Jain, A., Klare, B., Park, U.: ‘Face matching and retrieval in forensics applications', IEEE Multimedia, 2012, 19, (1), pp. 20–28.
  111. 111. Jain, A.K., Klare, B., Park, U.: ‘Face recognition: some challenges in forensics'. IEEE Int. Conf. on Automatic Face Gesture Recognition and Workshops (FG 2011), Santa Barbara, CA, USA, 2011, pp. 726–733.
  112. 112. Erdogmus, N., Marcel, S.: ‘Spoofing in 2D face recognition with 3D masks and anti‐spoofing with Kinect'. The IEEE Sixth Int. Conf. Biometrics: Theory, Applications and Systems (BTAS 2013), Washington, DC, USA, 2013, pp. 1–6.
  113. 113. Marcel, S., Nixon, M., Li, S.: ‘Handbook of biometric anti‐spoofing: trusted biometrics under spoofing attacks' (Springer, New York, USA, 2014).
  114. 114. Määttä, J., Hadid, A., Pietikäinen, M.: ‘Face spoofing detection from single images using texture and local shape analysis', IET Biometrics, 2012, 1, (1), pp. 3–10.
  115. 115. Chingovska, I., Anjos, A., Marcel, S.: ‘On the effectiveness of local binary patterns in face anti‐spoofing'. IEEE Int. Conf. Biometrics Special Interest Group (BIOSIG), Darmstadt, Germany, 2012, pp. 1–7.
  116. 116. de Freitas Pereira, T., Anjos, A., De Martino, J., Marcel, S.: ‘LBP‐TOP based countermeasure against face spoofing attacks'. Int. Workshop on Computer Vision with Local Binary Pattern Variants (ACCV), Daejeon, Korea, 2012, pp. 121–132.
  117. 117. Zhang, Z., Yan, J., Liu, S., Lei, Z., Yi, D., Li, S.Z.: ‘A face antispoofing database with diverse attacks'. Fifth IAPR Int. Conf. on Biometrics (ICB), New Delhi, India, 2012, pp. 26–31.
  118. 118. Chingovska, I., Rabello dos Anjos, A., Marcel, S.: ‘Biometrics evaluation under spoofing attacks', IEEE Trans. Inf. Forensics Sec., 2014, 9, (12), pp. 2264–2276.
  119. 119. Hadid, A.: ‘Face biometrics under spoofing attacks: vulnerabilities, countermeasures, open issues, and research directions'. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 2014, pp. 113–118.
  120. 120. Orubeondo, A.: ‘A New Face for Security', InfoWorld.com.
