Open access peer-reviewed chapter

Multi-Features Assisted Age Invariant Face Recognition and Retrieval Using CNN with Scale Invariant Heat Kernel Signature

Written By

Kishore Kumar Kamarajugadda and Movva Pavani

Submitted: 03 April 2022 Reviewed: 14 April 2022 Published: 11 June 2022

DOI: 10.5772/intechopen.104944

From the Annual Volume

Artificial Intelligence Annual Volume 2022

Edited by Marco Antonio Aceves Fernandez and Carlos M. Travieso-Gonzalez


Abstract

Face recognition across aging has emerged as a significant research area owing to applications such as law enforcement and security. However, matching human faces across large age gaps remains a bottleneck, because the aging process changes facial appearance. To mitigate this inconsistency, this chapter offers five sequential processes: Image Quality Evaluation (IQE), Preprocessing, Pose Normalization, Feature Extraction and Fusion, and Feature Recognition and Retrieval. First, our method performs the IQE process to assess the quality of each image, which increases the performance of our Age Invariant Face Recognition (AIFR). In preprocessing, we carry out two processes, Illumination Normalization and Noise Removal, which yield high accuracy in face recognition. Feature extraction adopts two descriptors: a Convolutional Neural Network (CNN) and the Scale Invariant Heat Kernel Signature (SIHKS). CNN extracts texture features, and SIHKS extracts shape and demographic features; these features play a vital role in improving the accuracy of AIFR and retrieval. Feature fusion is established using the Canonical Correlation Analysis (CCA) algorithm. Our work utilizes a Support Vector Machine (SVM) to recognize and retrieve images. We implement these processes on the FG-NET database using the MATLAB R2017b tool. Finally, we validate the performance of our work using seven performance metrics: Accuracy, Recall, Rank-1 Score, Precision, F-Score, Recognition Rate, and Computation Time.

Keywords

  • age-invariant face recognition
  • image quality evaluation
  • pose normalization
  • multiple feature extraction
  • recognition and retrieval

1. Introduction

As one of the most significant topics in computer vision and pattern recognition, face recognition has attracted much attention from both academia and industry over recent decades [1, 2]. With the evolution of neural networks, general face recognition technology has emerged as a noteworthy area among researchers [3, 4, 5]. However, identifying face images across a widespread range of ages remains a shortcoming, because human facial appearance changes with the aging process [6, 7]. In order to recognize human faces at different ages, the Age-Invariant Face Recognition (AIFR) approach has been developed [8]. AIFR recognizes faces using facial features extracted from human images and uses three different kinds of models: generative, discriminative [9], and deep learning methods [10]. Generative approaches are based on age progression methods that convert the probe image to the same age as the gallery image [11]. However, generative schemes have several shortcomings [12]: optimizing recognition performance in a generative model is not an easy task, and estimating accurate results is highly difficult since the model cannot handle the aging impact. Discriminative approaches [13] were introduced to resolve the discrepancies of generative schemes [14]; they develop feature matching using local descriptors [15] in AIFR. Multiple-descriptor-based AIFR was introduced to extract features from the periocular region [16], using two descriptors: the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF).

In order to achieve better results in AIFR, deep learning methods have been integrated with discriminative approaches [17]. In deep learning, the Convolutional Neural Network (CNN) algorithm plays a vital role in recognizing faces across different ages [18]. Large age gap verification is performed by injecting features into deep networks [19]; here, a deep CNN recognizes faces based on texture features. An aging-model-based face recognition method for images of different ages was also introduced under the deep learning paradigm [20], in which a CNN descriptor matches images across ages.

From the aforesaid studies, we determine that many issues remain in recognizing faces as they age. These issues are discussed as follows:

  • Preprocessing is ineffective in most existing work, which reduces system performance.

  • Pose normalization, which is highly significant, is not considered in existing AIFR, even though AIFR datasets such as MORPH and FG-NET contain images with different poses.

  • Existing feature extraction procedures fail to extract features from important regions, which tends to reduce the recognition rate.

  • Face recognition algorithms are not yet able to handle large datasets, which reduces accuracy.

These problems impose limits on present AIFR systems and complicate the recognition and retrieval task, especially across images of different ages.

1.1 Research contribution

In order to tackle the abovementioned issues, our work contributes the following processes:

  • To avoid wasted time in preprocessing, we first execute a novel Image Quality Evaluation (IQE) method, which estimates an Image Quality Metric (IQM) for each image. Only if the IQM value is below the Image Quality Threshold (IQT) is preprocessing performed for that image; otherwise the image goes directly to the pose normalization process.

  • Preprocessing is performed to reduce uncertainty in the subsequent face recognition processes: feature extraction, recognition, and retrieval. For this purpose, we implement two processes, illumination normalization and noise removal. Illumination normalization adopts the DGC-CLAHE algorithm, and noise removal adopts the ASBF algorithm.

  • Pose normalization is significant for diminishing difficulties in feature extraction and thus enhances recognition and retrieval performance.

  • Our work extracts features from three regions, the periocular region, nose, and mouth, in order to increase the recognition rate. Two descriptors are utilized, CNN and SIHKS, which perform better than existing descriptors such as LBP and SIFT.

  • In order to reduce recognition and retrieval time, we fuse the extracted features using CCA.

  • Recognition and retrieval are performed through the SVM algorithm, which performs well even with unstructured and semistructured data such as text, images, and trees.

1.2 Research outline

The outline of this chapter is summarized as follows: Section 2 deliberates state-of-the-art works in AIFR along with their limitations. Section 3 exemplifies problems occurring in previous AIFR works. Section 4 explains our proposed work together with our proposed algorithms. Section 5 illustrates numerical results obtained from our simulation environment and compares them with existing methods. Finally, Section 6 concludes our contribution and comments on future work.


2. Related work

This section discusses the state-of-the-art works related to AIFR along with their limitations. We discuss works that comprise preprocessing, feature extraction, recognition, and retrieval processes.

Kishore et al. [21] suggested periocular-region-based AIFR using the Local Binary Pattern. Three sequential processes are executed to recognize faces: preprocessing, feature extraction, and classification. In preprocessing, enhancement and denoising are applied to each facial image. The Local Binary Pattern (LBP) descriptor [22] extracts features from the periocular region [23] of the given face image; the periocular region contains the eyes, eyelashes, and eyebrows. After feature extraction, the chi-square distance is used as the classifier. The chi-square distance does not recognize faces accurately, since it is highly sensitive to sample size.

Nanni et al. [24] introduced an ensemble of texture descriptors and preprocessing techniques to recognize images effectually. Four face recognition processes are performed: preprocessing, feature extraction, feature transform, and classification. Preprocessing executes three techniques: adaptive single index retinex (AR) to enhance scene detail and color in darker areas, and anisotropic smoothing and difference of Gaussian (DoG) algorithms to normalize the illumination field. Features are extracted using two descriptors, Patterns of Oriented Edge Magnitudes (POEM) and Monogenic Binary Coding (MBC). Finally, different distance functions are used to recognize the face. The accuracy of face recognition was very low due to the poor feature extraction mechanism. Chi et al. [25] offered a temporal non-volume-preserving approach to facial age progression and AIFR. In preprocessing, the face region is detected and aligned based on the fixed positions of the eyes and mouth corners; a deep CNN then maps the texture features of the test image to the trained images in order to verify them. Here, the preprocessing step does not perform effective processes such as normalization and noise removal, which tends to reduce system performance.

Bor et al. [26] introduced Cross-Age Reference Coding (CARC) for AIFR. Initially, a face detection algorithm detects the face region in the image, and features are extracted from the detected region using a high-dimensional LBP algorithm, which extracts 59 local features. Principal Component Analysis (PCA) is used to reduce the dimensionality of the extracted features; CARC then recognizes the face using a local feature transformation. More analysis is required on feature extraction, since it plays a vital role in AIFR. Yali et al. [27] pointed out a distance-metric-optimization-driven CNN for AIFR. Two models are integrated, feature learning and distance metric learning, through a CNN whose parameters are optimized using a network propagation algorithm. The CNN learns features in the convolution layers and recognizes faces using the distance metric; recognized images are then retrieved. Herein, the recognition rate was very low due to ineffective feature extraction.

Pournami et al. [28] offered a deep learning and multiclass SVM algorithm to recognize faces. Preprocessing, consisting of image resizing, was performed to increase face recognition accuracy. A CNN feature descriptor extracts features from the given image: the fully connected layer extracts the features, which are then given as input to the multiclass SVM classifier. Because only resizing was performed in preprocessing, more noise remained in the extracted features. Garima et al. [29] suggested techniques for face verification across age progression with large age gaps. Initially, image normalization is performed: the RGB image is converted to grayscale, and the image is rotated so that the eyes are aligned horizontally. Face features are extracted using the Center-Symmetric Local Binary Pattern (CSLBP) algorithm, and a weighted K-Nearest Neighbor (K-NN) algorithm recognizes the face from the extracted features. K-NN does not perform well on large datasets and thus reduces face recognition accuracy. Saroj et al. [30] pointed out a pyramid binary pattern for age-invariant face verification. The pyramid binary pattern extracts texture features, which are given as input to PCA in order to reduce their dimensionality; classification is then performed with an SVM. Only texture features were extracted to classify faces across ages, which reduces accuracy, since the dataset contains images with large age gaps.

Mrudula et al. [31] offered face recognition across aging using GLBP features. Preprocessing performs three sequential processes: image resizing, RGB-to-gray conversion, and illumination normalization. A combined feature descriptor, GLBP, formed from the LBP and Gabor descriptors, extracts features from the given image. During classification, PCA reduces feature dimensionality and a K-NN algorithm recognizes the face across aging. The GLBP descriptor introduces a high false-positive rate in age-invariant face recognition. Zhen et al. [32] pointed out local polynomial contrast binary patterns for face recognition. Polynomial filters extract attributes from the given image, and the LBP descriptor extracts texture. The Fisher Linear Discriminant (FLD) algorithm reduces the dimension of the extracted features, which are then matched against the training set using a nearest neighbor classifier. The nearest neighbor classifier consumes more time to classify an image, since all the work is performed at the testing stage.

Mohanraj et al. [33] suggested an ensemble of CNNs for face recognition in order to resolve aging, pose variation, and low-resolution problems. Preprocessing resizes the given image; features are then extracted using three different CNNs, concatenated, and given to a random forest classifier to predict the person. Noise removal was not performed in preprocessing, which reduces face recognition accuracy. Rupali et al. [34] introduced component-based face recognition considering three face components: nose, lips, and ears. Preprocessing resizes the image, features are extracted using a CNN, and the features from the nose and face regions are passed to the FLD algorithm for dimensionality reduction and then to a KNN classifier to predict the identity. In KNN, choosing the initial K value is complex, which leads to ineffective results. Venkata et al. [35] pointed out real-time face recognition using deep learning and LBP. Preprocessing resizes the given image, LBP extracts features from it, and the extracted features are given to a CNN, which weights each feature in order to estimate the matched face among the training images. Only texture features were extracted to recognize faces across aging, which tends to reduce the recognition rate.

Mohsen et al. [36] offered age-based human face image retrieval using Zernike moments. The Zernike moment, via the Zernike Basis Function (ZBF), captures both local and global features from the face image, and a Multi-Layer Perceptron (MLP) algorithm recognizes the age of the training image. Accurate results were not obtained with the MLP classifier, reducing the recognition rate. Danbei et al. [37] offered a face aging synthesis application based on feature fusion. Initially, face detection is performed and feature points are positioned using triangulation and affine transformations. Facial texture features are then extracted and fused in order to recognize the face against the training images effectually. More analysis of the recognition stage is required, since the work only describes the pipeline up to the feature fusion process.


3. Problem statement

Kishore et al. [38] offered Hybrid Local Descriptor (HLD) and LDA-assisted K-Nearest Neighbor classification in AIFR. A Gaussian filter was used to reduce noise, which degrades information: it removes fine details of the image, and the resultant image is blurred. WLD-based feature extraction loses information due to its lack of pixel consideration, and K-NN-based classification requires more time due to the absence of a training phase, while finding a good similarity measure is also difficult. Muhammad et al. [39] introduced Demographic Features (DF)-assisted AIFR and retrieval. Feature extraction takes more time, since each feature is extracted by three individual CNNs, and the position and orientation of the object are ignored in the hidden layers of the CNN, resulting in less accurate feature extraction and recognition. Chenfei et al. [40] pointed out Coupled Auto Encoder (CAN)-based feature extraction in AIFR. Herein, feature extraction was not effective due to the lack of texture- and shape-oriented features; in CAN, data relationships are not considered, which affects classification results, and weight computation is also very difficult. Huiling et al. [41] introduced Identity Inference Model (IIM)-based age subspace learning to recognize images in AIFR. wLBP-based feature extraction results in less accuracy, since the extracted features contain more noise due to the absence of a noise removal process. Fahad et al. [42] introduced Composite Temporal Spatio (CTS) modeling in order to recognize images in AIFR. Here, preprocessing is required to improve accuracy, since the image database contains illumination and pose variations, and Naïve Bayes-based classification results are always biased, since it does not rely on class conditional dependency.


4. Proposed work

This section describes our proposed method in detail along with the algorithms utilized.

4.1 System overview

Our Multi-Feature-assisted AIFR (MF-AIFR) method tackles the problems present in previous AIFR works. For this purpose, MF-AIFR establishes five consecutive processes, IQE, Preprocessing, Pose Normalization, Feature Extraction and Fusion, and Feature Recognition and Retrieval, as depicted in Figure 1. The novelty of our work lies in the IQE method, since previous AIFR methods do not concentrate on quality evaluation. To save time, MF-AIFR performs IQE so that only images that do not satisfy the IQT are given to the preprocessing step; the rest pass directly to pose normalization. During preprocessing, MF-AIFR performs two processes: illumination normalization using DGC-CLAHE and noise removal using the ASBF algorithm. Pose normalization is executed using the EA-AT algorithm to enhance feature extraction performance. Multiple features are extracted from three different regions of the face image, the periocular region, mouth, and nose, in order to enhance accuracy. Two descriptors are executed: CNN for texture features and SIHKS for demographic and shape features, where the demographic features comprise age, gender, and race. The extracted features are fused using CCA in order to simplify the recognition process. For recognition and retrieval, MF-AIFR pursues the SVM algorithm, which has high scalability compared with other machine learning algorithms.

Figure 1.

Architecture for proposed work.

Figure 1 illustrates the architecture of our proposed work. The processes depicted in the architecture are described briefly in the upcoming sections.

4.1.1 Image quality evaluation (IQE)

Reducing computation time in AIFR and retrieval is noteworthy for achieving efficient performance. For this purpose, MF-AIFR performs a novel IQE, which estimates an IQM for each image. The IQM comprises the following metrics: Brightness Evaluation, Region Contrast Evaluation, Edge Blur Evaluation, Color Quality Evaluation, and Noise Evaluation.

These metrics are designated as follows:

Brightness Evaluation: It quantifies the degree of image brightness for easy viewing. It can be measured as follows:

$$I_{BD}=\frac{1}{N}\sum_{k_1=1}^{N}\sum_{k_2=0}^{255}h_{k_1}(k_2)\times (k_2)^{s}\qquad(E1)$$

Where $h_{k_1}(k_2)$ represents the number of pixels with gray value $k_2$ in the histogram of the $k_1$th image block, $s$ is a parameter with $s=3$, and $N$ indicates the number of sample blocks.

Region Contrast Evaluation: This metric is used to distinguish differences between image regions effectually. It can be measured through the expression below:

$$I_{CD}=\frac{1}{N}\sum_{k=1}^{N}\frac{I_{k}^{\max}-I_{k}^{\min}}{I_{k}^{\max}+I_{k}^{\min}}\qquad(E2)$$

Where $I_k^{\max}$ and $I_k^{\min}$ represent the maximum and minimum gray values of the $k$th image block.

Edge Blur Evaluation: It defines the clearness of image edges for easy analysis. This metric can be measured as follows:

$$I_{EBD}=\max_{I\in\Omega}\arctan\left(\frac{\left|I(i_1,j_1)-I(i_2,j_2)\right|}{Wid_{12}}\right)\qquad(E3)$$

Where $\Omega$ denotes the set of image blocks, $I(i_1,j_1)$ and $I(i_2,j_2)$ represent the gray values at the edge spread points $(i_1,j_1)$ and $(i_2,j_2)$, and $Wid_{12}$ represents the width of the edge spread between these points.

Color Quality Evaluation: This metric defines image quality in terms of color. It can be measured using the following expression:

$$I_{CQD}=\frac{1}{l}\sum_{i=1}^{l}\sigma_i^{C}\qquad(E4)$$

Where $\sigma_i^{C}$ represents the standard deviation of the $i$th component intensity in the HSV color space, and $l$ indicates the number of channels, $l=3$.

Noise Evaluation: This metric measures the noise present in the image. It can be measured using the following expression:

$$I_{ND}=\sigma_n\left(I_{BD}\right)\qquad(E5)$$

Where $\sigma_n$ represents the noise standard deviation of the image block, estimated using the expression below:

$$\sigma_n=a\times\lg\left(b\times\left(255-I_{BD}\right)\right)\times\min_{I}\left(\sigma_i\right)\qquad(E6)$$

Where a and b are constant values.

Using the above parameters, we estimate the IQM for each image as follows:

$$IQM=I_{BD}+I_{CQD}+I_{CD}-I_{ND}+I_{EBD}\qquad(E7)$$

After computing the IQM, this value is compared with the IQT in order to decide whether the next process for the given image is preprocessing or pose normalization:

$$N_{ep,i}=\begin{cases}\text{Pose Normalization}, & IQM_i > IQT\\ \text{Preprocessing}, & IQM_i < IQT\end{cases}\qquad(E8)$$

Where $N_{ep,i}$ represents the next process for image $i$. Using the above condition, we select the next process for each image and thus avoid wasting time preprocessing every image; this also reduces the total computation time for recognition and retrieval. A minimal sketch of this routing step follows.
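
To make this routing concrete, here is a minimal Python sketch of the IQE gate, using simplified stand-ins for Eqs. (E1)-(E7); the threshold value and all helper names are our own illustrative assumptions, not values published in this chapter.

```python
import numpy as np

IQT = 0.5  # assumed threshold; the chapter does not publish its IQT value

def image_quality_metric(img: np.ndarray) -> float:
    """Simplified stand-ins for Eqs. (E1)-(E7); the real metrics work block-wise."""
    i_bd = img.mean() / 255.0                      # brightness, cf. (E1)
    i_cd = img.std() / 128.0                       # region contrast, cf. (E2)
    gy, gx = np.gradient(img.astype(float))
    i_ebd = np.abs(gx).mean() / 255.0              # edge clearness, cf. (E3)
    i_cqd = 0.5                                    # color quality, cf. (E4); fixed for grayscale
    i_nd = 0.1                                     # noise estimate, cf. (E5)-(E6)
    return i_bd + i_cqd + i_cd - i_nd + i_ebd      # Eq. (E7)

def next_process(img: np.ndarray) -> str:
    """Eq. (E8): only low-quality images are routed to preprocessing."""
    return "pose_normalization" if image_quality_metric(img) > IQT else "preprocessing"

img = (np.random.rand(128, 128) * 255).astype(np.uint8)
print(next_process(img))
```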

4.1.2 Preprocessing

MF-AIFR performs preprocessing in order to enhance the recognition rate. For this purpose, we perform two processes in preprocessing: illumination normalization and noise removal.

4.1.2.1 Illumination normalization

Illumination normalization is performed to enhance image quality and to avoid negative illumination effects in the image. MF-AIFR adopts the DGC-CLAHE algorithm for illumination normalization. The proposed DGC-CLAHE performs better than the existing CLAHE method: it adaptively enhances both the luminance and the contrast of the image. Our DGC-CLAHE algorithm performs dual gamma correction, which enhances the dark areas of the image, and adaptively sets the clip point of each image depending on the dynamic range of each block. The first gamma correction boosts the overall luminance of the image block; the second gamma correction adjusts the contrast in very dark regions in order to avoid overenhancement of bright regions.

Initially, DGC-CLAHE sets the clip point adaptively based on the dynamic range, which can be expressed as follows:

$$\beta=\frac{p}{dr}\left(1+\tau\,\frac{g_{max}}{R}+\frac{\alpha}{100}\cdot\frac{\sigma}{Av+c}\right)\qquad(E9)$$

Where $p$ represents the number of pixels in each block and $dr$ denotes the dynamic range of the block. $\tau$ and $\alpha$ are constant parameters used to control the weights of the dynamic range and entropy. $\sigma$ indicates the standard deviation of the block, $Av$ its mean value, and $c$ is a small value used to avoid division by 0. $R$ represents the entire dynamic range of the image, and $g_{max}$ the maximum pixel value of the image. After the clip points are set, the dual gamma corrections are performed.

DGC-CLAHE defines an enhancement weight for the global gray levels of the blocks via the first gamma correction ($\gamma_1$), which can be expressed as follows:

$$W_e=\left(\frac{Gr_{max}}{Gr_{ref}}\right)^{1-\gamma_1}\qquad(E10)$$

Where $Gr_{max}$ indicates the maximum gray value of the image and $Gr_{ref}$ the reference gray value. The first ($\gamma_1$) and second ($\gamma_2$) gamma corrections are represented as follows:

$$\gamma_1=\frac{\ln\left(o+cdf_{\omega}(Gr_l)\right)}{8}\qquad(E11)$$
$$\gamma_2=\frac{1+cdf_{\omega}(Gr_l)}{2}\qquad(E12)$$

Where $o$ is a constant, $cdf_{\omega}$ is the weighted cumulative distribution function, and $Gr_l$ represents the gray level of the image. $\gamma_1$ and $\gamma_2$ increase with $Gr_l$ in order to avoid under-enhancement in darker regions of the image. Normalization based on these two gamma corrections gives better results on images with nonuniform illumination and thus enhances the image effectually, which in turn increases the recognition rate. A rough sketch of this step follows.
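
As a rough illustration of Eqs. (E9)-(E12), the sketch below applies the adaptive clip point and the two gamma curves to a single grayscale block. The constants (`tau`, `alpha`, `o`) and the final combination of the two gamma mappings are our own assumptions for demonstration; the published DGC-CLAHE operates per tile with interpolation, as in standard CLAHE.

```python
import numpy as np

def dgc_clahe_block(block, tau=0.1, alpha=40.0, o=1.5, c=1e-6):
    """Illustrative per-block dual gamma correction (Eqs. E9-E12)."""
    p = block.size
    dr = float(block.max()) - float(block.min()) + 1.0
    R, g_max = 256.0, 255.0
    # Adaptive clip point (E9)
    beta = (p / dr) * (1.0 + tau * g_max / R
                       + (alpha / 100.0) * block.std() / (block.mean() + c))
    # Clipped-histogram CDF, used as cdf_w in (E11)-(E12)
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    cdf = np.cumsum(np.minimum(hist, beta))
    cdf = cdf / cdf[-1]
    gamma1 = np.log(o + cdf) / 8.0          # first gamma correction (E11)
    gamma2 = (1.0 + cdf) / 2.0              # second gamma correction (E12)
    levels = np.arange(256) / 255.0
    # Assumed combination: global luminance boost, then dark-region contrast
    lut = 255.0 * np.power(levels + 1e-6, 1.0 - gamma1) \
                * np.power(levels + 1e-6, gamma2 - 1.0)
    return np.clip(lut[block.astype(np.uint8)], 0, 255).astype(np.uint8)

block = (np.random.rand(64, 64) * 180).astype(np.uint8)  # a dark-ish test block
print(dgc_clahe_block(block).mean() >= block.mean())      # luminance is boosted
```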

4.1.2.2 Noise removal

Noise removal is a substantial process in face recognition for enhancing recognition accuracy. For this purpose, MF-AIFR utilizes the ASBF algorithm to remove noise from the given image. The proposed ASBF algorithm preserves the fine details of the image while removing noise and also sharpens the image. ASBF removes universal noises such as impulse and Gaussian noise.

In the ASBF algorithm, noisy pixels are detected using the Sorted Quadrant Median Vector (SQMV), which incorporates significant features such as edge and texture information. The ASBF algorithm executes three sequential processes, as depicted in Figure 2. Initially, an Adaptive Median Filter (AMF) identifies the corrupted pixels in the image. Secondly, the image edges are preserved using an edge detector, which accurately predicts edge existence in the current window. A noise detector then classifies the noise as impulse or Gaussian. The Switching Bilateral Filter (SBF) contains a range filter, which switches between impulse and Gaussian modes based on the noise detector result.

Figure 2.

ASBF function blocks.

4.1.2.2.1 AMF

Existing noise filtering algorithms utilize a constant window size such as 3×3, which may fail to distinguish noisy and noise-free pixels accurately and thus blurs the output image. To avoid this drawback, our AMF adaptively enlarges the window size based on the number of noisy pixels present in the given image.

4.1.2.2.2 Noise detector

The noise detector predicts whether a pixel is filtered by SBF Gaussian ($SBF_g$) or SBF impulse ($SBF_i$). Let $S_1$ and $S_2$ be binary control signals, where $S_1$ is generated by the AMF and $S_2$ by the noise detector. The filtered image is then represented as follows:

$$f_p=\begin{cases}SBF_g, & S_1=1\ \wedge\ S_2=1\\ SBF_i, & S_1=1\ \wedge\ S_2=0\\ nf_p, & S_1=0\ \wedge\ S_2=0\end{cases}\qquad(E13)$$

Finally, pixels with Gaussian and impulse noise are classified based on the conditions discussed above, and these outputs are given as input to the SBF with SQMV.

SBF with SQMV: SBF switches its mode based on the classification results from the noise detector. The SQMV scheme is used to predict the optimum median effectively even in larger windows. SQMV detects a noisy pixel by estimating the difference between the current pixel and the reference median pixel; if the difference is large, the current pixel is considered noisy. Let $\rho_{i,j}$ be the current pixel and $\rho_{i+s,j+t}$ the pixels in a $(2N+1)\times(2N+1)$ window surrounding $\rho_{i,j}$.

The output from the SBF filter is expressed as follows:

$$O_{i,j}=\frac{\sum_{s=-N}^{N}\sum_{t=-N}^{N}W_{g}(s,t)\,W_{sr}(s,t)\,\rho_{i+s,j+t}}{\sum_{s=-N}^{N}\sum_{t=-N}^{N}W_{g}(s,t)\,W_{sr}(s,t)}\qquad(E14)$$

Where

$$W_{g}(s,t)=e^{-\frac{s^{2}+t^{2}}{2\sigma_{s}^{2}}}\qquad(E15)$$

$$W_{sr}(s,t)=e^{-\frac{\left(I-\rho_{i+s,j+t}\right)^{2}}{2\sigma_{R}^{2}}}\qquad(E16)$$

Where $I$ represents the reference median for impulse noise ($S_1=1$ and $S_2=1$) and $I=\rho_{i,j}$ for Gaussian noise ($S_1=1$ and $S_2=0$).

From the above discussion, we conclude that the proposed ASBF removes not only Gaussian noise but also impulse noise while preserving the fine details and edges of the image. Performing preprocessing in this way increases AIFR accuracy. A small sketch of the switching filter follows.
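
To make the switching behavior concrete, here is a small Python sketch of one SBF output pixel following Eqs. (E14)-(E16), with the AMF and noise detector replaced by a boolean verdict; the window size and sigma values are illustrative assumptions.

```python
import numpy as np

def switching_bilateral_pixel(win: np.ndarray, impulse: bool,
                              sigma_s=1.5, sigma_r=25.0) -> float:
    """One output pixel of the SBF (Eqs. E14-E16).
    `win` is a (2N+1)x(2N+1) window; `impulse` is the noise-detector verdict."""
    n = win.shape[0] // 2
    center = win[n, n]
    # Reference intensity I: window median for impulse noise, else the center pixel
    ref = np.median(win) if impulse else center
    s, t = np.mgrid[-n:n + 1, -n:n + 1]
    w_g = np.exp(-(s**2 + t**2) / (2 * sigma_s**2))        # spatial weight (E15)
    w_sr = np.exp(-((ref - win)**2) / (2 * sigma_r**2))     # range weight (E16)
    return float(np.sum(w_g * w_sr * win) / np.sum(w_g * w_sr))  # (E14)

# Usage on a toy window: the bright center pixel is impulse-corrupted
window = np.array([[10., 12, 11], [13, 255, 12], [11, 10, 13]])
print(switching_bilateral_pixel(window, impulse=True))  # close to the local median
```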

4.1.2.3 Pose normalization

Pose normalization is a substantial process for increasing accuracy in face recognition, since our FG-NET database contains images in different poses and thus requires pose normalization before feature extraction and retrieval. MF-AIFR employs the EA-AT algorithm to correct different poses into the frontal view, which increases feature extraction efficiency. EA-AT first estimates the pose angles of the given image using Euler angles; the estimated angles are then provided to an affine transformation to obtain the frontal view of the image. Euler angles are three angles that describe the orientation of the face with respect to a fixed coordinate system.

Figure 3 illustrates the Euler angles with their coordinates on the Z vector. The three angles are Yaw, Pitch, and Roll. The yaw angle (α) is estimated using the expression below:

Figure 3.

Euler angles representation.

$$\alpha=\arccos\left(Z_{3}\right)\qquad(E17)$$

Where Z2 and Z3 represent the Z vectors of the given image.

The roll angle (φ) can be estimated using the expression below:

$$\varphi=\arccos\left(\frac{Z_{21}}{Z_{32}}\right)\qquad(E18)$$

The pitch angle (τ) can be estimated using the expression below:

$$\tau=\arccos\left(\frac{Y_{31}}{Z_{32}}\right)\qquad(E19)$$

These three angles are given as input to the affine transformation algorithm in order to rotate the face into the correct view. There are four basic affine transformations, illustrated as follows:

  • Translate—It moves a set of points a fixed distance in x and y.

  • Scale—It scales a set of points up or down.

  • Rotate—It rotates the set of points about the origin.

  • Shear—It offsets a set of points by distances proportional to their x and y coordinates.

In mathematical form, an affine transformation of $\mathcal{N}^n$ is a map $F:\mathcal{N}^n\to\mathcal{N}^n$ of the form

$$F(s)=L_t(s)+v,\quad \forall\, s\in \mathcal{N}^n\qquad(E20)$$

Where $L_t$ indicates a linear transformation of $\mathcal{N}^n$ and $v$ defines the translation vector in $\mathcal{N}^n$. The rotations performed in the affine transformation are illustrated as follows:

Rotation about the $x$ axis:
$$R_x(\theta_x)=\begin{bmatrix}1&0&0&0\\0&\cos\theta_x&-\sin\theta_x&0\\0&\sin\theta_x&\cos\theta_x&0\\0&0&0&1\end{bmatrix}\qquad(E21)$$
Rotation about the $y$ axis:
$$R_y(\theta_y)=\begin{bmatrix}\cos\theta_y&0&\sin\theta_y&0\\0&1&0&0\\-\sin\theta_y&0&\cos\theta_y&0\\0&0&0&1\end{bmatrix}\qquad(E22)$$
Rotation about the $z$ axis:
$$R_z(\theta_z)=\begin{bmatrix}\cos\theta_z&-\sin\theta_z&0&0\\\sin\theta_z&\cos\theta_z&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}\qquad(E23)$$

Where $\theta_x$, $\theta_y$, and $\theta_z$ represent the rotations about the three axes, known as Euler angles. Rotating in this way through the affine transformation yields the frontal view of the given image. After completing pose normalization, we crop the images so that all images have the same size. A small sketch of the rotation step follows.
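
The sketch below builds the three rotation matrices of Eqs. (E21)-(E23) with numpy and applies their composition to a homogeneous 3D landmark; the Euler-angle values and the landmark coordinates are illustrative assumptions.

```python
import numpy as np

def rot_x(a):  # rotation about the x axis (E21)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

def rot_y(a):  # rotation about the y axis (E22)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1.0]])

def rot_z(a):  # rotation about the z axis (E23)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

# Undo an estimated pose (yaw, pitch, roll) by rotating with the negated angles
yaw, pitch, roll = 0.30, -0.10, 0.05            # assumed Euler-angle estimates (radians)
M = rot_z(-roll) @ rot_y(-yaw) @ rot_x(-pitch)  # composed affine rotation

landmark = np.array([0.2, 0.1, 0.9, 1.0])       # homogeneous 3D landmark (illustrative)
print(M @ landmark)                              # frontalized coordinates
```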

4.1.2.4 Feature extraction and fusion

Feature extraction and fusion are a major part of this work for producing optimum results in AIFR. MF-AIFR extracts multiple features from three regions, the periocular region, nose, and mouth, since these three regions are significant for recognizing a face across aging. From these regions, we extract three types of features, texture, shape, and demographic, which are summarized in Table 1. The texture features are extracted using the CNN descriptor, and the SIHKS descriptor is used to extract the shape- and demographic-related features.

| Features | Feature description | Types of features |
| --- | --- | --- |
| Texture | Texture features represent the surface characteristics of the image | Contrast, dissimilarity, entropy, homogeneity, correlation, and angular second moment |
| Shape | Shape features represent the physiological identity of the given image | Boundaries of the periocular, nose, and mouth regions; convexity; and solidity |
| Demographic | Demographic features represent the individual uniqueness of the given image | Race, age, and gender |

Table 1.

Features description.

4.1.2.4.1 Texture feature extraction

MF-AIFR utilizes the CNN descriptor for texture feature extraction, since it provides robust performance in learning features layer by layer. CNN applies multiple filters to the raw input image in order to extract high-level features. We extract six texture features from the given image: contrast, dissimilarity, entropy, homogeneity, correlation, and angular second moment. The CNN contains three different types of layers: the convolutional layer, the pooling layer, and the fully connected layer.

4.1.2.4.2 Convolutional layer

It gathers the image from the input layer and is made up of a set of learnable filters. In our work, the convolutional layer comprises six filters, which generate six feature maps. Each feature map is the result of one filter convolved across the whole image. The convolution operation can be described as follows:

$$x_j^{l}=af\left(\sum_{i\in M_l}x_i^{l-1}*f_{ij}^{l}+b_j^{l}\right)\qquad(E24)$$

Where $af$ is the activation function, $j$ indexes the convolution feature map, $l$ indexes the layer of the CNN, $f_{ij}$ indicates the filter, $b_j$ represents the feature map bias, and $M_l$ is a selection of feature maps.

4.1.2.4.3 Pooling layer

It performs a downsampling operation in order to reduce the spatial size of the convolutional layers. The pooling operation is applied to the pixel values captured by the pooling mask and is described as follows:

$$p_j^{l}=af\left(C_j^{l}\,\operatorname{pool}\left(p_j^{l-1}\right)+b_j^{l}\right)\qquad(E25)$$

Where $p_j^{l}$ represents the result of pooling applied to the $j$th region of the input image, $p_j^{l-1}$ describes the $j$th region of interest captured by the pooling mask in the previous layer, and $C_j^{l}$ indicates a trainable coefficient.

4.1.2.4.4 Fully connected layer

The fully connected layer gathers the features obtained in the preceding layers: the outputs of the last convolutional and pooling layers are given as input to the fully connected layer in order to extract the final feature vector. A compact sketch of such an extractor follows.
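
As an illustration of this pipeline, below is a minimal PyTorch sketch of a texture-feature extractor with one six-filter convolutional layer (cf. Eq. (E24)), a pooling layer (cf. Eq. (E25)), and a fully connected layer whose activations serve as the texture feature vector. The layer sizes are assumptions; the chapter does not publish its exact architecture.

```python
import torch
import torch.nn as nn

class TextureCNN(nn.Module):
    """Illustrative texture-feature extractor: conv -> pool -> fully connected."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(1, 6, kernel_size=5)   # six learnable filters -> six feature maps
        self.act = nn.ReLU()
        self.pool = nn.MaxPool2d(2)                  # downsampling of the feature maps
        self.fc = nn.Linear(6 * 30 * 30, feat_dim)   # fully connected feature layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(self.act(self.conv(x)))
        return self.fc(x.flatten(1))                 # texture feature vector

# Usage: one 64x64 grayscale face region (periocular, nose, or mouth crop)
region = torch.randn(1, 1, 64, 64)
features = TextureCNN()(region)
print(features.shape)  # torch.Size([1, 64])
```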

4.1.2.4.5 Shape and demographic feature extraction

Shape and demographic features are extracted using the SIHKS algorithm. The shape features are the boundaries of the eyes, nose, and mouth, convexity, and solidity. Demographic features comprise age, race, and gender information; the race feature represents the skin tone of the face image. These features play a key role in recognizing a face across aging.

The proposed SIHKS descriptor performs better than the HKS algorithm, since the conventional method is sensitive to scale, especially global scale. Hence, we adopt the SIHKS algorithm, which is scale invariant and can operate at any point even when scale selection is impossible. In addition, it extracts shape- and demographic-oriented features better than other shape descriptors. SIHKS extracts features in three steps, listed as follows:

  • Logarithmic sampling in time t, expressed by the equation below:

$$h(\tau)=h\left(x,\alpha^{\tau}\right)\qquad(E26)$$

Where $h(\tau)$ represents the logarithmically sampled heat kernel signature and $\alpha$ is the sampling base.

  • Taking the logarithm of the heat signature and its differences over time, described by the equation below:

$$h'(\tau)=h(\tau+s),\qquad \hat{h}(\tau)=\log h(\tau+1)-\log h(\tau)\qquad(E27)$$

Where $h(\tau+s)$ represents a shift of the log-sampled heat kernel signature, which corresponds to a global scaling of the shape.

  • Taking the discrete-time Fourier transform of the heat signature, expressed by the equation below:

$$\left|F\left[\hat{h}\right](w)\right|=\left|H(w)\right|=\left|H(w)\,e^{2\pi i w s}\right|=\left|F\left[\hat{h}(\cdot+s)\right](w)\right|\qquad(E28)$$

With the above steps, SIHKS estimates a scale-invariant quantity $|H(w)|$ at each point $x$ without performing scale selection. Using this quantity, the SIHKS algorithm estimates the shape- and demographic-oriented features effectually. A small numerical sketch of these three steps follows.
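
The following numpy sketch walks through the three SIHKS steps of Eqs. (E26)-(E28) on a synthetic, normalized HKS curve and checks numerically that the Fourier magnitudes are (approximately) unchanged when the shape is globally scaled, i.e., when the log-time axis is shifted. The HKS curve is a stand-in; a real implementation would compute it from the shape's Laplacian eigendecomposition.

```python
import numpy as np

def sihks(hks_curve: np.ndarray) -> np.ndarray:
    """SIHKS core: log-difference (E27), then DFT magnitude (E28)."""
    d = np.diff(np.log(hks_curve))   # log h(tau+1) - log h(tau)
    return np.abs(np.fft.fft(d))     # |H(w)| is invariant to shifts in tau

def hks(t: np.ndarray) -> np.ndarray:
    """Synthetic normalized HKS curve (stand-in for a real shape's HKS)."""
    num = 0.6 * np.exp(-0.1 * t) + 1.4 * np.exp(-0.8 * t)
    den = np.exp(-0.1 * t) + np.exp(-0.8 * t)
    return num / den

# (E26): logarithmic sampling t = alpha**tau
alpha = 2.0
taus = np.arange(-10.0, 10.0, 0.25)
h1 = hks(alpha**taus)               # original shape
h2 = hks(alpha**(taus + 1.0))       # globally scaled shape: a shift s in tau

print(np.allclose(sihks(h1), sihks(h2), atol=1e-2))  # True: scale invariant
```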

Figure 4 illustrates texture feature extraction in the CNN with its significant layers: the convolutional layer, pooling layer, and fully connected layer.

Figure 4.

Feature extraction in CNN.

4.1.2.4.6 Feature fusion

Feature fusion is performed to reduce the dimension of the extracted features (shape, texture, and demographic). This dimensionality reduction results in better face recognition performance and makes the recognition and retrieval process easier. For this purpose, MF-AIFR utilizes the CCA algorithm, which performs feature fusion effectively. Feature fusion is defined as the combination of multiple feature vectors into a single feature vector. CCA is a statistical tool for recognizing linear relationships among sets of feature vectors in order to determine the inter-set covariances. The canonical covariates of the given feature vectors are obtained using the expressions below:

$$A_1^{T}=u^{T}X_1;\qquad A_2^{T}=v^{T}X_2;\qquad A_3^{T}=d^{T}X_3\qquad(E29)$$

Where $A_1$, $A_2$, $A_3$ represent the canonical covariates of the feature vectors $X_1$, $X_2$, $X_3$, which contain the texture, shape, and demographic features, and $u$, $v$, $d$ describe the eigenvectors of the features. A short fusion sketch follows.
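
As a sketch of this fusion step, the snippet below uses scikit-learn's CCA to project two feature sets onto their canonical covariates (cf. Eq. (E29)) and concatenates the projections into one compact fused vector. scikit-learn's CCA is pairwise, so fusing the three sets of Eq. (E29) would repeat this per pair; all dimensions are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_samples = 200
X_texture = rng.normal(size=(n_samples, 64))   # CNN texture features (illustrative dims)
X_shape = rng.normal(size=(n_samples, 32))     # SIHKS shape/demographic features

# Project both views onto canonical covariates (Eq. E29: A^T = u^T X, etc.)
cca = CCA(n_components=16)
A_tex, A_shp = cca.fit_transform(X_texture, X_shape)

# Fuse by concatenating the correlated projections into one compact vector
fused = np.concatenate([A_tex, A_shp], axis=1)
print(fused.shape)  # (200, 32): reduced from 96 raw dimensions
```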

4.1.3 Recognition and retrieval

Recognition and retrieval are the final processes in MF-AIFR and are performed using the SVM algorithm. We select the SVM algorithm to correctly recognize faces across aging and to retrieve the recognized images for a given input image. Figure 5 illustrates the input and feature space models of the SVM algorithm.

Figure 5.

SVM input and feature space representation.

The SVM algorithm performs well even on unstructured and semistructured data. In addition, SVM scales relatively well to high-dimensional databases. The SVM receives as input the fused features obtained from the CCA algorithm in the previous step. SVM is a binary classification method that discovers the optimal linear decision surface based on the concept of structural risk minimization. The decision surface is a weighted combination of elements of the training set; these elements are called support vectors and characterize the boundary between the two classes. The output of the SVM algorithm is a set of support vectors $S_i$, coefficient weights $we_i$, class labels $y_i$ of the support vectors, and a constant bias term $b$.

The linear surface is represented as follows:

$$k\cdot z+b=0\qquad(E30)$$

Where $k$ represents the weight vector, $b$ the bias term, and $z$ the training or testing data. These two parameters determine the position and orientation of the separating hyperplane. The weight vector $k$ is calculated using the expression below:

$$k=\sum_{i=1}^{N_s}we_{i}\,y_{i}\,S_{i}\qquad(E31)$$

The kernel function plays a vital role in SVM, classifying features effectually. In MF-AIFR, we use the Radial Basis Function (RBF) kernel. RBF performs well compared with other kernel functions and does not require any prior knowledge about the data. It can be expressed as follows:

$$r(v,v_i)=e^{-\delta\left\|v-v_i\right\|^{2}}\qquad(E32)$$

Here, $\delta$ represents the regularization parameter, and $\|v-v_i\|$ represents the difference between feature vectors. By utilizing the RBF kernel function, MF-AIFR recognizes and retrieves the images that match the given test image. For example, if we give as input an image of person "A" at the age of 33, it retrieves images of person "A" from age 2 to 60, since our FG-NET database contains subjects aged from 0 to 69. A brief recognition sketch follows.
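
Below is a minimal scikit-learn sketch of this stage: an RBF-kernel SVM (Eq. (E32), with `gamma` playing the role of δ) trained on fused feature vectors and queried with a probe to recognize the subject. The synthetic gallery, dimensions, and hyperparameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_subjects, per_subject, dim = 10, 8, 32

# Fused CCA feature vectors per subject (illustrative synthetic gallery)
centers = rng.normal(scale=3.0, size=(n_subjects, dim))
X = np.vstack([c + rng.normal(size=(per_subject, dim)) for c in centers])
y = np.repeat(np.arange(n_subjects), per_subject)   # subject identities

# RBF-kernel SVM (Eq. E32); gamma corresponds to delta
clf = SVC(kernel="rbf", gamma=0.05, C=10.0)
clf.fit(X, y)

# Recognize a probe image of subject 3 with an extra "aging" perturbation
probe = centers[3] + rng.normal(scale=1.5, size=dim)
subject_id = clf.predict(probe.reshape(1, -1))[0]
print(subject_id)  # retrieval would then return all gallery images of this subject
```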


5. Experimental study

To characterize the performance of the proposed MF-AIFR, this section is divided into dataset description, experimental setup, performance metrics, comparative analysis, and research highlights.

5.1 Dataset description

This section deliberates the dataset used in this chapter. We utilize the FG-NET database to perform face recognition and retrieval. The Face and Gesture Recognition NETwork (FG-NET) aging database was released in 2004 to support research on changes in facial appearance caused by aging. The FG-NET database comprises 1002 images of 82 different subjects. Each subject has 6–18 images, with ages ranging from newborns to 69-year-old subjects. The database contains considerable variations in pose and illumination.

Table 2 summarizes the FG-NET dataset. The dataset contains images of 34 male subjects and 48 female subjects; each subject has 1–12 images across their age progression.

| Parameters | Values | # Images |
| --- | --- | --- |
| # Subjects | 82 | 1002 |
| # Males | 34 | Max (1–12) per subject |
| # Females | 48 | Max (1–12) per subject |

Table 2.

Dataset description.

The different age bands present in the FG-NET dataset are shown in Table 3. The dataset comprises subjects from 0 to 69 years old.

| Ages | 0–5 | 6–10 | 11–15 | 16–20 | 21–25 | 26–30 | 31–35 | 36–40 | 41–45 | 46–69 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| # Subjects | 75 | 70 | 71 | 68 | 46 | 38 | 30 | 24 | 19 | 10 |
| # Images | 233 | 178 | 164 | 155 | 81 | 62 | 38 | 31 | 26 | 34 |

Table 3.

Different age bands of FG-NET dataset.

5.2 Experimental setup

Our proposed MF-AIFR is implemented in the MATLAB R2017b tool with the C programming language, running on the Windows operating system. MATLAB is a multi-paradigm numerical computing environment developed by MathWorks. MATLAB permits matrix manipulations, implementation of algorithms, plotting of functions and data, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, C#, Java, and Python.

5.3 Performance metrics

To evaluate the performance of MF-AIFR, we consider the following metrics:

  • Accuracy: It is defined as the ratio of correct classifications to the total number of images. Accuracy is measured based on the following expression:

$$\text{Accuracy}=\frac{\#\text{ of correct classifications}}{\text{Total images}}\qquad(E33)$$

  • Recall: It is defined as the proportion of cases of a class that are correctly classified. Recall is also called the True Positive (TP) rate. It can be illustrated as follows:

$$\text{Recall}=\frac{TP}{TP+FN}\qquad(E34)$$

Where TP is the number of positive cases correctly labeled as positive, and FN is the number of positive cases incorrectly labeled as negative.

  • Precision: It is designated as the ratio of correctly classified samples to all classified samples. Precision is also called the positive predictive value. It can be designated as follows:

$$\text{Precision}=\frac{TP}{TP+FP}\qquad(E35)$$

Where FP represents the number of negative samples incorrectly detected as positive.

  • F-Score: It combines precision and recall. It can be calculated as follows:

$$F\text{-Score}=\frac{2\times\text{Recall}\times\text{Precision}}{\text{Recall}+\text{Precision}}\qquad(E36)$$

  • Recognition Rate: It is evaluated based on the features of the face image. It describes the face recognition ability in AIFR.

  • Rank-1 Score: It represents the Cumulative Matching Result (CMR) for a given image and measures the correctly matched score on the FG-NET database images.

  • Computation Time: It is designated as the total time required to retrieve the images for a given input. This metric illustrates the efficiency of the proposed work in terms of time. (A short sketch computing the first four metrics follows this list.)
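
For reference, here is a compact scikit-learn snippet computing Accuracy, Recall, Precision, and F-Score (Eqs. (E33)-(E36)) from predicted identity labels; the label arrays are illustrative.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Illustrative ground-truth subject IDs and predictions from the recognizer
y_true = [0, 0, 1, 1, 2, 2, 2, 3]
y_pred = [0, 1, 1, 1, 2, 2, 3, 3]

print("Accuracy :", accuracy_score(y_true, y_pred))                    # (E33)
print("Recall   :", recall_score(y_true, y_pred, average="macro"))     # (E34)
print("Precision:", precision_score(y_true, y_pred, average="macro"))  # (E35)
print("F-Score  :", f1_score(y_true, y_pred, average="macro"))         # (E36)
```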

5.4 Comparative analysis

This section compares the simulation results of MF-AIFR with the existing methods HLD, DF, and CAN. We compare results using six performance metrics: Accuracy, Recall, Precision, Recognition Rate, Rank-1 Score, and F-Score. Table 4 compares the previous methods in terms of their strengths, weaknesses, and research statements.

5.4.1 Impact on accuracy

Accuracy is one of the most significant metrics for evaluating the proposed work. It defines how accurate MF-AIFR is in terms of correct classification of images. The performance of this metric is evaluated by varying the number of images.

Figure 6 compares the accuracy of MF-AIFR with the existing methods CAN, DF, and HLD. The comparison shows that MF-AIFR achieves better performance than the existing methods, since our method utilizes better feature descriptors, CNN and SIHKS. Both algorithms extract features effectually from three regions, the periocular region, nose, and mouth, which play a key role in recognizing faces across aging, and both provide robust performance even on a high-dimensional dataset. As a result, our method achieves accuracy as high as 95%. By contrast, CAN and DF attain lower accuracy than our method due to their poor feature extraction procedures, since they do not concentrate on the vital regions such as the periocular region, nose, and mouth. Meanwhile, HLD obtains higher accuracy than both CAN and DF owing to its feature extraction from the periocular region, which plays a significant role in face recognition across aging. However, it achieves lower accuracy than our method because its descriptor loses a large amount of information during feature extraction.

Figure 6.

Comparisons on accuracy.

Table 5 compares the average accuracy of the existing and proposed methods.

| Reference | Key concentration | Strength | Weakness | Accuracy | Recall | Precision | Recognition rate | F-Score | Rank-1 score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Kishore et al. [41] | HLD-AIFR & retrieval | Adopts large datasets | Removes fine details of the image, leaving a blurred result; feature extraction loses more information due to lack of pixel consideration | Low | Medium | Low | Low | Medium | Low |
| Muhammad et al. [42] | DF-AIFR & retrieval | Better demographic estimation | Takes more time in feature extraction | Very Low | Low | Low | Very Low | Low | Very Low |
| Chenfei et al. [13] | CAN-AIFR | Less complexity | Data relationships are not considered, which affects recognition results | Low | Medium | Very Low | Low | Low | Low |
| Huiling et al. [15] | IIM-AIFR | Flexible to large datasets | More noise in extracted features due to absence of noise removal | Medium | Low | Very Low | Medium | Very Low | Low |
| Fahad et al. [9] | CTS-AIFR | Less recognition time | Naïve Bayes-based recognition results are always biased, since it does not rely on class conditional dependency | Very Low | Medium | Very Low | Very Low | Very Low | Low |

Table 4.

Comparisons on previous methods in AIFR.

| Methods | Accuracy (%) |
| --- | --- |
| HLD | 80.2 |
| DF | 73.2 |
| CAN | 67.6 |
| MF-AIFR | 90.2 |

Table 5.

Accuracy comparisons [average].

From the above comparison, it is noticed that our method achieves a better average accuracy of 90.2% compared with the existing methods.

5.4.2 Impact on recall

Recall is used to evaluate the performance of MF-AIFR in terms of correct recognition of face images. Recall performance is evaluated by varying the number of images.

Figure 7 shows that our MF-AIFR achieves a lower recall percentage compared with the other methods.

Figure 7.

Comparisons on recall.

Since MF-AIFR correctly recognizes the face for the given test image, it reduces false detections of face images. The reason is that our method executes pose normalization before entering the feature extraction process; pose normalization enhances feature extraction efficiency and thus leads to correct identification and retrieval of the test image. As a result, MF-AIFR achieves lower recall percentages than the existing methods, whereas DF and CAN achieve high recall percentages due to their lack of pose normalization and complex feature extraction procedures. Meanwhile, the HLD method has a lower recall percentage than DF and CAN, since it does not follow complex feature extraction procedures; still, the recall of HLD is higher than that of MF-AIFR due to its lack of pose normalization and the information degradation in its noise removal process. Table 6 reports the average recall of the existing and proposed methods.

| Methods | Recall (%) |
| --- | --- |
| HLD | 75.6 |
| DF | 87 |
| CAN | 80 |
| MF-AIFR | 70 |

Table 6.

Recall comparisons [average].

From the above comparison results, it is seen that our MF-AIFR method achieves a lower recall percentage of 70% compared with the existing methods.

5.4.3 Impact on precision

Precision measures the performance of our work in terms of relevant instances retrieved relative to the total images. Precision is measured by varying the number of images.

Figure 8 depicts that MF-AIFR achieves higher precision percentages than the existing methods. MF-AIFR performs preprocessing before feature extraction and recognition. Preprocessing performs illumination normalization and noise filtering, since the FG-NET dataset contains illumination variations and noise in its images. These two processes enhance image quality, which eases feature extraction and recognition. CAN and DF achieve lower precision due to their lack of preprocessing such as noise removal and illumination normalization. Likewise, HLD also obtains lower precision owing to the removal of fine details in its Gaussian-based noise filtering, since the Gaussian filter does not preserve fine details of the image and produces a blurred result.

Figure 8.

Comparisons on precision.

Table 7 reports the average precision of the existing and proposed methods. From this comparison, we conclude that MF-AIFR achieves a better precision percentage of 90.6% compared with the existing methods.

| Methods | Precision (%) |
| --- | --- |
| HLD | 81.6 |
| DF | 71.6 |
| CAN | 65 |
| MF-AIFR | 90.6 |

Table 7.

Precision comparisons [average].

5.4.4 Impact on F-score

The F-Score metric takes both false positive and false negative values into account to estimate the performance of this work. The performance of this metric is simulated by varying the number of images.

Figure 9 compares the F-Score of MF-AIFR with the existing methods DF, CAN, and HLD. From this figure, it is noticed that our method achieves a higher F-Score than the existing methods. MF-AIFR uses two descriptors, CNN and SIHKS, to extract texture, shape, and demographic features. The SIHKS descriptor performs very well under scale variation and provides good extraction results even when scale selection is impossible; it extracts shape and demographic features effectually, which play a substantial role in face recognition across aging. At the same time, CAN and DF attain lower F-Scores owing to the absence of significant features such as texture and shape. Meanwhile, HLD also attains a lower F-Score, since it does not concentrate on shape feature extraction, which reduces face recognition and retrieval efficiency.

Figure 9.

Comparisons on F-score.

Table 8 reports the average F-Score of the existing and proposed methods. From this comparison, we observe that the MF-AIFR method achieves a high F-Score percentage of 87.2% compared with the existing methods.

| Methods | F-Score (%) |
| --- | --- |
| HLD | 78.6 |
| DF | 71.6 |
| CAN | 59.6 |
| MF-AIFR | 87.2 |

Table 8.

F-score comparisons [average].

5.4.5 Impact on recognition rate

The recognition rate measures the ability of MF-AIFR in terms of face recognition. It is measured by varying the number of features.

Figure 10 compares the recognition rate of MF-AIFR with the existing methods CAN, DF, and HLD. From this figure, it is observed that MF-AIFR attains a higher recognition rate than the existing methods. We adopt the SVM algorithm for recognition and retrieval; it performs well even on high-dimensional datasets. In addition, we perform feature fusion before entering the recognition and retrieval process.

Figure 10.

Comparisons on recognition rate.

Feature fusion reduces the dimension of the feature vectors and thus enhances the performance of the SVM algorithm. Therefore, our method achieves a better recognition rate than the existing methods. Meanwhile, the DF method has the lowest recognition rate due to the lack of an effective recognition and retrieval process, since it simply ranks the images. Likewise, CAN attains a lower recognition rate than our method, since it is unable to establish data relationships between different features. The HLD method attains a lower recognition rate due to its use of KNN for recognition: KNN takes more time, and discovering a good similarity measure is tedious.

Table 9 reports the average recognition rate of the existing and proposed methods. The comparison illustrates that the recognition rate of MF-AIFR is higher than that of the existing methods.

| Methods | Recognition rate (%) |
| --- | --- |
| HLD | 87 |
| DF | 69.2 |
| CAN | 79.2 |
| MF-AIFR | 92.2 |

Table 9.

Recognition rate comparisons [average].

5.4.6 Impact on rank-1-score

The Rank-1 Score measures the cumulative match performance for given images. It represents the efficacy of our work in terms of recognition and retrieval.

Figure 11 compares the Rank-1 Score results with the existing methods. From this figure, it is seen that MF-AIFR attains a higher Rank-1 Score than the existing methods. Our DGC-CLAHE-based illumination normalization performs better than the existing CLAHE and enhances the fine details of the image, and ASBF-based noise filtering removes noise while sharpening the image. This way of preprocessing yields high matching results in face recognition. At the same time, DF and CAN attain lower Rank-1 Scores, since they do not use effective preprocessing algorithms and thus reduce the quality of the given images drastically. Likewise, HLD attains a lower Rank-1 Score than our method, since it does not perform illumination normalization and its noise filtering is not effective. From this analysis, we conclude that MF-AIFR attains better Rank-1 Score results than the other methods.

Figure 11.

Comparisons on rank 1-score.

Table 10 reports the average Rank-1 Scores of the existing and proposed methods. From this comparison, we show that our MF-AIFR method achieves a higher Rank-1 Score of 89.8% compared with the existing methods.

| Methods | Rank-1 score (%) |
| --- | --- |
| HLD | 79.6 |
| DF | 73 |
| CAN | 65 |
| MF-AIFR | 89.8 |

Table 10.

Rank 1 score comparisons [average].

5.4.7 Impact on computation time

The computation time is evaluated by varying the number of images. This metric must be low in order to attain better performance in image retrieval across aging.

Figure 12 compares the computation time results with the existing methods. It is noticed that our MF-AIFR method achieves less computation time than the existing methods CAN, DF, and HLD. MF-AIFR performs the IQE process before the preprocessing step: only images that do not satisfy the IQT undergo preprocessing; otherwise they are passed directly to the pose normalization step. This avoids wasting time preprocessing every input image. In addition, our work reduces time in feature extraction and classification by using effective algorithms, CNN, SIHKS, and SVM, which require less time to process the given inputs. As a result, MF-AIFR achieves less computation time. Meanwhile, CAN and DF incur high computation times, since they preprocess all images and do not utilize efficient algorithms to process the given input image. Likewise, HLD also incurs a higher computation time than MF-AIFR, since it preprocesses all images regardless of their quality.

Figure 12.

Comparisons on computation time.

Table 11 reports the computation time comparison and shows that our method attains a computation time as low as 12.4 ms compared with the other methods, HLD, DF, and CAN.

| Methods | Computation time (ms) |
| --- | --- |
| HLD | 60 |
| DF | 63 |
| CAN | 73 |
| MF-AIFR | 12.4 |

Table 11.

Computation time comparisons [average].

5.5 Research highlights

This section highlights the contributions of this research on face recognition across aging. In order to achieve better performance in AIFR, our work establishes five consecutive processes. Table 12 describes the benefits of the proposed algorithms along with their functionalities, relating each algorithm to the performance metrics (precision, recall, accuracy, recognition rate, and Rank-1 Score) it benefits.

| Algorithms | Main functionality | Benefits related to performance |
| --- | --- | --- |
| DGC-CLAHE | Illumination normalization | Enhances the recognition rate and accuracy |
| ASBF | Noise removal | Enhances the recognition rate and feature extraction efficiency |
| EA-AT | Pose normalization | Eases the feature extraction process and increases the precision level |
| CNN | Texture feature extraction | Enhances accuracy in face recognition across aging and performs well on large-scale datasets |
| SIHKS | Shape & demographic feature extraction | Increases the Rank-1 Score and adapts to large-scale datasets |
| SVM | Recognition and retrieval | Simple processing; increases accuracy and reduces recall |

Table 12.

Benefits of proposed algorithms.


6. Conclusion and future work

Face recognition across aging is challenging due to changes in human faces with age progression. To address this bottleneck, this chapter proposes the MF-AIFR method, in which five successive processes are performed. IQE is performed to reduce the time spent in preprocessing and thus enhances the performance of our system drastically: an image that does not satisfy the IQT is given as input to the preprocessing step. There, illumination normalization and noise removal are performed, which enhance accuracy in face recognition and retrieval; illumination normalization adopts DGC-CLAHE, and noise removal adopts the ASBF algorithm. To normalize pose, we adopt the EA-AT algorithm, which enhances feature extraction efficacy. Two descriptors are utilized for feature extraction, CNN and SIHKS, and we extract multiple features, texture, shape, and demographic, from three regions: the periocular region, nose, and mouth. CNN extracts the texture features, and SIHKS extracts the shape and demographic features; extracting features in this way increases our recognition rate. For recognition and retrieval, we execute the SVM algorithm, which follows a simple procedure and provides better results. Finally, we evaluate the performance of the MF-AIFR system using seven metrics, Accuracy, Recall, Precision, Rank-1 Score, F-Score, Recognition Rate, and Computation Time, and show that our work performs better than the existing methods HLD, DF, and CAN.

References

1. Tong SG, Huang YY, Tong ZM. A robust face recognition method combining LBP with multi-mirror symmetry for images with various face interferences. International Journal of Automation and Computing. 2019;16:1-12. DOI: 10.1007/s11633-018-1153-8
2. Moon HM, Seo CH, Pan SB. A face recognition system based on convolution neural network using multiple distance face. Soft Computing. 2017;21(17):4995-5002
3. Roh S-B, Oh S-K, Yoon J-H, Seo K. Design of face recognition system based on fuzzy transform and radial basis function neural networks. Soft Computing. 2019;23(13):4969-4985
4. Singh R, Om H. Newborn face recognition using deep convolutional neural network. Multimedia Tools and Applications. 2018;76(18):19005-19015
5. Agarwal V, Bhanot S. Radial basis function neural network based face recognition using firefly algorithm. Neural Computing and Applications. 2018;30(8):2643-2660
6. Wang Y, Gong D, Zheng Z, Ji X, Wang H, Li Z, et al. Orthogonal deep features decomposition for age-invariant face recognition. Computer Vision and Pattern Recognition. 2018:1-13
7. Wang H, Gong D, Li Z, Liu W. Decorrelated adversarial learning for age-invariant face recognition. Computer Vision and Pattern Recognition. 2019:1-10
8. Li Z, Gong D, Li X, Tao D. Aging face recognition: A hierarchical learning model based on local patterns selection. IEEE Transactions on Image Processing. 2016;25(5):2146-2154
9. Kishore KK, Trinatha RP. Biometric identification using the periocular region. In: Information and Communication Technology for Intelligent Systems (ICTIS 2017), Volume 2. Smart Innovation, Systems and Technologies, vol 84. Cham: Springer; 2018. pp. 619-628. DOI: 10.1007/978-3-319-63645-0_69
10. Sawant MM, Bhurchandi K. Age invariant face recognition: A survey on facial aging databases, techniques and effect of aging. Artificial Intelligence Review. 2018:1-28
11. Feng S, Lang C, Feng J, Wang T, Luo J. Human facial age estimation by cost-sensitive label ranking and trace norm regularization. IEEE Transactions on Multimedia. 2017;19(1):136-148
12. Dhamija A, Dubey RB. Analysis on age invariance face recognition study and effects of intrinsic and extrinsic factors on skin ageing. International Journal of Computer Applications. 2019;182(43):1-9
13. Kishore KK, Trinatha RP. Face verification across ages using discriminative methods and See 5.0 classifier. In: Proceedings of First International Conference on Information and Communication Technology for Intelligent Systems: Volume 2. Smart Innovation, Systems and Technologies, vol 51. Cham: Springer; 2016. pp. 439-448. DOI: 10.1007/978-3-319-30927-9_43
14. Li Z, Park U, Jain AK. A discriminative model for age invariant face recognition. IEEE Transactions on Information Forensics and Security. 2011;6:1028-1037
15. Kishore KK, Trinatha RP. Periocular region based biometric identification using the local descriptors. In: Intelligent Computing and Information and Communication. Advances in Intelligent Systems and Computing, vol 673. Singapore: Springer; 2018. pp. 341-351. DOI: 10.1007/978-981-10-7245-1_34
16. Kamarajugadda KK, Polipalli TR. Age-invariant face recognition using multiple descriptors along with modified dimensionality reduction approach. Multimedia Tools and Applications. 2019;78(19):27639-27661. DOI: 10.1007/s11042-019-7741-y
17. El Khiyari H, Wechsler H. Age invariant face recognition using convolutional neural networks and set distances. Journal of Information Security. 2017;8:174-185
18. Divyanshu S, Pandey JP, Chauhan B. A deep learning approach for age invariant face recognition. International Journal of Pure and Applied Mathematics. 2017;117(21):371-389
19. Bianco S. Large age-gap face verification by feature injection in deep networks. Pattern Recognition Letters. 2017;90:36-42
20. Riaz S, Ali Z, Park U, Choi J, Masi I, Natarajan P. Age-invariant face recognition using gender specific 3D aging modeling. Multimedia Tools and Applications. 2019:1-21
21. Kumar KK, Pavani M. Periocular region-based age-invariant face recognition using local binary pattern. In: Microelectronics, Electromagnetics and Telecommunications. Lecture Notes in Electrical Engineering, vol 521. Singapore: Springer; 2019. pp. 713-720. DOI: 10.1007/978-981-13-1906-8_72
22. Kishore Kumar K, Pavani M. LBP based biometric identification using the periocular region. In: 8th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON). Vancouver, BC: IEEE; 2017. pp. 204-209. DOI: 10.1109/IEMCON.2017.8117193
23. Kamarajugadda KK, Polipalli TR. Extract features from periocular region to identify the age using machine learning algorithms. Journal of Medical Systems. 2019;43:196. DOI: 10.1007/s10916-019-1335-0
24. Nanni L, Lumini A, Brahnam S. Ensemble of texture descriptors for face recognition obtained by varying feature transforms and preprocessing approaches. Applied Soft Computing. 2017;61:8-16
25. Nhan Duong C, Gia Quach K, Luu K, Le N, Savvides M. Temporal non-volume preserving approach to facial age-progression and age-invariant face recognition. In: IEEE International Conference on Computer Vision (ICCV). 2017. pp. 3755-3763
26. Chen B-C, Chen C-S, Hsu WH. Cross-age reference coding for age-invariant face recognition and retrieval. In: Computer Vision - ECCV 2014 (European Conference on Computer Vision). 2014. pp. 768-783
27. Li Y, Wang G, Nie L, Wang Q, Tan W. Distance metric optimization driven convolutional neural network for age invariant face recognition. Pattern Recognition. 2018;75:51-62
28. Chandran PS, Byju NB, Deepak RU, Nishakumari KN, Devanand P, Sasi PM. Missing child identification system using deep learning and multiclass SVM. In: IEEE Recent Advances in Intelligent Computational Systems (RAICS). 2018
29. Verma G, Jindal A, Gupta S, Kaur L. A technique for face verification across age progression with large age gap. In: 2017 4th International Conference on Signal Processing, Computing and Control (ISPCC). Solan, India: IEEE; 2017
30. Bijarnia S, Singh P. Pyramid binary pattern for age invariant face verification. In: 2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS). Jaipur, India: IEEE; 2017
31. Nimbarte M, Bhoyar KK. Face recognition across aging using GLBP features. In: Information and Communication Technology for Intelligent Systems (ICTIS 2017). Smart Innovation, Systems and Technologies, vol 84. Cham: Springer; 2017. pp. 275-283. DOI: 10.1007/978-3-319-63645-0_30
32. Xu Z, Jiang Y, Wang Y, Zhou Y, Li W, Liao Q. Local polynomial contrast binary patterns for face recognition. Neurocomputing. 2019;355:1-12
33. Mohanraj V, Sibi Chakkaravarthy S, Vaidehi V. Ensemble of convolutional neural networks for face recognition. In: Recent Developments in Machine Learning and Data Analytics. Advances in Intelligent Systems and Computing. 2019. pp. 467-477. DOI: 10.1007/978-981-13-1280-9_43
34. Kute RS, Vyas V, Anuse A. Component-based face recognition under transfer learning for forensic applications. Information Sciences. 2019;476:176-191
35. Venkata Kranthi B, Surekha B. Real-time facial recognition using deep learning and local binary patterns. In: Proceedings of International Ethical Hacking Conference 2018. 2018. pp. 331-347
36. Malek ME, Azimifar Z, Boostani R. Age-based human face image retrieval using Zernike moments. In: Artificial Intelligence and Signal Processing Conference (AISP). 2017. pp. 347-351
37. Wang D, Cui Z, Ding H, Yan S, Xie Z. Face aging synthesis application based on feature fusion. In: 2018 International Conference on Audio, Language and Image Processing (ICALIP). Shanghai, China: IEEE; 2018
38. Kamarajugadda KK, Polipalli TR. Stride towards aging problem in face recognition by applying hybrid local feature descriptors. Evolving Systems. 2018;10:689-705. DOI: 10.1007/s12530-018-9256-6
39. Shafique MST, Manzoor S, Iqbal F, Talal H, Qureshi US, Riaz I. Demographic-assisted age-invariant face recognition and retrieval. Symmetry. 2018;10(5):1-17
40. Xu C, Liu Q, Ye M. Age invariant face recognition and retrieval by coupled auto-encoder networks. Neurocomputing. 2017;222:62-71
41. Zhou H, Lam K-M. Age-invariant face recognition based on identity inference from appearance age. Pattern Recognition. 2018;76:191-202
42. Alvi FB, Pears R. A composite spatio-temporal modeling approach for age invariant face recognition. Expert Systems with Applications. 2017;72:383-394
