Abstract
Excellent face recognition for a surveillance camera system requires a remarkable and robust face descriptor. The binary gradient pattern (BGP) descriptor is one such descriptor for facial feature extraction. However, exploiting local features merely from a small region, or microstructure, does not capture the complete facial feature. In this paper, an extended binary gradient pattern (eBGP) is proposed to capture both micro- and macrostructure information of a local region to boost the descriptor's performance and discriminative power. Two topologies, the patch-based and circular-based topologies, are incorporated with the eBGP to test its robustness against illumination, image quality, and uncontrolled capture conditions using the SCface database. Experimental results show that the fusion of micro- and macrostructure information significantly boosts the descriptor's performance, and that the proposed eBGP descriptor outperforms the conventional BGP on both the patch-based and the circular-based topology. Furthermore, fusing information from two different image types, the orientational image gradient magnitude (OIGM) image and the grayscale image, attained better performance than using the OIGM image only. The overall results indicate that the proposed eBGP descriptor improves the recognition performance with respect to the baseline BGP descriptor.
Keywords
- surveillance system
- face recognition
- binary gradient pattern (BGP)
- facial feature extraction
- patch-based topology
- circular-based topology
1. Introduction
Face recognition is one of the biometric verification methods and offers a wide range of applications such as law enforcement, forensics, biometric authentication, surveillance, and health monitoring [1]. Face recognition has also been used to authenticate payments in mobile wallets, and social media companies such as Facebook use face recognition algorithms for image tagging [2]. One of the advantages of face recognition is that it is contactless between the subject and the camera. Given these advantages and the advancement in computing power, significant research and many methods have been proposed over the years in the face recognition domain. A robust face recognition system must be able to work under various real-life or unconstrained conditions, such as, but not limited to, pose, lighting, image or camera quality, occlusion, rotation, and translation. The system must also perform extremely well in domains where only limited samples are available. In surveillance monitoring applications, a typical approach is to sample faces appearing in videos and then match them against facial models generated from high-quality target face images [3, 4].
Feature extraction is the process of capturing features of interest from the face and representing them in the form of a feature vector. The extraction is usually done by a face descriptor, which must be able to cope with multiple variations such as illumination, occlusion, facial expression, and image quality [4]. Indeed, a collection of face descriptors has been proposed over the years, such as the scale-invariant feature transform (SIFT) [5], speeded up robust features (SURF) [6], the local binary pattern (LBP) [7], and the histogram of oriented gradients (HOG) [8]. In terms of facial feature representation, descriptors have evolved around two types of representations: global and local. Global feature extraction methods, such as principal component analysis [9], linear discriminant analysis [10], and independent component analysis [11], preserve the statistical information of the face by turning each face image into a high-dimensional feature vector. Meanwhile, local feature extraction splits the input image into smaller patches and extracts micro textural details from each patch before fusing these features to form the global shape information. Local feature extraction has been shown to be resilient to multiple variations by enforcing spatial locality at both the pixel and patch levels; for instance, local feature descriptors are robust to local deformations in expression and occlusion. LBP [7] is an example of a feature extraction method that works on this principle; it achieves reasonably good performance but is heuristic in nature. Recently, LBP has drawn great attention as a face descriptor due to its reputation as a powerful texture descriptor [9]. LBP extracts the local spatial structure of an image by thresholding the intensity of the center pixel against its neighborhood.
The product of this operation is characterized as a local binary pattern; the distribution of these binary patterns over the whole image then forms the LBP histogram, or feature vector. Neighborhood pixels are sampled on a circle, and the intensity of any neighbor that does not fall exactly on the center of a pixel is computed by interpolation [7]. LBP nonetheless has some shortcomings: it produces a long histogram and is therefore memory-consuming [12]; it is very sensitive to image rotation and noise [13]; and it captures only the microstructure while ignoring the macrostructure of the texture, missing extra discriminative power [14]. Several variants of LBP have been proposed in the literature, for example, rotation-invariant LBP [13], the median robust extended local binary pattern (MRELBP) [15], and the binary gradient pattern (BGP) [14]. This paper touches on a number of relevant existing LBP-based descriptors. The rest of this paper is organized as follows. Section 2 briefly reviews two state-of-the-art descriptors, the LBP [7] and its variant the BGP [14], since the proposed extended BGP (eBGP) is embedded into them. Section 3 describes the proposed eBGP descriptor. The evaluation results are analyzed and discussed in Section 4. Finally, conclusions are drawn in Section 5.
2. From local binary pattern (LBP) to binary gradient pattern (BGP)
LBP [7] is one of many texture descriptors and is known for being computationally efficient [16]. It extracts the local spatial structure of an image by thresholding the intensity of the center pixel against its neighborhood pixels:

LBP_{P,R} = sum_{p=0}^{P-1} s(g_p - g_c) * 2^p,  with s(x) = 1 if x >= 0 and s(x) = 0 otherwise,

where g_c is the grayscale value of the center pixel and g_p (p = 0, ..., P-1) are the values of the P neighbors sampled on a circle of radius R.
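As a concrete illustration, the thresholding above can be sketched as follows; this is a minimal sketch rather than the implementation of [7]: the function name and the clockwise neighbor ordering are our own choices, and no interpolation is performed (the (8,1) neighbors coincide with the 3 × 3 grid).

```python
import numpy as np

def lbp_8_1(img):
    """Basic LBP with 8 neighbors at radius 1 (no interpolation).

    Each interior pixel is thresholded against its 8 neighbors;
    the resulting bits are packed into a label in [0, 255].
    """
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    # Offsets of the 8 neighbors, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:h-1, 1:w-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes

roi = np.array([[5, 9, 1],
                [4, 6, 7],
                [2, 3, 8]])
print(lbp_8_1(roi))  # label 26 for the single interior pixel
```

For a whole image, the histogram of these labels over each subregion forms the LBP feature vector described above.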
The success of LBP has continued since then. A variety of LBP-based descriptors have been proposed to overcome its shortcomings with respect to noise, illumination, color, and temporal information. Huang and Yin [14] proposed an improved version of LBP, called the binary gradient pattern (BGP), by introducing structural patterns and implementing image gradient orientation (IGO) in multiple directions rather than only along the X and Y directions, as in the conventional manner. The implementation of IGO in multiple directions helps to improve the discriminative power of the descriptor. Figure 2 shows how BGP encodes a binary string from a region of interest (ROI). Given a set of grayscale intensity values of 9 pixels as in Figure 2(a), BGP computes binary correlations between symmetric neighbors of the central pixel from multiple directions.
The binary string for the ROI is constructed from the four principal binary numbers, which in this example is equivalent to 0111, and the label is then derived from this string. The number of structural labels is bound by the number of principal directions; at the (8,1) spatial resolution, this yields eight structural patterns plus one nonstructural pattern.
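To make the pairwise encoding concrete, the sketch below compares the eight neighbors of a 3 × 3 ROI in symmetric (diametrically opposite) pairs to produce the four principal bits. The neighbor ordering and the simple greater-or-equal comparison are our illustrative assumptions; the exact gradient-based comparison and the bit-string-to-label mapping follow [14].

```python
import numpy as np

def bgp_bits(roi):
    """Sketch of BGP pair encoding on a 3x3 ROI (resolution (8,1)).

    The 8 neighbors are taken clockwise from the top-left corner, so
    pairs (i, i+4) are diametrically opposite; each pair comparison
    yields one principal bit, giving a 4-bit string per ROI.
    """
    roi = np.asarray(roi, dtype=np.int32)
    ring = [roi[0, 0], roi[0, 1], roi[0, 2], roi[1, 2],
            roi[2, 2], roi[2, 1], roi[2, 0], roi[1, 0]]
    return [1 if ring[i] >= ring[i + 4] else 0 for i in range(4)]

roi = [[5, 9, 1],
       [4, 6, 7],
       [2, 3, 8]]
print(bgp_bits(roi))  # [0, 1, 0, 1]
```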
Based on a series of results obtained on multiple databases, such as Extended Yale B [17], AR [18], CMU Multi-PIE [19], FERET [20], and LFW [21], against a wide range of descriptors, BGPM proved to be the best descriptor on each database. The BGPM descriptor achieves invariance against illumination changes and local distortions while reducing the vector dimensionality. BGP's compact representation makes it extremely fast, and it uses far fewer pattern labels than LBP at any spatial resolution. For instance, at a spatial resolution of (8,1), the BGP histogram needs only 9 bins (8 bins for structural patterns and 1 bin for nonstructural patterns), in contrast to LBP, which requires 59 bins. BGP and BGPM have been shown to possess strong spatial locality and orientation properties, which lead to effective discrimination.
Although BGP is efficient in processing time and has achieved outstanding results on several databases, it has never been tested on a proper surveillance database such as [22], which consists of low-resolution non-frontal face images taken by cameras of different quality. Like most other local-based descriptors, BGP exploits information from the microstructure only; however, exploiting facial features from the macrostructure to complement the microstructure features results in a more complete image representation [23, 24], especially for surveillance applications where noise, occlusion, and head position might impact the descriptor performance. In this paper, information from both micro- and macrostructures is captured and integrated into the BGP descriptor to boost its performance for video surveillance applications. The new proposed descriptor is termed extended BGP (eBGP).
3. Extended binary gradient pattern (eBGP)
An eBGP extends the BGP descriptor by exploiting macrostructure information from a topology with a larger spatial resolution. Many different types of macrostructure topologies have been proposed for other LBP variants [25]. In this paper, the patch-based topology with eight neighborhood patches and the circular topology are combined with the proposed eBGP descriptor. Both topologies have been implemented in [24, 26], and each has its pros and cons. Regardless of the topology, the microstructure information is always extracted using the same approach as in BGP. Herein, the eBGP is explained with a focus on extracting features from the macrostructure based on the patch-based topology with eight neighborhood patches and the circular-based topology.
3.1 Patch-based topology
The patch-based topology is inspired by the multi-scale block local binary pattern (MBLBP) [24]. In this topology, the macrostructure is made up of nine patches of pixels, as in Figure 5. All patches have the same height and width, and the center patch represents the ROI microstructure. Thereby, the default BGP operator is applied to the center patch to extract the microstructure information, whereas the macrostructure information is extracted from the eight neighborhood patches. Multiple patch sizes can be selected in this topology, and the size of the structure is determined by the spatial resolution of the center patch.
For instance, when exploiting microstructure information at the (8,1) spatial resolution, the size of the center patch is 3 × 3 pixels, as illustrated in Figure 5(b). In this implementation, all patches have the same size and do not overlap; the macrostructure is therefore formed from nine patches of 3 × 3 pixels. Figure 5(a) depicts the macrostructure topology formed from nine patches of 5 × 5 pixels when the microstructure information is exploited at the (16,2) spatial resolution. For comparison purposes, this research evaluates the two structures illustrated in Figure 5(a) and (b), to match the BGP results exploited at the (8,1) and (16,2) spatial resolutions. Using Figure 5(a) as an example, each neighborhood patch contains 25 pixels, each with its own grayscale value. Unlike the center patch, no feature is extracted from an individual neighborhood patch. Instead, each neighborhood patch is represented by a single intensity value, which is then used for thresholding. In this topology, both the patch mean and the patch median are tested as the patch intensity representative. The patch mean G of a neighborhood patch P, computed from the 25 pixels of a single 5 × 5 patch, is:

G_P = (1/25) * sum_{i=1}^{25} g_i,

where g_i is the grayscale value of the i-th pixel in the patch.
On the other hand, the patch median is computed by finding the middle value of the ordered pixel values. Additional experiments are conducted in this research to find the best representation for the patch-based topology. Feature extraction from the macrostructure is illustrated in Figure 6. Figure 6(a) shows the patch-based topology with patches of 3 × 3 pixels and their intensity values. In each patch, the median of all pixels within the patch is calculated, and this median then represents the intensity of the patch, as shown in Figure 6(b). The following steps are similar to those of BGP. By thresholding each patch against its symmetric neighbors in four directions using Eqs. (2) and (3), four pairs of binary numbers are generated, as shown in Figure 6(c). Once all the principal bits are computed, the label is calculated using Eq. (4). In general, the flow for macrostructure extraction is similar to that for the microstructure except for the representative value used during thresholding: the microstructure information is extracted from neighborhood pixels, while the macrostructure information is extracted from neighborhood patches.
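The patch-based macrostructure extraction described above can be sketched as follows, assuming a 3 × 3 grid of equally sized patches with the median (or mean) as the patch representative; the function name, patch ordering, and comparison rule are our illustrative choices.

```python
import numpy as np

def patch_macro_bits(region, patch=3, stat=np.median):
    """Macrostructure bits from the patch-based topology (a sketch).

    `region` is a (3*patch, 3*patch) block: a 3x3 grid of equally
    sized patches.  Each of the 8 neighborhood patches is reduced to
    one value (median or mean); diametrically opposite patches are
    then compared in the four principal directions, as in BGP.
    """
    region = np.asarray(region, dtype=np.float64)
    reps = {}
    for r in range(3):
        for c in range(3):
            block = region[r*patch:(r+1)*patch, c*patch:(c+1)*patch]
            reps[(r, c)] = stat(block)
    # 8 neighborhood patches clockwise from top-left; center excluded.
    ring = [reps[(0, 0)], reps[(0, 1)], reps[(0, 2)], reps[(1, 2)],
            reps[(2, 2)], reps[(2, 1)], reps[(2, 0)], reps[(1, 0)]]
    return [1 if ring[i] >= ring[i + 4] else 0 for i in range(4)]

region = np.arange(81).reshape(9, 9)   # a 9x9 block: nine 3x3 patches
print(patch_macro_bits(region))  # [0, 0, 0, 1]
```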
Since there are only eight neighborhood patches, regardless of the structure's size, the histogram vector that represents the macrostructure information is bound to a maximum of 16 bins. Observing only structural patterns further reduces the dimensionality of the macrostructure information to eight bins. The total length of the histogram vector is the sum of the lengths of the microstructure and macrostructure histograms,

L_total = L_micro + L_macro,

where L_micro is determined by the spatial resolution of the center patch and L_macro is at most 16 bins.
Subsequently, information fusion between the micro- and macrostructures is conducted by concatenating the feature vectors of the microstructure and the macrostructure, as illustrated in Figure 7. At this point, both feature vectors contribute with the same weight. Figure 8 demonstrates an example of a face image represented using the patch-based topology. It illustrates that eBGP on the patch-based topology is capable of capturing the micro textural details, while the macrostructure provides complementary information to the small details. Moreover, the macrostructure information contains less detail and may reduce the noise or outliers embedded in the image.
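The equal-weight fusion by concatenation can be sketched as below; the bin counts follow the (8,1) example above (9 microstructure bins plus up to 16 macrostructure bins), and the function name is ours.

```python
import numpy as np

def fuse_histograms(h_micro, h_macro):
    """Equal-weight fusion: concatenate the micro- and macrostructure
    histograms into a single feature vector for the ROI."""
    h_micro = np.asarray(h_micro, dtype=np.float64)
    h_macro = np.asarray(h_macro, dtype=np.float64)
    return np.concatenate([h_micro, h_macro])

# For an (8,1) center patch: 9 micro bins + up to 16 macro bins.
feat = fuse_histograms(np.zeros(9), np.zeros(16))
print(feat.shape)  # (25,)
```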
3.2 Circular-based topology
The circular-based topology borrows the basic implementation of LBP, which identifies a neighborhood as a set of pixels on a circular ring. In this topology, two levels of information are extracted from the neighborhood at two different spatial resolutions. The first level, the microstructure information, is extracted from a set of pixels on a circular ring of smaller radius; the second level, the macrostructure information, is extracted from a ring of larger radius.
Figure 10(a) shows a sample of the image intensities that fall on the two circular rings.
In the BGP scheme, the length of the histogram vector depends on the number of neighbors at each spatial resolution. Similar to the patch-based topology, the histogram vectors embedding the micro- and macrostructure information are concatenated to form the final feature representation for each ROI. The total length of the histogram vector in this scheme is the sum of the lengths of the two BGP histograms,

L_total = (P_1 + 1) + (P_2 + 1),

where P_1 and P_2 are the numbers of neighbors on the inner (microstructure) and outer (macrostructure) rings, respectively, and the extra bin of each histogram accounts for nonstructural patterns.
Figure 11 illustrates the general flow of feature extraction in the circular-based topology. Overall, this topology applies the BGP operator at two different spatial resolutions, where the smaller resolution yields the microstructure information and the larger resolution yields the macrostructure information. In this research, no interpolation is applied to neighboring pixels when the circle does not fall exactly on the center of a pixel. Figure 12 presents a sample image extracted at the two spatial resolutions.
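A minimal sketch of sampling the two rings without interpolation, rounding each neighbor coordinate to the nearest pixel as stated above; the function name is ours.

```python
import numpy as np

def ring_samples(img, cy, cx, n, radius):
    """Sample `n` neighbors on a circle of `radius` around (cy, cx),
    rounding to the nearest pixel instead of interpolating, as in
    the circular-based topology described above."""
    img = np.asarray(img)
    vals = []
    for p in range(n):
        angle = 2.0 * np.pi * p / n
        y = int(np.rint(cy - radius * np.sin(angle)))
        x = int(np.rint(cx + radius * np.cos(angle)))
        vals.append(int(img[y, x]))
    return vals

img = np.arange(121).reshape(11, 11)
micro = ring_samples(img, 5, 5, 8, 1)   # inner ring -> microstructure
macro = ring_samples(img, 5, 5, 16, 2)  # outer ring -> macrostructure
print(len(micro), len(macro))  # 8 16
```

The BGP operator is then applied separately to each ring's samples before the two histograms are concatenated.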
Similar to the patch-based topology, BGP captures the micro-oriented edges from the small structure while capturing coarser, less detailed information at the much larger spatial resolution. The combination of these two sources of information complements each other in providing a complete face representation.
4. Results, discussion, and analysis
To emulate a real-world video surveillance system, the effectiveness of the proposed eBGP descriptor was evaluated using the Surveillance Camera Face (SCface) database [22]. The SCface database consists of low-resolution non-frontal face images taken by cameras of different quality. A series of experiments was planned to test all proposed topologies and structures on the SCface database. The performance of the proposed eBGP descriptor was evaluated against illumination, image quality, single sample per person, and real-world capture conditions.
The SCface database is among the most challenging databases for face recognition, as its images were taken in an uncontrolled indoor environment. It consists of 4160 images from 130 subjects. All images were taken at three distinct distances from the camera, with the cameras installed 2.25 m above the floor. At distance 1, the subject was positioned 4.20 m away from the camera, whereas for distances 2 and 3, the subject positions were 2.60 and 1.00 m, respectively. Outdoor light, coming through a window on one side, was the only source of illumination. The images were captured by five commercial surveillance video cameras of different quality and two infrared night-vision cameras, under uncontrolled lighting so as to mimic real-world conditions. Furthermore, a full frontal mug shot of each subject was captured using a high-quality photo camera under capture conditions exactly as would be expected in law enforcement. The high-quality photo camera for capturing visible-light mug shots was installed in the same way as the infrared camera but in a separate room with standard indoor lighting, and it was equipped with an adequate flash. In our experiments, the high-quality mug shot image of each person was used as the training gallery, while the remaining images from the five surveillance cameras and three distances were used as test images, as depicted in Figure 13. As the focus of this research is on images in the visible spectrum and single sample per person, especially for real-world surveillance systems, the images taken by the IR night-vision cameras and the rotated mug shots were not used. As preprocessing, all images in the SCface database were aligned based on the provided eye coordinates, so that the eyes lie on a horizontal line. The images were then scaled and cropped to 64 × 64 pixels, as implemented in [22].
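The geometry of the eye-based alignment can be sketched as below; the target inter-eye distance of 24 px inside the 64 × 64 crop is our assumption for illustration, not a value from [22], and the function name is ours.

```python
import numpy as np

def alignment_params(left_eye, right_eye, eye_dist_target=24.0):
    """Sketch of eye-based alignment geometry: the rotation that
    levels the eye line, and the scale that maps the inter-eye
    distance to a fixed value inside the 64x64 crop.  The 24 px
    target distance is an assumption, not a value from [22]."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    dx, dy = rx - lx, ry - ly
    angle = np.degrees(np.arctan2(dy, dx))  # rotate by -angle to level the eyes
    scale = eye_dist_target / np.hypot(dx, dy)
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)  # midpoint between the eyes
    return angle, scale, center

# Eyes already level and 40 px apart: no rotation, scale 24/40 = 0.6.
angle, scale, center = alignment_params((30, 42), (70, 42))
print(angle, scale, center)
```

These parameters would feed a rotate-scale-crop step (e.g., an affine warp about the eye midpoint) to produce the 64 × 64 aligned face.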
The performance of the proposed eBGP descriptor was evaluated using the histogram intersection, which computes the similarity between two discretized probability distributions or histogram vectors. Given two normalized histogram vectors H_1 and H_2 of n bins, the histogram intersection is

HI(H_1, H_2) = sum_{i=1}^{n} min(H_1(i), H_2(i)),

where a larger intersection value indicates greater similarity; each probe image is assigned the identity of the gallery image with the highest intersection score. The recognition rate is then the fraction of correctly identified probe images,

Recognition rate (%) = (number of correctly matched probes / total number of probes) * 100.
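A sketch of histogram-intersection matching under the single-gallery-image-per-subject protocol; the function names are ours.

```python
import numpy as np

def hist_intersection(h1, h2):
    """Histogram intersection: sum of bin-wise minima of two
    (discretized) distributions; larger means more similar."""
    return float(np.minimum(h1, h2).sum())

def match(probe, gallery):
    """Return the index of the gallery histogram with the highest
    intersection score (nearest-neighbor classification with one
    gallery image per subject)."""
    scores = [hist_intersection(probe, g) for g in gallery]
    return int(np.argmax(scores))

gallery = [np.array([0.5, 0.3, 0.2]),   # subject 0
           np.array([0.1, 0.1, 0.8])]   # subject 1
probe = np.array([0.4, 0.4, 0.2])
print(match(probe, gallery))  # 0  (intersection 0.9 vs 0.4)
```

The recognition rate for a test set is simply the share of probes for which `match` returns the correct subject index.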
It is vital to stress that the classifier plays a decisive role in achieving a better recognition rate. In this research, the experiments were designed to focus on the recognition rate improvement due to macrostructure information fusion. Hence, the recognition rates of the proposed eBGP descriptor and its baseline BGP descriptor were computed and compared to verify the improvement. For comparative analysis, the results of the BGP descriptor on the SCface database were produced by running the BGP code requested from the authors of [14]; this ensures that the results can be analyzed without any concern about their validity. Since Huang and Yin [14] did not use the SCface database in their work, the BGP code was adapted to work with it.
4.1 Experiment settings and preprocessing
As a preprocessing step, each image is first transformed into OIGM images using the same method used by the BGP descriptor. OIGM images are then divided into
4.2 Results of patch-based topology
For better presentation, several notations are used to describe the experiment setup and implementation. BGPM(
Table 1 shows the performance of the proposed descriptor on the SCface database, where eBGPM(16;2) and eBGPM(8;1) represent the extended BGPM (eBGPM) with the structures of Figure 5(a) and Figure 5(b), respectively. The results of BGPM(16;2) and BGPM(8;1) represent the baseline descriptor. As mentioned earlier in this section, the images of the SCface database were captured by five cameras at three different distances. Table 1 reports the recognition rate for each set and the average recognition rate over all cameras. The recognition rate for each set was calculated based on Eqs. (8) and (9).
Table 1. Recognition rate (%) of the proposed eBGP descriptor on the SCface database using the patch-based topology.

| Distance | Descriptor | Camera 1 | Camera 2 | Camera 3 | Camera 4 | Camera 5 | Average |
|---|---|---|---|---|---|---|---|
| 1 | BGPM(8;1) | 3.08 | 0.77 | 3.08 | 3.08 | 5.38 | 3.08 |
| 1 | BGPM(16;2) | 6.15 | 4.62 | 4.62 | 3.85 | 5.38 | 4.92 |
| 1 | eBGPM(8;1) | 4.62 | 1.54 | 4.62 | 3.85 | 6.15 | 4.16 |
| 1 | eBGPM(16;2) | 3.85 | 7.69 | 5.38 | 5.38 | 8.46 | 6.15 |
| 2 | BGPM(8;1) | 16.15 | 12.31 | 6.92 | 11.54 | 13.85 | 12.15 |
| 2 | BGPM(16;2) | 23.85 | 13.85 | 7.69 | 12.31 | 13.08 | 14.16 |
| 2 | eBGPM(8;1) | 20.77 | 13.85 | 10.77 | 16.92 | 16.15 | 15.69 |
| 2 | eBGPM(16;2) | 23.08 | 17.69 | 13.85 | 16.92 | 16.15 | 17.54 |
| 3 | BGPM(8;1) | 15.38 | 19.23 | 10.00 | 16.92 | 11.54 | 14.61 |
| 3 | BGPM(16;2) | 18.46 | 20.00 | 16.15 | 14.62 | 11.54 | 16.15 |
| 3 | eBGPM(8;1) | 19.23 | 17.69 | 11.54 | 17.69 | 13.08 | 15.85 |
| 3 | eBGPM(16;2) | 16.15 | 16.15 | 15.38 | 16.15 | 17.69 | 16.30 |
From Table 1, it can be seen that none of the descriptors achieved a recognition rate higher than 35% over all cameras and distances. In particular, the images of distance 1 recorded the lowest recognition rate, with an average of 4.58%, while the images of distances 2 and 3 achieved better recognition rates, with averages of 14.89 and 15.73%, respectively. Table 1 also shows that eBGPM(8;1) slightly boosted the performance compared with BGPM(8;1) at all distances, with the largest improvement over BGPM(8;1) at distance 2, where the average recognition rate increased by 3.54%. On the contrary, eBGPM(16;2) shows mixed results with respect to its baseline BGPM(16;2); a performance drop can be observed in the camera 1 results, where distance 1, distance 2, and distance 3 all show a lower recognition rate than the baseline descriptor. Similar to eBGPM(8;1), eBGPM(16;2) achieved its highest recognition rate on the distance 2 images compared to those of distances 1 and 3. This is because the images of distance 1, acquired at 4.20 m, are low in resolution and small in size; moreover, scaling and cropping them to 64 × 64 pixels leads to loss of quality and of some dominant features. On the other hand, the images of distance 3 are higher in quality and detail; however, as the subjects are closer to the camera, which is installed 2.25 m above the floor, the upper half of the face dominates the captured images in most natural head positions, as depicted in Figure 14. Figure 14 demonstrates that the images of distance 2 are slightly better in quality than those of the other two distances, although they still suffer from head position. This explains the superiority of the descriptors at this distance.
Due to these discouraging results from both the proposed eBGP descriptor and its baseline BGP, extra experiments were conducted on the SCface database. Since Table 1 shows that the recognition rate improves as the spatial resolution increases, the BGPM descriptor was first extended to the larger spatial resolution of (24,3). Even though including the macrostructure in eBGP increased the recognition rate, the overall recognition rate is still too low for realistic applications. This might be because the structural patterns and the OIGM image were extracted from low-resolution and deformed images (after scaling and cropping). Hence, two additional descriptors were designed to investigate the effectiveness of structural patterns and the OIGM image when exploiting the macrostructure information from low-resolution images. These descriptors still use BGPM to exploit information from the microstructure, but they extract the macrostructure information in a different way.
The first additional descriptor, denoted as Type I
Table 2. Recognition rate (%) of BGPM(24;3) and the additional Type IP and Type IIP descriptors on the SCface database using the patch-based topology.

| Distance | Descriptor | Camera 1 | Camera 2 | Camera 3 | Camera 4 | Camera 5 | Average |
|---|---|---|---|---|---|---|---|
| 1 | BGPM(24;3) | 5.38 | 2.31 | 4.62 | 4.62 | 5.38 | 4.62 |
| 1 | Type IP | 3.85 | 6.92 | 4.62 | 6.92 | 3.85 | 6.00 |
| 1 | Type IIP | 10.77 | 6.92 | 6.92 | 5.38 | 10.77 | 8.31 |
| 2 | BGPM(24;3) | 21.54 | 16.15 | 13.08 | 16.15 | 15.38 | 16.46 |
| 2 | Type IP | 23.85 | 20.00 | 13.85 | 19.23 | 15.38 | 18.46 |
| 2 | Type IIP | 34.62 | 25.38 | 20.00 | 25.38 | 21.54 | 25.38 |
| 3 | BGPM(24;3) | 20.00 | 18.46 | 14.62 | 16.15 | 11.54 | 16.15 |
| 3 | Type IP | 16.92 | 16.92 | 14.62 | 16.92 | 16.15 | 16.31 |
| 3 | Type IIP | 22.31 | 23.08 | 15.38 | 23.85 | 16.92 | 20.31 |
The results in Table 2 show that the Type IIP descriptor achieved a better recognition rate than the other descriptors. They also illustrate that Type IIP performed better on the images of distance 2 than on those of distances 1 and 3. Furthermore, it is notable that employing BGPM(24;3) at a larger spatial resolution did not improve the recognition rate as much as Type IIP did.
4.3 Results of circular-based topology
As described in Section 3.2, the macrostructure information is exploited from the outer circle, which always has the larger spatial resolution.
The performance of the Type Ic and Type IIc descriptors on the SCface database at distances 1, 2, and 3 is presented in Tables 3, 4, and 5, respectively. Similar to the results obtained with the patch-based topology, the average recognition rate of the distance 1 images over all cameras is the lowest compared to distances 2 and 3, as shown in Table 3. One noteworthy observation is that most Type IIc descriptors, at any spatial resolution, achieved a better recognition rate than the Type Ic descriptors. Taking a closer look at the descriptors' performance in Table 5, the Type IIc descriptor with spatial resolution of
Table 3. Recognition rate (%) of the circular-based eBGP descriptors on the SCface database at distance 1.

| Inner ring | Outer ring | Type | Camera 1 | Camera 2 | Camera 3 | Camera 4 | Camera 5 | Average |
|---|---|---|---|---|---|---|---|---|
| (8,1) | (16,2) | Ic | 5.38 | 3.85 | 3.85 | 3.08 | 4.62 | 4.12 |
| (8,1) | (16,2) | IIc | 6.92 | 6.92 | 6.92 | 6.15 | 7.69 | 6.92 |
| (8,1) | (24,3) | Ic | 5.38 | 4.62 | 4.62 | 3.08 | 5.38 | 4.62 |
| (8,1) | (24,3) | IIc | 7.69 | 5.38 | 6.92 | 7.69 | 6.92 | 6.92 |
| (8,1) | (32,4) | Ic | 5.38 | 6.15 | 5.38 | 6.15 | 6.15 | 5.84 |
| (8,1) | (32,4) | IIc | 6.92 | 6.15 | 6.92 | 8.46 | 6.15 | 6.92 |
| (8,1) | (40,5) | Ic | 5.38 | 7.69 | 4.62 | 6.15 | 6.15 | 6.00 |
| (8,1) | (40,5) | IIc | 9.23 | 7.69 | 6.92 | 7.69 | 6.15 | 7.54 |
| (16,2) | (24,3) | Ic | 5.38 | 4.62 | 6.15 | 3.85 | 6.15 | 5.23 |
| (16,2) | (24,3) | IIc | 6.92 | 6.92 | 5.38 | 6.15 | 7.69 | 6.61 |
| (16,2) | (32,4) | Ic | 6.15 | 6.92 | 7.69 | 4.62 | 6.15 | 6.31 |
| (16,2) | (32,4) | IIc | 10.00 | 6.92 | 3.85 | 7.69 | 7.69 | 7.23 |
| (16,2) | (40,5) | Ic | 5.38 | 6.92 | 3.85 | 6.15 | 6.15 | 5.69 |
| (16,2) | (40,5) | IIc | 8.46 | 7.69 | 6.15 | 8.46 | 6.92 | 7.54 |
| (24,3) | (32,4) | Ic | 5.38 | 3.85 | 6.92 | 3.85 | 6.15 | 5.23 |
| (24,3) | (32,4) | IIc | 12.31 | 7.69 | 7.69 | 9.23 | 6.92 | 8.77 |
| (24,3) | (40,5) | Ic | 6.15 | 6.15 | 5.38 | 2.31 | 6.92 | 5.38 |
| (24,3) | (40,5) | IIc | 10.77 | 8.46 | 9.23 | 9.23 | 4.62 | 8.46 |
| (32,4) | (40,5) | Ic | 4.62 | 5.38 | 6.15 | 2.31 | 7.69 | 5.23 |
| (32,4) | (40,5) | IIc | 9.23 | 7.69 | 10.00 | 10.77 | 5.38 | 8.61 |
| Baseline | | BGPM(8;1) | 3.08 | 0.77 | 3.08 | 3.08 | 5.38 | 3.08 |
| Baseline | | BGPM(16;2) | 6.15 | 4.62 | 4.62 | 3.85 | 5.38 | 4.92 |
Table 4. Recognition rate (%) of the circular-based eBGP descriptors on the SCface database at distance 2.

| Inner ring | Outer ring | Type | Camera 1 | Camera 2 | Camera 3 | Camera 4 | Camera 5 | Average |
|---|---|---|---|---|---|---|---|---|
| (8,1) | (16,2) | Ic | 20.77 | 12.31 | 10.00 | 11.54 | 14.62 | 13.85 |
| (8,1) | (16,2) | IIc | 25.38 | 19.23 | 15.38 | 17.69 | 14.62 | 18.46 |
| (8,1) | (24,3) | Ic | 24.62 | 15.38 | 11.54 | 15.38 | 16.92 | 16.77 |
| (8,1) | (24,3) | IIc | 25.38 | 21.54 | 16.15 | 19.23 | 14.62 | 19.38 |
| (8,1) | (32,4) | Ic | 26.92 | 17.69 | 15.38 | 17.69 | 13.85 | 18.31 |
| (8,1) | (32,4) | IIc | 23.85 | 19.23 | 16.92 | 18.46 | 15.38 | 18.77 |
| (8,1) | (40,5) | Ic | 29.23 | 19.23 | 13.08 | 19.23 | 13.85 | 18.92 |
| (8,1) | (40,5) | IIc | 23.08 | 19.23 | 15.38 | 17.69 | 16.92 | 18.46 |
| (16,2) | (24,3) | Ic | 26.15 | 16.15 | 11.54 | 13.08 | 15.38 | 16.46 |
| (16,2) | (24,3) | IIc | 25.38 | 22.31 | 16.15 | 21.54 | 19.23 | 20.92 |
| (16,2) | (32,4) | Ic | 25.38 | 18.46 | 13.85 | 13.85 | 13.85 | 17.08 |
| (16,2) | (32,4) | IIc | 24.62 | 21.54 | 17.69 | 21.54 | 20.00 | 21.08 |
| (16,2) | (40,5) | Ic | 25.38 | 20.00 | 13.08 | 20.77 | 15.38 | 18.92 |
| (16,2) | (40,5) | IIc | 24.62 | 20.77 | 16.92 | 20.00 | 17.69 | 20.00 |
| (24,3) | (32,4) | Ic | 20.77 | 18.46 | 12.31 | 13.85 | 14.62 | 16.00 |
| (24,3) | (32,4) | IIc | 28.46 | 24.62 | 16.92 | 20.77 | 20.77 | 22.31 |
| (24,3) | (40,5) | Ic | 22.31 | 17.69 | 14.62 | 16.15 | 14.62 | 17.08 |
| (24,3) | (40,5) | IIc | 28.46 | 23.85 | 15.38 | 16.15 | 16.92 | 20.15 |
| (32,4) | (40,5) | Ic | 22.31 | 16.92 | 13.85 | 17.69 | 13.85 | 16.92 |
| (32,4) | (40,5) | IIc | 25.38 | 25.38 | 16.92 | 20.00 | 16.15 | 20.77 |
| Baseline | | BGPM(8;1) | 16.15 | 12.31 | 6.92 | 11.54 | 13.85 | 12.15 |
| Baseline | | BGPM(16;2) | 23.85 | 13.85 | 7.69 | 12.31 | 13.08 | 14.16 |
Table 5. Recognition rate (%) of the circular-based eBGP descriptors on the SCface database at distance 3.

| Inner ring | Outer ring | Type | Camera 1 | Camera 2 | Camera 3 | Camera 4 | Camera 5 | Average |
|---|---|---|---|---|---|---|---|---|
| (8,1) | (16,2) | Ic | 20.77 | 21.54 | 13.85 | 15.38 | 13.85 | 17.08 |
| (8,1) | (16,2) | IIc | 25.38 | 26.15 | 20.00 | 23.85 | 13.85 | 21.85 |
| (8,1) | (24,3) | Ic | 23.08 | 20.77 | 13.08 | 20.00 | 11.54 | 17.69 |
| (8,1) | (24,3) | IIc | 23.08 | 24.62 | 20.00 | 23.85 | 16.92 | 21.69 |
| (8,1) | (32,4) | Ic | 20.00 | 21.54 | 14.62 | 17.69 | 11.54 | 17.08 |
| (8,1) | (32,4) | IIc | 20.77 | 24.62 | 17.69 | 21.54 | 14.62 | 19.85 |
| (8,1) | (40,5) | Ic | 19.23 | 17.69 | 15.38 | 18.46 | 10.77 | 16.31 |
| (8,1) | (40,5) | IIc | 23.85 | 23.85 | 15.38 | 20.77 | 13.85 | 19.54 |
| (16,2) | (24,3) | Ic | 20.77 | 20.77 | 13.08 | 17.69 | 13.08 | 17.08 |
| (16,2) | (24,3) | IIc | 26.15 | 25.38 | 20.77 | 24.62 | 19.23 | 23.23 |
| (16,2) | (32,4) | Ic | 20.77 | 18.46 | 16.15 | 19.23 | 10.00 | 16.92 |
| (16,2) | (32,4) | IIc | 24.62 | 22.31 | 16.15 | 22.31 | 16.92 | 20.46 |
| (16,2) | (40,5) | Ic | 19.23 | 19.23 | 15.38 | 18.46 | 12.31 | 16.92 |
| (16,2) | (40,5) | IIc | 26.15 | 21.54 | 16.15 | 22.31 | 11.54 | 19.54 |
| (24,3) | (32,4) | Ic | 17.69 | 16.15 | 13.85 | 17.69 | 9.23 | 14.92 |
| (24,3) | (32,4) | IIc | 23.08 | 20.77 | 19.23 | 21.54 | 15.38 | 20.00 |
| (24,3) | (40,5) | Ic | 20.00 | 16.15 | 13.85 | 19.23 | 10.77 | 16.00 |
| (24,3) | (40,5) | IIc | 23.85 | 21.54 | 16.92 | 18.46 | 16.15 | 19.38 |
| (32,4) | (40,5) | Ic | 16.15 | 15.38 | 13.08 | 18.46 | 10.00 | 14.61 |
| (32,4) | (40,5) | IIc | 20.77 | 20.77 | 16.92 | 21.54 | 10.77 | 18.15 |
| Baseline | | BGPM(8;1) | 15.38 | 19.23 | 10.00 | 16.92 | 11.54 | 14.61 |
| Baseline | | BGPM(16;2) | 18.46 | 20.00 | 16.15 | 14.62 | 11.54 | 16.15 |
For further evaluation, Table 6 compares the proposed eBGP descriptor with state-of-the-art methods, namely PCA [27], SIFT and sparse representation-based classification (SRC) [28], and edge-preserving super-resolution (SR) [29], on the SCface database at distance 2. All methods were evaluated under the same test conditions, where only one mug shot image per subject is used for training, while the remaining low-resolution images from all cameras are used as probe images. The results show that the proposed eBGP-based descriptors achieved the highest recognition rates, especially eBGPM(16;2) (Type IIP), which has the best recognition rate over all camera images. Exploiting information from the macrostructure raised the BGPM result from fifth highest to first. This indicates the importance of the macrostructure information in shaping a complete face representation in the single-reference face recognition problem.
Table 6. Recognition rate (%) of the proposed eBGP descriptors compared with state-of-the-art methods on the SCface database at distance 2.

| Descriptor | Camera 1 | Camera 2 | Camera 3 | Camera 4 | Camera 5 | Average |
|---|---|---|---|---|---|---|
| PCA [27] | 7.70 | 7.70 | 3.90 | 3.90 | 7.70 | 6.18 |
| SIFT [28] | 13.08 | 12.31 | 8.46 | 15.38 | 9.23 | 11.69 |
| BGPM(16;2) | 23.85 | 13.85 | 7.69 | 12.31 | 13.08 | 14.16 |
| SRC [28] | 29.23 | 16.15 | 12.31 | 25.38 | 13.08 | 19.23 |
| Edge-preserving SR [29] | 26.92 | 21.54 | 15.38 | 24.61 | 15.38 | 20.77 |
| eBGPM(24;3)(32;4) (circular) | 28.46 | 24.62 | 16.92 | 20.77 | 20.77 | 22.31 |
| eBGPM(16;2) (Type IIP) | 34.62 | 25.38 | 20.00 | 25.38 | 21.54 | 25.38 |
5. Conclusion
In this paper, an extended BGP (eBGP) descriptor, which incorporates macrostructure information into the BGP descriptor, has been proposed to improve the overall descriptor performance in the single-reference face recognition problem. Results obtained from a series of experiments on the SCface database showed that fusing information extracted from the micro- and macrostructures is capable of boosting the performance of the BGP descriptor. The proposed eBGP descriptor was tested with the patch-based and circular-based topologies; overall, the circular-based topology outperformed the patch-based topology in terms of recognition rate. In the patch-based topology, the 5 × 5 structure recorded a larger gain in recognition rate than the 3 × 3 structure, while in the circular-based topology, larger spatial resolutions showed larger gains in recognition performance. Moreover, fusing micro- and macrostructure information extracted from the OIGM and grayscale images, respectively, raised the recognition rate further; indeed, the Type IIc setup always showed a better performance boost than Type Ic. With regard to the thresholding implementation, it is worth mentioning that the local mean is on par with the local median for this descriptor and does not offer an additional boost in the patch-based topology.
Acknowledgments
The authors gratefully acknowledge Universiti Sains Malaysia for funding this work under the Universiti Sains Malaysia Research University Grant (RUI) no. 1001/PELECT/8014056.
References
1. Radman A, Suandi SA. Robust face pseudo-sketch synthesis and recognition using morphological-arithmetic operations and HOG-PCA. Multimedia Tools and Applications. 2018;77(19):25311-25332
2. Matta F, Dugelay J-L. Person recognition using facial video information: A state of the art. Journal of Visual Languages and Computing. 2009;20(3):180-187
3. De-la-Torre M, Granger E, Radtke PVW, Sabourin R, Gorodnichy DO. Partially-supervised learning from facial trajectories for face recognition in video surveillance. Information Fusion. 2015;24:31-53
4. Zakaria Z, Suandi SA, Mohamad-Saleh J. Hierarchical skin-AdaBoost-neural network (H-SKANN) for multi-face detection. Applied Soft Computing. 2018;68:172-190
5. Lowe DG. Object recognition from local scale-invariant features. In: Proceedings of the Seventh IEEE International Conference on Computer Vision. 20-27 September 1999; Kerkyra, Greece. pp. 1150-1157
6. Bay H, Tuytelaars T, Van Gool L. SURF: Speeded up robust features. In: European Conference on Computer Vision. 7-13 May 2006; Graz, Austria. pp. 404-417
7. Ahonen T, Hadid A, Pietikainen M. Face description with local binary patterns: Application to face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2006;28(12):2037-2041
8. Dalal N, Triggs B. Histograms of oriented gradients for human detection. In: International Conference on Computer Vision and Pattern Recognition. 20-25 June 2005; San Diego, CA. pp. 886-893
9. Wold S, Esbensen K, Geladi P. Principal component analysis. Chemometrics and Intelligent Laboratory Systems. 1987;2(1-3):37-52
10. Belhumeur PN, Hespanha JP, Kriegman DJ. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1997;19(7):711-720
11. Bartlett MS, Movellan JR, Sejnowski TJ. Face recognition by independent component analysis. IEEE Transactions on Neural Networks. 2002;13(6):1450-1464
12. Ren J, Jiang X, Yuan J. Noise-resistant local binary pattern with an embedded error-correction mechanism. IEEE Transactions on Image Processing. 2013;22(10):4049-4060
13. Ojala T, Pietikäinen M, Mäenpää T. Gray scale and rotation invariant texture classification with local binary patterns. In: European Conference on Computer Vision. 26 June-1 July 2000; Dublin, Ireland. pp. 404-420
14. Huang W, Yin H. Robust face recognition with structural binary gradient patterns. Pattern Recognition. 2017;68:126-140
15. Liu L, Lao S, Fieguth PW, Guo Y, Wang X, Pietikäinen M. Median robust extended local binary pattern for texture classification. IEEE Transactions on Image Processing. 2016;25(3):1368-1381
16. Ojala T, Pietikäinen M, Harwood D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognition. 1996;29(1):51-59
17. Lee K-C, Ho J, Kriegman DJ. Acquiring linear subspaces for face recognition under variable lighting. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2005;27(5):684-698
18. Martinez AM. The AR face database. CVC Technical Report 24. 1998
19. Gross R, Matthews I, Cohn J, Kanade T, Baker S. Multi-PIE. Image and Vision Computing. 2010;28(5):807-813
20. Phillips PJ, Rizvi SA, Rauss PJ. The FERET evaluation methodology for face-recognition algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2000;22(10):1090-1104
21. Huang GB, Ramesh M, Berg T, Learned-Miller E. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In: European Conference on Computer Vision Workshop on Faces in Real-Life Images. October 2008. pp. 1-11
22. Grgic M, Delac K, Grgic S. SCface—surveillance cameras face database. Multimedia Tools and Applications. 2011;51(3):863-879
23. Liu L, Fieguth P, Zhao G, Pietikäinen M, Hu D. Extended local binary patterns for face recognition. Information Sciences. 2016;358:56-72
24. Liao S, Zhu X, Lei Z, Zhang L, Li SZ. Learning multi-scale block local binary patterns for face recognition. In: International Conference on Biometrics. 27-29 August 2007; Seoul, Korea. pp. 828-837
25. Liu L, Fieguth P, Guo Y, Wang X, Pietikäinen M. Local binary features for texture classification: Taxonomy and experimental study. Pattern Recognition. 2017;62:135-160
26. Liu L, Zhao L, Long Y, Kuang G, Fieguth P. Extended local binary patterns for texture classification. Image and Vision Computing. 2012;30(2):86-99
27. Martínez AM, Kak AC. PCA versus LDA. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2001;23(2):228-233
28. Hu X, Peng S, Wang L, Yang Z, Li Z. Surveillance video face recognition with single sample per person based on 3D modeling and blurring. Neurocomputing. 2017;235:46-58
29. Mandal S, Thavalengal S, Sao AK. Explicit and implicit employment of edge-related information in super-resolving distant faces for recognition. Pattern Analysis and Applications. 2016;19(3):867-884