Open access peer-reviewed chapter

A MANOVA of LBP Features for Face Recognition

By Yuchun Fang, Jie Luo, Qiyun Cai, Wang Dai, Ying Tan and Gong Cheng

Submitted: November 9th, 2010. Reviewed: May 7th, 2011. Published: July 27th, 2011.

DOI: 10.5772/20487


1. Introduction

Face recognition is one of the most broadly researched subjects in pattern recognition, and feature extraction is a key step in it. As an effective texture description operator, the Local Binary Pattern (LBP) feature was first introduced into face recognition by Ahonen et al. Because of its simplicity and efficiency, the LBP feature is widely applied and has become one of the benchmark features for face recognition. The basic idea of the LBP feature is to compute the binary relation between the central pixel and its local neighborhood. The image is then described with a multi-regional histogram sequence of the LBP-coded pixels. Since most of the LBP patterns in images are uniform patterns, Ojala et al, 2002 proposed the Uniform Local Binary Pattern (ULBP). By further discarding the direction information of the LBP feature, they proposed the Rotation Invariant Uniform Local Binary Pattern (RIU-LBP) feature. The ULBP feature partly reduces the dimension while retaining most of the image information. RIU-LBP greatly reduces the dimension of the feature, but its performance in face recognition decreases drastically. This chapter discusses the major factors of the ULBP and RIU-LBP features and introduces an improved RIU-LBP feature based on the factor analysis.

Many previous works have endeavored to modify the LBP features. Zhang and Shan et al, 2006 proposed the Histogram Sequence of Local Gabor Binary Pattern (HSLGBP), whose basic idea is to perform LBP coding on multi-resolution and multi-scale versions of the image, thereby enhancing robustness to variations of expression and illumination. Jin et al, 2004 treated the center pixel value as the last bit of the binary sequence; the resulting LBP operator can effectively describe the local shape of the face together with its texture information. Zhang and Liao et al, 2007a, 2007b proposed the multi-block LBP algorithm (MB-LBP), which compares the mean of pixels in the center block with the mean of pixels in each neighborhood block. Zhao & Gao, 2008 proposed the multi-directional binary pattern (MBP) algorithm to perform LBP coding from four different directions. Yan et al, 2007 improved robustness by fusing multi-radius LBP features. He et al, 2005 argued that every sub-block contains different information and proposed an enhanced LBP feature: the original image is decomposed into four spectral images from which the Uniform LBP codes are calculated, and a waterfall model combines them into the final feature. In order to effectively extract the global and local features of face images, Wang Wei et al, 2009 proposed the LBP pyramid algorithm: through multi-scale analysis, it first constructs a pyramid of the face image and then concatenates the histogram sequences in a hierarchical way to form the final feature.

No matter how the ULBP features are modified, the blocking number, the sampling density, the sampling radius and the image resolution dominantly control the performance of the algorithms. They also drastically affect the memory consumption and computational efficiency of the final feature. However, the values of these factors have to be pre-selected, and in most previous work they were chosen empirically, which ignores the degree of influence of each factor; such empirical values are hard to generalize to other databases. In order to seek a general conclusion, in this chapter we use the statistical method of multivariate analysis of variance (MANOVA) to assess the contribution of the four factors to face recognition with both the ULBP and RIU-LBP features. Besides, we study the correlation among the factors and explore which factors play a key role in face recognition. We also analyze the characteristics of the factors and discuss how their influence changes across different LBP features. Based on the factor analysis, we propose a modified RIU-LBP feature.

The chapter is organized as follows. In Section 2, we introduce the LBP operators, the LBP features and the four major factors. In Section 3, we illustrate how the MANOVA is applied in exploring the importance of four factors and the results obtained for the two types of LBP features. Based on the above analysis results, an improved RIU-LBP algorithm is introduced in Section 4, which is a fusion of multi-directional RIU-LBP features. We summarize the chapter with several key conclusions in Section 5.

2. LBP features and factors

The LBP feature is a sequence of histograms computed over blocked sub-images of the face image coded with an LBP operator. The image is divided into rectangular regions and histograms of the LBP codes are calculated over each of them. Finally, the histograms of all regions are concatenated into a single one that represents the face image.

2.1. Three LBP operators

With the variation of the LBP operator, the obtained LBP features are of different computation complexity. Three types of LBP operators are compared in this chapter.

The basic LBP operator is formed by thresholding the neighborhood pixels into binary code 0 or 1 in comparison with the gray value of the center pixel. The central pixel is then coded with these sequential binary values. Such coding, denoted as LBP(P,R), is determined by the radius of the neighborhood R and the sampling density P. With various values of R and P, the general LBP operator can adapt to different scales of texture features, as shown in Figure 1. The order of the binary code preserves the direction information of the texture around each pixel, with 2^P possible variations.
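As an illustration of the thresholding described above, the following is a minimal sketch of the basic LBP(8,1) operator. It assumes a hypothetical image layout (a list of rows of integer gray levels) and a particular clockwise neighbor ordering; the chapter itself does not prescribe either.

```python
def lbp_8_1(image, y, x):
    """Code pixel (y, x) by thresholding its 8 neighbors against it."""
    center = image[y][x]
    # Clockwise neighborhood order starting at the top-left; the ordering
    # is what preserves the 2^P directional variations mentioned above.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for dy, dx in offsets:
        code = (code << 1) | (1 if image[y + dy][x + dx] >= center else 0)
    return code  # an integer in [0, 255]

# Example: a flat region gives the all-ones code (every neighbor >= center).
flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
print(lbp_8_1(flat, 1, 1))  # 255
```

Border pixels are skipped here for simplicity; a full implementation would also interpolate neighbors for radii where sample points fall between pixels.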

When the circular binary pattern contains at most two 0-to-1 or 1-to-0 transitions, it is called a uniform pattern. The Uniform LBP operator LBP(P,R)^{u2} codes the pixel with uniform patterns and assigns all non-uniform patterns the same value. Its coding complexity is P(P−1)+2.

The Rotation Invariant Uniform (RIU) LBP operator LBP(P,R)^{Riu2} is another very popular texture operator. It neglects the order of the binary coding and codes the center pixel by simply counting the number of 1s in the neighborhood, as defined in Equation (Eq. 1):

LBP(P,R)^{Riu2} = { Σ_{p=0}^{P−1} s(g(p) − g(c)),   for uniform patterns
                  { P + 1,                           otherwise            (Eq. 1)

where c is the center pixel, g(·) denotes the gray level of a pixel and s(·) is the binary thresholding (step) function. The coding complexity of the RIU-LBP operator is P+2.
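The uniform test and Eq. 1 can be sketched as follows, assuming the neighborhood has already been thresholded into a circular list of 0/1 bits. The helper names are illustrative, not from the chapter.

```python
def transitions(bits):
    """Number of 0->1 / 1->0 changes in the circular binary pattern."""
    return sum(bits[i] != bits[(i + 1) % len(bits)] for i in range(len(bits)))

def riu2_code(bits):
    """Eq. 1: count of 1s for uniform patterns, P+1 otherwise."""
    P = len(bits)
    return sum(bits) if transitions(bits) <= 2 else P + 1

print(riu2_code([1, 1, 1, 0, 0, 0, 0, 0]))  # uniform pattern: 3
print(riu2_code([1, 0, 1, 0, 1, 0, 1, 0]))  # non-uniform: P+1 = 9
```

This makes the P+2 coding complexity visible: the possible codes are 0..P for uniform patterns plus the single P+1 bin for all non-uniform ones.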

Figure 1.

The General LBP operator

2.2. Three LBP features

After the original face image is transformed into an LBP image by one of the LBP operators, the LBP image is blocked into M-by-N rectangles (see examples in Figure 2) to preserve the spatial structure of the face, and an LBP histogram is then calculated for each block to statistically reflect edge sharpness, region flatness, the existence of special points and other attributes of the local region. The LBP feature is the concatenated sequence of all M-by-N LBP histograms. Hence, the LBP feature is intrinsically a statistical texture description of the image consisting of a sequence of histograms of blocked sub-images. The blocking number and the sampling density determine the feature dimension. For the three LBP operators introduced in Section 2.1, the corresponding LBP features are denoted as LBP(P,R)(M,N), LBP(P,R)^{u2}(M,N) and LBP(P,R)^{Riu2}(M,N) respectively, taking the blocking parameters into consideration.
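The block-and-concatenate step can be sketched as follows, assuming a coded image stored as a 2-D list of LBP codes (a hypothetical layout). `n_bins` is P+2 for RIU-LBP, 256 for basic LBP(8,R), and so on.

```python
def lbp_feature(coded, M, N, n_bins):
    """coded: 2-D list of LBP codes; returns the M*N*n_bins feature vector."""
    H, W = len(coded), len(coded[0])
    feature = []
    for i in range(M):
        for j in range(N):
            hist = [0] * n_bins
            # Block (i, j) covers rows [i*H//M, (i+1)*H//M), similarly for
            # columns, so each pixel is counted in exactly one block.
            for y in range(i * H // M, (i + 1) * H // M):
                for x in range(j * W // N, (j + 1) * W // N):
                    hist[coded[y][x]] += 1
            feature.extend(hist)
    return feature

coded = [[0, 1, 2, 3], [1, 1, 2, 2], [3, 3, 0, 0], [2, 2, 1, 1]]
f = lbp_feature(coded, 2, 2, 4)
print(len(f))  # 2*2*4 = 16
```

The returned vector is the histogram sequence described above: block order preserves the spatial structure, while each histogram discards pixel positions within its block.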

Due to their different coding complexities, the three LBP features LBP(P,R)(M,N), LBP(P,R)^{u2}(M,N) and LBP(P,R)^{Riu2}(M,N) are of different dimensions, as shown in Equations (Eq. 2) to (Eq. 4) respectively. For the former two types, increasing the sampling density results in an explosion of dimension. Examples of dimension comparison are listed in Table 1. The blocking number and the sampling density are the two major factors affecting the dimension of the LBP feature.

D = (M×N) × 2^P            (Eq. 2)
D = (M×N) × [P(P−1)+3]     (Eq. 3)
D = (M×N) × (P+2)          (Eq. 4)
M×N / P                           7×8 / 8    7×8 / 16    14×16 / 8
The general LBP  (M×N×2^P)        14336      28672       57344
Uniform LBP  (M×N×(P(P−1)+3))     3304       13608       13216
RIU-LBP  (M×N×(P+2))              560        1008        2240

Table 1.

Dimension comparison of the three LBP features (M×N denotes the blocking number, P denotes the sampling density)
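The dimension formulas (Eq. 2) to (Eq. 4) can be checked numerically against the Uniform LBP and RIU-LBP entries of Table 1 (7×8 = 56 blocks, 14×16 = 224 blocks); this is a verification sketch, with function names of our own choosing.

```python
def dim_general(M, N, P):
    return M * N * 2 ** P            # Eq. 2

def dim_uniform(M, N, P):
    # Eq. 3: P(P-1)+2 uniform codes plus one shared non-uniform bin
    return M * N * (P * (P - 1) + 3)

def dim_riu(M, N, P):
    return M * N * (P + 2)           # Eq. 4

print(dim_general(7, 8, 8))    # 14336
print(dim_uniform(7, 8, 8))    # 3304
print(dim_uniform(7, 8, 16))   # 13608
print(dim_uniform(14, 16, 8))  # 13216
print(dim_riu(7, 8, 8))        # 560
print(dim_riu(7, 8, 16))       # 1008
print(dim_riu(14, 16, 8))      # 2240
```

The quadratic-versus-linear growth in P is what motivates the low-density fusion of Section 4.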

2.3. The four factors of LBP feature

The blocking number, the sampling density, the sampling radius and the image resolution are four factors that determine the LBP features.

The blocking number and sampling density are two important initial parameters affecting the dimension and have received more attention in previous research. In addition, the blocking number and the image resolution determine the number of pixels of each sub-image, i.e. how much local facial information each sub-image contains. If the image resolution is H×W and the blocking number is M×N, then each sub-image contains [H/M]×[W/N] pixels, where [·] denotes rounding. For example, when the image resolution is 140*160 and the blocking number is 3×4, 7×8, 14×16 or 21×24, the sub-images contain 1880, 400, 100 and 49 pixels respectively, as shown in Figure 2. The positions of the neighbor points of the LBP operator are decided by the sampling radius, so the value of the radius also directly affects the LBP features.
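The per-block pixel counts quoted above follow directly from [H/M]×[W/N]; a quick check for the 140*160 image (using Python's round for the rounding [·]):

```python
def pixels_per_block(H, W, M, N):
    # [H/M] x [W/N] pixels per sub-image, with rounding as in the text
    return round(H / M) * round(W / N)

for M, N in [(3, 4), (7, 8), (14, 16), (21, 24)]:
    print((M, N), pixels_per_block(140, 160, M, N))
# (3, 4) 1880, (7, 8) 400, (14, 16) 100, (21, 24) 49
```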

Figure 2.

Comparison of the blocking number of sub-image

Among the four factors, the blocking number and the sampling density are the two that decide the dimension of the LBP features. As shown in Table 1, for different sampling densities the dimension of the LBP(8,1)^{u2}(8,7) feature is D=3304; when P doubles, D=13608 for LBP(16,1)^{u2}(8,7). Figure 2 shows that the larger the blocking number, the higher the dimension of the feature. Such a feature inevitably costs a huge amount of memory and lowers the computation speed. Does it deserve so much memory to improve precision by merely a few percent? We perform some preliminary experiments to find the answer.

For different values of the four factors, we compare the face recognition rate on a face database containing 2398 face images (1199 persons, 2 images each) selected from the FERET database. Experimental results are evaluated with the curve of the Cumulative Matching Score (CMS) with respect to rank, i.e. the rate of correct matching at or below a certain rank. The closer this curve is to the line CMS=1, the better the performance of the corresponding algorithm.

Figure 3 compares the recognition rates of the three LBP features. It can be observed that the Uniform LBP performs very close to the basic LBP but has a much lower dimension, while the recognition rate of the RIU-LBP feature decreases significantly due to the loss of direction information. The results indicate that it is good enough to adopt ULBP instead of the basic LBP feature, and that the direction information has a major impact on the recognition rate.

Figure 3.

Comparison of recognition rate for three LBP features (With blocking number 7 x 8, sampling density 8, sampling radius 2 and image resolution 70*80)

Figure 4 compares the performance of the same LBP feature under various blocking numbers and sampling densities. The comparison shows that a higher sampling density and a larger blocking number result in better performance. It demonstrates that these two factors affect not only the feature dimension but also the face recognition rate. Besides, the sampling radius determines the sampling neighborhood within sub-blocks, and the image resolution together with the blocking number determines the number of pixels of each sub-block.

Figure 4.

Comparison of recognition rate for different blocking number and sampling density for RIU-LBP (With sampling radius 2, image resolution 70*80 and 7 x 8 /4 denotes blocking number 7 x 8 and sampling density 4)

Figure 5 compares the performance of the same LBP feature under various sampling radii and image resolutions. The results show that a higher resolution and a larger sampling radius are better for face recognition. Although these two factors do not affect the feature dimension, they have an unneglectable impact on the recognition rate.

Figure 5.

Comparison of recognition rate for different sampling radius and image resolution for RIU-LBP (With blocking number 7 x 8, sampling density 8 and 140*160/1 denotes image resolution 140*160 and sampling radius 1)

3. The importance of four factors

As described above, when we use LBP features in face recognition, the blocking number, the sampling density, the sampling radius and the image resolution affect the recognition rate in varying degrees.

In most existing research, the values of these factors are selected empirically. Ahonen et al, 2004 compared different levels of the blocking number, the sampling density and the sampling radius based on the Uniform LBP feature. Under an image resolution of 130*150, they selected blocking number 7×7, sampling density 8 and radius 2 as a set of best values that balance the recognition rate and the feature dimension. Moreover, they noted that dropping the sampling density from 16 to 8 substantially reduces the feature dimension while lowering the recognition rate by only 3.1%. Later, Ahonen et al, 2006 also analyzed the effect of the blocking number on the recognition rate through several experiments, concluding that blocking number 6×6 is better than 4×4 in the case of less noise, and vice versa. Chen, 2008 added decision-level fusion to the LBP feature extraction method, selecting sampling density 8, radius 2, blocking number 4×4 and image resolution 128*128. Xie et al, 2009 proposed the LLGP algorithm and selected image resolution 80*88 and blocking number 8×11 as the initial parameters. Zhang et al, 2006 proposed the HSLGBP algorithm and also discussed the size of sub-images and its relationship with the recognition rate. Wang et al, 2008 used multi-scale LBP features to describe the face image; on this basis, they discussed the relationship between the blocking number and the recognition rate, and concluded that blocks that are too large or too small hurt the recognition rate. In some other papers, the researchers fixed the size of the sub-image determined by the blocking number and the image resolution.

However, many open problems cannot be explained with empirical values. What is the degree of impact of each of the four factors? Do they contribute equally? Are there interactions between pairs of factors? How do the parameters affect the recognition? How should the performance of different LBP features be compared? In order to answer these questions, we compare the four major factors for the two most typical LBP features, i.e. the ULBP and the RIU-LBP features, with MANOVA. Our purpose is to explore the contribution of the four factors to recognition and the correlation among them. The answers provide important guidance for the improvement of LBP features.

3.1. MANOVA

MANOVA is an extensively applied tool for multivariate analysis of variance in statistics. For our problem, the four variables to be explored are the resolution of the images, the blocking number, the sampling density and the sampling radius of the LBP operator in face recognition tasks. With MANOVA, we can identify whether the independent variables have notable effects and whether there exist notable interactions among them [12].

By denoting the four factors as follows:

I - Resolution of images; B - Blocking number; P - Sampling density of the LBP operator; R - Sampling radius of the LBP operator

Taking the face recognition rate as the dependent variable, the total sum of squared deviations S_T is decomposed as in Equation (Eq. 5):

S_T = S_B + S_P + S_R + S_I + S_{B×P} + S_{B×R} + S_{B×I} + S_{P×R} + S_{P×I} + S_{R×I} + S_E    (Eq. 5)

where S_{B×P} + S_{B×R} + S_{B×I} + S_{P×R} + S_{P×I} + S_{R×I} is the sum of the interaction terms, and S_E is the sum of squares of the errors.

MANOVA is based on the F-test, in which a larger F value and a smaller P value correspond to a more significant independent variable. Hence, the significance of the factors is evaluated by checking and comparing the F value and the P value. If the P value is less than a given threshold, the factor has a dominant effect, or there exists a notable interaction between two factors. The factor with the largest F value has the most important effect.
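To make the F value concrete, here is a pure-Python sketch of the one-way F statistic (between-level variance over error variance). The full MANOVA of this chapter also includes interaction terms, and the recognition rates below are hypothetical, not from the experiments.

```python
def f_statistic(groups):
    """One-way ANOVA F statistic for a list of groups of observations."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: how far level means sit from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group (error) sum of squares
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical recognition rates at three levels of one factor
levels = [[0.58, 0.59, 0.57], [0.70, 0.71, 0.70], [0.73, 0.72, 0.74]]
print(f_statistic(levels))  # a large F means the factor is significant
```

A library such as SciPy's `f_oneway` or a dedicated MANOVA routine would also supply the corresponding P value.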

3.2. Experiment design of factors

We use the same face database as described in Section 2.3 in MANOVA. As analyzed in Section 2.3, the general LBP feature has a much higher dimension than the ULBP feature but performs similarly, so we conduct the experiments only for the ULBP and RIU-LBP features.

We set three or four levels for each factor, as shown in Table 2. With the RIU-LBP feature, 108 sets of experimental data were obtained. With the ULBP feature, 81 sets were obtained (the blocking-number level 21×24 is omitted due to its excessive computational cost).

Level      B        P     R    I
Level 1    3×4      4     1    35*40
Level 2    7×8      8     2    70*80
Level 3    14×16    16    3    140*160
Level 4    21×24

Table 2.

Different levels of four factors in experiment
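The run counts quoted above are simply the full-factorial products of the levels in Table 2, which can be enumerated directly:

```python
from itertools import product

B = ["3x4", "7x8", "14x16", "21x24"]  # blocking numbers
P = [4, 8, 16]                        # sampling densities
R = [1, 2, 3]                         # sampling radii
I = ["35*40", "70*80", "140*160"]     # image resolutions

riu_runs = list(product(B, P, R, I))       # 4 x 3 x 3 x 3 = 108 for RIU-LBP
ulbp_runs = list(product(B[:3], P, R, I))  # drop 21x24: 3^4 = 81 for ULBP
print(len(riu_runs), len(ulbp_runs))  # 108 81
```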

3.3. Analysis of the factors of RIU-LBP

3.3.1. The significance and interaction

We first analyze the independent influence of the four factors and the significance of their interactions. Table 3 shows the results based on the RIU-LBP feature. The rows of Table 3 are sorted in descending order of F value.

The first part of Table 3 shows the independent effects of the four factors. The P values are all less than 0.05, meaning all four factors have significant effects on the recognition rate. From the largest impact to the smallest, they are the blocking number, the sampling radius, the image resolution and the sampling density. In particular, the F value of the blocking number is far greater than those of the other three factors, which reflects the importance of the blocking number in face recognition. The F value of the sampling density is much smaller than the other three, indicating the weakest influence.

             Df     Sum Sq    Mean Sq    F value    P value
B            3      1.509     0.503      900.683    <0.001
R            2      0.464     0.232      415.919    <0.001
I            2      0.381     0.190      341.202    <0.001
P            2      0.048     0.024      42.772     <0.001
B*R          6      0.067     0.011      20.058     <0.001
R*I          4      0.043     0.010      19.239     <0.001
B*I          6      0.021     0.004      6.300      <0.001
P*I          6      0.011     0.003      4.729      0.002
R*P          6      0.007     0.002      2.923      0.027
B*P          6      0.006     0.001      1.700      0.134
Residuals    68     0.038
Total        107    2.594

Table 3.

MANOVA results based on RIU-LBP (Df is degrees of freedom, Sum Sq is sum of squares, Mean Sq is mean square and * denotes interaction)

3.3.2. Analysis of levels of each single factor

For each single factor, we also run MANOVA for each pair of levels. The P values for all four factors are shown in Tables 4-7 respectively. In each of these tables, the second column lists the average recognition rate for each level.

Table 4 is the analysis result based on the four levels of the blocking number. From the second column it can be seen that the larger the blocking number, the higher the mean recognition rate. For the blocking number, the P value between level 14×16 and level 21×24 is 0.410, so there is no notable difference between them, while for all other pairs of levels the differences are notable.

Level    Mean     3×4       7×8       14×16     21×24
3×4      0.477    1.000     <0.001    <0.001    <0.001
7×8      0.679    <0.001    1.000     0.016     0.002
14×16    0.754    <0.001    0.016     1.000     0.410
21×24    0.777    <0.001    0.002     0.410     1.000

Table 4.

Level comparison for the blocking number based on RIU-LBP

From the first part of the analysis, we know that the sampling density has the least effect among the four factors. The pair-wise results for the sampling density in Table 5 show no significant difference between levels, since the P values are all larger than 0.05. Besides, the second column shows that the highest recognition rate, 0.689, corresponds to sampling density 8. This is important information: a larger sampling density is not always better, so we should not blindly pursue high sampling density.

Level    Mean     4        8        16
4        0.642    1.000    0.61     0.61
8        0.689    0.61     1.000    0.89
16       0.684    0.61     0.89     1.000

Table 5.

Level comparison for sampling density based on RIU-LBP

Tables 6 and 7 are the analysis results based on the three levels of the sampling radius and the image resolution respectively. Although these two factors do not affect the feature dimension, we already know from the previous discussion that they affect the recognition rate; selecting appropriate values for them can help improve it.

For the image resolution, there is a significant difference between 35*40 and 140*160, and the difference between 35*40 and 70*80 is more significant than that between 70*80 and 140*160, as shown in Table 6. This is because the clearer the image, the more useful information can be extracted. At the same time, the higher the resolution, the larger the required storage space. For a massive database, high-resolution images increase storage difficulty and require more time to load. So, depending on the application requirements, we can select lower-resolution images to reduce storage space while keeping an acceptable recognition rate.

Level      Mean     35*40     70*80    140*160
35*40      0.591    1.000     0.007    <0.001
70*80      0.692    0.007     1.000    0.252
140*160    0.732    <0.001    0.252    1.000

Table 6.

Level comparison for image resolution based on RIU-LBP

In Table 7, we observe that for the sampling radius both level 2 and level 3 perform significantly better than level 1, but there is no significant difference between level 2 and level 3.

Level    Mean     1         2        3
1        0.580    1.000     0.001    <0.001
2        0.705    0.001     1.000    0.452
3        0.730    <0.001    0.452    1.000

Table 7.

Level comparison for sampling radius based on RIU-LBP

3.4. Analysis of factors of ULBP feature

In comparison with the RIU-LBP feature, the ULBP feature keeps the order of the neighborhood coding and thus carries more direction information. Are the analysis results for the RIU-LBP feature applicable to the ULBP feature? Similar to the RIU-LBP case, we perform a MANOVA analysis for ULBP.

3.4.1. The significance and interaction

Table 8 shows the results for the independent factors and their interactions. The four factors all have significant independent effects on the recognition rate. From the largest impact to the smallest, they are the image resolution, the blocking number, the sampling radius and the sampling density. The influence of the sampling density is still minimal. Compared with the RIU-LBP feature, for the ULBP feature the most notable factor is the image resolution instead of the blocking number.

             Df    Sum Sq    Mean Sq    F value    P value
I            2     0.0788    0.0394     329.104    <0.001
B            2     0.0672    0.0336     280.965    <0.001
R            2     0.0217    0.0108     90.517     <0.001
P            2     0.0107    0.0053     44.824     <0.001
B*R          4     0.0061    0.0015     12.680     <0.001
B*I          4     0.0043    0.00108    9.030      <0.001
R*I          4     0.0042    0.00106    8.856      <0.001
B*P          4     0.0018    <0.001     3.862      0.008
P*I          4     0.0011    <0.001     2.321      0.070
R*P          4     0.0007    <0.001     1.575      0.196
Residuals    48    0.0057
Total        80    0.2026

Table 8.

MANOVA results based on Uniform LBP

Nevertheless, for the ULBP feature the F values of the image resolution and the blocking number are very close, and both are over three times larger than those of the other two factors. In the interaction part, the interaction between the sampling density and the sampling radius has no obvious effect, so we can approximately regard these two factors of the ULBP operator as independent of each other. And similar to the RIU-LBP feature, the interactions between pairs of factors are much weaker than the independent factors.

3.4.2. Analysis of levels of each single factor

Similarly, we analyze the differences among the levels of each factor for the ULBP feature; the results are summarized in Tables 9-12.

Based on Table 9, there are significant differences among the three levels of the blocking number, and the 14×16 blocking corresponds to the highest mean recognition rate.

Level    Mean     3×4       7×8       14×16
3×4      0.735    1.000     <0.001    <0.001
7×8      0.781    <0.001    1.000     0.004
14×16    0.805    <0.001    0.004     1.000

Table 9.

Level comparisons for blocking number based on Uniform LBP

As shown in Table 10, the three levels of the sampling density are not significantly different. The average recognition rate is highest when the sampling density is 8, which indicates that a high sampling density is not necessary.

Level    Mean     4        8        16
4        0.759    1.000    0.120    0.420
8        0.788    0.120    1.000    0.420
16       0.776    0.420    0.420    1.000

Table 10.

Level comparisons for sampling density based on Uniform LBP

For the sampling radius, there is no apparent difference between levels 2 and 3. The mean recognition rate is highest when the sampling radius is 2.

Level    Mean     1        2        3
1        0.751    1.000    0.025    0.025
2        0.786    0.025    1.000    0.899
3        0.785    0.025    0.899    1.000

Table 11.

Level comparisons for sampling radius based on Uniform LBP

Lastly, for the most prominent factor, the image resolution, there is no significant difference between 70*80 and 140*160.

Level      Mean     35*40     70*80     140*160
35*40      0.730    1.000     <0.001    <0.001
70*80      0.791    <0.001    1.000     0.410
140*160    0.800    <0.001    0.410    1.000

Table 12.

Level comparisons for image resolution based on Uniform LBP

3.5. More analysis on the blocking number

Based on the MANOVA analysis of both the RIU-LBP and ULBP features, we can conclude that a larger blocking number yields a higher recognition rate. But should its value go all the way to the limit of one pixel per block? What is a suitable blocking number? We run more experiments on the RIU-LBP feature to analyze these questions.

We add two more levels of the blocking number, i.e. 35×40 and 70×80, and fix the sampling density at 8, the sampling radius at 2 and the image resolution at 70*80, based on the previous MANOVA conclusions. The RIU-LBP features of these two parameter settings have dimensions 14,000 and 56,000 respectively, and there is only one pixel per block when the blocking number is 70×80. Figure 6 summarizes the variation of the face recognition rate with respect to the blocking number. It shows that a larger blocking number is not always better.

Figure 6.

Comparison recognition rate based on different the blocking number and RIU-LBP

(With sampling density 8, sampling radius 2 and image resolution 70*80)

We extend the experiments to the 35*40 and 140*160 image resolutions, as summarized in Table 13. The recognition rate of blocking number 14×16 is the highest for image resolution 35*40, and that of blocking number 35×40 is the highest for image resolution 140*160. Hence, a larger blocking number is not always better.

Based on the analysis results of the ULBP and RIU-LBP features, we can take the following approach to setting the four factors. Although different settings might be necessary for specific applications, the basic rule is to allocate more of the feature budget to the blocking number and less to the sampling density. Moreover, the appropriate blocking number should be selected jointly with the image resolution, and then proper values chosen for the sampling density and the sampling radius.

I \ B      3×4      7×8      14×16    21×24    35×40    70×80
35*40      0.460    0.689    0.751    0.750    0.751    N/A
70*80      0.567    0.743    0.817    0.839    0.830    0.784
140*160    0.581    0.770    0.831    0.847    0.862    0.849

Table 13.

Comparison of recognition rates for different blocking numbers and three image resolutions (RIU-LBP, with sampling density 8 and sampling radius 2)

3.6. Summary

We have comprehensively analyzed the four factors of the ULBP and RIU-LBP features, and several useful conclusions can be drawn. Firstly, the blocking number is the main factor influencing the recognition rate, which indicates that the contribution of local features in face recognition is more important than that of global features. Secondly, the effect of the sampling density on the recognition rate is small, yet it severely affects the feature dimension; this means it is feasible to use a low sampling density and still acquire features with a high recognition rate. In addition, the sampling density and the sampling radius define the LBP operator itself, but they have a much less obvious effect on the recognition rate than the blocking number and the image resolution. These conclusions demonstrate that complex encoding of the LBP operator is not important in face recognition. Finally, the interactions between the factors have less effect on the recognition rate than the independent factors.

4. Fusion of multi-directional RIU-LBP

The difference between the ULBP feature and the RIU-LBP feature lies in the way of LBP coding. The latter totally abandons the direction information and is hence of much lower dimension but of less precision in face recognition. However, if the direction information is reintroduced into the RIU-LBP feature at linear cost, a new feature of much lower dimension can reach precision similar to the ULBP feature.

4.1. Multi-directional RIU-LBP

The dimension explosion of the LBP features is mainly caused by the sampling density. The basic LBP operator codes the variation in each direction around a pixel in an ordered way, so it has to spend 2^P bins in the calculation of one LBP histogram, and even the ULBP feature can only lower the cost to P(P−1)+3. We propose a new LBP feature that fuses multi-directional low-density RIU-LBP features.

First, we split the P neighbors of a pixel into several non-overlapping subsets with P_1, P_2, ..., P_k (Σ_{i=1}^{k} P_i = P) uniformly distributed pixels respectively. In accordance with the mathematical coordinate system, we set the angle of the positive x axis as 0 and the counterclockwise direction as positive; each subset can then be identified by its size P_i (i∈{1,2,...,k}) and the pixel with the minimum positive angle θ_i (i∈{1,2,...,k}), denoted as S(θ_i, P_i). An example is shown for the P=16 case in Fig. 7, in which the 16-point neighborhood is split both into two 8-point sets and into four 4-point sets. Secondly, the regular RIU-LBP feature is calculated for each neighborhood S(θ_i, P_i) (i∈{1,2,...,k}), denoted LBP(P_i,θ_i,R)^{Riu2}(M,N), the so-called directional low-density RIU-LBP feature. Lastly, the feature-level combination of all LBP(P_i,θ_i,R)^{Riu2}(M,N) (i=1,...,k) is taken as the final LBP feature, denoted ∪_{i=1}^{k} LBP(P_i,θ_i,R)^{Riu2}(M,N) and defined as the multi-directional RIU-LBP feature. The dimension of ∪_{i=1}^{k} LBP(P_i,θ_i,R)^{Riu2}(M,N) is

D = (M×N) × Σ_{i=1}^{k} (P_i + 2)    (Eq. 6)

For the two division settings shown in Fig. 7, the dimension of ∪_{i=1}^{2} LBP(8,θ_i,1)^{Riu2}(8,7) (θ_i∈{0, π/8}) is 1120, and it is 1344 for ∪_{i=1}^{4} LBP(4,θ_i,1)^{Riu2}(8,7) (θ_i∈{0, π/8, π/4, 3π/8}). Both features have the same computation complexity as LBP(16,1)^{Riu2}(8,7). In comparison with LBP(16,1)^{u2}(8,7), the dimension decreases to nearly 1/10.
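The dimensions quoted above follow from Eq. 6 with 8×7 = 56 blocks, and can be checked against the ULBP dimension they replace:

```python
def multi_dir_dim(M, N, subsets):
    # Eq. 6: each subset of P_i neighbors contributes P_i + 2 histogram bins
    return M * N * sum(P_i + 2 for P_i in subsets)

print(multi_dir_dim(8, 7, [8, 8]))        # two 8-point subsets: 1120
print(multi_dir_dim(8, 7, [4, 4, 4, 4]))  # four 4-point subsets: 1344
print(8 * 7 * (16 * 15 + 3))              # ULBP with P=16, for comparison: 13608
```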

Figure 7.

Example of neighborhood split

With this simple feature fusion, the ∪_{i=1}^{k} LBP(P_i,θ_i,R)^{Riu2}(M,N) feature preserves the variation of intensity in the direction represented by each component LBP(P_i,θ_i,R)^{Riu2}(M,N). Instead of the exponential (2^P) or quadratic (P(P−1)) dimension growth with P of the basic and uniform LBP operators, the dimension of the multi-directional RIU-LBP feature increases only linearly with P.
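The per-pixel side of the fusion can be sketched as follows. It assumes the P thresholded neighbor bits are given in circular order and that the uniformly distributed subsets S(θ_i, P_i) correspond to interleaved index slices (our reading of Fig. 7, not a detail spelled out in the text).

```python
def split_neighborhood(bits, k):
    """Split P circular bits into k uniformly spaced subsets S(theta_i, P_i)."""
    return [bits[i::k] for i in range(k)]

def riu2(bits):
    """RIU-LBP code of Eq. 1 for one subset of thresholded bits."""
    t = sum(bits[i] != bits[(i + 1) % len(bits)] for i in range(len(bits)))
    return sum(bits) if t <= 2 else len(bits) + 1

def multi_dir_codes(bits, k):
    """One RIU-LBP code per directional subset; concatenating the per-block
    histograms of each code stream gives the fused feature."""
    return [riu2(s) for s in split_neighborhood(bits, k)]

# P = 16 neighborhood with a single bright arc of 5 pixels: the two 8-point
# subsets see related but distinct uniform patterns, which is the direction
# information the plain RIU-LBP code would have discarded.
bits = [1] * 5 + [0] * 11
print(multi_dir_codes(bits, 2))  # [3, 2]
```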

4.2. Performance analysis

We perform an experimental analysis of the multi-directional RIU-LBP feature with the same face database described in Section 2.3, comparing the proposed ∪_{i=1}^{k} LBP(P_i,θ_i,R)^{Riu2}(M,N) feature with the uniform LBP feature and the RIU-LBP feature. Two examples are illustrated in Fig. 8 and Fig. 9. Fig. 8 shows the comparison of LBP(8,1)^{u2}(8,7), LBP(8,1)^{Riu2}(8,7) and ∪_{i=1}^{2} LBP(4,θ_i,1)^{Riu2}(8,7) (θ_i∈{0, π/8}); all three methods adopt exactly the same 8-point neighborhood in LBP coding. The four curves in Fig. 9 are the results of LBP(16,2)^{u2}(8,7), ∪_{i=1}^{2} LBP(8,θ_i,2)^{Riu2}(8,7) (θ_i∈{0, π/8}), LBP(16,2)^{Riu2}(8,7) and ∪_{i=1}^{4} LBP(4,θ_i,2)^{Riu2}(8,7) (θ_i∈{0, π/8, π/4, 3π/8}), all with the same 16-point neighborhood in LBP coding. The dimension of each feature is given in parentheses. In Fig. 8, where P=8, R=1 and M×N=8×7, the proposed multi-directional RIU-LBP feature performs much better than the RIU-LBP feature while the dimensions of the two are very close. In Fig. 9, where P=16, R=2 and M×N=8×7, both multi-directional RIU-LBP features outperform the RIU-LBP feature with very similar feature lengths. Moreover, both achieve very close or even better CMS in comparison with the uniform LBP feature, though their dimensions are much lower.

Though all three types of LBP features adopt exactly the same neighborhood in LBP coding, the curves in Fig. 8 and Fig. 9 demonstrate the promise of the proposed multi-directional RIU-LBP feature in comparison with the uniform LBP feature and the RIU-LBP feature. The RIU-LBP feature abandons the direction information of intensity variation around each pixel, while the uniform LBP feature uses a more complex ordered coding that naturally retains more of this direction information; nevertheless, neither performs better than the proposed algorithm.

Figure 8.

Comparison results: 1-LBP^{u2}_{(8,1)}(8,7) (D = 3304), 2-LBP^{Riu2}_{(8,1)}(8,7) (D = 560) and 3-∪_{i=1}^{2}LBP^{Riu2}_{(4,θ_i,1)}(8,7) (θ_i ∈ {0, π/8}) (D = 672)

Figure 9.

Comparison results: 1-LBP^{u2}_{(16,2)}(8,7) (D = 13608), 2-LBP^{Riu2}_{(16,2)}(8,7) (D = 1008), 3-∪_{i=1}^{2}LBP^{Riu2}_{(8,θ_i,2)}(8,7) (θ_i ∈ {0, π/8}) (D = 1120) and 4-∪_{i=1}^{4}LBP^{Riu2}_{(4,θ_i,2)}(8,7) (θ_i ∈ {0, π/8, π/4, 3π/8}) (D = 1344)

On the one hand, the experiments reveal that the direction information of intensity variation is very important for face recognition. On the other hand, the proposed algorithm preserves this direction information through feature fusion. With a much lower dimension than the uniform LBP feature, the proposed algorithm attains very close and sometimes even better precision.

5. Conclusion

In this chapter, we first perform a thorough analysis of four factors of the ULBP and RIU-LBP features. From a statistical point of view, we use MANOVA to study the four factors that affect the recognition rate: the blocking number, the sampling density, the sampling radius and the image resolution. Based on the analysis results, a modified LBP feature, the multi-directional RIU-LBP, is proposed by fusing RIU-LBP features with various initial angles. Several conclusions are drawn as follows. (1) For both the RIU-LBP and ULBP features, the blocking number has more impact than the other factors. For example, for the RIU-LBP feature with the other factors held constant, when the blocking number changes from 7×8 to 14×16, the recognition rate increases by 9 percentage points. This indicates that the accuracy of face recognition strongly relies on the localized LBP histograms. (2) The sampling density contributes little to the recognition rate, and a high sampling density does not yield a high recognition rate, so complex LBP coding is unnecessary. (3) The interactions between pairs of factors are not significant. (4) The multi-directional RIU-LBP feature preserves intensity variation around pixels in different directions at the cost of only a linear increase in feature dimension, instead of the quadratic growth of the ULBP feature. Experiments show not only that the proposed scheme yields a better recognition rate than the RIU-LBP feature by preserving direction information in feature-level fusion, but also that, with a much lower feature dimension, the multi-directional RIU-LBP feature achieves performance very close to that of the ULBP feature. In summary, through statistical analysis of the importance of the four factors, an effective feature is proposed and future directions for improving the LBP feature are outlined.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant No.60605012, the Natural Science Foundation of Shanghai under Grant No.08ZR1408200, the Open Project Program of the National Laboratory of Pattern Recognition of China under Grant No.08-2-16 and the Shanghai Leading Academic Discipline Project under Grant No.J50103.

© 2011 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike-3.0 License, which permits use, distribution and reproduction for non-commercial purposes, provided the original is properly cited and derivative works building on this content are distributed under the same license.

How to cite and reference


Yuchun Fang, Jie Luo, Qiyun Cai, Wang Dai, Ying Tan and Gong Cheng (July 27th 2011). A MANOVA of LBP Features for Face Recognition. In: Peter M. Corcoran (ed.), Reviews, Refinements and New Ideas in Face Recognition. IntechOpen. DOI: 10.5772/20487.
