
Multimodal Biometric Person Recognition System Based on Multi-Spectral Palmprint Features Using Fusion of Wavelet Representations

Written By

Abdallah Meraoumia, Salim Chitroub and Ahmed Bouridane

Submitted: 24 March 2011 Published: 09 August 2011

DOI: 10.5772/18450

From the Edited Volume

Advanced Biometric Technologies

Edited by Girija Chetty and Jucheng Yang


1. Introduction

The rapid development of applications in different areas, such as public security, access control and surveillance, requires reliable and automatic personal recognition for effective security control. Traditionally, passwords (knowledge-based security) and ID cards (token-based security) have been used. However, security can be easily breached when a password is divulged or a card is stolen; further, simple passwords are easy to guess and difficult passwords may be hard to recall (Kong & Zhang; 2002). At present, applications of biometrics are rapidly increasing due to the inconveniences of traditional identification methods. Biometrics refers to technologies that automate the recognition of persons based on one or more intrinsic physical or behavioral traits.

Currently, a number of biometric-based technologies have been developed, and hand-based person identification is one of them. This technology provides a reliable, low-cost and user-friendly solution for a range of access control applications (Kumar & Zhang; 2002). In contrast to other modalities, like face and iris, hand-based biometric recognition offers several advantages. First, data acquisition is simple using off-the-shelf low-resolution cameras, and its processing is also relatively simple. Second, hand-based access systems are suitable for a wide range of uses. Finally, hand features are more stable over time and are not susceptible to major changes (Sricharan & Reddy; 2006). Some features related to the human hand are relatively invariant and distinctive to an individual. Among these features, the palmprint modality has been systematically used for human recognition based on the palm patterns. The rich texture information of the palmprint offers one of the most powerful means of personal identification (Fang & Maylor; 2004).

Several studies of palmprint-based personal recognition have focused on improving the performance obtained from palmprint images captured under visible light. However, during the past few years, some researchers have considered multi-spectral images to improve the performance of these systems. Multi-spectral imaging is a technique, used in remote sensing, medical imaging and machine vision, that generates several images corresponding to different wavelengths. It can provide complementary information about the same scene by using an acquisition device that captures the palmprint images under visible and infrared light, resulting in four color bands (RED, BLUE, GREEN and Near-IR (NIR)) (Zhang & Guo; 2010). The idea is to employ the information in these color bands to improve the performance of palmprint recognition. This chapter presents a novel technique for palmprint recognition that uses information from palmprint images captured under different wavelengths, based on the multivariate Gaussian Probability Density Function (GPDF) and multi-resolution analysis. In this method, a palmprint image (color band) is first decomposed into frequency sub-bands at different levels of decomposition using different techniques. We adopt as features for the recognition problem the transform coefficients extracted from some sub-bands. Subsequently, we use the GPDF to model the feature vector of each color band. Finally, log-likelihood scores are used for the matching.

In this work, a series of experiments was carried out using a multi-spectral palmprint database. To evaluate the efficiency of this technique, the experiments were designed as follows: the performances under the different color bands were compared with each other, in order to determine the color band for which the palmprint recognition system performs best. We also present a multi-spectral palmprint recognition system which combines several sub-bands at different decomposition levels.


2. System design

Fig. 1 illustrates the various modules of our proposed multi-spectral palmprint recognition system (single band). The proposed system consists of preprocessing, feature extraction, matching and decision stages. To enroll into the system database, the user has to provide a set of training multi-spectral palmprint images (each image is formed by the RED, BLUE, GREEN and Near-IR (NIR) bands). Typically, a feature vector describing certain characteristics of the palmprint image is extracted from each band using multi-resolution analysis and modeled with a Gaussian probability density function. Finally, the model parameters are stored as reference models. For recognition (identification/verification), the same feature vectors are extracted from the test palmprint images and the log-likelihood is computed against all of the reference models in the database. For the multi-modal system, each sub-system computes its own matching score and these individual scores are finally combined into a total score (using fusion at the matching score level), which is used by the decision module. Based on this matching score, a decision about whether to accept or reject a user is made.

Figure 1.

Block-diagram of a multi-spectral palmprint recognition system based on the Gaussian probability density function modeling.


3. Region of interest extraction

From the whole palmprint image (each color band), only some characteristics are useful. Moreover, each color band image may vary in size and orientation, and the non-useful regions may affect accurate processing and thus degrade the identification performance. Therefore, image preprocessing, i.e., Region Of Interest (ROI) extraction, is a crucial and necessary step before feature extraction. Thus, a palmprint region is extracted from each original palmprint image (each color band). In order to extract the central part of the palmprint, we employ the method described in (Zhang & Kong; 2003). In this technique, the tangent of the two holes between the fingers is computed and used to align the palmprint. The central part of the image, of size 128x128 pixels, is then cropped to represent the whole palmprint. The pre-processing steps are shown in Fig. 2. The basic steps to extract the ROI are summarized as follows: first, a low-pass filter, such as Gaussian smoothing, is applied to the original palmprint image, and a threshold, T_p, is used to convert the filtered image to a binary image; then, a boundary tracking algorithm is used to obtain the boundaries of the binary image. This boundary is processed to determine the points F1 and F2 for locating the ROI pattern and, based on these points, the ROI pattern is located on the original image. Finally, the ROI is extracted.

Figure 2.

Various steps in a typical region of interest extraction algorithm. (a) The filtered image, (b) The binary image, (c) The boundaries of the binary image and the points for locating the ROI pattern, (d) The central portion localization, and (e) The pre-processed result (ROI).
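The following minimal sketch illustrates this pre-processing chain (smoothing, binarization, boundary tracking, cropping), assuming OpenCV and NumPy are available. The threshold and kernel size are illustrative, and the F1/F2 tangent alignment of (Zhang & Kong; 2003) is simplified here to a crop around the contour centroid; a complete system would rotate the palm using the tangent first.

```python
import cv2
import numpy as np

def extract_roi(palm_gray, t_p=40, size=128):
    """Sketch of ROI extraction; t_p and the crop rule are assumptions."""
    smoothed = cv2.GaussianBlur(palm_gray, (5, 5), 0)            # low-pass filter
    _, binary = cv2.threshold(smoothed, t_p, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)        # boundary tracking
    hand = max(contours, key=cv2.contourArea)                    # largest blob = hand
    m = cv2.moments(hand)
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # palm centre
    half = size // 2
    return palm_gray[cy - half:cy + half, cx - half:cx + half]   # 128x128 ROI
```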


4. Feature extraction and modeling

The feature extraction module processes the acquired biometric data (each color band) and extracts only the salient information to form a new representation of the data. Ideally, this new representation should be unique for each person. In our method, the color band is analyzed using a multi-resolution analysis. After the decomposition transform of the ROI sub-image, some of the sub-bands are selected to construct the feature vectors (observation vectors). A Gaussian distribution of the observation vectors is then computed.

4.1. Feature extraction

A multi-resolution analysis of images has good space-frequency localization. Therefore, it is well suited for analyzing images where most of the informative content is represented by components localized in space (such as edges and borders) and by information at different scales or resolutions, with both large and small features. Several methods can be used to obtain the multi-resolution representation, such as the two-dimensional discrete wavelet transform (2D-DWT) and the two-dimensional block-based discrete cosine transform with a reordering of the coefficients into a multi-resolution representation (2D-RBDCT).

4.1.1. DWT decomposition

Wavelets can be used to decompose the data in the color band into components that appear at different resolutions. Wavelets have an advantage over the traditional Fourier transform in that the frequency data is localized, allowing the computation of features which occur at the same position and resolution (Antonini & Barlaut; 1992). The discrete wavelet transform (DWT) is a multi-resolution representation. Fig. 3 shows an implementation of a one-level forward DWT based on a two-channel quadrature mirror filter bank, where h_0(n) and h_1(n) are the low-pass and high-pass analysis filters, respectively, and the block ↓2 represents down-sampling by a factor of 2. Thus, for the 1D-DWT, the signal is convolved with these two filters and down-sampled by a factor of two to separate it into an approximation and a representation of the details (Noore & Singh; 2007). A perfect reconstruction of the signal is possible by up-sampling (↑2) the approximation and the details and convolving with the reversed filters g_0(n) and g_1(n).

Figure 3.

Implementation of a one-level forward DWT and its inverse IDWT.

For two-dimensional signals, such as images (color bands), the decomposition is applied consecutively along both dimensions, e.g. first to the rows and then to the columns. This yields four types of lower-resolution coefficient images: the approximation produced by applying two low-pass filters (LL), the diagonal details, computed with two high-pass filters (HH), and the vertical and horizontal details, output of a high-pass/low-pass combination (LH and HL). An example of a two-level wavelet decomposition is reported in Fig. 4. At the first level, the original image, A0, is decomposed into four sub-bands: A1, the scaling component containing global low-pass information, and H1, V1, D1, three transform components corresponding, respectively, to the horizontal, vertical and diagonal details. At the second level, the approximation A1 is decomposed into four sub-bands: A2, the scaling component containing global low-pass information, and H2, V2, D2, the corresponding horizontal, vertical and diagonal detail components.
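As a concrete illustration, this two-level decomposition can be computed with PyWavelets as sketched below. The Haar ('db1') mother wavelet is an assumption; the chapter does not specify the wavelet family used.

```python
import numpy as np
import pywt

roi = np.random.rand(128, 128)            # stands in for a 128x128 ROI band

a1, (h1, v1, d1) = pywt.dwt2(roi, 'db1')  # level 1: A1 and details H1, V1, D1
a2, (h2, v2, d2) = pywt.dwt2(a1, 'db1')   # level 2: decompose A1 again

print(a1.shape, a2.shape)                 # (64, 64) (32, 32)
```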

Figure 4.

Two-level wavelet decomposition.

4.1.2. DCT decomposition

The Discrete Cosine Transform (DCT) is a powerful transform for extracting features for palmprint recognition. It is the most widely used transform in image processing algorithms, such as image/video compression and pattern recognition. Its popularity is due mainly to the fact that it achieves good data compaction, that is, it concentrates the information content in relatively few transform coefficients (Dabbaghchian & Ghaemmaghami; 2010). In the two-dimensional block-based discrete cosine transform (2D-BDCT) formulation, the input image is first divided into N×N blocks, and the 2D-DCT of each block is computed. The 2D-DCT can be obtained by performing a 1D-DCT on the columns followed by a 1D-DCT on the rows. Given an image f of size H×W, the DCT coefficients of the spatial block B_ij are determined by the following formulas:

$$F_{ij}(u,v) = \alpha(u)\,\alpha(v)\sum_{m=0}^{N-1}\sum_{n=0}^{N-1} f_{ij}(n,m)\,\psi(n,m,u,v) \qquad \text{(E1)}$$

$$\psi(n,m,u,v) = \cos\left[\frac{(2n+1)\,u\,\pi}{2N}\right]\cos\left[\frac{(2m+1)\,v\,\pi}{2N}\right] \qquad \text{(E2)}$$

where u, v = 0, 1, …, N−1, i = 0, 1, …, (H/N)−1 and j = 0, 1, …, (W/N)−1. Here F_ij(u,v) are the DCT coefficients of the block B_ij, f_ij(n,m) is the luminance value of the pixel (n,m) of the block B_ij, H×W are the dimensions of the image, and

$$\alpha(u) = \begin{cases} \dfrac{1}{\sqrt{2}} & \text{if } u = 0 \\[4pt] 1 & \text{if } u \neq 0 \end{cases} \qquad \text{(E3)}$$

The DCT coefficients reflect the energy compaction across the different frequencies. The first coefficient, F_0 = F(0,0), called the DC coefficient, is the mean of the visual gray-scale values of the pixels of a block. The AC coefficients in the upper-left corner of a block represent the visual information of the lower frequencies, whereas the higher-frequency information is gathered in the lower-right corner of the block (Chen & Tai; 2004).

The DCT can provide a multi-resolution representation for interpreting the image information through a multilevel decomposition. After applying the 2D-BDCT, the coefficients are reordered, resulting in a multi-resolution representation. Thus, if the size of the block transform, N, is equal to 2, each of the four coefficients of a block is copied into its own band (see Fig. 5). The 2D-BDCT concentrates the information content in relatively few transform coefficients in the top-left zone of each block. As such, the coefficients where the information is concentrated tend to be grouped together in the approximation band.

Figure 5.

Multi-resolution representation using the 2D-DCT transform with reordering of the coefficients. (a) Image to be transformed, (b) 2D-BDCT with a block size of 2x2, and (c) 2D-RBDCT decomposition.
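A sketch of one 2D-RBDCT level, assuming SciPy, is given below: every 2×2 block is transformed with a 2D-DCT, and coefficient (u, v) of all blocks is gathered into one sub-band, mirroring the one-level DWT layout. The mapping of the three AC coefficients to the vertical/horizontal/diagonal bands is an illustrative assumption.

```python
import numpy as np
from scipy.fftpack import dct

def rbdct_level(img):
    """One reordered block-DCT level with N = 2."""
    h, w = img.shape
    # View the image as (h/2 x w/2) non-overlapping 2x2 blocks.
    blocks = img.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
    # Separable 2D-DCT of each block: 1D-DCT along one in-block axis, then the other.
    coeffs = dct(dct(blocks, axis=2, norm='ortho'), axis=3, norm='ortho')
    # Reorder: coefficient (u, v) of every block forms one sub-band.
    a = coeffs[:, :, 0, 0]      # DC terms -> approximation band
    v = coeffs[:, :, 0, 1]      # AC terms -> detail bands (assignment assumed)
    hb = coeffs[:, :, 1, 0]
    d = coeffs[:, :, 1, 1]
    return a, hb, v, d

a1, h1, v1, d1 = rbdct_level(np.random.rand(128, 128))
print(a1.shape)                 # (64, 64)
```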

4.2. Feature vector

To create an observation vector, the color band image is transformed into a multi-resolution form as shown in Fig. 6. The palmprint feature vectors are then created by combining the horizontal detail (H_i), the approximation containing the global low-pass information (A_i) and the vertical detail (V_i) extracted by the multi-resolution analysis. Three feature vectors can be extracted, one per decomposition level, for each color band (RED, BLUE, GREEN and NIR) (see Fig. 7).

Figure 6.

Three-level decomposition into a multi-resolution representation.

Let ψ_x represent an H×W palmprint ROI image (color band), where x ∈ {R, B, G, N}; thus

ψ_R = RED, ψ_B = BLUE, ψ_G = GREEN, ψ_N = NIR.

Let F denote the applied transform: F = 2D-DWT or F = 2D-RBDCT. Then:

  • One level: F(ψ_x) → [A_1, H_1, V_1, D_1], where each of A_1, H_1, V_1 and D_1 contains H/2 × W/2 coefficients.

  • Two levels: F(A_1) → [A_2, H_2, V_2, D_2], where each sub-band contains H/4 × W/4 coefficients.

  • Three levels: F(A_2) → [A_3, H_3, V_3, D_3], where each sub-band contains H/8 × W/8 coefficients.

These three feature vectors (observations) are shown in Fig. 7:

$$O_1 = \begin{bmatrix} V_1 \\ A_1 \\ H_1 \end{bmatrix} \qquad O_2 = \begin{bmatrix} V_2 \\ A_2 \\ H_2 \end{bmatrix} \qquad O_3 = \begin{bmatrix} V_3 \\ A_3 \\ H_3 \end{bmatrix} \qquad \text{(E4)}$$

where the sizes of O_1, O_2 and O_3 are (3H/2) × (W/2), (3H/4) × (W/4) and (3H/8) × (W/8) coefficients, respectively. As a result, each color band image is represented by a single template (feature matrix) as follows:

Figure 7.

The observation vector.

$$O_j = \begin{bmatrix} o_{11} & o_{12} & \cdots & o_{1L} \\ o_{21} & o_{22} & \cdots & o_{2L} \\ \vdots & \vdots & \ddots & \vdots \\ o_{M1} & o_{M2} & \cdots & o_{ML} \end{bmatrix} \qquad \text{(E5)}$$

where j = 1, 2, 3 for Level 1, Level 2 and Level 3; L = W/k and M = 3H/k with k = 2, 4, 8, respectively. If the original color band is 128x128 pixels, the size of O_1 is M×L = 192x64 coefficients, O_2 is 96x32 coefficients and O_3 is 48x16 coefficients.
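A short sketch of how the level-j template O_j = [V_j; A_j; H_j] could be assembled from the 2D-DWT sub-bands, again assuming PyWavelets and the Haar wavelet:

```python
import numpy as np
import pywt

def observation_matrix(band, level):
    """Stack V, A and H of the requested level into one M x L template."""
    a = band
    for _ in range(level):                 # decompose down to the wanted level
        a, (h, v, d) = pywt.dwt2(a, 'db1')
    return np.vstack([v, a, h])            # Eq. (E4): columns are observations

o2 = observation_matrix(np.random.rand(128, 128), level=2)
print(o2.shape)                            # (96, 32), i.e. M x L for level 2
```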

4.3. Modeling process: Gaussian Probability Density Function (GPDF)

In our system, the observation probabilities are modeled as multivariate Gaussian distributions. Arguably the single most important PDF is the Gaussian probability density function; it is one of the most studied and most widely used distributions (Varchol & Levicky; 2007). The Gaussian has two parameters: the mean μ and the variance σ². The mean specifies the centre of the distribution, and the variance tells us how "spread out" the PDF is. For a d-dimensional vector O_j, the Gaussian is written as

$$P(O_j \mid \mu, \Sigma) = \frac{1}{\sqrt{(2\pi)^d\,|\Sigma|}}\,\exp\!\left[-\frac{1}{2}\,(O_j-\mu)^T\,\Sigma^{-1}\,(O_j-\mu)\right] \qquad \text{(E6)}$$

where μ is the mean vector and Σ is the d×d covariance matrix. Gaussian distributions can also be written using the notation η(O; μ, Σ). The covariance matrix Σ of a Gaussian must be symmetric and positive definite.

After the feature extraction, we now consider the problem of learning a Gaussian distribution from the observation samples o_i. Maximum likelihood learning of the parameters μ and Σ entails maximizing the likelihood (Fierrez & Ortega-Garcia; 2007). Since we assume that the data points come from a Gaussian:

$$P(O_j \mid \mu, \Sigma) = \prod_{i=1}^{L} P(o_i \mid \mu, \Sigma) = \prod_{i=1}^{L} \frac{1}{\sqrt{(2\pi)^d\,|\Sigma|}}\,\exp\!\left[-\frac{1}{2}\,(o_i-\mu)^T\,\Sigma^{-1}\,(o_i-\mu)\right] \qquad \text{(E7)}$$

It is somewhat more convenient to minimize the negative log-likelihood, LH:

$$LH(O_j, \mu, \Sigma) \equiv -\ln P(O_j \mid \mu, \Sigma) = -\sum_{i} \ln P(o_i \mid \mu, \Sigma) = \sum_{i} \frac{(o_i-\mu)^T \Sigma^{-1} (o_i-\mu)}{2} + \frac{L}{2}\ln|\Sigma| + \frac{Ld}{2}\ln(2\pi) \qquad \text{(E8)}$$

Solving for μ and Σ by setting ∂LH(O_j, μ, Σ)/∂μ = 0 and ∂LH(O_j, μ, Σ)/∂Σ = 0 (subject to the constraint that Σ is symmetric) gives the maximum likelihood estimates (Cardinaux & Sanderson; 2004):

$$\hat{\mu} = \frac{1}{L}\sum_{i} o_i \qquad \hat{\Sigma} = \frac{1}{L}\sum_{i} (o_i - \hat{\mu})(o_i - \hat{\mu})^T \qquad \text{(E9)}$$

The complete specification of the modeling process requires determining the two model parameters μ and Σ. For convenience, the compact notation λ = (μ, Σ) is used to represent a model.
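The maximum likelihood fit of Eq. (E9) and the negative log-likelihood of Eq. (E8) can be sketched as follows, treating the L columns of O_j as d-dimensional observations. The small diagonal regularizer is our own addition to keep Σ invertible; the chapter does not mention one.

```python
import numpy as np

def fit_gpdf(o, reg=1e-6):
    """ML estimates of (mu, Sigma) from a d x L observation matrix (Eq. E9)."""
    d, n = o.shape
    mu = o.mean(axis=1)
    centered = o - mu[:, None]
    sigma = centered @ centered.T / n + reg * np.eye(d)   # regularizer assumed
    return mu, sigma

def neg_log_likelihood(o, mu, sigma):
    """LH(O_j, mu, Sigma) of Eq. (E8)."""
    d, n = o.shape
    centered = o - mu[:, None]
    quad = 0.5 * np.einsum('in,ij,jn->', centered, np.linalg.inv(sigma), centered)
    _, logdet = np.linalg.slogdet(sigma)
    return quad + 0.5 * n * logdet + 0.5 * n * d * np.log(2 * np.pi)
```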


5. Feature matching

5.1. Matching process

During the identification process, the test color band image is analyzed by the 2D-DWT (or 2D-RBDCT), and the log-likelihood score of the resulting feature vectors is computed for the GPDF model of each enrolled person. The score vector is therefore given by:

$$Lh(O_j) = \left[\, LH(O_j,\mu_1,\Sigma_1),\; LH(O_j,\mu_2,\Sigma_2),\; \ldots,\; LH(O_j,\mu_S,\Sigma_S) \,\right] \qquad \text{(E10)}$$

where S represents the size of the model database.

5.2. Normalization and decision process

In verification mode, our normalization rule is formulated as D_o = −10⁻⁵ LH(O_j, λ_i), where D_o denotes the normalized log-likelihood score. In identification mode, prior to the decision, a Min-Max normalization scheme (Savic & Pavesic; 2007) is employed to transform the computed log-likelihood scores into similarity scores in the same range:

$$Lh_N = \frac{Lh - \min(Lh)}{\max(Lh) - \min(Lh)} \qquad \text{(E11)}$$

where Lh_N denotes the normalized log-likelihood scores. These scores are compared and the highest score is selected. The best score D_o is therefore equal to:

$$D_o = \max(Lh_N) \qquad \text{(E12)}$$

This score, D o , is compared with the decision threshold T o . When D o ≥ T o , the claimed identity is accepted; otherwise it is rejected.
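The identification-mode normalization and decision of Eqs. (E11) and (E12) can be sketched as below. Treating the negated LH as a similarity (so that higher is better) is an assumption about the sign convention; the normalized scores of the individual matchers are what the fusion step of Section 5.3 combines.

```python
import numpy as np

def min_max_normalize(scores):
    """Map a score vector to [0, 1] (Eq. E11)."""
    return (scores - scores.min()) / (scores.max() - scores.min())

def decide(lh_scores, t_o):
    """Pick the best normalized score D_o (Eq. E12) and threshold it."""
    lh_n = min_max_normalize(-np.asarray(lh_scores))   # negate: similarity
    identity = int(np.argmax(lh_n))
    d_o = lh_n[identity]
    return identity, d_o, bool(d_o >= t_o)             # accept if D_o >= T_o
```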

5.3. Fusion process

The goal of the fusion process is to investigate the system performance when the information from several color bands of a person is fused. In such a case the system works as a kind of multi-modal system with a single biometric trait but multiple units. Therefore, the information presented by the different bands (BLUE, GREEN, RED and NIR) is fused to make the system more efficient.

Fusion at the matching-score level is preferred in the field of biometric recognition because there is sufficient information content and it is easy to access and combine the matching scores (Ross & Jain; 2001). In our system we adopted the combination approach, where the individual matching scores are combined to generate a single scalar score, which is then used to make the final decision. During the system design we experimented with four different fusion schemes: Sum-score (SUM), Min-score (MIN), Max-score (MAX) and Sum-weighting-score (WHT) (Singh & Vatsa; 2008). Suppose that the quantity D_oi represents the score of the i-th matcher (i = 1, 2, 3, 4) for the different palmprint color bands (BLUE, GREEN, RED and NIR) and D_F represents the fused score. Then D_F is given by:

$$\begin{aligned}
\text{SUM:}\quad & D_F = \sum_{i=1}^{n} D_{oi} \\
\text{MIN:}\quad & D_F = \min_i \{D_{oi}\} \\
\text{MAX:}\quad & D_F = \max_i \{D_{oi}\} \\
\text{WHT:}\quad & D_F = \sum_{i=1}^{n} w_i\,D_{oi}, \qquad w_i = \frac{1/EER_i}{\sum_{j=1}^{n} (1/EER_j)}
\end{aligned} \qquad \text{(E13)}$$

where w_i denotes the weight associated with matcher i, with Σ_i w_i = 1, and EER_i is the equal error rate of matcher i.
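The four rules of Eq. (E13) reduce to a few lines of code; a sketch follows. The band scores and EERs in the usage example are placeholders, not the chapter's measured values.

```python
import numpy as np

def fuse(scores, rule, eers=None):
    """Combine per-band scores D_oi into a single score D_F (Eq. E13)."""
    scores = np.asarray(scores, dtype=float)
    if rule == 'SUM':
        return float(scores.sum())
    if rule == 'MIN':
        return float(scores.min())
    if rule == 'MAX':
        return float(scores.max())
    if rule == 'WHT':                        # weights inversely proportional to EER
        inv = 1.0 / np.asarray(eers, dtype=float)
        w = inv / inv.sum()                  # so that sum(w_i) = 1
        return float(w @ scores)
    raise ValueError(f"unknown rule: {rule}")

band_scores = [0.91, 0.87, 0.95, 0.89]      # BLUE, GREEN, RED, NIR (illustrative)
band_eers = [3.64, 4.80, 3.55, 4.21]        # per-band EERs (%) from training
print(fuse(band_scores, 'WHT', band_eers))
```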


6. Experimental results and discussion

6.1. Experimental database

Experiments were performed using the multi-spectral palmprint database from the Hong Kong Polytechnic University (PolyU) (PolyU Database; 2003). The database contains images captured with visible and infrared light. The multi-spectral palmprint images were collected from 250 volunteers, including 195 males and 55 females, with an age distribution from 20 to 60 years old. It contains a total of 6,000 images obtained from 500 different palms. These samples were collected in two separate sessions; in each session, the subject was asked to provide 6 images of each palm. Therefore, 24 images per illumination were collected from the 2 palms of each subject. The average time interval between the first and the second session was about 9 days.

6.2. Evaluation criteria

The utility of any biometric recognition system for a particular application can be summarized by two values (Connie & Teoh; 2003): the False Acceptance Rate (FAR), which is the ratio of the number of impostor attempts (feature pairs from different persons) found to match to the total number of such attempts, and the False Rejection Rate (FRR), which is the ratio of the number of genuine attempts (feature pairs from the same person) found not to match to the total number of such attempts. The system can be adjusted to vary the values of these two criteria for a particular application; however, decreasing one involves increasing the other and vice versa. The system threshold value is obtained using the EER criterion, i.e., the point where FAR = FRR. This is based on the rationale that both rates must be as low as possible for the biometric system to work effectively.
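For reference, a simple way to estimate the EER operating point from genuine and impostor score samples is sketched below; the linear threshold sweep is an illustrative choice rather than the chapter's exact procedure.

```python
import numpy as np

def compute_eer(genuine, impostor):
    """Sweep thresholds and return (T_o, EER) where FAR and FRR cross."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    lo = min(genuine.min(), impostor.min())
    hi = max(genuine.max(), impostor.max())
    thresholds = np.linspace(lo, hi, 1000)
    far = np.array([(impostor >= t).mean() for t in thresholds])  # impostors accepted
    frr = np.array([(genuine < t).mean() for t in thresholds])    # genuines rejected
    idx = int(np.argmin(np.abs(far - frr)))
    return thresholds[idx], (far[idx] + frr[idx]) / 2.0
```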

Another performance measurement, obtained from FAR and FRR, is the Genuine Acceptance Rate (GAR); it represents the identification rate of the system. In order to visually describe the performance of a biometric system, Receiver Operating Characteristic (ROC) curves are usually given. A ROC curve shows how the FAR values change relative to the values of the GAR and vice versa (Jain & Ross; 2004). Biometric recognition systems generate matching scores that represent the degree of similarity (or dissimilarity) between the input and the stored template.

The Cumulative Match Curve (CMC) is another method of showing the measured accuracy of a biometric system operating in the closed-set identification task. Templates are compared and ranked based on their similarity; the CMC shows how often the correct individual's template appears at or below each rank.
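A CMC can be computed by ranking, for each probe, all enrolled models by similarity and recording the rank at which the true identity appears, as in the following sketch (the probe-by-gallery similarity-matrix interface is an assumption):

```python
import numpy as np

def cmc_curve(similarity, true_ids):
    """CMC from a (probes x gallery) similarity matrix and true gallery indices."""
    order = np.argsort(-similarity, axis=1)                 # best match first
    ranks = np.argmax(order == true_ids[:, None], axis=1)   # 0-based rank of truth
    n_gallery = similarity.shape[1]
    return np.array([(ranks <= r).mean() for r in range(n_gallery)])

# cmc[0] is the rank-one identification rate; the smallest r with cmc[r] == 1.0
# corresponds to the "lowest rank of perfect identification" in the tables below.
```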

6.3. Performance of verification algorithm

In verification mode, the log-likelihood score (LH) of the input template given the GPDF model of the claimed person is computed. To obtain the accuracy, each of the color band images was matched against all of the models in the database. In our experiments, we randomly selected three images per color band of each person as training samples (enrollment) and used the remainder as test samples. The total number of matchings is 562,500, of which 2,250 are genuine (correct) matchings and 560,250 are impostor matchings. The verification experiments were performed using each of the BLUE, GREEN, RED and NIR features, as well as their fusion at the matching score level.

6.3.1. Verification in case of uni-modal system

The goal of this experiment was to evaluate the system performance when using the information from each modality (color band) separately. For this, we measured the performance under the different modalities (BLUE, GREEN, RED and NIR). The performance, in terms of the EER, of the proposed method using the 2D-RBDCT technique for different threshold values T_o is shown in Table 1.

| Band  | Level 1 T_o | Level 1 EER (%) | Level 2 T_o | Level 2 EER (%) | Level 3 T_o | Level 3 EER (%) |
|-------|-------------|-----------------|-------------|-----------------|-------------|-----------------|
| BLUE  | 0.5861      | 4.2586          | 0.1609      | 3.6383          | 0.0457      | 6.8941          |
| GREEN | 0.5909      | 6.3815          | 0.1638      | 4.7971          | 0.0466      | 9.7511          |
| RED   | 0.4190      | 3.6407          | 0.1433      | 3.5613          | 0.0510      | 5.4266          |
| NIR   | 0.3950      | 4.9093          | 0.1328      | 4.2071          | 0.0469      | 6.2473          |

Table 1.

Verification test result for 2D-RBDCT (Uni-modal case)

From these results, one can clearly observe the advantage of using the RED color band over the BLUE, NIR and GREEN bands in terms of EER for all levels. For example, at level 2, if the BLUE feature is used, a GAR of 96.3617 % is obtained. In the case of the GREEN band, the GAR is 95.2029 %. When NIR is used, the GAR is 95.7929 %, while the RED feature improves the result to 96.4387 % for a database containing 250 persons. Therefore, from these test results one can conclude that using the RED color band with level 2 does result in an improvement of verification accuracy. Fig. 8(a) compares the performance of the system for the different levels using the RED color band. Finally, the genuine and impostor distributions are plotted in Fig. 8(b) and the ROC curves, shown in Fig. 8(c), depict the performance of the system.

Figure 8.

Uni-modal verification test results using the level 2 decomposition with 2D-RBDCT feature extraction. (a) The ROC curves for all levels (RED color band), (b) the genuine and impostor distributions and (c) the ROC curves.

In the case of the 2D-DWT, the system was tested with different thresholds and the results are shown in Table 2. It is clear that the system achieves its minimum EER when the RED band is used with level 2. At level 1, the system achieves a maximum GAR of 94.5740 % if the RED band is used. At level 3, the system operates with a maximum GAR of 94.6539 % for the NIR band. Fig. 9(a) compares the performance of the system for the different levels using the RED color band. Fig. 9(b) shows the two distributions for the RED band using level 2, and the system performance at all thresholds is depicted in the form of a receiver operating characteristic (ROC) curve in Fig. 9(c).

6.3.2. Verification in case of multi-modal system

The objective of this section is to investigate the combination of all color band features in order to achieve higher performance that may not be possible with a uni-modal biometric alone.

| Band  | Level 1 T_o | Level 1 EER (%) | Level 2 T_o | Level 2 EER (%) | Level 3 T_o | Level 3 EER (%) |
|-------|-------------|-----------------|-------------|-----------------|-------------|-----------------|
| BLUE  | 0.5007      | 5.4339          | 0.1183      | 3.6438          | 0.0298      | 6.0094          |
| GREEN | 0.5074      | 6.5484          | 0.1212      | 4.8002          | 0.0302      | 6.0468          |
| RED   | 0.4253      | 5.4260          | 0.1006      | 3.5485          | 0.0264      | 5.8528          |
| NIR   | 0.3845      | 6.2639          | 0.0900      | 3.9351          | 0.0238      | 5.3461          |

Table 2.

Verification test result for 2D-DWT (Uni-modal case)

Figure 9.

Uni-modal verification test results using the level 2 decomposition with 2D-DWT feature extraction. (a) The ROC curves for all levels (RED color band), (b) the genuine and impostor distributions and (c) the ROC curves.

Thus, in order to assess the performance of the system, we evaluated different fusions of color bands, and Table 3 summarizes the equal error rates for these experiments. From Table 3, we can observe the advantage of using the RGBN fusion at level 2. For example, a fusion of RGB at level 1 gives a minimum EER equal to 3.0978 % at T_o = 1.6919 when using the SUM fusion rule. The system achieves a minimum EER of 3.8145 % at T_o = 0.4733 in the case of level 2 fusion with the SUM rule. A fusion at level 3 results in an EER of 26.430 % at T_o = 0.1416 with the SUM rule. In the case of the RGBN fusion, an EER of 2.6848 % is achieved at a threshold T_o = 2.1067 at level 1 with the SUM rule. At level 3, the EER is 16.110 % at the threshold T_o = 0.1782 with the SUM rule. Finally, the system can operate at a 1.8442 % EER, with a corresponding threshold of T_o = 0.6051, at level 2 with the WHT rule. The experimental results show that the fusion of all color bands at level 2 with the WHT rule performs much better than the individual color bands. The multimodal verification test results using fusion of the RGBN bands at the level 2 decomposition with 2D-RBDCT and all fusion schemes are shown in Fig. 10(a). The genuine and impostor distributions are shown in Fig. 10(b), and Fig. 10(c) depicts the ROC curve. Compared with the methods described in (Sun & Qiu; 2006) and (Kumar & Wong; 2006), our system achieves better results in terms of the EERs.

In the case of the 2D-DWT, Table 4 depicts the minimum EERs obtained on the test database. It can be observed that the proposed scheme can recognize palmprints accurately, as a minimum EER of 2.5785 % is obtained with the RGBN fusion at level 2 using the SUM rule. These results are similar to those of the 2D-RBDCT, except that at level 3 the best fusion is obtained with the MIN rule, with a minimum EER of 6.994 % at T_o = 0.0317.

EER (%) for each fusion rule and decomposition level:

| Bands | Level 1 SUM | Level 1 MIN | Level 1 MAX | Level 1 WHT | Level 2 SUM | Level 2 MIN | Level 2 MAX | Level 2 WHT | Level 3 SUM | Level 3 MIN | Level 3 MAX | Level 3 WHT |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| RGB   | 3.098 | 4.605 | 5.228 | 3.933 | 3.815 | 16.14 | 9.162 | 9.032 | 26.43 | 42.50 | 48.67 | 46.52 |
| RGBN  | 2.685 | 4.520 | 8.083 | 3.166 | 2.143 | 15.53 | 25.12 | 1.844 | 16.11 | 41.42 | 49.74 | 47.56 |

Table 3.

Verification test result for 2D-RBDCT (Multi-modal case)

Figure 10.

Multimodal verification test results using fusion of RGBN bands at the level 2 decomposition with 2D-RBDCT and all fusion schemes. (a) The ROC curves for all fusion schemes, (b) The genuine and impostor distribution and (c) The ROC curves for the level 2.

EER (%) for each fusion rule and decomposition level:

| Bands | Level 1 SUM | Level 1 MIN | Level 1 MAX | Level 1 WHT | Level 2 SUM | Level 2 MIN | Level 2 MAX | Level 2 WHT | Level 3 SUM | Level 3 MIN | Level 3 MAX | Level 3 WHT |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| RGB   | 3.249 | 5.047 | 5.798 | 4.883 | 3.948 | 6.025 | 17.53 | 7.783 | 6.789 | 7.024 | 21.06 | 44.14 |
| RGBN  | 2.777 | 4.944 | 7.526 | 3.835 | 2.579 | 5.981 | 13.54 | 3.902 | 10.56 | 6.994 | 34.52 | 11.11 |

Table 4.

Verification test result for 2D-DWT (Multi-modal case)

The test results using fusion of the RGBN bands at decomposition level 2 with 2D-DWT and all fusion schemes are shown in Fig. 11(a). The genuine and impostor distributions are estimated and illustrated in Fig. 11(b). Fig. 11(c) presents the verification test results in the form of the ROC curve.

6.4. Performance of identification algorithm

In identification mode, the recognition system examines whether the user is one of the enrolled candidates. Therefore, the biometric data is collected and compared to all the templates in the database. Identification is closed-set if the person is assumed to exist in the database; in open-set identification, the person is not guaranteed to exist in the database. In our work, the proposed method was tested in both identification modes.

Figure 11.

Multimodal verification test results using fusion of RGBN bands at the level 2 decomposition with 2D-DWT and all fusion schemes. (a) The ROC curves for all fusion schemes, (b) The genuine and impostor distribution and (c) The ROC curves for the level 2.

6.4.1. Identification in the case of uni-modal system

6.4.1.1. Open set identification

The open-set identification results for the 2D-RBDCT features are shown in Table 5. From Table 5, one can observe a good performance and acceptability when the BLUE band is used at level 2. The BLUE band open-set identification system based on level 2 produces an EER of 0.0734 %, while the systems based on level 1 and level 3 produce EERs of 0.2729 % and 0.1388 %, respectively. In order to show the effectiveness of the BLUE band, we have plotted the ROC curves for all levels (see Fig. 12(a)). The genuine and impostor distributions are estimated and shown in Fig. 12(b). Fig. 12(c) presents the identification test results, including the ROC curve. The 2D-RBDCT feature based identification method also outperforms other methods presented in the literature, such as (Prasad & Govindan; 2009), (Varchol & Levicky; 2007) and (Dai & Bi; 2004).

| Band  | Level 1 T_o | Level 1 EER (%) | Level 2 T_o | Level 2 EER (%) | Level 3 T_o | Level 3 EER (%) |
|-------|-------------|-----------------|-------------|-----------------|-------------|-----------------|
| BLUE  | 0.9171      | 0.2729          | 0.9465      | 0.0734          | 0.9460      | 0.1388          |
| GREEN | 0.8960      | 0.5036          | 0.9087      | 0.1980          | 0.9268      | 0.2526          |
| RED   | 0.9258      | 0.5666          | 0.9395      | 0.2340          | 0.9432      | 0.3892          |
| NIR   | 0.9263      | 0.9851          | 0.9307      | 0.3154          | 0.9503      | 0.3437          |

Table 5.

Open set identification test result for 2D-RBDCT (Uni-modal case)

A performance comparison of all color bands using the 2D-DWT features is made in Table 6. The results show that the system performance is best (an EER of 0.0734 %) with the BLUE band at level 2 when compared with the other three bands. It is also observed from Table 6 that replacing the 2D-RBDCT with the 2D-DWT does not provide any improvement in the EER. The ROC curves for all levels (BLUE color band) are shown in Fig. 13(a). Finally, the genuine and impostor distributions are plotted in Fig. 13(b) and the ROC curves, shown in Fig. 13(c), depict the performance of the system.

Figure 12.

Unimodal identification test results using the level 2 decomposition with 2D-RBDCT features. (a) The ROC curves for all levels (BLUE color band), (b) the genuine and impostor distributions and (c) the ROC curves.

| Band  | Level 1 T_o | Level 1 EER (%) | Level 2 T_o | Level 2 EER (%) | Level 3 T_o | Level 3 EER (%) |
|-------|-------------|-----------------|-------------|-----------------|-------------|-----------------|
| BLUE  | 0.9171      | 0.2729          | 0.9465      | 0.0734          | 0.9466      | 0.1466          |
| GREEN | 0.8982      | 0.4314          | 0.9087      | 0.1980          | 0.9268      | 0.2526          |
| RED   | 0.9258      | 0.5666          | 0.9395      | 0.2340          | 0.9432      | 0.3892          |
| NIR   | 0.9261      | 0.9811          | 0.9307      | 0.3153          | 0.9503      | 0.3437          |

Table 6.

Open set identification test result for 2D-DWT (Uni-modal case)

Figure 13.

Unimodal identification test results using the level 2 decomposition with 2D-DWT features. (a) The ROC curves for all levels (BLUE color band), (b) the genuine and impostor distributions and (c) the ROC curves.

6.4.1.2. Closed set identification

In the case of closed-set identification, a series of experiments was carried out to select the best color band and the corresponding decomposition level. This was done by comparing all bands using all decomposition levels (i.e., 1, 2, 3) and finding the color band that gives the best identification rate. Tables 7 and 8 present the experimental results obtained for all color bands with the 2D-RBDCT and the 2D-DWT, respectively. From Table 7 (Table 8), the BLUE band produces rank-one identification rates of 98.0889 % with a lowest rank of perfect identification of 37 at level 1, 99.2889 % with a lowest rank of 13 at level 2, and 98.9333 % with a lowest rank of 28 at level 3. As Table 7 (Table 8) shows, using the BLUE band with level 2 increases the performance of the system. Finally, the CMC curves for the 2D-RBDCT based system and the 2D-DWT based system are plotted in Fig. 14.

| Band  | Level 1 rank-one ident. (%) | Level 1 lowest rank | Level 2 rank-one ident. (%) | Level 2 lowest rank | Level 3 rank-one ident. (%) | Level 3 lowest rank |
|-------|---------|-----|---------|-----|---------|-----|
| BLUE  | 98.0889 | 37  | 99.2889 | 13  | 98.9333 | 28  |
| GREEN | 97.1110 | 119 | 99.2000 | 78  | 98.0000 | 128 |
| RED   | 96.6670 | 160 | 98.8000 | 162 | 98.7556 | 80  |
| NIR   | 94.7111 | 108 | 98.7556 | 69  | 98.4889 | 85  |

Table 7.

Closed set identification test result for 2D-RBDCT (Uni-modal case)

| Band  | Level 1 rank-one ident. (%) | Level 1 lowest rank | Level 2 rank-one ident. (%) | Level 2 lowest rank | Level 3 rank-one ident. (%) | Level 3 lowest rank |
|-------|---------|-----|---------|-----|---------|-----|
| BLUE  | 98.0889 | 37  | 99.2889 | 13  | 98.9333 | 28  |
| GREEN | 97.6000 | 121 | 99.2000 | 81  | 98.0000 | 141 |
| RED   | 96.6667 | 161 | 98.8000 | 181 | 98.7552 | 81  |
| NIR   | 94.7111 | 121 | 98.4889 | 101 | 98.7511 | 81  |

Table 8.

Closed set identification test result for 2D-DWT (Uni-modal case)

Figure 14.

Uni-modal closed-set identification test for all bands at level 2. (a) 2D-RBDCT based system and (b) 2D-DWT based system.

6.4.2. Identification in case of multi-modal system

6.4.2.1. Open set identification

Table 9 shows the performance of fusion using the different schemes with the 2D-RBDCT method. Compared with the performance of the individual biometrics shown in Table 5, the fused systems reduce the error to some extent. For fusion based on RGB, the best performance is achieved by the WHT rule at decomposition level 2 with a minimum EER of 0.0301 % and a threshold T_o = 0.9235, followed by the SUM rule at level 3 with 0.0507 % and T_o = 0.9121 and the SUM rule at level 1 with 0.0889 % and T_o = 0.9152. For the RGBN fusion, the SUM rule achieves a minimum EER of 0.0732 % with T_o = 0.9321 at decomposition level 1. Level 3 results in an EER of 0.0342 % with T_o = 0.9231 when using the SUM rule. The multimodal system error decreases to 0.0158 % with T_o = 0.9403 when using decomposition level 2 with the WHT rule. The ROC curves for all fusion schemes (RGBN fusion) are shown in Fig. 15(a). The genuine and impostor distributions are plotted in Fig. 15(b) and the ROC curve of GAR against FAR for various thresholds is shown in Fig. 15(c).

EER (%) for each fusion rule and decomposition level:

| Bands | Level 1 SUM | Level 1 MIN | Level 1 MAX | Level 1 WHT | Level 2 SUM | Level 2 MIN | Level 2 MAX | Level 2 WHT | Level 3 SUM | Level 3 MIN | Level 3 MAX | Level 3 WHT |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| RGB   | 0.089 | 0.413 | 0.509 | 0.089 | 0.044 | 0.186 | 0.151 | 0.030 | 0.051 | 0.346 | 0.236 | 0.052 |
| RGBN  | 0.073 | 0.377 | 0.841 | 0.094 | 0.044 | 0.164 | 0.285 | 0.016 | 0.034 | 0.271 | 0.272 | 0.037 |

Table 9.

Identification test result for 2D-RBDCT (Multi-modal case)

Figure 15.

Multimodal identification test results using fusion of RGBN bands at the level 2 decomposition with 2D- RBDCT. (a) The ROC curves for all fusion schemes, (b) The genuine and impostor distribution and (c) The ROC curves for the level 2 and WHT rule fusion.

EER (%) for each fusion rule and decomposition level:

| Bands | Level 1 SUM | Level 1 MIN | Level 1 MAX | Level 1 WHT | Level 2 SUM | Level 2 MIN | Level 2 MAX | Level 2 WHT | Level 3 SUM | Level 3 MIN | Level 3 MAX | Level 3 WHT |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| RGB   | 0.089 | 0.387 | 0.517 | 0.107 | 0.044 | 0.186 | 0.151 | 0.021 | 0.051 | 0.346 | 0.236 | 0.054 |
| RGBN  | 0.078 | 0.373 | 0.870 | 0.089 | 0.044 | 0.164 | 0.285 | 0.016 | 0.034 | 0.271 | 0.272 | 0.037 |

Table 10.

Identification test result for 2D-DWT (Multi-modal case)

Figure 16.

Multimodal identification test results using fusion of RGBN bands at the level 2 decomposition with 2D-DWT. (a) The ROC curves for all fusion schemes, (b) The genuine and impostor distribution and (c) The ROC curves for the level 2 and WHT rule fusion.

Figure 17.

Multi-modal closed-set identification test using fusion of the RGBN bands at level 2. (a) 2D-RBDCT based system and (b) 2D-DWT based system.

Rank-one identification [%]

| Bands | Level 1 SUM | Level 1 MIN | Level 1 MAX | Level 1 WHT | Level 2 SUM | Level 2 MIN | Level 2 MAX | Level 2 WHT | Level 3 SUM | Level 3 MIN | Level 3 MAX | Level 3 WHT |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| RGB   | 99.29 | 97.78 | 97.02 | 99.11 | 99.82 | 99.24 | 98.00 | 99.73 | 99.64 | 98.62 | 98.84 | 99.69 |
| RGBN  | 99.33 | 97.82 | 94.93 | 99.02 | 99.87 | 99.24 | 98.53 | 99.73 | 99.85 | 98.89 | 98.89 | 99.64 |

Rank of perfect identification rate

| Bands | Level 1 SUM | Level 1 MIN | Level 1 MAX | Level 1 WHT | Level 2 SUM | Level 2 MIN | Level 2 MAX | Level 2 WHT | Level 3 SUM | Level 3 MIN | Level 3 MAX | Level 3 WHT |
|-------|----|----|-----|----|----|----|----|----|----|----|----|----|
| RGB   | 51 | 64 | 160 | 47 | 24 | 67 | 89 | 12 | 12 | 76 | 80 | 9  |
| RGBN  | 7  | 58 | 108 | 13 | 24 | 61 | 85 | 12 | 5  | 56 | 70 | 6  |

Table 11.

Closed set identification test result for 2D-RBDCT (Multi-modal case)

Rank-one identification [%]

| Bands | Level 1 SUM | Level 1 MIN | Level 1 MAX | Level 1 WHT | Level 2 SUM | Level 2 MIN | Level 2 MAX | Level 2 WHT | Level 3 SUM | Level 3 MIN | Level 3 MAX | Level 3 WHT |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| RGB   | 99.07 | 98.13 | 96.80 | 99.11 | 99.82 | 99.24 | 98.80 | 99.73 | 99.64 | 98.62 | 98.84 | 99.69 |
| RGBN  | 99.07 | 98.18 | 94.89 | 99.06 | 99.87 | 99.24 | 98.53 | 99.82 | 99.85 | 98.89 | 98.89 | 99.83 |

Rank of perfect identification rate

| Bands | Level 1 SUM | Level 1 MIN | Level 1 MAX | Level 1 WHT | Level 2 SUM | Level 2 MIN | Level 2 MAX | Level 2 WHT | Level 3 SUM | Level 3 MIN | Level 3 MAX | Level 3 WHT |
|-------|----|----|-----|----|----|----|----|----|----|----|----|----|
| RGB   | 55 | 59 | 161 | 19 | 25 | 67 | 89 | 13 | 13 | 77 | 81 | 9  |
| RGBN  | 45 | 51 | 109 | 43 | 25 | 61 | 85 | 13 | 5  | 57 | 71 | 7  |

Table 12.

Closed set identification test result for 2D-DWT (Multi-modal case)

In this experiment, the performance results based on the 2D-DWT method are shown in Table 10. It can be observed that the proposed feature extraction based on the 2D-DWT has a similar performance to its 2D-RBDCT counterpart. The ROC curves for all fusion schemes (RGBN fusion) are shown in Fig. 16(a). The genuine and impostor distributions are plotted in Fig. 16(b) and the ROC curve of GAR against FAR for various thresholds is depicted in Fig. 16(c).

6.4.2.2. Closed set identification

To assess the performance differences between the 2D-RBDCT and 2D-DWT feature extraction methods, experiments were conducted and the results are shown in Tables 11 and 12. We can observe that the two feature extraction methods have very similar performances. The best rank-one identification results for the RGB and RGBN fusions are 99.82 % with a lowest rank of 25 and 99.87 % with a lowest rank of 24, respectively. As Table 11 (Table 12) shows, using the RGBN fusion with level 2 increases the performance of the system. Finally, the CMC curves for the 2D-RBDCT based system and the 2D-DWT based system are plotted in Fig. 17.


7. Conclusion

In this chapter, we proposed algorithms to fuse the information from multi-spectral palmprint images, where fusion is performed at the matching score level to generate a unique score which is used for recognizing a palmprint image. Several fusion rules, including SUM, MIN, MAX and WHT, are employed for the fusion of the multi-spectral palmprint bands at the matching score level. The features extracted from the palmprint images are obtained using the 2D-RBDCT and 2D-DWT methods. The algorithms are evaluated using the multi-spectral palmprint database from the Hong Kong Polytechnic University (PolyU), which consists of palmprint images from the BLUE, GREEN, RED and NIR color bands. Experimental results have shown that the combination of all color bands, RGBN, performs better than the RGB combination for both the 2D-RBDCT and 2D-DWT extraction methods, resulting in an EER of 2.1425 % for verification and 0.0158 % for identification. This also compares favourably against uni-modal single-band palmprint recognition. Experimental results also show that the proposed methods give an excellent closed-set identification rate for the two extraction methods. For further improvement of the system, our future work will focus on the performance evaluation using a larger database, and on a combination of multi-spectral palmprint information with other biometrics, such as the finger-knuckle-print, to obtain higher recognition accuracy.

References

  1. Kong, W. K. & Zhang, D. (2002). Palmprint Texture Analysis based on Low-Resolution Images for Personal Authentication. In: 16th International Conference on Pattern Recognition, Vol. 3, pp. 807-810, August 2002.
  2. Kumar, A. & Zhang, D. (2010). Improving Biometric Authentication Performance from the User Quality. IEEE Transactions on Instrumentation and Measurement, Vol. 59, No. 3, pp. 730-735, March 2010.
  3. Sricharan, K. K., Reddy, A. A. & Ramakrishnan, A. G. (2006). Knuckle based Hand Correlation for User Authentication. In: Biometric Technology for Human Identification III, Proc. of SPIE, Vol. 6202, 62020X, 2006.
  4. Li, F., Leung, M. K. H. & Yu, X. (2004). Palmprint Identification Using Hausdorff Distance. In: International Workshop on Biomedical Circuits & Systems (BioCAS'04), 2004.
  5. Zhang, D., Guo, Z., Lu, G., Zhang, L. & Zuo, W. (2010). An Online System of Multi-spectral Palmprint Verification. IEEE Transactions on Instrumentation and Measurement, Vol. 59, No. 2, pp. 480-490, February 2010.
  6. Zhang, D., Kong, W., You, J. & Wong, M. (2003). On-line Palmprint Identification. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, No. 9, pp. 1041-1050, September 2003.
  7. Antonini, M., Barlaud, M., Mathieu, P. & Daubechies, I. (1992). Image Coding Using the Wavelet Transform. IEEE Transactions on Image Processing, Vol. 1, No. 2, pp. 205-220, 1992.
  8. Noore, A., Singh, R. & Vatsa, M. (2007). Robust Memory-efficient Data Level Information Fusion of Multi-modal Biometric Images. Information Fusion, Vol. 8, No. 4, pp. 337-346, October 2007.
  9. Dabbaghchian, S., Ghaemmaghami, M. P. & Aghagolzadeh, A. (2010). Feature Extraction Using Discrete Cosine Transform and Discrimination Power Analysis With a Face Recognition Technology. Pattern Recognition, Vol. 43, pp. 1431-1440, April 2010.
  10. Chen, Y.-Y. & Tai, S.-C. (2004). Embedded Medical Image Compression Using DCT Based Subband Decomposition and Modified SPIHT Data Organization. In: Proceedings of the Fourth IEEE Symposium on Bioinformatics and Bioengineering (BIBE'04), Taichung, Taiwan, pp. 167-174, May 2004.
  11. Varchol, P. & Levicky, D. (2007). Using of Hand Geometry in Biometric Security Systems. Radioengineering, Vol. 16, No. 4, pp. 82-87, December 2007.
  12. Fierrez, J., Ortega-Garcia, J. & Ramos, D. (2007). HMM-Based On-Line Signature Verification: Feature Extraction and Signature Modeling. Pattern Recognition Letters, Vol. 28, No. 16, pp. 2325-2334, December 2007.
  13. Cardinaux, F., Sanderson, C. & Bengio, S. (2004). Face Verification Using Adapted Generative Models. In: 6th IEEE International Conference on Automatic Face and Gesture Recognition (AFGR), Seoul, pp. 825-830, 2004.
  14. Savic, T. & Pavesic, N. (2007). Personal Recognition Based on an Image of the Palmar Surface of the Hand. Pattern Recognition, Vol. 40, pp. 3152-3163, 2007.
  15. Ross, A., Jain, A. & Qian, J.-Z. (2001). Information Fusion in Biometrics. In: Audio- and Video-Based Biometric Person Authentication, pp. 354-359, 2001.
  16. Singh, R., Vatsa, M. & Noore, A. (2008). Hierarchical Fusion of Multi-spectral Face Images for Improved Recognition Performance. Information Fusion, Vol. 9, No. 2, pp. 200-210, April 2008.
  17. PolyU Database. The Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database, available at: http://www.comp.polyu.edu.hk/biometrics/MultispectralPalmprint/MSP.htm.
  18. Connie, T., Teoh, A., Goh, M. & Ngo, D. (2003). Palmprint Recognition with PCA and ICA. In: Proceedings of Image and Vision Computing New Zealand 2003, Palmerston North, pp. 227-232, 2003.
  19. Sun, D., Qiu, Z. & Li, Q. (2006). Palmprint Identification Using Gabor Wavelet Probabilistic Neural Networks. In: IEEE International Conference on Signal Processing (ICSP 2006), 2006.
  20. Kumar, A., Wong, D. C. M., Shen, H. C. & Jain, A. K. (2006). Personal Authentication Using Hand Images. Pattern Recognition Letters, Vol. 27, pp. 1478-1486, 2006.
  21. Prasad, S. M., Govindan, V. K. & Sathidevi, P. S. (2009). Palmprint Authentication Using Fusion of Wavelet Based Representations. In: World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), Coimbatore, India, pp. 520-525, December 2009.
  22. Jain, A. K., Ross, A. & Prabhakar, S. (2004). An Introduction to Biometric Recognition. IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No. 1, pp. 4-20, January 2004.
  23. Dai, Q., Bi, N., Huang, D., Zhang, D. & Li, F. (2004). M-Band Wavelets Application to Palmprint Recognition Based on Texture Features. In: IEEE International Conference on Image Processing (ICIP), Singapore, pp. 893-896, October 2004.
