A Contactless Biometric System Using Palm Print and Palm Vein Features

Written By

Goh Kah Ong Michael, Tee Connie and Andrew Beng Jin Teoh

Submitted: 03 November 2010 Published: 09 August 2011

DOI: 10.5772/19337

From the Edited Volume

Advanced Biometric Technologies

Edited by Girija Chetty and Jucheng Yang

1. Introduction

Recently, biometrics has emerged as a reliable technology for providing a greater level of security in personal authentication systems. Among the various biometric characteristics that can be used to recognize a person, the human hand is the oldest, and perhaps the most successful, form of biometric technology (Hand-based biometrics, 2003). The features that can be extracted from the hand include hand geometry, fingerprint, palm print, knuckle print, and vein. These hand properties are stable and reliable. Once a person has reached adulthood, the hand structure and configuration remain relatively stable throughout the person’s life (Yörük et al., 2006). Apart from that, hand-scan technology is generally perceived as nonintrusive compared to iris- or retina-scan systems (Jain et al., 2004). The users do not need to be cognizant of the way in which they interact with the system. These advantages have greatly facilitated the deployment of hand features in biometric applications.

At present, most hand acquisition devices are based on a touch-based design. The users are required to touch the device or hold on to some peripheral or guidance peg for their hand images to be captured. There are a number of problems associated with this touch-based design. Firstly, people are concerned about hygiene, as they have to place their hands on the same sensor where countless others have also placed theirs. This problem is particularly exacerbated during outbreaks of epidemics or pandemics like SARS and Influenza A (H1N1), which can be spread by touching contaminated surfaces. Secondly, latent hand prints which remain on the sensor’s surface could be copied for illegitimate use. Researchers have demonstrated systematic methods that use latent fingerprints to create casts and moulds of spoof fingers (Putte & Keuning, 2000). Thirdly, the device surface is easily contaminated if not used properly, especially in harsh, dirty, and outdoor environments. Lastly, users from some cultures may resist placing their hands on a sensor after a user of the opposite sex has touched it.

This chapter presents a contactless hand-based biometric system to acquire the palm print and palm vein features. The palm print refers to the smoothly flowing pattern formed by alternating creases and troughs on the palmar surface of the hand. Three types of line patterns are clearly visible on the palm: the principal lines, wrinkles, and ridges. Principal lines are the longest, strongest, and widest lines on the palm, and they characterize its most distinguishable features. Most people have three principal lines, which are named the heart line, head line, and life line (Fig. 1). Wrinkles are the thinner and more irregular line patterns. The wrinkles, especially the pronounced wrinkles around the principal lines, can also contribute to the discriminability of the palm print. On the other hand, ridges are the fine line texture distributed throughout the palmar surface. The ridge feature is less useful for discriminating individuals as it cannot be perceived under a poor imaging source.

Figure 1.

The Line Patterns on the Palm Print. The Three Principal Lines on a Palm: 1–heart line, 2–head line and 3–life line (Zhang et al., 2003)

On the other hand, the hand vein refers to the vascular pattern or blood vein patterns recorded from underneath the human skin. The subcutaneous blood veins flow through the human hand, covering the wrist, palm, and fingers. Every person has a unique structure and position of veins, and this does not change significantly from the age of ten (Vein recognition in Europe, 2004). As the blood vessels are believed to be “hard-wired” into the body at birth, even twins have unique vein patterns. In fact, the vascular patterns on the left and right hands also differ. As the complex vein structure resides inside the human body, it is not possible (except by anatomical surgery) to copy or duplicate the vein pattern. Besides, external conditions like grease, dirt, wear and tear, and dry or wet hand surfaces do not affect the vein structure. The properties of stability, uniqueness, and spoof resilience make the hand vein a potentially good biometric for personal authentication. Fig. 2 depicts the vein patterns captured from the palmar surface and the back of the hand.

Figure 2.

(a) Vein Image on the Palm and Fingers (PalmSecure™, 2009). (b) Vein Image at the Hand Dorsum (Hitachi and Fujitsu win vein orders in diverse markets, 2007).

2. Literature review

2.1. Palm print biometrics

2.1.1. Image acquisition

Most palm print systems utilize CCD scanners to acquire the palm print images (Zhang et al., 2003; Han, 2004; Kong & Zhang, 2004). A team of researchers from Hong Kong Polytechnic University pioneered the CCD-based palm print scanner (Zhang et al., 2003). The scanner was designed to work in a predefined, controlled environment. The proposed device captured high quality palm print images and aligned palms accurately with the aid of guidance pegs.

Although CCD-based palm print scanners can capture high quality images, they require careful device setup, involving appropriate selection and configuration of the lens, camera, and light sources. In view of this, some researchers proposed using digital cameras and video cameras, as this setting requires less effort in system design (Doublet et al., 2007). Most of the systems that deployed digital cameras and video cameras posed less stringent constraints on the users. They did not use pegs for hand placement and did not require special lighting control. This was believed to increase user acceptance and reduce the maintenance effort of the system. Nevertheless, such setups can be problematic, as the image quality may be low due to uncontrolled illumination variation and distortion caused by hand movement.

Apart from CCD scanners and digital/video cameras, there was also research which employed digital scanners (Qin et al., 2006). Nonetheless, the digital scanner is not suitable for real-time applications because of its long scanning time. Besides, the images may be deformed due to the pressing effect of the hand on the platform surface. Fig. 3 shows palm print images collected using a CCD scanner, a digital scanner, and a video camera.

Figure 3.

Palm print images captured with (a) CCD scanner (Zhang et al., 2003), (b) digital scanner (Qin et al., 2006), and (c) video camera (Doublet et al., 2007).

2.1.2. Feature extraction

A number of approaches have been proposed to extract the various palm print features. The works reported in the literature can be broadly classified into three categories, namely line-based, appearance-based, and texture-based (Zhang & Liu, 2009). Some earlier research in palm print followed the line-based direction. The line-based approach studies the structural information of the palm print. Line patterns like principal lines, wrinkles, ridges, and creases are extracted for recognition (Funada et al., 1998; Duta et al., 2002; Chen et al., 2001). Later research used more flexible approaches to extract the palm lines, employing edge detection methods like the Sobel operator (Wu et al., 2004a; Boles & Chu, 1997; Leung et al., 2007), morphological operators (Rafael Diaz et al., 2004), edge maps (Kung et al., 1995), and the modified radon transform (Huang et al., 2008). There were also researchers who implemented their own edge detection algorithms to extract the line patterns (Liu & Zhang, 2005; Wu et al., 2004b; Huang et al., 2008).

On the other hand, the appearance-based approach is more straightforward as it treats the palm print image as a whole. Common methods used for the appearance-based approach include principal component analysis (PCA) (Lu et al., 2003; Kumar & Negi, 2007), linear discriminant analysis (LDA) (Wu et al., 2003), and independent component analysis (ICA) (Connie et al., 2005). There were also researchers who developed their own algorithms to analyze the appearance of the palm print (Zuo et al., 2005; Feng et al., 2006; Yang et al., 2007; Deng et al., 2008).

Alternatively, the texture-based approach treats the palm print as a texture image. Therefore, statistical methods like Laws’ convolution masks, the Gabor filter, and the Fourier transform can be used to compute the texture energy of the palm print. Among the methods tested, the 2-D Gabor filter has been shown to provide promising results (You et al., 2004; Wu et al., 2004b; Kong et al., 2006). The ordinal measure has also emerged as another powerful method to extract the texture feature (Sun et al., 2005). It detects elongated and line-like image regions which are orthogonal in orientation; the extracted feature is known as the ordinal feature. Some researchers have also explored texture descriptors like the local binary pattern to model the palm print texture (Wang et al., 2006). In addition, other techniques studied the palm print texture in the frequency domain using the Fourier transform (Li et al., 2002) and the discrete cosine transform (Kumar & Zhang, 2006).

Apart from the approaches described above, some research took a step further to transform the palm print features into binary code representations (Kong & Zhang, 2004b; Kong & Zhang, 2006; Zhang et al., 2003; Sun et al., 2005; Kong & Zhang, 2002). The coding methods are suitable for classification involving large-scale databases. The coding algorithms for palm print are inspired by the IrisCode technique (Daugman, 1993). PalmCode (Kong & Zhang, 2002; Zhang, Kong, You, & Wong, 2003) was the first coding-based technique reported in palm print research. Later on, more variations evolved from PalmCode, including Fusion Code (first and second versions) (Kong & Zhang, 2004b) and Competitive Code (Kong & Zhang, 2004; Kong, Zhang, & Kamel, 2006b). In addition, there were also other coding approaches like the ordinal code (Sun et al., 2005), orientation code (Wu et al., 2005), and line orientation code (Jia et al., 2008).

2.1.3. Matching

Depending on the types of features extracted, a variety of matching techniques were used to compare two palm print images. In general, these techniques can be divided into two categories: geometry-based matching and feature-based matching (Teoh, 2009). The geometry-based matching techniques compare geometrical primitives like point (Duta, Jain, & Mardia, 2002; You, Li, & Zhang, 2002) and line features (Huang, Jia, & Zhang, 2008; Zhang & Shu, 1999) on the palm. When the point features are located using methods like an interest point detector (You, Li, & Zhang, 2002), a distance metric such as the Hausdorff distance can be used to calculate the similarity between two feature sets. When the palm print pattern is characterized by line-based features, the Euclidean distance can be applied to compute the similarity, or rather dissimilarity, between two line segments represented in the Z² coordinate system. Line-based matching is, on the whole, perceived as more informative than point-based matching because the palm print pattern can be better characterized by the rich line features than by isolated datum points. Besides, researchers have conjectured that simple line features like the principal lines have sufficiently strong discriminative ability (Huang, Jia, & Zhang, 2008).

Feature-based matching works well for the appearance-based and texture-based approaches. For research which studied subspace methods like PCA, LDA, and ICA, most of the authors adopted Euclidean distances to compute the matching scores (Lu, Zhang, & Wang, 2003; Wu, Zhang, & Wang, 2003; Lu, Wang, & Zhang, 2004). For the other studies, a variety of distance metrics like the city-block and chi-square distances were deployed (Wu, Wang, & Zhang, 2004b; Wu, Wang, & Zhang, 2002; Wang, Gong, Zhang, Li, & Zhuang, 2006). Feature-based matching has a great advantage over geometry-based matching when low-resolution images are used. This is because geometry-based matching usually requires higher resolution images to acquire the precise location and orientation of the geometrical features.

Aside from the two primary matching approaches, more complicated machine learning techniques like neural networks (Han, Cheng, Lin, & Fan, 2003), Support Vector Machines (Zhou, Peng, & Yang, 2006), and Hidden Markov Models (Wu, Wang, & Zhang, 2004c) were also tested. Often, a number of matching approaches can be combined to yield better accuracy. You et al. (2004) showed that the integration can be performed in a hierarchical manner to boost performance and speed.

When the palm print features were transformed into binary bit-strings for representation, the Hamming distance was utilized to count the bit differences between two feature sets (Zhang, Kong, You, & Wong, 2003; Kong & Zhang, 2004b; Sun, Tan, Wang, & Li, 2005). An exception was the competitive coding scheme, where the angular distance was employed (Kong & Zhang, 2004).

2.2. Hand vein biometrics

2.2.1. Image acquisition

In visible light, the vein structure of the hand is not always easily discernible. Due to the biological composition of human tissue, the vein pattern can be observed under infrared light. In the electromagnetic spectrum, infrared refers to the region with wavelengths typically spanning from 0.75μm to 1000μm. This region can be further divided into four sub-bands, namely near infrared (NIR) in the range of 0.75μm to 2μm, middle infrared in the range of 2μm to 6μm, far infrared (FIR) in the range of 6μm to 14μm, and extreme infrared in the range of 14μm to 1000μm. In the literature, NIR (Cross & Smith, 1995; Miura, Nagasaka, & Miyatake, 2004; Wang, Yau, Suwandy, & Sung, 2008; Toh, Eng, Choo, Cha, Yau, & Low, 2005) and FIR (Wang, Leedham, & Cho, 2008; Lin & Fan, 2004) sources were used to capture the hand vein images.

FIR imaging technology forms images based on the infrared radiation emitted from the human body. Medical researchers have found that human veins have a higher temperature than the surrounding tissues (Mehnert, Cross, & Smith, 1993). Therefore, the vein patterns can be clearly displayed via thermal imaging (Fig. 4(a) and (b)). No external light is required for FIR imaging; thus, FIR does not suffer from the illumination problems of many other imaging techniques. However, this technology is easily affected by external conditions like ambient temperature and humidity. In addition, perspiration can also affect the image quality (Wang, Leedham, & Cho, 2007).

On the other hand, NIR technology functions based on two special attributes: (i) infrared light can penetrate into the hand tissue to a depth of about 3mm, and (ii) the reduced haemoglobin in venous blood absorbs more incident infrared radiation than the surrounding tissues (Cross & Smith, 1995). As such, the vein patterns near the skin surface are discernible as they appear darker than the surrounding area. As shown in Fig. 4(c), NIR can capture the major vein patterns as effectively as the FIR imaging technique. More importantly, it can detect finer veins lying near the skin surface, which increases the potential discriminative ability of the vein pattern. Apart from that, NIR imaging is better able to withstand the external environment and the subject’s body temperature. Besides, the colour of the skin does not have any impact on the vein patterns (Wang, Leedham, & Cho, 2008).

Figure 4.

(a) FIR image in a normal office environment. (b) FIR image in an outdoor environment (Wang, Leedham, & Cho, 2008). (c) NIR vein image (Wang, Leedham, & Cho, 2008).

Infrared-sensitive CCD cameras like the Takena System NC300AIR (Miura, Nagasaka, & Miyatake, 2004), JAI CV-M50 IR (Toh, Eng, Choo, Cha, Yau, & Low, 2005; Wang, Yau, Suwandy, & Sung, 2008), and Hitachi KP-F2A (Wang, Leedham, & Cho, 2007) were used to capture images of veins near the body surface. Near infrared LEDs with wavelengths from 850nm (Wu & Ye, 2009; Wang, Leedham, & Cho, 2007; Kumar & Prathyusha, 2009) to 880nm (Cross & Smith, 1995) were used as the light source. To cut off the visible light, IR filters with different cutoff wavelengths, λ, were devised. Some researchers deployed an IR filter with λ ≈ 800nm (Wang, Leedham, & Cho, 2007; Wu & Ye, 2009), and some used a higher cutoff wavelength of 900nm (Cross & Smith, 1995).

2.2.2. Feature extraction

The feature extraction methods for vein recognition can be broadly categorized into (i) structural and (ii) global-based approaches. The structural method studies the lines and feature points (like minutiae) of the vein (Cross & Smith, 1995; Miura, Nagasaka, & Miyatake, 2004; Wang, Zhang, Yuan, & Zhuang, 2006; Kumar & Prathyusha, 2009). Thresholding and thinning techniques (Cross & Smith, 1995; Wang, Zhang, Yuan, & Zhuang, 2006), morphological operators (Toh, Eng, Choo, Cha, Yau, & Low, 2005), as well as skeletonization and smoothing methods (Wang, Leedham, & Cho, 2008) were used to extract the vein structure. These geometrical/topological features were then used to represent the vein pattern.

In contrast, the global-based method characterizes the vein image in its entirety. Lin and Fan (2004) performed multi-resolution analysis to analyze the palm-dorsa vein patterns. Wang et al. (2006) carried out multi-feature extraction based on vein geometry, the Karhunen-Loève transform, and invariant moments, and fused the results of these methods. Other global-based approaches adopted the curvelet transform (Zhang, Ma, & Han, 2006) and the Radon transform (Wu & Ye, 2009) to extract the vein feature.

2.2.3. Matching

Most of the works in the literature deployed the correlation technique (or its variations) to evaluate the similarity between the enrolled and test images (Cross & Smith, 1995; Kono, Ueki, & Umemur, 2002; Miura, Nagasaka, & Miyatake, 2004; Toh, Eng, Choo, Cha, Yau, & Low, 2005). Other simple distance measures like the Hausdorff distance were also adopted (Wang, Leedham, & Cho, 2008). More sophisticated methods like back-propagation neural networks (Zhang, Ma, & Han, 2006) as well as Radial Basis Function (RBF) and probabilistic neural networks (Wu & Ye, 2009) were also used as classifiers in vein recognition research.

3. Proposed solution

In this research, we endeavour to develop an online acquisition device which can capture hand images in a contactless environment. Specifically, we want to acquire the different hand modalities, namely the palm print and palm vein images, simultaneously without incurring additional sensor cost or adding user complication. The users do not need to touch or hold on to any peripheral for their hand images to be acquired. When their hand images are captured, the regions of interest (ROI) of the palm are tracked and extracted. The ROIs contain the important information of the hand that is used for recognition. The ROIs are pre-processed so that the print and vein textures become distinguishable from the background. After that, distinguishing features in the ROIs are extracted using a proposed technique called directional coding. The hand features are mainly made up of line-like texture, and the directional coding technique encodes the discriminative information of the hand based on the orientation of the line primitives. The extracted palm print and palm vein features are then fused at score level to yield better recognition accuracy. We have also included an image quality assessment scheme: when fusion is performed, more weight is assigned to the better quality image. The framework of our proposed system is shown in Fig. 5.

Figure 5.

Framework of the proposed system.

3.1. Design and implementation of acquisition device

Image acquisition is a very important component because it generates the images to be used and evaluated in this study. We aim to develop a real-time acquisition device which can capture hand images in a contactless environment. The design and implementation of an efficient real-time hand acquisition device must contend with a number of challenges. Firstly, the acquisition device must be able to provide images with sufficient contrast so that the hand features are discernible and can be used for processing. The hardware setup plays a crucial role in providing high quality images; the arrangement of the imaging sensor and the design of the lighting units have a great impact on the quality of the images acquired. Therefore, the capturing device should be calibrated carefully to obtain high-contrast images. Secondly, a single acquisition device should be used to capture multiple image sources (e.g. visible and infrared images). It is neither efficient nor economical for a multimodal biometric system to install multiple capturing devices, for example, a normal camera to acquire the visible image and separate specialized equipment to obtain the IR image. Therefore, from the system application view, an acquisition device with low development cost is expected for a multimodal biometric system. Thirdly, speed is a major concern in an online application. The capturing time of the acquisition device should be fast enough that the user does not notice that multiple biometric features are being acquired by the system for processing. In other words, a real-time acquisition system should be able to capture all of the biometric features in the shortest time possible.

In this research, we design an acquisition device that aims to fulfil the requirements above. The hardware setup of the capturing device is shown in Fig. 6. Two low-cost imaging units are mounted side by side on the device. The first imaging unit is used to capture visible light images, while the second obtains infrared images. Both units are commercially available off-the-shelf webcams. Warm-white light bulbs are placed around the imaging units to irradiate the hand under visible light; the bulbs emit a yellowish light that enhances the lines and ridges of the palm. To acquire the IR image, we do not use any specialized IR camera. Instead, we modify an ordinary webcam into an IR-sensitive camera. The webcam used for infrared imaging is fitted with an infrared filter which blocks the visible (non-IR) light and allows only the IR light to reach the sensor. In this study, we find that an IR filter which passes infrared rays above 900nm gives the best quality images. A number of infrared LEDs are arranged on the board to serve as the infrared (cold) light source to illuminate the vein pattern. We have experimented with different types of infrared LEDs, and those emitting light in the range of 880nm to 920nm provide relatively good contrast of the vein pattern. A diffuser paper is used to attenuate the IR source so that the radiation is distributed more uniformly around the imaging unit.

During image acquisition, we request the user to position his/her hand above the sensor with the palm facing the sensor (Fig. 6(a)). The user has to stretch his/her fingers slightly apart. There is no guidance peripheral to restrain the user’s hand; the user can place his/her hand naturally above the sensor. We do not restrict the user to placing the hand at a particular position above the sensor, nor limit it to a certain orientation. Instead, we allow the user to move, and even rotate, his/her hand while the images are being acquired. The optimal viewing region for the acquisition sensor is 25 cm from the surface of the imaging unit. We allow a tolerable focus range of 25 cm ± 4 cm to permit more flexibility for the users to interact with the device (Fig. 6(c) and Fig. 7).

Figure 6.

(a) Image acquisition device (covered). (b) Image acquisition device (uncovered). (c) Acquiring the hand images.

Figure 7.

Tolerable focus range for the image acquisition device.

In this study, a standard PC with an Intel Core 2 Quad processor (2.4 GHz) and 3072 MB RAM was used. The program was developed using Visual Studio.NET 2008. The application depicted in Fig. 8 shows a live video sequence of the hand image recorded by the sensor. Both the visible light and IR images of the hand can be captured simultaneously. The interface provides direct feedback to the user on whether he/she is placing the hand properly inside the working volume. After the hand was detected in the working volume, the ROIs of the palm and fingers were captured from the video sequence and stored in bitmap format. The hand image was detected in the real-time video sequence at 30 fps. The image resolution was 640 x 480 pixels, with RGB colour output (8 bits per channel). The delay interval between capturing the current and the next ROI was 2 seconds.
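To make the acquisition flow concrete, the following Python sketch outlines a comparable two-camera capture loop. It uses OpenCV purely as a stand-in for the actual Visual Studio.NET implementation; the device indices and output file names are hypothetical.

```python
import time
import cv2

# Hypothetical device indices: 0 = visible-light webcam, 1 = IR-modified webcam.
visible_cam = cv2.VideoCapture(0)
infrared_cam = cv2.VideoCapture(1)
for cam in (visible_cam, infrared_cam):
    cam.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

last_capture = 0.0
frame_id = 0
while True:
    ok_vis, vis_frame = visible_cam.read()   # visible-light frame
    ok_ir, ir_frame = infrared_cam.read()    # infrared frame
    if not (ok_vis and ok_ir):
        break
    # A hand-tracking step would locate the ROI here (omitted in this sketch).
    # A 2-second delay separates consecutive ROI captures, as in the text.
    if time.time() - last_capture >= 2.0:
        cv2.imwrite(f"roi_visible_{frame_id}.bmp", vis_frame)
        cv2.imwrite(f"roi_infrared_{frame_id}.bmp", ir_frame)
        last_capture = time.time()
        frame_id += 1
```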

Figure 8.

Software interface depicting the image acquisition process.

We used the setup described above in an office environment to evaluate the performance of the proposed multimodal hand-based biometric system. We recorded the hand images of 136 individuals; 64 of them are female, and 42 of them are less than 30 years old. The users come from different ethnic groups such as Chinese, Malays, Indians, and Arabians. Most of them are students and lecturers from Multimedia University. Ten samples were captured for each user. The samples were acquired on two different occasions separated by an interval of two months.

3.2. Pre-processing

We adopt the hand tracking algorithm proposed in our previous work (Goh et al., 2008) to detect and locate the region of interest (ROI) of the palm. After obtaining the ROIs, we enhance the contrast and sharpness of the images so that the dominant palm vein features can be highlighted and become distinguishable from the background. Gamma correction is first applied to obtain better image contrast (Gonzalez & Woods, 2002). To bring out the detail of the ridge pattern, we investigated a number of well-known image enhancement methods like Laplacian filters, the Laplacian of Gaussian, and unsharp masking. Although these techniques work well for sharpening the images, they tend to over-enhance the noise elements. For this reason, we propose a local-ridge-enhancement (LRE) technique to obtain a sharp image without overly amplifying the noise. This method discovers which parts of the image contain important line and ridge patterns, and amplifies only these areas.
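For reference, gamma correction is a simple power-law transform; a minimal sketch follows (the gamma value here is illustrative, not the setting used in the chapter):

```python
import numpy as np

def gamma_correction(image, gamma=0.8):
    """Power-law (gamma) transform on an 8-bit grayscale image.

    The gamma value is an illustrative choice, not the chapter's setting.
    """
    normalized = image.astype(np.float64) / 255.0   # scale to [0, 1]
    corrected = normalized ** gamma                  # power-law mapping
    return (corrected * 255).astype(np.uint8)
```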

The proposed LRE method uses a “ridge detection mask” to find the palm vein structures in the image. LRE first applies a low-pass filter, g(x, y), to the original image, I(x, y), shown in Fig. 9a to obtain a blurred version of the image, M(x, y):

$$M(x, y) = g(x, y) * I(x, y) \tag{1}$$

In this research, a Gaussian filter with σ = 60 is used for this purpose. After that, we use a high-pass filter, h(x, y), to locate the ridge edges in the blurred image:

$$M'(x, y) = h(x, y) * M(x, y) \tag{2}$$

Note that since the trivial/weak ridge patterns have already been “distilled” out in the blurred image, only the edges of the principal/strong ridges show up in M'(x, y). In this work, the Laplacian filter is used as the high-pass filter.

At this stage, M'(x, y) exhibits the edges of the primary ridge structure (Fig. 9c). We binarize M'(x, y) using a threshold value, τ. Morphological operators like opening and closing can then be used to eliminate unwanted noise regions. The resultant image is the “mask” marking the location of the strong ridge pattern.

We “overlay” M'(x, y) on the original image to amplify the ridge region:

$$I'(x, y) = \begin{cases} c \, I(x, y) & \text{if } M'(x, y) = 1 \\ I(x, y) & \text{otherwise} \end{cases} \tag{3}$$

where I'(x, y) is the enhanced image and c is the coefficient that determines the level of intensity used to highlight the ridge area. The lower the value of c, the more the ridge pattern is amplified (the darker the area becomes). In this work, the value of c is empirically set to 0.9. Fig. 9f shows the resulting enhanced image. We wish to point out that more variation can be added by assigning different values of c to highlight the different ridge areas according to their strength levels. For example, gray-level slicing can be used to assign a larger weight, c, to a stronger ridge pattern, and vice versa. We do not perform this additional step out of consideration for computational overhead (computation time is a critical factor in an online application). Fig. 10 depicts some sample image enhancement results for the palm print and palm vein images.
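Putting Eqs. (1)-(3) together, a minimal sketch of the LRE pipeline might look as follows. The threshold τ and the structuring elements for the morphological clean-up are not specified in the chapter, so the values below are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def local_ridge_enhancement(image, sigma=60, tau=0.02, c=0.9):
    """Sketch of the LRE method following Eqs. (1)-(3).

    `tau` is an illustrative threshold; the chapter does not give its value.
    """
    img = image.astype(np.float64) / 255.0

    # Eq. (1): low-pass (Gaussian, sigma = 60) filtering of the original image.
    blurred = ndimage.gaussian_filter(img, sigma=sigma)

    # Eq. (2): high-pass (Laplacian) filtering of the blurred image; only the
    # edges of the principal/strong ridges survive.
    ridge_edges = ndimage.laplace(blurred)

    # Binarize with threshold tau, then clean up with opening and closing.
    mask = np.abs(ridge_edges) > tau
    mask = ndimage.binary_opening(mask)
    mask = ndimage.binary_closing(mask)

    # Eq. (3): darken (amplify) the ridge regions by the coefficient c = 0.9.
    enhanced = np.where(mask, c * img, img)
    return (enhanced * 255).astype(np.uint8)
```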

Figure 9.

Processes involved in the proposed LRE method. (a) Original image. (b) Response of applying the low-pass filter. (c) Response of applying the high-pass filter on the low-pass response. (d) Image binarization. (e) Applying morphological operations. (f) Result of LRE.

Figure 10.

Result of applying the proposed LRE method.

3.3. Feature extraction

We propose a new scheme named the Directional Coding method to extract the palm print and palm vein features. These hand features contain similar textures which are primarily made up of line primitives. For example, palm prints are made up of strong principal lines and some thin wrinkles, whilst the palm vein contains a vascular network which also exhibits line-like characteristics. Therefore, we can deploy a single method to extract the discriminative line information from the different hand features. The aim is to encode the line pattern based on the predominant orientation of the lines. We first apply the Wavelet Transform to decompose the palm print images into a lower resolution representation. The Sobel operator is then used to detect the palm print edges in the horizontal, vertical, +45º, and -45º orientations. After that, the output sample, Φ(x, y), is determined using the formula

$$\Phi(x, y) = \delta\!\left(\arg\max f(\omega_R(x, y))\right) \tag{4}$$

where ω_R(x, y) denotes the responses of the Sobel mask in the four directions (horizontal, vertical, +45º, and -45º), and δ ∈ {1, 2, 3, 4} indicates the index used to code the orientation of the strongest response. The index, δ, can be in any form, but we use a decimal representation to characterize the four orientations for the sake of simplicity. The output, Φ(x, y), is then converted to the corresponding binary reflected Gray code. The bit string assignment enables a more effective matching process, as the computation only deals with plain binary bit strings rather than real or floating point numbers. Another benefit of converting the bit string to a Gray code representation is that Gray code exhibits fewer bit transitions. This is a desired property since we require the biometric feature to have high similarity within the data (for the same subject); thus, the Gray code representation yields fewer bit differences and more similarity in the data pattern. Fig. 11(b) to (e) show the gradient responses of the palm print in the four directions. Fig. 11(f) is the result of taking the maximum gradient values obtained from the four responses. This image depicts the strongest directional response of the palm print and closely resembles the original palm print pattern shown in Fig. 11(a). An example of directional coding applied to a palm vein image is illustrated in Fig. 12.
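A rough sketch of this pipeline is given below. The wavelet family, the exact diagonal Sobel masks, and the use of the absolute response magnitude are our assumptions; the chapter's δ ∈ {1, 2, 3, 4} is mapped to {0, 1, 2, 3} here so that the two-bit Gray code g = δ ^ (δ >> 1) can be formed directly.

```python
import numpy as np
import pywt                      # PyWavelets, for the wavelet decomposition
from scipy import ndimage

# Sobel-style masks for the four orientations (diagonal masks are assumptions).
MASKS = [
    np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]),   # horizontal
    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),   # vertical
    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]),   # +45 degrees
    np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]]),   # -45 degrees
]

def directional_code(image):
    # Decompose to a lower resolution: keep the approximation sub-band of a
    # single-level 2-D wavelet transform (wavelet family assumed to be Haar).
    approx, _ = pywt.dwt2(image.astype(np.float64), 'haar')

    # Eq. (4): take the orientation index of the strongest Sobel response.
    responses = np.stack([np.abs(ndimage.convolve(approx, m)) for m in MASKS])
    delta = np.argmax(responses, axis=0).astype(np.uint8)  # in {0, 1, 2, 3}

    # Binary reflected Gray code of the index (two bits per pixel); Gray code
    # keeps neighbouring orientations one bit apart.
    gray = delta ^ (delta >> 1)
    bits = np.unpackbits(gray.reshape(-1, 1), axis=1)[:, -2:]
    return bits.flatten()
```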

Figure 11.

Example of Directional Code applied on palm print image.

Figure 12.

Example of Directional Code applied on palm vein image.

3.4. Feature matching

The Hamming distance is deployed to count the fraction of bits that differ between two code strings produced by the Directional Coding method. It is defined as

$$d_{ham}(G, P) = \mathrm{XOR}(G, P) \tag{5}$$
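In terms of bit arrays, Eq. (5) amounts to XOR-ing the gallery code G and probe code P and counting the set bits; normalizing by the code length (an assumption implied by "fraction of bits") gives the dissimilarity score. A minimal sketch:

```python
import numpy as np

def hamming_distance(code_g, code_p):
    """Fraction of differing bits between two equal-length binary codes."""
    g = np.asarray(code_g, dtype=bool)
    p = np.asarray(code_p, dtype=bool)
    return np.count_nonzero(g ^ p) / g.size  # XOR, then count set bits
```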

3.5. Fusion approach

In this research, the sum-based fusion rule is used to consolidate the matching scores produced by the different hand biometric modalities. The sum rule is defined as

$$\bar{S} = \sum_{i=1}^{k} s_i \tag{6}$$

where s_i denotes the score generated by the i-th expert and k signifies the number of experts in the system. We apply the sum rule because studies have shown that it provides good results compared to other fusion techniques like likelihood ratio-based fusion (He et al., 2010), neural networks (Ross & Jain, 2003), and decision trees (Wang, Tan, & Jain, 2003). Another reason we do not apply a sophisticated fusion technique in our work is that our dataset has been reasonably cleansed by the image pre-processing and feature extraction stages (as will be shown in the experiment section).

The sum rule is a linear fusion method. To conduct a more thorough evaluation, we also examine the use of a non-linear classification tool. The Support Vector Machine (SVM) is adopted for this purpose. SVM is a machine learning technique based on the Structural Risk Minimization (SRM) principle. It has good generalization characteristics, as it minimizes the decision boundary based on the generalization error, and it has proven to be a successful classifier in several classical pattern recognition problems (Burges, 1998). In this research, the Radial Basis Function (RBF) kernel is explored. The RBF kernel is defined as (Saunders, 1998; Vapnik, 1998)

$$K(x, x_i) = \exp\!\left(-\frac{(x - x_i)^2}{2\sigma^2}\right) \tag{7}$$

where σ > 0 is a constant that defines the kernel width.
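For illustration, the sketch below applies the sum rule of Eq. (6) to a pair of expert scores and trains an RBF-kernel SVM as the non-linear alternative. The scores and labels are made-up numbers, and scikit-learn's SVC is our assumed stand-in for the chapter's SVM implementation (its gamma parameter corresponds to 1/(2σ²) in Eq. (7)).

```python
import numpy as np
from sklearn.svm import SVC

# Made-up matching scores (palm print, palm vein); these are distances, so
# genuine comparisons (label 1) are small and imposter ones (label 0) large.
scores = np.array([[0.10, 0.15], [0.12, 0.09], [0.81, 0.77], [0.74, 0.88]])
labels = np.array([1, 1, 0, 0])

# Sum-rule fusion, Eq. (6): add the scores from the k experts.
fused = scores.sum(axis=1)

# Non-linear fusion: an SVM with the RBF kernel of Eq. (7).
svm = SVC(kernel='rbf', gamma=0.5)
svm.fit(scores, labels)
print(svm.predict([[0.11, 0.13]]))  # classify a new (PP, PV) score pair
```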

3.6. Incorporating image quality assessment in fusion scheme

We propose a novel method to incorporate image quality in our fusion scheme to obtain better performance. We first examine the quality of the images captured by the imaging device and assign more weight to the better quality image when fusion is performed. Assigning a larger weight to the better quality image is useful when we fuse images captured under visible light (e.g. palm print) and infrared light (e.g. palm vein). Sometimes the vein images may not appear clear due to the medical condition of the skin (like thick fatty tissue obstructing the subcutaneous blood vessels); thus, it is not appropriate to assign equal weight to these poor quality images and to those having clear patterns.

We design an evaluation method to assess the richness of texture in the images. We quantify the image quality using measures derived from the Gray Level Co-occurrence Matrix (GLCM) (Haralick, Shanmugam, & Dinstein, 1973). We have identified several GLCM measures which describe image quality appropriately. These measures are modelled using fuzzy logic to produce the final image quality metric used in the fusion scheme.

3.6.1. Brief overview of GLCM

The GLCM is a popular texture analysis tool which has been successfully applied in a number of applications like medical analysis (Tahir, Bouridane, & Kurugollu, 2005), geological imaging (Soh & Tsatsoulis, 1999), remote sensing (Ishak, Mustafa, & Hussain, 2008), and radiography (Chen, Tai, & Zhao, 2008). Given an M × N image with gray level values ranging from 0 to L-1, the GLCM for this image, P(i, j, d, θ), is the matrix recording the joint probability function, where i and j are the gray levels of pixel pairs separated by a distance d in the θ direction. More formally, the (i, j)-th element of the GLCM can be expressed as

$$P(i, j, d, \theta) = \#\{[(x_1, y_1), (x_2, y_2)] \mid f(x_1, y_1) = i,\ f(x_2, y_2) = j,\ dis((x_1, y_1), (x_2, y_2)) = d,\ \angle((x_1, y_1), (x_2, y_2)) = \theta\} \tag{8}$$

where dis(·) refers to a distance metric, ∠(·) is the angle between two pixels in the image, and # denotes the number of pixel pairs whose intensities f(x, y) satisfy the relationship characterized by d and θ. To obtain the normalized GLCM, we divide each entry by the number of pixels in the image:

$$P_{norm}(i, j, d, \theta) = \frac{P(i, j, d, \theta)}{M \times N} \tag{9}$$

Based on the GLCM, a number of textural features can be calculated. Some commonly used features are shown in Table 1.

No. | Feature | Equation
1 | Angular second moment (ASM) / Energy | ASM = Σ_{i,j} P(i, j, d, θ)²
2 | Contrast | con = Σ_{i,j} |i − j|² P(i, j, d, θ)
3 | Correlation | corr = Σ_{i,j} [(i − μ_i)(j − μ_j) / (σ_i σ_j)] P(i, j, d, θ)
4 | Homogeneity | hom = Σ_{i,j} P(i, j, d, θ) / (1 + |i − j|)
5 | Entropy | ent = −Σ_{i,j} P(i, j, d, θ) log P(i, j, d, θ)

Table 1.

Some common GLCM textural features.

These measures are useful for describing the texture of an image. For example, the ASM tells how orderly an image is, while homogeneity measures how closely the elements of the GLCM are distributed to its diagonal.

3.6.2. Selecting image quality metrics

Based on the different texture features derived from the GLCM, a fuzzy inference system can be used to aggregate these parameters and derive a final image quality score. Among the different GLCM metrics, we observe that contrast, variance, and correlation characterize image quality well. Contrast is the chief indicator of image quality: an image with high contrast portrays dark and visible line texture. Variance and correlation are also good indicators; better quality images tend to have higher values for contrast and variance, and a lower value for correlation. Table 2 shows the values of contrast, variance, and correlation for sample palm print and palm vein images.

When we observe the images, we find that images containing similar amounts of textural information yield similar measurements for contrast, variance, and correlation. Both the palm print and palm vein images of the first subject, for instance, contain plenty of textural information; thus, their GLCM features, especially the contrast value, do not vary much. However, as the texture is clearly more visible in the palm print image than in the palm vein image of the second subject, it is not surprising that the palm print image has a much higher contrast value than the vein image.
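A sketch of how the three selected metrics could be computed with scikit-image's GLCM utilities is shown below. GLCM variance is not provided by graycoprops, so it is computed directly from the normalized matrix; the distance and angle parameters are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_quality_metrics(image, distance=1, angle=0.0):
    """Contrast, variance, and correlation of an 8-bit grayscale image."""
    # Normalized co-occurrence matrix P(i, j, d, theta), as in Eqs. (8)-(9).
    p = graycomatrix(image, distances=[distance], angles=[angle],
                     levels=256, normed=True)
    contrast = graycoprops(p, 'contrast')[0, 0]
    correlation = graycoprops(p, 'correlation')[0, 0]

    # GLCM variance, computed directly from the normalized matrix.
    pij = p[:, :, 0, 0]
    i, _ = np.indices(pij.shape)
    mu = np.sum(i * pij)
    variance = np.sum((i - mu) ** 2 * pij)
    return contrast, variance, correlation
```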

3.6.3. Modeling fuzzy inference system

The three image quality metrics, namely contrast, variance, and correlation, are fed as input to the fuzzy inference system. Each of the input sets is modelled by three membership functions, as depicted in Fig. 13(a)-(c).

The membership functions are formed by Gaussian functions, or a combination of Gaussian functions, given by

$$f(x; \sigma, c) = e^{-(x - c)^2 / 2\sigma^2} \tag{10}$$

where c indicates the centre of the peak and σ controls the width of the distribution. The parameters of each membership function are determined by taking the best performing values on the development set.

Subject | Palm print quality metrics | Palm vein quality metrics
1 | Contrast: 5.82; Variance: 12.91; Correlation: 3.21; Defuzzified output: 0.74 | Contrast: 5.82; Variance: 12.91; Correlation: 3.21; Defuzzified output: 0.74
2 | Contrast: 7.71; Variance: 7.92; Correlation: 2.63; Defuzzified output: 0.81 | Contrast: 2.97; Variance: 6.44; Correlation: 3.45; Defuzzified output: 0.49
3 | Contrast: 12.13; Variance: 8.28; Correlation: 1.90; Defuzzified output: 0.81 | Contrast: 8.47; Variance: 8.44; Correlation: 2.54; Defuzzified output: 0.80
4 | Contrast: 7.05; Variance: 8.57; Correlation: 2.80; Defuzzified output: 0.80 | Contrast: 2.04; Variance: 4.10; Correlation: 3.58; Defuzzified output: 0.26

Table 2.

The GLCM measures and image quality metrics for the sample palm print and palm vein images.

The conditions for the image quality measures are expressed by fuzzy IF-THEN rules. The principal controller determining the output image quality is the contrast value: the image quality is good if the contrast value is large, and vice versa. The other two inputs, variance and correlation, serve as regulators to adjust the output value when the contrast value is fair/medium. Thirteen rules are used, whose main properties are:

  • If all the inputs are favourable (high contrast, high variance, and low correlation), the output is set to high.

  • If the inputs are fair, the output value is determined primarily by the contrast value.

  • If all the inputs are unfavourable (low contrast, low variance, and high correlation), the output is set to low.

Figure 13.

Three membership functions defined for the input variables: (a) the contrast parameter, (b) the variance parameter, and (c) the correlation parameter. (d) Output membership functions.

We use the Mamdani reasoning method to interpret the fuzzy rules. This technique is adopted because it is intuitive and works well with human input. The output membership functions are given as O = {Poor, Medium, Good}. The output functions are shown in Fig. 13(d), and they comprise similar distribution functions to the input sets (combinations of Gaussian functions). The defuzzified output scores are recorded in Table 2. The output values adequately reflect the quality of the input images (the higher the value, the better the image quality).
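A heavily simplified sketch of this inference step is given below. It implements only two of the thirteen rules, with hand-picked Gaussian membership parameters (Eq. (10)) and a centroid-style defuzzification; all numeric constants are illustrative assumptions, since the tuned values from the development set are not listed in the chapter.

```python
import numpy as np

def gauss(x, c, sigma):
    """Gaussian membership function, Eq. (10)."""
    return float(np.exp(-((x - c) ** 2) / (2 * sigma ** 2)))

def image_quality_score(contrast, variance, correlation):
    # Membership grades; centres/widths are illustrative stand-ins for the
    # values tuned on the development set.
    con_high, con_low = gauss(contrast, 10, 3), gauss(contrast, 2, 3)
    var_high, var_low = gauss(variance, 10, 3), gauss(variance, 3, 3)
    corr_low, corr_high = gauss(correlation, 2, 0.8), gauss(correlation, 4, 0.8)

    # Two sample rules in Mamdani (min) form:
    # IF contrast high AND variance high AND correlation low THEN quality good.
    good = min(con_high, var_high, corr_low)
    # IF contrast low AND variance low AND correlation high THEN quality poor.
    poor = min(con_low, var_low, corr_high)

    # Centroid-style defuzzification over assumed output centres.
    strengths = np.array([poor, good])
    centres = np.array([0.2, 0.9])
    return float((strengths * centres).sum() / (strengths.sum() + 1e-9))
```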

3.6.4. Using image quality score in fusion scheme

The defuzzified output values are used as the weighting scores for the biometric features in the fusion scheme. Say we form a vector (d_1, d_2, ..., d_j) from the individual outputs of the biometric classifiers; the defuzzified outputs can be incorporated into the vector as (ω_1 d_1, ω_2 d_2, ..., ω_j d_j), where j stands for the number of biometric samples and ω refers to the defuzzified output value for each biometric sample. Note that the ω are normalized such that ω_1 + ω_2 + ... + ω_j = 1. The weighted vector can then be input to the fusion scheme to perform authentication.
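A direct rendering of this weighting step, under the assumption that the classifier outputs and quality scores arrive as plain arrays:

```python
import numpy as np

def quality_weighted_scores(classifier_outputs, quality_scores):
    """Weight each classifier output by its normalized defuzzified quality."""
    w = np.asarray(quality_scores, dtype=float)
    w = w / w.sum()                        # enforce w1 + w2 + ... + wj = 1
    return w * np.asarray(classifier_outputs, dtype=float)

# Example with the defuzzified outputs of subject 2 in Table 2 (0.81, 0.49);
# the matching scores themselves are made-up numbers.
weighted = quality_weighted_scores([0.12, 0.30], [0.81, 0.49])
```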

4. Results and discussion

4.1. Performance of uni-modal hand biometrics

An experiment was carried out to assess the effectiveness of the proposed Directional Coding method applied to the individual palm print and palm vein modalities. The results for both the left and right hands were recorded for a thorough analysis of the hand features. The values for EER were taken at the point where FAR was equal, or nearly equal, to FRR. In the experiment, we also examined the performance of the system when FAR was set to 0.01% and 0.001%. We did this because FAR is one of the most significant parameter settings in a biometric system: it measures the likelihood of unauthorized access to the system. In some security-critical applications, even one failure to detect a fraudulent break-in could have disruptive consequences. Therefore, it is of paramount importance to evaluate the system at very low FAR.
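For reference, a minimal sketch of how EER and GAR at a fixed FAR could be estimated from genuine and imposter distance-score populations (the score arrays are assumed inputs):

```python
import numpy as np

def eer_and_gar(genuine, imposter, target_far=0.00001):
    """Estimate EER and GAR at a target FAR from distance scores.

    Smaller scores mean closer matches, so genuine comparisons should
    produce the lower values.
    """
    thresholds = np.sort(np.concatenate([genuine, imposter]))
    far = np.array([(imposter <= t).mean() for t in thresholds])
    frr = np.array([(genuine > t).mean() for t in thresholds])

    # EER: operating point where FAR and FRR are (nearly) equal.
    idx = np.argmin(np.abs(far - frr))
    eer = (far[idx] + frr[idx]) / 2

    # GAR (= 1 - FRR) at the largest threshold whose FAR stays at or below
    # the target, e.g. target_far = 0.00001 for FAR = 0.001%.
    ok = np.nonzero(far <= target_far)[0]
    gar = 1 - frr[ok[-1]] if ok.size else 0.0
    return eer, gar
```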

The performances of the individual hand modalities are presented in Table 3. We observe that the palm vein performs slightly better, yielding a GAR of approximately 97% when FAR was set to 0.001%. Nevertheless, there is a need to combine these modalities in order to obtain a more promising result. We also discover that the results for the two hands do not vary significantly. This implies that the users can use either hand to access the biometric system, which is an advantage in security and flexibility. If the information of one hand is tampered with, or the hand is physically injured, the user can still access the system by using the other hand. Apart from that, allowing the user to use both hands reduces the chance of being falsely rejected: it gives the user more chances of presentation and thereby reduces the inconvenience of being denied access.

We also included an experiment to verify the usefulness of the proposed local ridge enhancement (LRE) pre-processing technique. The results with and without the pre-processing procedure are depicted in Fig. 14. The pre-processing step helped improve the overall performance by 6%.

Hand Side | Biometrics | EER% | GAR% when FAR = 0.01% | GAR% when FAR = 0.001%
Right | Palm print (PP) | 2.02 | 95.46 | 94.65
Right | Palm vein (PV) | 0.71 | 98.34 | 97.95
Left | Palm print (PP) | 1.97 | 95.77 | 94.05
Left | Palm vein (PV) | 0.80 | 98.26 | 97.49

Table 3.

Performance of individual biometric experts.

4.2. Performance of multimodal hand biometrics

4.2.1. Analysis of biometric features

Correlation analysis of the individual experts is important to determine their discriminatory power, data separability, and ability to complement each other's information. A common way to identify the correlation between the experts is to analyze the errors they make. Fusion can be very effective if the errors made by the classifiers are highly de-correlated; in other words, the lower the correlation, the more effective the fusion becomes. This is because more new information is introduced as the dependency between the errors decreases (Verlinde, 1999). One way to visualize the correlation between two classifiers is to plot the distributions of the genuine and imposter populations. In the correlation plot shown in Fig. 15, the distributions of the genuine and imposter populations form two nearly independent clusters. This indicates that the correlation between the individual palm print and palm vein modalities is low. In other words, the two biometrics are independent and suitable for fusion.

Figure 14.

Improvement gained by applying the proposed LRE pre-processing technique for the left and right hands.

Figure 15.

Visual representation of the correlation between the palm print and palm vein experts.

4.2.2. Fusion using sum-rule

In this experiment, we combine the palm print and palm vein experts using the sum-based fusion rule. Table 4 records the results of fusing the two hand modalities. We observe that, in general, the fusion approach takes advantage of the proficiency of the individual hand modalities. The fusion of palm print and palm vein yielded an overall increase of 3.4% in accuracy compared to the single hand modalities.

Hand Side | Fused Biometrics | EER% | GAR% when FAR = 0.01% | GAR% when FAR = 0.001%
Right | PP + PV | 0.040 | 99.84 | 99.73
Left | PP + PV | 0.090 | 99.75 | 99.56

Table 4.

Performance of the sum-based fusion rule.

4.2.3. Fusion using support vector machine

In this part of the study, we examine the use of the SVM for our fusion approach. In the previous experiment, we used the (linear) sum-rule method to fuse the different experts. Although the sum rule can yield satisfactory results, especially in the fusion of three or more modalities, the fusion result can be further improved by deploying a non-linear classification tool. The fusion results using the SVM are presented in Table 5.

Hand Side | Fused Biometrics | EER% | GAR% when FAR = 0.01% | GAR% when FAR = 0.001%
Right | PP + PV | 0.020 | 99.90 | 99.82
Left | PP + PV | 0.040 | 99.86 | 99.64

Table 5.

Performance of SVM-based fusion.

As a whole, the SVM helped to reduce the error rates of the fusion of the experts. This improvement is due to the fact that the SVM is able to learn a non-linear decision plane which separates our datasets more effectively. Fig. 16 shows the decision boundary learnt by the SVM in classifying the genuine and imposter score distributions.

Figure 16.

Decision boundaries learnt by SVM.

4.3. Fuzzy-weighted quality-based fusion

To verify that the proposed fuzzy-weighted (FW) image quality-based fusion scheme is useful, we carried out an experiment to evaluate the technique. Fig. 17 depicts the comparison of the proposed fuzzy-weighted method against the standalone sum-rule and SVM fusion approaches.

Figure 17.

Improvement gained by the fuzzy-weighted fusion scheme for palm print and palm vein.

We observe that the performance of the fusion methods is improved by incorporating the image quality assessment scheme. The gain is particularly evident when the fuzzy-weighted quality assessment method is applied to the sum rule. This result shows that the proposed quality-based fusion scheme offers an attractive alternative for increasing the accuracy of the fusion approach.

5. Conclusions

This chapter presents a low resolution contactless palm print and palm vein recognition system. The proposed system offers several advantages, including low cost, accuracy, flexibility, and user-friendliness. We describe the design and implementation of the hand acquisition device, which requires no expensive infrared sensor. We also introduce the LRE method to obtain good-contrast palm print and vein images. To obtain a useful representation of the palm print and vein modalities, a new technique called directional coding is proposed. This method represents the biometric features in a bit string format which enables speedy matching and convenient storage. In addition, we examined the performance of the proposed fuzzy-weighted image quality checking scheme, and found that the performance of the system could be improved by incorporating image quality measures when the modalities were fused. Our approach produced promising results for implementation in a practical biometric application.

References

  1. Boles, W. & Chu, S. (1997). Personal identification using images of the human palms. Proceedings of IEEE Region 10 Annual Conference, Speech and Image Technologies for Computing and Telecommunications, 1, 295-298.
  2. Chen, J., Zhang, C. & Rong, G. (2001). Palmprint recognition using creases. Proceedings of International Conference on Image Processing, 234-237.
  3. Connie, T., Jin, A., Ong, M. & Ling, D. (2005). An automated palmprint recognition system. Image and Vision Computing, 23(5), 501-515.
  4. Cross, J. & Smith, C. (1995). Thermographic imaging of the subcutaneous vascular network of the back of the hand for biometric identification. Proceedings of IEEE 29th International Carnahan Conference on Security Technology, 20-35.
  5. Daugman, J. (1993). High confidence visual recognition of persons by a test of statistical independence. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(11), 1148-1161.
  6. Deng, W., Hu, J., Guo, J., Zhang, H. & Zhang, C. (2008). Comment on "Globally maximizing, locally minimizing: unsupervised discriminant projection with applications to face and palm biometrics". IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(8), 1503-1504.
  7. Doublet, J., Revenu, M. & Lepetit, O. (2007). Robust grayscale distribution estimation for contactless palmprint recognition. First IEEE International Conference on Biometrics: Theory, Applications, and Systems, 1-6.
  8. Duta, N., Jain, A. & Mardia, K. (2002). Matching of palmprints. Pattern Recognition Letters, 23, 477-485.
  9. Feng, G., Hu, D., Zhang, D. & Zhou, Z. (2006). An alternative formulation of kernel LPP with application to image recognition. Neurocomputing, 67(13-15), 1733-1738.
  10. Funada, J., Ohta, N., Mizoguchi, M., Temma, T., Nakanishi, T., Murai, K. et al. (1998). Feature extraction method for palmprint considering elimination of creases. Proceedings of the 14th International Conference on Pattern Recognition, 2, 1849-1854.
  11. Goh, M., Connie, T. & Teoh, A. (2008). Touch-less palm print biometric system. The 3rd International Conference on Computer Vision Theory and Applications, 423-430.
  12. Gonzalez, R. C. & Woods, R. E. (2002). Digital Image Processing (Second Edition). Prentice-Hall Inc.
  13. Han, C. C. (2004). A hand-based personal authentication using a coarse-to-fine strategy. Image and Vision Computing, 22(11), 909-918.
  14. Han, C., Cheng, H., Lin, C. & Fan, K. (2003). Personal authentication using palm-print features. Pattern Recognition, 36(2), 371-381.
  15. Hand-based biometrics. (2003). Biometric Technology Today, 11(7), 9-11.
  16. Hitachi and Fujitsu win vein orders in diverse markets. (2007, March). Biometric Technology Today, 4.
  17. Huang, D., Jia, W. & Zhang, D. (2008). Palmprint verification based on principal lines. Pattern Recognition, 41(4), 1316-1328.
  18. Jain, A. K., Ross, A. & Prabhakar, S. (2004). An introduction to biometric recognition. IEEE Transactions on Circuits and Systems for Video Technology, 14(1), 4-20.
  19. Jia, W., Huang, D. & Zhang, D. (2008). Palmprint verification based on robust line orientation code. Pattern Recognition, 41(5), 1504-1513.
  20. Kong, A. & Zhang, D. (2004). Competitive coding scheme for palmprint verification. Proceedings of International Conference on Pattern Recognition, 1, 520-523.
  21. Kong, A. & Zhang, D. (2006). Palmprint identification using feature-level fusion. Pattern Recognition, 39(3), 478-487.
  22. Kong, A., Zhang, D. & Kamel, M. (2006a). Palmprint identification using feature-level fusion. Pattern Recognition, 39, 478-487.
  23. Kong, A., Zhang, D. & Kamel, M. (2006b). A study of brute-force break-ins of a palmprint verification system. IEEE Transactions on Systems, Man and Cybernetics, Part B, 36(5), 1201-1205.
  24. Kong, W. & Zhang, D. (2002). Palmprint texture analysis based on low-resolution images for personal authentication. Proceedings of 16th International Conference on Pattern Recognition, 3, 807-810.
  25. Kumar, A. & Zhang, D. (2006). Personal recognition using hand-shape and texture. IEEE Transactions on Image Processing, 15, 2454-2461.
  26. Kumar, K. V. & Negi, A. (2007). A novel approach to eigenpalm features using feature-partitioning framework. Conference on Machine Vision Applications, 29-32.
  27. Kung, S., Lin, S. & Fang, M. (1995). A neural network approach to face/palm recognition. Proceedings of IEEE Workshop on Neural Networks for Signal Processing, 323-332.
  28. Leung, M., Fong, A. & Cheung, H. (2007). Palmprint verification for controlling access to shared computing resources. IEEE Pervasive Computing, 6(4), 40-47.
  29. Li, W., Zhang, D. & Xu, Z. (2002). Palmprint identification by Fourier transform. International Journal of Pattern Recognition and Artificial Intelligence, 16(4), 417-432.
  30. Lin, C. L. & Fan, K. C. (2004). Biometric verification using thermal images of palm-dorsa vein patterns. IEEE Transactions on Circuits and Systems for Video Technology, 14(2), 199-213.
  31. Liu, L. & Zhang, D. (2005). Palm-line detection. IEEE International Conference on Image Processing, 3, 269-272.
  32. Lu, G., Wang, K. & Zhang, D. (2004). Wavelet based independent component analysis for palmprint identification. Proceedings of International Conference on Machine Learning and Cybernetics, 6, 3547-3550.
  33. Lu, G., Zhang, D. & Wang, K. (2003). Palmprint recognition using eigenpalm features. Pattern Recognition Letters, 24(9), 1463-1467.
  34. Miura, N., Nagasaka, A. & Miyatake, T. (2004). Feature extraction of finger-vein patterns based on repeated line tracking and its application to personal identification. Machine Vision and Applications, 15, 194-203.
  35. PalmSecure™. (2009). Fujitsu. Retrieved 10.12.2010, from http://www.fujitsu.com/us/services/biometrics/palm-vein/
  36. Putte, T. & Keuning, J. (2000). Biometrical fingerprint recognition: don't get your fingers burned. Proceedings of the Fourth Working Conference on Smart Card Research and Advanced Applications, 289-303.
  37. Qin, A. K., Suganthan, P. N., Tay, C. H. & Pa, H. S. (2006). Personal identification system based on multiple palmprint features. 9th International Conference on Control, Automation, Robotics and Vision, 1-6.
  38. Rafael Diaz, M., Travieso, C., Alonso, J. & Ferrer, M. (2004). Biometric system based in the feature of hand palm. Proceedings of 38th Annual International Carnahan Conference on Security Technology, 136-139.
  39. Saunders, C. (1998). Support Vector Machine User Manual. RHUL, Technical Report.
  40. Sun, Z., Tan, T., Wang, Y. & Li, S. (2005). Ordinal palmprint representation for personal identification. Proceedings of Computer Vision and Pattern Recognition, 1, 279-284.
  41. Teoh, A. (2009). Palmprint matching. In S. Z. Li (Ed.), Encyclopedia of Biometrics, 1049-1055. Springer.
  42. Toh, K. A., Eng, H. L., Choo, Y. S., Cha, Y. L., Yau, W. Y. & Low, K. S. (2005). Identity verification through palm vein and crease texture. International Conference on Biometrics, 546-553.
  43. Vapnik, V. (1998). Statistical Learning Theory. Wiley-Interscience.
  44. Vein recognition in Europe. (2004). Biometric Technology Today, 12(9), 6.
  45. Wang, J. G., Yau, W. Y., Suwandy, A. & Sung, E. (2008). Person recognition by fusing palmprint and palm vein images based on "Laplacianpalm" representation. Pattern Recognition, 41, 1514-1527.
  46. Wang, L., Leedham, G. & Cho, S. (2007). Infrared imaging of hand vein patterns for biometric purposes. IET Computer Vision, 1(3-4), 113-122.
  47. Wang, X., Gong, H., Zhang, H., Li, B. & Zhuang, Z. (2006). Palmprint identification using boosting local binary pattern. Proceedings of International Conference on Pattern Recognition, 503-506.
  48. Wu, J. D. & Ye, S. H. (2009). Driver identification using finger-vein patterns with Radon transform and neural network. Expert Systems with Applications, 36, 5793-5799.
  49. Wu, X., Wang, K. & Zhang, D. (2002). Line feature extraction and matching in palmprint. Proceedings of the Second International Conference on Image and Graphics, 583-590.
  50. Wu, X., Wang, K. & Zhang, D. (2004a). A novel approach of palm-line extraction. Proceedings of the Third International Conference on Image and Graphics, 230-233.
  51. Wu, X., Wang, K. & Zhang, D. (2004b). Palmprint recognition using directional energy feature. Proceedings of International Conference on Pattern Recognition, 4, 475-478.
  52. Wu, X., Wang, K. & Zhang, D. (2004c). HMMs based palmprint identification. Lecture Notes in Computer Science, 3072, 775-781.
  53. Wu, X., Wang, K. & Zhang, D. (2005). Palmprint authentication based on orientation code matching. Proceedings of Fifth International Conference on Audio- and Video-based Biometric Person Authentication, 555-562.
  54. Wu, X., Zhang, D. & Wang, K. (2003). Fisherpalms based palmprint recognition. Pattern Recognition Letters, 24(15), 2829-2838.
  55. Yang, J., Zhang, D., Yang, J. & Niu, B. (2007). Globally maximizing, locally minimizing: unsupervised discriminant projection with applications to face and palm biometrics. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(4), 650-664.
  56. Yörük, E., Dutağaci, H. & Sankur, B. (2006). Hand biometrics. Image and Vision Computing, 24(5), 483-497.
  57. You, J., Kong, W., Zhang, D. & Cheung, K. (2004). On hierarchical palmprint coding with multiple features for personal identification in large databases. IEEE Transactions on Circuits and Systems for Video Technology, 14(2), 234-243.
  58. You, J., Li, W. & Zhang, D. (2002). Hierarchical palmprint identification via multiple feature extraction. Pattern Recognition, 35(4), 847-859.
  59. Zhang, D. & Liu, L. L. (2009). Palmprint features. In S. Z. Li (Ed.), Encyclopedia of Biometrics, 1043-1049. Springer.
  60. Zhang, D. & Shu, W. (1999). Two novel characteristics in palmprint verification: datum point invariance and line feature matching. Pattern Recognition, 32(4), 691-702.
  61. Zhang, D., Kong, W., You, J. & Wong, M. (2003). On-line palmprint identification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9), 1041-1050.
  62. Zhang, Z., Ma, S. & Han, X. (2006). Multiscale feature extraction of finger-vein patterns based on curvelets and local interconnection structure neural network. The 18th International Conference on Pattern Recognition (ICPR'06), 145-148.
  63. Zhou, X., Peng, Y. & Yang, M. (2006). Palmprint recognition using wavelet and support vector machines. Lecture Notes in Computer Science, 4099, 385-393.
  64. Zuo, W., Wang, K. & Zhang, D. (2005). Bi-directional PCA with assembled matrix distance metric. Proceedings of IEEE International Conference on Image Processing, 2, 958-961.
