Open access peer-reviewed chapter

Image Sharpness-Based System Design for Touchless Palmprint Recognition

Written By

Xu Liang, Zhaoqun Li, Jinyang Yang and David Zhang

Submitted: 09 March 2020 Reviewed: 14 May 2020 Published: 22 July 2020

DOI: 10.5772/intechopen.92828

From the Edited Volume

Biometric Systems

Edited by Muhammad Sarfraz


Abstract

Many palmprint acquisition devices have been proposed, but how to design such systems is seldom studied, for example, how to choose the imaging sensor, the lens, and the working distance. This chapter aims to find the relationship between image sharpness and recognition performance and then to use this information to guide system design. Firstly, we review the development of recent palmprint acquisition systems and abstract their basic frameworks to identify the key problems that need to be solved when designing new systems. Secondly, the relationship between the palm distance within the field of view (FOV) and the image pixels per inch (PPI) is studied based on the imaging model, and suggestions on how to select the imaging sensor and camera lens are provided. Thirdly, image blur and depth of focus (DOF) are taken into consideration, and the recognition performance of the image layers in the Gaussian scale space is analyzed. Based on this, an image sharpness range is determined for optimal imaging. The experimental results are obtained with different algorithms on various touchless palmprint databases collected by different kinds of devices, and they can serve as references for new system designs.

Keywords

  • palmprint recognition
  • system design
  • image sharpness assessment
  • scale space
  • field of view
  • depth of focus

1. Introduction

Biometric identification has been widely applied in modern society, for example in electronic payment, access control, and forensic identification. As a reliable solution for identity authentication, biological characteristics refer to the inherent physiological or behavioral characteristics of the human body, including the iris, retina, palmprint, fingerprint, and face, as well as voiceprint, gait, signature, keystroke dynamics, etc. In the last decade, we have witnessed the successful deployment of recognition systems using fingerprints, the iris, and faces. With the development of image capture devices and recognition algorithms, palmprint recognition has received more and more attention recently. A palmprint image contains principal lines, wrinkles, ridges, and texture that are regarded as useful features for palmprint representation and can be captured in a low-resolution image [1]. Palmprint recognition has several advantages compared with other biometrics: (1) the line and texture features of a palmprint are discriminative and robust and can easily be fused with other hand features (dorsal hand vein, fingerprint, finger knuckle); (2) the pattern of the palmprint is mainly controlled by genes, and when combined with palm vein information, it can achieve high antispoof capability; (3) palmprint image acquisition is convenient and low cost: a relatively low-resolution camera and a light source are sufficient to acquire the images; (4) palmprint acquisition is hygienic and user friendly in real applications. With custom acquisition devices, more information can be retrieved from multispectral or 3D palmprint images. A 2D gray-scale palmprint example with feature definitions is shown in Figure 1. The purpose of this chapter is to review recent research on palmprint acquisition systems and to trace the development of palmprint recognition-based biometric systems.
In this chapter, we coarsely divide the devices into three types by acquisition mode: touch-based devices, touchless devices, and portable devices. Touch-based devices usually have pegs to constrain the hand pose and position, which allows the details of the palmprint to be captured to the greatest extent. The illumination environment is also stable during the capture process. These constraints ensure that the captured palmprint images are of high quality. With touchless devices, users can freely place their palms in front of the camera, while the hand pose is generally required to have the fingers spread out. The environment during the capture process becomes more complicated, especially the illumination. There are also datasets composed of palmprint images captured in a relatively free fashion. Some of these images are collected from the Internet, which we will not discuss here; otherwise, collectors use digital cameras or phone cameras to capture palmprint images, usually with no strict conditions imposed on the user. In the rest of this chapter, we first introduce representative palmprint acquisition devices and then study the relationship between the palm distance, image sharpness, hardware parameters, and the final recognition performance. Table 1 summarizes the palmprint acquisition devices.

Figure 1.

Palmprint images and feature definitions.

Ref. | Year | Device type | Image type | Description
[1] | 2003 | Touch-based | Gray scale | Adopts a low-cost camera to capture low-resolution palmprint images; uses pegs as guidance
[2] | 2007 | Touchless | RGB and IR | Realizes noncontact capture of palmprint images under unconstrained scenes
[3] | 2008 | Touchless | RGB | Captures the palm in a real-time video stream using skin-color thresholding
[4] | 2009 | Touch-based | 3D | Acquires depth information of the palm using structured-light imaging
[5] | 2010 | Touch-based | Multispectral | Proposes an online multispectral palmprint system
[6] | 2010 | Touchless | RGB and IR | Captures palmprint and palm vein images simultaneously
[7] | 2011 | Touch-based | Gray scale and IR | Captures palmprint, palm vein, and dorsal vein images simultaneously
[8] | 2012 | Portable | Gray scale | Uses different portable devices to capture palmprint images
[9] | 2012 | Touch-based | Gray scale and 3D | Acquires 3D information and 2D texture of the palm
[10] | 2015 | Touchless | RGB | The blue and red channels are processed separately for bimodal feature extraction
[11] | 2016 | Touch-based | Gray scale | Develops a line scanner to capture palmprint images
[12] | 2017 | Touch-based | Gray scale | Proposes a novel doorknob device to capture knuckle images
[13] | 2018 | Touchless | Multispectral | Captures palmprint and palm vein images in one device; established the largest publicly available database at present

Table 1.

The palmprint recognition systems.


2. The current palmprint recognition devices

2.1 Touch-based devices

Reference [1] is a pioneering work on palmprint acquisition and recognition that built the first large-scale public palmprint dataset. The captured palmprint images are low resolution, at 75 pixels per inch (PPI), so the whole process can be completed in 1 s, achieving real-time palmprint identification. The palmprint capture device includes a ring light source, a charge-coupled device (CCD) camera, a frame grabber, and an analog-to-digital (AD) converter. Six pegs serve as control points that constrain the user's hand. To guarantee image quality, the device environment is semiclosed during capture, and the ring source provides uniform lighting. After capturing, the AD converter directly transmits the images captured by the CCD camera to a computer. The well-designed acquisition system captures high-quality images, which boosts the performance of the identification algorithm. The experimental results also demonstrate that low-resolution palmprints can support efficient person identification. Our palms are not pure planes, and many personal characteristics lie on the palm surface. From this view, 2D palmprint recognition has some inherent drawbacks. On the one hand, much 3D depth information is neglected in 2D imaging; the main features in a 2D palmprint are line features, including principal lines and wrinkles, which are not robust to illumination variations and contamination. On the other hand, a 2D palmprint image is easy to counterfeit, so the anti-forgery ability of 2D palmprint recognition needs improvement. To capture depth information in the palm, [4, 14] explore a 3D palmprint acquisition system that leverages the structured-light imaging technique. Compared to 2D palmprint images, several unique features, including the mean curvature image, Gaussian curvature image, and surface type, are extracted from 3D images.
Many studies have proposed different algorithms that encode the line features on the palm surface; however, the discriminative power and antispoof capability of palm codes need to be further improved for large-scale identification. To obtain more biometric information from the palm, a multispectral palmprint acquisition system is designed in [5], which can capture both red-green-blue (RGB) and near-infrared (NIR) images of one palm. It consists of a CCD camera, a lens, an A/D converter, a multispectral light source, and a light controller. The monochromatic CCD is placed at the bottom of the device to capture palmprint images, and the light controller is used to switch the multispectral light. In the visible spectrum, a three-mono-color LED array is used, with red peaking at 660 nm, green at 525 nm, and blue at 470 nm. In the NIR spectrum, an NIR LED array peaking at 880 nm is used. It has been shown that light in the 700-1000 nm range can penetrate human skin, whereas 880-930 nm provides a good contrast of subcutaneous veins. The system is low cost, and the acquired palmprint images are high quality. By fusing the information provided by multispectral palmprint images, the identification algorithm achieves higher recognition accuracy and antispoof capacity.

2.2 Touchless devices

Touch-based devices can easily capture high-quality palmprint images, which contributes to high identification performance, but their drawbacks also lie in this acquisition mode. Firstly, users may have hygienic concerns since the device cannot be cleaned immediately after each use. Secondly, some users may feel uncomfortable with the control pegs and the constrained capture environment. Thirdly, the volume of the device is usually larger than the palm, which limits portability and usability. As the first attempt to solve the above issues, [2] presents a real-time touchless palmprint recognition system whose capture process is conducted under unconstrained scenes. Two complementary metal-oxide-semiconductor (CMOS) web cameras are placed in parallel: one is a near-infrared (NIR) camera, and the other is a traditional red-green-blue (RGB) camera. A band-pass filter is fixed on the camera lens to eliminate the influence of NIR light on the palm image. The two cameras work simultaneously, and the resolution of both is 640 × 480. To ease the subsequent hand detection process, users need to open their hands and place the palm region in front of the cameras during capture. Also, the palm plane needs to be approximately flat and orthogonal to the optical axis of the cameras; minor in-plane rotation is allowed. The distance between the hand and the device should stay within a fixed range (35-50 cm) to ensure the clarity of the palmprint images. In [3], a novel touchless device with a single camera is proposed. The principle of the device design is similar to [2]. During capture, the user places his/her hand in front of the camera without touching the device, and there are no strict constraints on hand pose or location. The main difference is that paddles are placed around the camera to reduce the effect of illumination changes. With these measures, the acquisition process becomes flexible and efficient. [6] presents a touchless palmprint and palm vein recognition system.
The structure of the device is similar to that in [3]; it mainly contains two parallel-mounted cameras working under visible and IR light. The flexibility of this touchless device is further improved: users are allowed to position their hands freely above the sensor and may move them during acquisition. The acquisition program gives feedback on whether the hand is placed correctly inside the working volume. In this way, the device can capture high-quality palmprint and palm vein images at the same time. In [7], palmprint, palm vein, and dorsal vein images are captured simultaneously with a touchless acquisition device. In the capturing process, users are asked to put their hands into the device with the five fingers separated. The time cost is less than 1 s. The multimodal images can be fused in the algorithm to boost identification performance.

2.3 Portable devices

With the widespread use of digital cameras and smartphones, more and more portable biometric devices have appeared. To investigate the problem of palmprint recognition across different portable devices and to build an available dataset, [8] uses one digital camera and two smartphones to acquire palmprints in a free manner.

2.4 Key problems in device design

As is discussed above, the main parts of palmprint acquisition devices are cameras and light sources. So, the problems we need to consider when designing new devices are as follows:

  1. The resolution of the imaging sensor

  2. The focal length of the lens

  3. The distance range of the palm

  4. The sharpness range of the final palmprint image

  5. The light source intensity

  6. The signal-to-noise ratio of the palmprint image

Many previous works have studied the light sources [15, 16, 17]. Generally, the basic goal is to avoid overexposure and underexposure. Image noise increases under low-illumination conditions; although many deep learning-based denoising techniques have been proposed [18], the most effective solution for palmprint imaging is to develop active light sources that provide suitable illumination. In this work, we focus only on the first four problems. We developed three palm image capture devices to test the performance of different hardware frameworks (as is shown in Figure 2). We denote them as device_a, device_b, and device_c. Among them, device_a and device_b are touch-based devices. device_a is designed to generate high-quality palmprint images: it contains an ultra-high-definition imaging sensor (about 500 megapixels) and a distortion-free lens, and its long working distance further guarantees image quality. During capture, the user's palm rests on the device to avoid motion blur. device_b is designed to generate high-distortion palmprint images: it contains a high-definition imaging sensor (about 120 megapixels) and an ultrawide lens, and its working distance is very short (about 2 cm). device_c is a touchless device designed to capture high- and low-definition images in touchless scenarios. It has two cameras, one high-definition (120 megapixels) and the other low-definition (30 megapixels); both are equipped with distortion-free lenses. We used the different devices to collect images of the same palm; the captured images are shown in Figure 2(d)-(f). We can see that the 500-megapixel camera captures clear ridges and valleys of the palmprint, the 120-megapixel camera captures most of the ridges and valleys, and the 30-megapixel camera captures only the principal lines and coarse-grained skin textures. For touchless applications, the distance between the palm and the camera is not stable. Distance variations may decrease the palm image PPI and cause defocus blur. In practice, it is very hard to guarantee the quality of the captured images. Hence, what we want to know is which level of image sharpness is sufficient for palmprint identification.
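To make "level of image sharpness" concrete, the sketch below computes a simple gradient-based sharpness score. This is a generic stand-in for illustration only, not the EAV index of [23] used later in the chapter; the patch data is synthetic.

```python
import numpy as np

def sharpness_score(img: np.ndarray) -> float:
    """Mean absolute gradient magnitude: a simple, generic sharpness
    score (a stand-in, not the chapter's EAV metric). Blurrier images
    have weaker local gradients and therefore lower scores."""
    img = img.astype(np.float64)
    gx = np.abs(np.diff(img, axis=1)).mean()  # horizontal gradients
    gy = np.abs(np.diff(img, axis=0)).mean()  # vertical gradients
    return gx + gy

# A sharp random-texture patch should score higher than a blurred copy.
rng = np.random.default_rng(0)
patch = rng.uniform(0, 255, size=(64, 64))
blurred = (patch
           + np.roll(patch, 1, 0)
           + np.roll(patch, 1, 1)
           + np.roll(patch, 1, (0, 1))) / 4   # crude 2x2 box blur
assert sharpness_score(patch) > sharpness_score(blurred)
```

Any monotone sharpness index would serve the same purpose here: as defocus blur grows, the score drops, which is exactly the behavior exploited in Section 3.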

Figure 2.

Different palmprint acquisition devices and the palm images generated by them. (a) The touch-based device with a 500 M pixel imaging sensor and a long imaging distance. (b) The touch-based device with a 120 M pixel imaging sensor and a very short imaging distance. (c) The multicamera touchless device with 120 M and 30 M pixel imaging sensors and a long imaging distance. (d) The palm image captured by (a) and the corresponding enlarged local regions. (e) The palm image captured by (b) and the corresponding enlarged local regions. (f) The palm images captured by (c) and the corresponding enlarged local regions.


3. System design based on palm image sharpness

3.1 Palm distance and recognition performance

The imaging model is shown in Figure 3. Let l_p and w_p stand for the statistical length and width of the palm, respectively, and let Z_min and Z_max stand for the minimum and maximum distances the palm can reach in the field of view (FOV). For the palm to be captured completely, we need l ≥ l_p and w ≥ w_p, where l and w are the corresponding sizes of the FOV of the camera (as is shown in Figure 3). Then Z_min can be estimated by

Figure 3.

Imaging model and related notations.

Z_min = max( l_p / (2 tan θ_u), w_p / (2 tan θ_v) )    (1)

where θ_u and θ_v are the half angles of the FOV along the u and v directions, respectively. As is shown in Figure 3, in the generated image, p_w (in pixels) is the palm width, and r_w (in pixels) is the length of the tangent line formed by the two finger-valley key points. We introduce it here because most region of interest (ROI) localization methods utilize those two key points [1]. The PPI is calculated by

ppi = p_w / w_p    (2)

in which w_p is the fixed real palm size (in inches). Based on the triangle geometry constraints defined in the pin-hole imaging model [19], we have

p_w / f = w_p / z    (3)

where f is the focal length (in pixels), which is related to the pixel size of the imaging sensor and the focal length of the lens, and z is the distance between the palm and the camera's optical center. So p_w changes with the palm distance. Eq. (3) shows the constraints among the image palm width p_w, the equivalent focal length f, the palm distance z, and the palm width w_p. According to Eqs. (2) and (3), we have

z = f / ppi    (4)

Hence,

Z_max = f / ppi_min    (5)

where ppi_min is the minimum PPI for palmprint recognition. So, what we need to know is the relationship between image PPI and the system equal error rate (EER). Here, EER is an index of the system's recognition performance; lower is better. In the data collection process, it is very difficult to ask users to hold their hands at designed target distances, so we utilize a public database to conduct simulation experiments on the relationship between EER and PPI. In this section, the COEP database [20] is selected because it was collected in a highly constrained environment. Its images were captured by a single-lens reflex (SLR) camera, so they have a high signal-to-noise ratio (SNR) and very low distortion. During capture, the user's palm rested stably on a backboard, and the image resolution is sufficient to record the palmprint ridges and valleys. We therefore take the images in COEP as ground truth, that is, as images captured with proper focus and sufficient PPI. These images are then resized to generate palm images with different PPI. The mean PPI of a database is defined as

ppi¯ = (1/N) Σ_{i=1}^{N} ppi_i    (6)

where N is the number of images in the dataset and ppi_i is the PPI value of the i-th palm image. In practice, however, the captured image may contain radial and tangential distortions. The distortion parameters of the imaging model can be estimated by camera calibration [19], and based on the imaging model, the captured image can be undistorted. Undistortion also introduces blur into the undistorted image. Taking this into consideration, we select four different kinds of lenses for testing: long-focus, standard, wide-angle, and ultrawide-angle lenses (as is shown in Figure 4). We use them to capture checkerboard images from different views, and after camera calibration we obtain the corresponding intrinsic parameters, listed in Table 2. f_u and f_v are the focal lengths along the u and v directions, respectively; θ_u and θ_v are the half angles of the FOV along the u and v directions, respectively; k_1, k_2, and k_3 are radial distortion coefficients; and p_1 and p_2 are tangential distortion factors. As is shown in Figure 5, the images in COEP are first distorted by the four distortion parameter sets and then undistorted by coordinate mapping and pixel interpolation based on the distortion model. The obtained images are further resized to generate palm images with different PPI. According to [21], the average palm width is 84 mm for males and 74 mm for females. In [22], the average palm width is 84.18 ± 6.81 mm for Germans and 82.38 ± 11.82 mm for Chinese, with mostly male subjects. Since palm width varies with gender, age, and race, it depends on the specific application scenario. For simplicity, we set w_p ≈ 80 mm (3.15 inches) and l_p ≈ 110 mm (4.33 inches) in our work. The original image size of COEP is 1600 × 1200; to remove the background area, the images are cropped to 1280 × 960. In this experiment, we generate 10 datasets in total by image resizing; detailed statistics are listed in Table 3.
For each palm image, using the ROI localization method proposed in [1], we detect the tangent line of the two finger valleys and obtain r_w. p_w can also be measured based on the relative coordinate system of the palm. Given a dataset, the mean p_w and mean r_w are defined as

pw¯ = (1/N) Σ_{i=1}^{N} pw_i    (7)
rw¯ = (1/N) Σ_{i=1}^{N} rw_i    (8)

where N is the number of images in the dataset and pw_i and rw_i are the p_w and r_w values of the i-th palm image. Here, pw¯ is selected as the index to measure the resolution of the palm image. Sample images and the corresponding enlarged local patches of the generated datasets are shown in Figure 5. Table 4 reports the EERs and thresholds obtained by CompCode on the different datasets. Here, eav¯ is an index for sharpness assessment [23]. It should be noted that the sharpness level (eav¯) obtained here does not yet take defocus blur into consideration; that is studied in the next subsection. The distribution curves of pw¯ and the corresponding EER and eav¯ are shown in Figure 6. From them, we can see that the effect of undistortion on image sharpness is not obvious. Among the four lenses (as is shown in Figure 4), the long-focus lens yields the highest sharpness and the wide-angle lens the lowest. As for the ultrawide-angle lens, many newly designed lenses have improved optical models that concentrate large distortions in the boundary regions and keep distortions small in the center region. In this experiment, the wide-angle lens exhibits more distortion than the ultrawide-angle lens; this depends on the specific optical model the manufacturer used. Generally, the palm is placed at the center of the image, so the differences between the four lenses are not large. Although the long-focus lens can provide high-sharpness palm images, in real-world scenarios the wide-angle lens is more recommended because its wide FOV provides a better user experience during capture. As is shown in Figure 6, the EERs increase drastically when pw¯ is less than 130 pixels. So when selecting the imaging sensor and determining the working distance, we should at least guarantee that the palm width in the final palm image is larger than 130 pixels; 300 pixels is recommended according to Figure 6.
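The working-distance bounds of Eqs. (1)-(5) can be turned into a small calculator. In this sketch, the palm sizes (w_p = 80 mm, l_p = 110 mm) and the 130-pixel minimum palm width come from the text; the 700-pixel equivalent focal length and the 24°/18° half FOV angles are hypothetical values chosen for illustration, not taken from Table 2.

```python
import math

INCH_MM = 25.4  # millimeters per inch

def z_min_mm(l_p_mm: float, w_p_mm: float,
             theta_u: float, theta_v: float) -> float:
    """Eq. (1): nearest distance at which the whole palm fits the FOV.
    theta_u, theta_v are the half angles of the FOV in radians."""
    return max(l_p_mm / (2 * math.tan(theta_u)),
               w_p_mm / (2 * math.tan(theta_v)))

def z_max_mm(f_pixels: float, ppi_min: float) -> float:
    """Eqs. (4)-(5): z = f / ppi. With f in pixels and ppi in pixels
    per inch, z comes out in inches; convert it to millimeters."""
    return f_pixels / ppi_min * INCH_MM

# Hypothetical camera: f = 700 px, half FOV angles 24 deg x 18 deg.
w_p_mm, l_p_mm, f = 80.0, 110.0, 700.0
ppi_min = 130.0 / (w_p_mm / INCH_MM)   # 130 px across a 3.15 in palm
near = z_min_mm(l_p_mm, w_p_mm, math.radians(24), math.radians(18))
far = z_max_mm(f, ppi_min)
assert near < far  # the working range [Z_min, Z_max] is non-empty
```

The same two functions, fed with a candidate sensor/lens combination, tell a designer immediately whether the usable distance band [Z_min, Z_max] is wide enough for a touchless scenario.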

Figure 4.

Images captured by different lenses. (a) The imaging device and different kinds of lenses. (b) An image captured by long-focus lens. (c) An image captured by standard lens. (d) An image captured by ultrawide-angle lens.

Lens | f_u | f_v | θ_u | θ_v | k_1 | k_2 | k_3 | p_1 | p_2
Long-focus | 3507.05 | 3497.24 | 10.4° | 7.9° | −0.37 | −1.36 | — | −0.0018 | −0.0000
Standard | 706.96 | 707.29 | 48.7° | 37.5° | 0.13 | −0.51 | — | 0.0055 | −0.0001
Wide-angle | 435.57 | 436.10 | 72.6° | 57.7° | −0.41 | 0.14 | — | 0.0014 | 0.0006
Ultrawide | 217.19 | 217.99 | 111.7° | 95.5° | 0.05 | −0.07 | 0.0105 | −0.0002 | −0.0018

Table 2.

The calibrated parameters of different camera lenses.

Figure 5.

Images obtained at different distances (PPI) using different distortion models.

Palm region size | rw¯ | pw¯ | ppi¯
1280 × 960 | 304.8 | 524.8 | 166.6
1120 × 840 | 266.7 | 459.2 | 145.8
960 × 720 | 228.6 | 393.6 | 125.0
800 × 600 | 190.5 | 328.0 | 104.1
640 × 480 | 152.4 | 262.4 | 83.3
480 × 360 | 114.3 | 196.8 | 62.5
320 × 240 | 76.2 | 131.2 | 41.7
240 × 180 | 57.2 | 98.4 | 31.2
160 × 120 | 38.1 | 65.6 | 20.8
80 × 60 | 19.1 | 32.8 | 10.4

Table 3.

Palm region size, palm width, and corresponding ppi¯.

pw¯ | Long-focus EER (%) | eav¯ | Standard EER (%) | eav¯ | Wide-angle EER (%) | eav¯ | Ultrawide EER (%) | eav¯
524.8 | 1.445 | 29.0 | 1.539 | 28.6 | 1.508 | 28.1 | 1.634 | 28.4
459.2 | 1.477 | 26.5 | 1.634 | 26.3 | 1.571 | 25.9 | 1.602 | 26.1
393.6 | 1.445 | 26.1 | 1.619 | 25.9 | 1.553 | 25.5 | 1.634 | 25.8
328.0 | 1.414 | 25.4 | 1.571 | 25.3 | 1.550 | 25.1 | 1.631 | 25.3
262.4 | 1.414 | 23.7 | 1.602 | 23.6 | 1.508 | 23.4 | 1.539 | 23.6
196.8 | 1.477 | 23.9 | 1.571 | 23.5 | 1.539 | 23.1 | 1.602 | 23.2
131.2 | 1.508 | 20.2 | 1.783 | 20.0 | 1.634 | 19.7 | 1.627 | 19.8
98.4 | 1.571 | 18.4 | 1.759 | 18.3 | 1.728 | 18.1 | 1.728 | 18.2
65.6 | 2.177 | 14.8 | 2.136 | 14.7 | 2.325 | 14.6 | 2.262 | 14.7
32.8 | 6.346 | 9.9 | 6.313 | 9.8 | 6.274 | 9.8 | 6.535 | 9.8

Table 4.

The EERs obtained at different palm widths using different lens models.

Figure 6.

The relationship between recognition performance, image sharpness, and palm width (in units of pixel).
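The EER reported throughout Table 4 and Figure 6 can be computed directly from genuine and impostor matching scores. A minimal sketch with synthetic score distributions (the threshold sweep below is a generic implementation, not the chapter's evaluation code):

```python
import numpy as np

def eer(genuine, impostor) -> float:
    """Equal error rate: the operating point where the false accept
    rate (FAR) equals the false reject rate (FRR), approximated by
    sweeping the threshold over all observed scores (higher score =
    better match)."""
    genuine = np.asarray(genuine)
    impostor = np.asarray(impostor)
    best = 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)  # impostors wrongly accepted
        frr = np.mean(genuine < t)    # genuine users wrongly rejected
        best = min(best, max(far, frr))
    return best

# Well-separated synthetic distributions give a near-zero EER.
rng = np.random.default_rng(1)
gen = rng.normal(0.8, 0.05, 1000)   # genuine matching scores
imp = rng.normal(0.3, 0.05, 1000)   # impostor matching scores
assert eer(gen, imp) < 0.01
```

When the two score distributions overlap completely, the same function returns roughly 0.5, which is why lower EER indicates a more discriminative system.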

3.2 Image sharpness range and recognition performance

In the above subsection, based on the imaging model and the capture device, we studied the relationship between palm distance, PPI, and EER. However, the hardware and the parameters of the imaging model are not always available in practice. Besides FOV, depth of focus (DOF) should be considered, since defocus blur also affects the final accuracy. DOF is highly related to the specific application. Our previous work [23] shows that palmprint recognition accuracy is related to image sharpness. Here, what we want to know is the sharpness range within which the recognition accuracy is acceptable.

In this section, we try to analyze the palmprint image sharpness based on the Gaussian scale space [24]. The transform function is defined as

L(x, y) = I(x, y) ∗ G(σ)    (9)

where (x, y) are the pixel coordinates and σ is the scale coordinate. G(σ) is the Gaussian filter used to smooth the input image, and σ is its standard deviation. I is the initial image, and L is the smoothed image, so images in the scale space have different sharpness levels. As is shown in Figure 7, the scale space function tries to generate all the potential palmprint images that may be captured in practice. To achieve scale-invariant capacity, SIFT [24] tries to utilize all the information in the scale space. The method proposed in [25] is utilized here to conduct SIFT-based palmprint verification, in which each palmprint ROI image is matched against all the other images in the database. After SIFT feature extraction and matching, the random sample consensus (RANSAC) algorithm is used to further remove outliers. A matching between two images captured from the same palm is a genuine matching, and a matching between two images captured from different palms is an impostor matching; the number of matches is used as the matching score. A Gaussian image pyramid is a sampled subset of the Gaussian scale space, and we wonder whether all the image layers in the Gaussian image pyramid contribute equally to the final matches. In this experiment, once two key points from two intra-class images are matched, the points' scales are recorded. The resulting statistics of σ are shown in Figure 8. From them, we can see that the contributions of different scales are not the same; most of the distinctive local patterns exist only at some specific scales, and the other layers are not discriminative. So the captured palm ROI image should not fall into those useless scale ranges. In fact, the palmprint shows different patterns at different scales. When the image is captured clearly, the palmprint consists of principal lines, wrinkles, ridges, valleys, and some minutiae points. As σ increases, the palmprint ROI image tends to show spot patterns; the fine-grained ridges and valleys are smoothed and reduced to large-scale textures, as can be seen in Figure 1. Different patterns have different discriminative capacities; as a result, the recognition performance changes with the image sharpness. In practice, the scale index σ corresponds to the palm distance: once the palm moves out of the DOF of the system, the generated image suffers from defocus blur, and the recognition performance changes.
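Eq. (9) can be sketched with a separable Gaussian convolution. This is a minimal numpy-only illustration of building scale-space layers from one image; the random array stands in for a palm ROI, and the truncation radius of the kernel is an implementation choice, not something specified in the text.

```python
import numpy as np

def gaussian_kernel(sigma: float) -> np.ndarray:
    """1D Gaussian kernel, truncated at ~3 sigma and normalized."""
    radius = max(1, int(3 * sigma + 0.5))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(img: np.ndarray, sigma: float) -> np.ndarray:
    """Eq. (9): L(x, y) = I(x, y) * G(sigma), computed as a separable
    convolution along rows and then columns, with edge padding so the
    output keeps the input shape."""
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    out = np.pad(np.asarray(img, dtype=np.float64), pad, mode="edge")
    out = np.apply_along_axis(lambda r: np.convolve(r, k, "valid"), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, out)
    return out

# Successive scale-space layers: sharpness drops as sigma grows.
rng = np.random.default_rng(0)
roi = rng.uniform(0, 255, size=(64, 64))    # stand-in for a palm ROI
layers = [smooth(roi, s) for s in (0.5, 1.0, 2.0, 4.0)]
assert all(layer.shape == roi.shape for layer in layers)
assert layers[0].var() > layers[-1].var()   # heavier smoothing, flatter image
```

Each element of `layers` corresponds to one sharpness level the system might observe as the palm drifts out of the DOF, which is exactly what the matching experiments below probe.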

Figure 7.

The palmprint Gaussian scale space.

Figure 8.

Scale contributions for key point matching: (a) obtained from COEP, (b) obtained from IITD, (c) obtained from KTU, (d) obtained from GPDS.

In order to analyze the recognition performance variations, we utilize the Gaussian image pyramid to generate palmprint images at different scales. For a given dataset, all the ROI images in it are filtered with Gaussian filter banks, and then 20 scaled datasets are generated. The σ used in this experiment is defined as

σ = σ_0 · 2^(o + s/S)    (10)
k = 2^(1/S)    (11)
id = (o − o_min) · S + s    (12)

where σ_0 is the base standard deviation; k is the step factor for increasing and decreasing σ; S is the number of intervals in each octave; o and s are the octave and interval indices, respectively; and id is the image layer ID in the Gaussian scale space. o_min is the minimum octave index; if o_min < 0, σ values smaller than σ_0 can be generated. Here, σ_0 = 1.6k, which is the default setting in VLfeat [26]. In this experiment, o_min = −2, s_min = 0, and S = 4, so the range of σ is from 0.476 to 5.709, which covers the range used in [27]. So, given one dataset, we can generate 20 datasets according to the different scales. The mean EAV (eav¯) is utilized to quantify the sharpness level of each generated dataset. Figure 9 shows the distributions of eav¯ and the scale index σ on different publicly available palmprint databases. It shows that the sharpness level decreases almost linearly with id in the Gaussian scale space when id is smaller than 10 (σ = 2.3). Of course, the specific parameters of the curves differ between databases; they are related to each database's initial sharpness level eav¯.
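The scale schedule of Eqs. (10)-(12) can be evaluated directly. The sketch below uses the stated parameters (S = 4, o_min = −2, σ_0 = 1.6k) and checks only the lower end of the quoted σ range; how the layers map onto the quoted upper end depends on pyramid details not fully specified here.

```python
# Scale schedule of Eqs. (10)-(12), with S = 4 intervals per octave,
# minimum octave o_min = -2, and sigma_0 = 1.6 * k as in VLFeat.
S, o_min = 4, -2
k = 2 ** (1 / S)            # Eq. (11): step factor between intervals
sigma_0 = 1.6 * k

def sigma_of(o: int, s: int) -> float:
    """Eq. (10): sigma for octave o, interval s."""
    return sigma_0 * 2 ** (o + s / S)

def layer_id(o: int, s: int) -> int:
    """Eq. (12): flat layer index in the scale space."""
    return (o - o_min) * S + s

# 5 octaves x 4 intervals = the 20 scaled datasets used in the text.
sigmas = [sigma_of(o, s) for o in range(o_min, o_min + 5) for s in range(S)]
assert len(sigmas) == 20
assert abs(sigmas[0] - 0.476) < 1e-3   # smallest sigma quoted in the text
```

Negative octaves (o_min < 0) are what allow σ below σ_0, i.e., layers slightly sharper than the SIFT default base scale.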

Figure 9.

The curves of eav¯ and the corresponding scale indices on different databases.

The work reported in [27] shows that there exists a relationship between recognition performance and image sharpness. In that work, a sharpness adjustment technique is developed to improve the system EER; different sharpness indices are tested, and EAV performs best. However, only one touch-based palmprint database is tested in their study. To make sure the idea is applicable across databases, devices, and algorithms, we utilize CompCode [28], OLOF [29], and RLOC [30] to further test the recognition accuracy variations on the generated datasets. Different databases are used, including GPDS [31], IITD [32], KTU [33], and TJU [34]. Figure 10 shows the curves of EER against the corresponding eav¯. From them, we can see that the trend of GPDS differs from the other databases. This is because GPDS is a difficult database that contains large illumination variations and localization errors; hence, its recognition accuracy is affected more by other factors. According to Figure 10, to guarantee the system's discriminative capacity, eav¯ should be larger than 10.
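To give a feel for the orientation-coding family that CompCode [28] belongs to, here is a heavily simplified toy sketch: filter an ROI with a small bank of real Gabor filters, keep the index of the strongest response per pixel, and compare two codes by wrap-around angular distance. The filter parameters, the 6-orientation bank, and the distance normalization are illustrative assumptions, not the tuned design of [28]; the random arrays stand in for palm ROIs.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_bank(n_orient=6, size=17, sigma=3.0, lam=8.0):
    """Real Gabor filters at n_orient orientations: a simplified
    stand-in for the tuned filter bank used by Competitive Code."""
    ax = np.arange(size) - size // 2
    y, x = np.meshgrid(ax, ax, indexing="ij")
    bank = []
    for i in range(n_orient):
        t = i * np.pi / n_orient
        xr = x * np.cos(t) + y * np.sin(t)
        g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
        bank.append(g - g.mean())  # zero-DC so flat regions respond weakly
    return bank

def comp_code(roi, bank):
    """Per-pixel index of the most negative filter response, i.e., the
    dominant dark-line orientation (the competitive-coding idea)."""
    resp = np.stack([fftconvolve(roi, g, mode="same") for g in bank])
    return resp.argmin(axis=0)

def angular_distance(c1, c2, n_orient=6):
    """Mean wrap-around orientation difference, normalized to [0, 1]."""
    d = np.abs(c1 - c2)
    return np.minimum(d, n_orient - d).mean() / (n_orient // 2)

rng = np.random.default_rng(2)
roi_a = rng.uniform(size=(64, 64))
roi_b = rng.uniform(size=(64, 64))
bank = gabor_bank()
same = angular_distance(comp_code(roi_a, bank), comp_code(roi_a, bank))
diff = angular_distance(comp_code(roi_a, bank), comp_code(roi_b, bank))
assert same == 0.0 and diff > same
```

The point of the sketch is the pipeline shape (oriented filtering, winner-take-all coding, angular matching); blur from the scale space flattens the filter responses, which is why the EER curves in Figure 10 degrade as eav¯ falls.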

Figure 10.

The curves of EER and eav¯ on different databases obtained by different recognition algorithms. (a) The EER is obtained by Competitive Code. (b) The EER is obtained by OLOF. (c) The EER is obtained by RLOC.


4. Conclusions

When designing a touchless palmprint recognition system, FOV and DOF are two key problems of palmprint imaging: FOV is related to image PPI, and DOF is related to image blur. Figure 11 shows the main idea and framework of this chapter. We first studied the image PPI required for palmprint identification; based on it, the minimum and maximum palm distances in the FOV are determined, which also provides a reference for selecting the image sensor resolution. Then, taking image blur into consideration, different datasets are generated by the Gaussian scale space function, and the EER variation curves are obtained with different features on different databases. During image collection, when the palm moves out of the DOF, the sharpness of the captured image changes, so eav can serve as an index of whether the palm is placed correctly within the DOF.

Figure 11.

The framework of this chapter.
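The blurred-dataset generation summarized above can be sketched as follows. Each layer is a progressively blurred copy of the ROI, in the spirit of the SIFT scale space [24]; `sigma0`, `k`, and `n_layers` are illustrative choices, not the chapter's exact parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(img, sigma0=0.5, k=2 ** 0.5, n_layers=6):
    """Generate progressively blurred copies of an ROI image.

    Mimics the layer generation used to build the blur datasets;
    the parameter values here are illustrative assumptions.
    """
    img = img.astype(np.float64)
    return [gaussian_filter(img, sigma=sigma0 * k ** i)
            for i in range(n_layers)]

roi = np.random.default_rng(1).uniform(0, 255, size=(128, 128))
layers = gaussian_scale_space(roi)
print(len(layers))  # 6
```

Running the same recognition algorithm on every layer then yields an EER-versus-sharpness curve such as those in Figure 10.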

Based on the findings of this research, when designing new systems, the palm width in the captured image should be larger than 300 pixels and, at the very least, should not be smaller than 130 pixels. After the system is deployed, when the user presents his/her hand, the eav of the ROI image should be larger than 10. A more precise eav threshold should be obtained from the training dataset of the real system, because other factors, such as the auto-exposure-control and auto-white-balance-control functions of the imaging sensor, may affect the final EER distributions; the major trends, however, remain similar. The main contribution of this work is to provide key references for system design based on image sharpness.


Acknowledgments

This work is supported in part by the NSFC under Grant 61332011, in part by the Shenzhen Fundamental Research Fund under Grants JCYJ20180306172023949 and JCYJ20170412170438636, and in part by the Shenzhen Institute of Artificial Intelligence and Robotics for Society.

References

  1. Zhang D, Kong WK, You J, Wong M. Online palmprint identification. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2003;25(9):1041-1050
  2. Han Y, Sun Z, Wang F, Tan T. Palmprint recognition under unconstrained scenes. In: Proceedings of the 8th Asian Conference on Computer Vision (ACCV’07); 18-22 November 2007; Tokyo, Japan. Vol. 4844. Switzerland: Springer; LNCS(PART2). 2007. pp. 1-11
  3. Michael GKO, Connie T, Jin ATB. Touch-less palm print biometrics: Novel design and implementation. Image and Vision Computing. 2008;26(12):1551-1560
  4. Zhang D, Lu G, Li W, Zhang L, Luo N. Palmprint recognition using 3-D information. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews). 2009;39(5):505-519
  5. Zhang D, Guo Z, Lu G, Zhang L, Zuo W. An online system of multispectral palmprint verification. IEEE Transactions on Instrumentation and Measurement. 2010;59(2):480-490
  6. Michael GKO, Connie T, Jin ATB. Design and implementation of a contactless palm print and palm vein sensor. In: Proceedings of the 11th International Conference on Control, Automation, Robotics and Vision (ICARCV’10); 7-10 December 2010; Singapore. New York: IEEE; 2010. pp. 1268-1273
  7. Bu W, Zhao Q, Wu X, Tang Y, Wang K. A novel contactless multimodal biometric system based on multiple hand features. In: Proceedings of the International Conference on Hand-Based Biometrics (ICHB’11); Hong Kong, China. New York: IEEE; 2011. pp. 289-294
  8. Jia W, Hu RX, Gui J, Zhao Y, Ren XM. Palmprint recognition across different devices. Sensors. 2012;12(6):7938-7964
  9. Zhao Q, Bu W, Wu X, Zhang D. Design and implementation of a contactless multiple hand feature acquisition system. In: Sensing Technologies for Global Health, Military Medicine, Disaster Response, and Environmental Monitoring II; and Biometric Technology for Human Identification IX. Vol. 8371. 2012. p. 83711Q. Available from: https://www.spiedigitallibrary.org/conference-proceedings-of-spie/8371/1/Design-and-implementation-of-a-contactless-multiple-hand-feature-acquisition/10.1117/12.919100.full [Accessed: 03 May 2020]
  10. Nikisins O, Eglitis T, Pudzs M, Greitans M. Algorithms for a novel touchless bimodal palm biometric system. In: Proceedings of the 2015 International Conference on Biometrics (ICB’15); 19-22 May 2015; Phuket, Thailand. New York: IEEE; pp. 436-443
  11. Qu X, Zhang D, Lu G. A novel line-scan palmprint acquisition system. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2016;46(11):1481-1491
  12. Qu X, Zhang D, Lu G, Guo Z. Door knob hand recognition system. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2017;47(11):2870-2881
  13. Zhang L, Cheng Z, Shen Y, Wang D. Palmprint and palmvein recognition based on DCNN and a new large-scale contactless palmvein dataset. Symmetry. 2018;10(4):1-15
  14. Li W, Zhang D, Lu G, Luo N. A novel 3-D palmprint acquisition system. IEEE Transactions on Systems, Man, and Cybernetics - Part A. 2012;42(2):443-452
  15. Guo Z, Zhang D, Zhang L, Zuo W, Lu G. Empirical study of light source selection for palmprint recognition. Pattern Recognition Letters. 2011;32(2):120-126
  16. Guo Z, Zhang D, Zhang L. Is white light the best illumination for palmprint recognition? In: Proceedings of the 13th International Conference on Computer Analysis of Images and Patterns (CAIP’09); 2-4 September 2009; Münster, Germany. Switzerland: Springer; 2009. pp. 50-57
  17. Liang X, Zhang D, Lu G, Guo Z, Luo N. A novel multicamera system for high-speed touchless palm recognition. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2019. DOI: 10.1109/TSMC.2019.2898684. Available from: https://ieeexplore.ieee.org/abstract/document/8666082
  18. Tian C, Xu Y, Zuo W. Image denoising using deep CNN with batch renormalization. Neural Networks. 2020;121:461-473
  19. Zhang Z. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2000;22(11):1330-1334
  20. COEP database. Available from: https://www.coep.org.in/resources/coeppalmprintdatabase [Accessed: 03 May 2020]
  21. Average Hand Size For Men, Women, And Children [Internet]. Available from: https://www.theaveragebody.com/average-hand-size/ [Accessed: 09 May 2020]
  22. Rau PP, Zhang Y, Biaggi L, Engels R, Qian L, Ribjerg H. How large is your phone? A cross-cultural study of smartphone comfort perception and preference between Germans and Chinese. Procedia Manufacturing. 2015;3:2149-2154
  23. Zhang K, Huang D, Zhang B, Zhang D. Improving texture analysis performance in biometrics by adjusting image sharpness. Pattern Recognition. 2017;66:16-25
  24. Lowe D. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision. 2004;60(2):91-110
  25. Wu X, Zhao Q, Bu W. A SIFT-based contactless palmprint verification approach using iterative RANSAC and local palmprint descriptors. Pattern Recognition. 2014;47(10):3314-3326
  26. Vedaldi A, Fulkerson B. VLFeat: An Open and Portable Library of Computer Vision Algorithms. Available from: http://www.vlfeat.org [Accessed: 03 May 2020]
  27. Zhang K, Huang D, Zhang D. An optimized palmprint recognition approach based on image sharpness. Pattern Recognition Letters. 2017;85:65-71
  28. Kong A, Zhang D. Competitive coding scheme for palmprint verification. In: Proceedings of the 17th International Conference on Pattern Recognition (ICPR’04); 23-26 August 2004; Cambridge, UK. New York: IEEE; 2004. pp. 520-523
  29. Sun Z, Tan T, Wang Y, Li SZ. Ordinal palmprint representation for personal identification. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05); 20-26 June 2005; San Diego, CA. New York: IEEE; 2005. pp. 279-284
  30. Jia W, Huang D, Zhang D. Palmprint verification based on robust line orientation code. Pattern Recognition. 2008;41(5):1504-1513
  31. GPDS database. Available from: www.gpds.ulpgc.es/downloadnew/download
  32. IITD database. Available from: https://www4.comp.polyu.edu.hk/∼csajaykr/IITD/Database_Palm.htm [Accessed: 03 May 2020]
  33. KTU database. Available from: https://ceng2.ktu.edu.tr/∼cvpr/contactlessPalmDB.htm [Accessed: 03 May 2020]
  34. TJU database. Available from: https://sse.tongji.edu.cn/linzhang/contactlesspalm/index.htm [Accessed: 03 May 2020]
