Open access

Basic Principles and Trends in Hand Geometry and Hand Shape Biometrics

Written By

Miroslav Bača, Petra Grd and Tomislav Fotak

Submitted: 14 February 2012 Published: 28 November 2012

DOI: 10.5772/51912

From the Edited Volume

New Trends and Developments in Biometrics

Edited by Jucheng Yang, Shan Juan Xie


1. Introduction

Researchers in the field of biometrics have found that the human hand, especially the human palm, contains characteristics that can be used for personal identification. These characteristics mainly include the thickness of the palm area and the width, thickness and length of the fingers. A large number of commercial systems use these characteristics in various applications.

Hand geometry biometrics is not a new technique. It was first mentioned in the early 1970s, and it is older than palm print recognition, which is part of dactyloscopy. The first known use was for security checks on Wall Street.

Hand geometry is based on the structure of the palm and fingers, including the width of the fingers at different places, the length of the fingers, the thickness of the palm area, etc. Although these measurements are not very distinctive among people, hand geometry can be very useful for identity verification, i.e. personal authentication. A special task is to combine several non-distinctive characteristics in order to achieve better identification results. This technique is widely accepted, and verification involves simple data processing. The mentioned properties make hand geometry an ideal candidate for research and development of new acquisition, preprocessing and verification techniques.

Anthropologists believe that humans survived and developed to today's state (Homo sapiens) thanks to highly developed brains and opposable thumbs. The flexible and elastic human hand enables us to catch and throw various things, but also to make and use various kinds of tools in everyday life. Today, the human hand is not used just for those purposes, but also as a personal identifier, i.e. it can be used for personal identification. Even the ancient Egyptians used personal characteristics to identify people. Since then, technology has greatly improved the recognition process, and modern scanners based on hand geometry use infrared light and microprocessors to achieve the best possible comparison of hand geometry patterns.

During the last century several technologies using hand geometry were developed, ranging from electromechanical devices to electronic scanners. The history of those devices begins in 1971, when the US Patent Office patented devices for measuring hand characteristics and capturing features for comparison and identity verification [1-3]. Another important event in hand geometry history came in the mid 1980s, when Sidlauskas patented a device for hand scanning and founded Recognition Systems Inc. of Campbell, California [4]. The absolute peak of this biometric characteristic was in 1996, during the Olympic Games in Atlanta, when it was used for access control in the Olympic village [5].

The human hand contains enough anatomical characteristics to provide a mechanism for personal identification, but it is not considered unique enough to provide a mechanism for complete personal identification. Hand geometry is time sensitive, and the shape of the hand can change during illness, aging or weight change. It is actually based on the fact that every person has a differently formed hand which will not drastically change in the future.

When placing a hand on the scanner, the device usually takes a three-dimensional image of the hand. The shape and length of the fingers are measured, as well as the wrist. The device compares the information taken from the hand scanner against patterns already stored in the database. After the identification data are confirmed, one usually gains access to the secured place. This process has to be quick and effective; the whole procedure takes less than five seconds. Today, hand scanners are well accepted in offices, factories and other business environments.

Based on the data used for personal identification, technologies for reading the human hand can be divided into three categories:

  • Palm technology,

  • Hand vein technology,

  • Hand geometry and hand shape technology.

The first category is considered the classic approach in hand biometrics. As mentioned earlier, it is part of dactyloscopy, so the methods used here are similar to those used for fingerprints. The size, shape and flow of the papillae are measured, and minutiae are the main features in the identification process. Image preprocessing and normalization in this category yields a binary image containing the papillae and the distances between them. Because of different lighting when taking an image, the palm can be divided into five areas [6], although strictly medically speaking, if we consider the muscles, it has only three areas. The areas of the palm are: lower palm, middle palm, upper palm, thenar (thumb part) and hypothenar (little finger part). The location of these areas can be seen in Figure 1.

Figure 1.

Palm areas according to [6]

The second category uses a similar approach for capturing the hand image, but instead of an ordinary camera or scanner it uses specialized devices containing scanners with infrared light or some other technology that can retrieve an image of the veins under the human skin. Hand vein biometrics has been gaining popularity in recent years, and it is likely that this will be one of the main biometric characteristics of the future. Using a contactless approach for capturing the structure of human veins gives promising results in this field.

The third category is the primary interest of this chapter and will therefore be explained later in the text.

A hand image is usually taken with a digital camera while the hand is placed on a semitransparent base; the image is later processed to extract the hand shape (this is usually known as preprocessing, which brings the image data into a form suitable for the system being built). It includes extracting small hand curves that can be parts of one bigger curve which represents the hand shape. By using those curves and their characteristics, one can define the hand features that will be used in the authentication or identification system being built.

The first part of the chapter gives an introduction to hand geometry and hand shape, along with a description of two different types of hand geometry systems. After that, the acquisition of characteristics and different extraction techniques are described. The next section gives an overview of new trends in hand geometry. At the end, the technology and its advantages and disadvantages are described.


2. Hand geometry and hand shape

Every human hand is unique. In 2002, thirty global features of hand geometry were defined [7]. These features are very exact and can be taken as global features of contact-less 2D hand geometry. The measures the authors defined are shown in Figure 2.

Figure 2.

Hand geometry features according to [7]

The features the authors defined in their work, shown in Figure 2, are the following:

  1. Thumb length

  2. Index finger length

  3. Middle finger length

  4. Ring finger length

  5. Pinkie length

  6. Thumb width

  7. Index finger width

  8. Middle finger width

  9. Ring finger width

  10. Pinkie width

  11. Thumb circle radius

  12. Index circle radius lower

  13. Index circle radius upper

  14. Middle circle radius lower

  15. Middle circle radius upper

  16. Ring circle radius lower

  17. Ring circle radius upper

  18. Pinkie circle radius lower

  19. Pinkie circle radius upper

  20. Thumb perimeter

  21. Index finger perimeter

  22. Middle finger perimeter

  23. Ring finger perimeter

  24. Pinkie perimeter

  25. Thumb area

  26. Index finger area

  27. Middle finger area

  28. Ring finger area

  29. Pinkie area

  30. Largest inscribed circle radius

Those features became typical features in the systems that use hand geometry in the identification or authentication process.

Many hand geometry systems use peg-guided hand placement. The user has to place his or her hand according to pegs on the device surface. The image of the hand is captured using an ordinary digital camera [8]. The length of the fingers, their width, thickness, curvature and the relative location of all mentioned features differ among people. Hand geometry scanners usually use an ordinary CCD camera, sometimes with infrared light and reflectors for image capturing. This type of scanner does not take palm details such as papillae into consideration. It is not interested in fingerprints, palm life lines or other ridges, colors or even scars on the hand surface. In combination with reflectors and mirrors, the optical device can provide two hand images, one from the top and another from the bottom side of the hand.

Other than digital cameras, document scanners are also commonly used for capturing hand images. While systems that use a digital camera capture the image with the hand placed on a semitransparent base to achieve better contrast, document scanners rely on their own scanning technology, and the process is somewhat slower than with a digital camera.

Figure 3.

How hand geometry scanners work

As shown in Figure 3, such devices use a 28 cm optical path length between the camera and the surface on which the hand is placed. The reflective optical path minimizes the space needed for building such a device. The device measures a hand a couple of times to get a representative sample that will be compared against all others. Using an application defined for the given purpose, the processor converts these measurements into a biometric pattern. This process is simply called sampling.

2.1. Systems with pegs

Peg-based hand geometry systems use pegs on the device board to guide hand placement on the device. During the sampling process the scanner prompts a person to put his or her hand on the board several times. The board is highly reflective and projects the image of a shaded palm, while the pegs that protrude from the surface plate put the fingers in the position necessary for the acquisition of a sample. In this way, these systems allow better measurement compared to systems without pegs, because the hand is fixed to the surface and cannot be shifted. The advantage of this system over a system with no pegs is the predefined reference bases for measuring the characteristics, while the biggest disadvantage is that the system can deform, to a certain extent, the appearance of the hand, so measurements are not very precise, which leads to suboptimal results. It also has to be mentioned that various finger positions can cause variations in the features measured along the fixed axes.

A system that uses pegs was developed by Jain et al. [9]. Their system was able to capture images in 8-bit grayscale, with a size of 640x480 pixels. The authors also developed a GUI which helped users place the hand on the surface. Palm and finger measurements were made through fourteen intersection points; the system provided support through control points and helped in defining the interception points. Two different techniques were used to handle differences in skin color, lighting and noise, which are relevant for the feature vector calculation. The researchers found no big differences in system characteristics when using either of the proposed techniques. They acquired 500 images from 50 people. The system was divided into two phases: acquisition and verification. In the first phase a new user was added to the database or an existing user was updated. Five images of the same hand were acquired; one had to remove the hand from the device surface before every scan and place it again according to the pegs in the surface. The acquired images were used to obtain the eigenvector, a process which includes calculating the arithmetic mean of the measured values. The verification phase represents the process of comparing a currently acquired hand image with one that is already in the database. Two hand images were acquired in this phase and the 'mean' eigenvector was calculated. This vector was compared with the vector stored in the database for the user the system was trying to verify.

Let F = (f1, f2, ..., fd) be the d-dimensional feature vector (eigenvector) stored in the database and Y = (y1, y2, ..., yd) the d-dimensional feature vector of the hand that is being verified. The verification has a positive result if the distance between F and Y is smaller than a defined threshold. For distance calculation, the authors used absolute, weighted absolute, Euclidean and weighted Euclidean distances, with the corresponding formulas:

  • Absolute distance:

$$\sum_{j=1}^{d} \left| y_j - f_j \right| < \epsilon_a \qquad \mathrm{(E1)}$$

  • Weighted absolute distance:

$$\sum_{j=1}^{d} \frac{\left| y_j - f_j \right|}{\sigma_j} < \epsilon_{wa} \qquad \mathrm{(E2)}$$

  • Euclidean distance:

$$\sqrt{\sum_{j=1}^{d} \left( y_j - f_j \right)^2} < \epsilon_e \qquad \mathrm{(E3)}$$

  • Weighted Euclidean distance:

$$\sqrt{\sum_{j=1}^{d} \frac{\left( y_j - f_j \right)^2}{\sigma_j^2}} < \epsilon_{we} \qquad \mathrm{(E4)}$$

Where:

  • σ_j² is the variance of the j-th feature, and

  • ε_a, ε_wa, ε_e and ε_we are the thresholds for the respective distance measures.
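As an illustration, the following Python sketch computes the four distances (E1)-(E4) and applies a threshold test; the feature vectors, variances and threshold are synthetic placeholders rather than values from [9].

```python
import numpy as np

def distances(y, f, sigma):
    """Distances (E1)-(E4) between a probe vector y and a stored template f.

    sigma holds the per-feature standard deviations (sigma_j).
    """
    d = y - f
    absolute = np.sum(np.abs(d))                               # (E1)
    weighted_absolute = np.sum(np.abs(d) / sigma)              # (E2)
    euclidean = np.sqrt(np.sum(d ** 2))                        # (E3)
    weighted_euclidean = np.sqrt(np.sum(d ** 2 / sigma ** 2))  # (E4)
    return absolute, weighted_absolute, euclidean, weighted_euclidean

def verify(y, f, sigma, threshold, metric=3):
    """Accept the claimed identity if the chosen distance is below the threshold."""
    return distances(y, f, sigma)[metric] < threshold

# Illustrative enrollment: the stored template is the mean of five sample vectors.
samples = 100 * np.random.rand(5, 16)            # placeholder for five hand scans
template = samples.mean(axis=0)
sigma = samples.std(axis=0) + 1e-6               # per-feature variability
probe = samples[0] + 0.5 * np.random.randn(16)   # placeholder probe vector
print(verify(probe, template, sigma, threshold=5.0))
```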

Another research attempt in the field of hand geometry and hand shape was made in 2000. Sanchez-Reillo and associates developed a system which takes 640x640 pixel images in JPEG format. The surface on which the hand had to be placed had 6 guiding pegs. They used 200 images from 20 persons of different gender, occupation and personal habits. Before the features were extracted, all images were transformed into binary form using the following formula [10]:

$$I_{BW} = f\left( I_R + I_G - I_B \right) \qquad \mathrm{(E5)}$$

Where:

  • IBW is resulting binary image,

  • IR, IG, IB are values of red, green and blue channel respectively, and

  • f(·) is a contrast stretching function.

Sometimes a digital image does not use its whole contrast range. By stretching the lightness values over the allowed range, the contrast of the image is increased. This allows better extraction of the hand from the image background. Every 'false' pixel is later (if necessary) removed from the image by using a threshold value. To avoid deviations, the image is scaled to a fixed size. Two pegs are used to locate the hand. Afterwards, using a Sobel edge detector, the system can extract the hand shape. The final result of this process is an image containing the hand shape and a side-view image containing pegs in predefined positions. The first image is used for extracting palm and finger features. The authors of this paper extracted 31 features to construct the feature vector. They also defined deviation as the distance between the middle point of the finger and the middle point of the line between two fingers at the height at which the finger was measured. Euclidean distance, Hamming distance and Gaussian Mixture Models (GMM) were used to compare the feature vectors. This paper was the first to present hand geometry based identification with satisfying results. Each subject (user) had from 3 to 5 templates stored in the database, each template containing from 9 to 25 features. The GMM gave the best results in all tested cases. With 5 stored templates the GMM-based system achieved 96% identification accuracy and 97% verification accuracy.
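A rough sketch of this preprocessing chain (channel combination as in (E5), contrast stretching, thresholding and Sobel edge detection) is given below using OpenCV; the threshold value and the scaling are placeholders, not the parameters used in [10].

```python
import cv2
import numpy as np

def extract_hand_shape(path, threshold=60):
    """Binarize a color hand image along the lines of (E5) and return its edge map."""
    img = cv2.imread(path).astype(np.float32)
    b, g, r = cv2.split(img)                       # OpenCV stores channels as B, G, R
    combined = r + g - b                           # I_R + I_G - I_B

    # Contrast stretching: map the used intensity range onto the full 0-255 range.
    stretched = (combined - combined.min()) / (combined.max() - combined.min() + 1e-6) * 255

    # Thresholding removes remaining 'false' pixels and yields the hand silhouette.
    _, binary = cv2.threshold(stretched.astype(np.uint8), threshold, 255, cv2.THRESH_BINARY)

    # Sobel gradients approximate the hand-shape contour.
    gx = cv2.Sobel(binary, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(binary, cv2.CV_64F, 0, 1, ksize=3)
    edges = np.hypot(gx, gy)
    return binary, edges
```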

Techniques used in hand geometry biometrics are relatively simple and easy to use [11]. Hand geometry systems have a tendency to become the most accepted biometric characteristic, especially compared to fingerprints or iris [12]. Besides this fact, it has to be mentioned that this biometric technique has some serious disadvantages, and a low recognition rate is probably one of the biggest. Most researchers believe that hand geometry alone cannot satisfy the needs of modern biometric security devices [8].

2.2. Systems without pegs

As an alternative to systems that used pegs to measure hand geometry features, researchers started to explore hand shape as a new biometric characteristic. The researchers in [13] acquired 353 hand shape images from 53 persons. The number of images per person varied from 2 to 15. Pegs were used to place the hand in the right position; they were removed from the image before the comparison and covered with the background color. The hand shape was extracted using hand segmentation. During finger extraction a set of points is produced, as shown in Figure 4.

Figure 4.

Hand shapes of the same hand extracted, overlaid and aligned [13]

Five fingers were extracted from the hand shape and analyzed separately. To automate the whole process, the fingers of the same hand were aligned according to the set of all defined points. This alignment is also shown in Figure 4. The mean distance between two corresponding points was defined as the Mean Alignment Error (MAE). This error was used to quantify matching results: a positive match is found if the MAE falls within a predefined set of values. This kind of system achieves a False Acceptance Rate (FAR) of about 2% and a False Rejection Rate (FRR) of about 1.5%, which is comparable to professional and commercial hand geometry systems. The drawback of this approach was a larger data storage requirement, because a few hundred points need to be stored for just one hand shape. The authors used a randomly created set of 3992 image pairs to create a set of interclass distances. By using that set of distances, it is possible to calculate the distribution and, with a very high degree of certainty, determine which user is genuine and which is not.
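A minimal sketch of computing such a mean alignment error between two sets of corresponding contour points is shown below; it assumes the alignment itself has already been carried out, and the point sets and threshold are placeholders.

```python
import numpy as np

def mean_alignment_error(points_a, points_b):
    """Mean Euclidean distance between corresponding, already aligned contour points.

    points_a and points_b are arrays of shape (N, 2) with (x, y) coordinates.
    """
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    return np.mean(np.linalg.norm(a - b, axis=1))

# A match is accepted if the MAE falls below a chosen threshold.
enrolled = 100 * np.random.rand(300, 2)            # placeholder enrolled contour
probe = enrolled + 0.5 * np.random.randn(300, 2)   # placeholder probe contour
print(mean_alignment_error(enrolled, probe) < 2.0)
```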

This was just one way of using hand shape for personal verification. Besides that approach, one can use a palm area size approach. Some researchers, like Lay [14], conducted research based on the palm area size. The hand image was acquired by projecting a lattice pattern onto the top side of the hand. The image was captured in the lattice frame and it presented the curvature of the hand. An example of this approach can be seen in Figure 5. The author acquired a hundred images (the number of persons is not known) of size 512x512 pixels. From those images he extracted templates of size 128x128 pixels.

Figure 5.

Capturing lattice pattern of the hand

The curvature lattice image was transformed into a binary image. This system did not use pegs for hand placement, but one could not move the hand freely: the hand had to be in the right position for the verification process, and the system prompted the user to place the hand as closely as possible to the way it was placed during the registration phase. The acquired binary image was encoded as a quadtree with seven levels. Those trees were used in the matching process for calculating the similarity of the compared hands, where a smaller root mean square (RMS) value indicates greater similarity between the images. The author claims to have achieved 99.04% verification accuracy, with FAR = FRR = 0.48%.
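To illustrate the idea of such a representation, the following sketch encodes a binary image as a quadtree of limited depth; it is only meant to show the principle, not Lay's actual implementation, and the placeholder image and the majority vote at the leaves are assumptions.

```python
import numpy as np

def quadtree(block, depth=0, max_depth=7):
    """Recursively encode a binary block: 0 = all background, 1 = all foreground,
    otherwise a list with the encodings of the four quadrants (NW, NE, SW, SE)."""
    if block.all():
        return 1
    if not block.any():
        return 0
    if depth == max_depth or min(block.shape) == 1:
        return 1 if block.mean() >= 0.5 else 0     # majority vote at the last level
    h, w = block.shape[0] // 2, block.shape[1] // 2
    return [quadtree(block[:h, :w], depth + 1, max_depth),
            quadtree(block[:h, w:], depth + 1, max_depth),
            quadtree(block[h:, :w], depth + 1, max_depth),
            quadtree(block[h:, w:], depth + 1, max_depth)]

# Placeholder 128x128 binary template with a filled square as the "hand" region.
binary_hand = np.zeros((128, 128), dtype=bool)
binary_hand[32:96, 32:96] = True
tree = quadtree(binary_hand)
```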

One can notice that each of the two described systems is capable of competing with commercial hand geometry systems. The main problem with those systems is the relatively small number of users (from 25 to 55 subjects and from 100 to 350 images). This leaves an open question about the systems' behavior with a larger number of subjects.

An important element that the researchers did not consider in this field is aging, i.e. the change of the hand over time. The hand image is time sensitive, and it is an open question whether it is necessary to update the stored images of the hand from time to time. This could be done to achieve a better recognition rate and even better hand feature extraction.

Systems without pegs are more tolerant when it comes to placing a hand on the device used for image acquisition.


3. Hand characteristics acquisition

Hand image acquisition is a very simple process, especially when it is done in a system without pegs. A hand acquisition system with pegs consists of a light source, a camera, mirrors and a flat surface (with 5 pegs on it). The user puts his or her hand (palm facing down) on the surface. The pegs guide hand placement so the hand is in the correct position for the system being built. A mirror projects a side-view image of the hand to the camera. In this way, the system can obtain a hand image and extract biometric features from the acquired image. The user is registered in the database along with the eigenvector of his or her hand (from now on called the "eigen-hand"). The acquired image is compared to the already existing images in the database and, if necessary, a new eigen-hand is calculated. A simple way of acquiring an image was presented in [9], where the image was taken in 8-bit grayscale with a size of 640x480 pixels.

The quality of sampling affects the number of falsely rejected templates, especially at the beginning of the system's usage. Sampling depends on a large number of factors. For instance, different heights of the biometric device change the relative position of the body and the hand. This leads to a different hand shape and differently calculated features: acquiring the hand image at one height and verifying it at another can cause the system to reject a legitimate user. Besides that, not knowing how the device works can have a great impact on the system and make it complicated to work with. To reduce such complications in the verification phase, users can practice placing the hand correctly on the device's surface (no matter whether it uses pegs or not). When a human is born, the hands are almost symmetrical. With age the hands change, mainly because of environmental factors. Most people become left- or right-handed, which leads that hand to become a little bigger than the other one. The hands of young people change much more than the hands of older people. These processes require that hand geometry and hand shape devices are capable of following those changes and learning how to incorporate every change made to a person's hand.

Identification systems based on hand geometry use geometric differences between human hands. Typical features include the length and width of the fingers, the position of the palm and fingers, the thickness of the hand, etc. There are no systems that take non-geometric features (e.g. skin color) into consideration. The pegs that some scanners use are also helpful in determining the axes needed for feature extraction. An example is shown in Figure 6, where the hand is represented as a vector of measurement results and 16 characteristic features were extracted:

  1. F1 – thumb width on the second phalange (bones that form toes and fingers)

  2. F2 – index finger length on the third phalange

  3. F3 – index finger length on the second phalange

  4. F4 – middle finger length on the third phalange

  5. F5 – middle finger length on the second phalange

  6. F6 – ring finger width on the third phalange

  7. F7 – ring finger width on the second phalange

  8. F8 – little finger width on the third phalange

  9. F9 – index finger length

  10. F10 – middle finger length

  11. F11 – ring finger length

  12. F12 – little finger length

  13. F13 – palm width based on the four fingers

  14. F14 – palm width in the thumb area

  15. F15 – thickness of the fingers on the first phalange

  16. F16 – thickness of the fingers on the second phalange

Figure 6.

Axes on which hand features are extracted and extracted features [9]

3.1. Extraction techniques

Ross [15] presented two techniques for feature extraction: The Parameter Estimation Technique and The Windowing Technique.

The Parameter Estimation Technique used a peg-based acquisition system and is an intensity-based approach. The other technique used a fixed window size and determined the points at which the intensity changed along the axes. Both techniques are presented below.

A third technique, described in [16], will also be presented. Since this technique does not have a name of its own, we will call it the F&K technique; it describes the hand image through minimum spanning trees.

3.1.1. The parameter estimation technique

In order to offset the effects of background lighting, color of the skin, and noise, the following approach was devised to compute the various feature values. A sequence of pixels along a measurement axis will have an ideal gray scale profile as shown in Figure 7.

Figure 7.

The gray scale profile of pixels along a measurement axis [15]

The total number of pixels considered is referred to as Len; Ps and Pe refer to the end points within which the object to be measured is located; and A1, A2 and B are the gray scale values.

The actual gray scale profile tends to be spiky, as shown in Figure 7 (right image). The first step the author presented was to model the profile. Let the pixels along a measurement axis be numbered from 1 to Len, and let X = (x1, x2, ..., xLen) be the gray values of the pixels along that axis. The following assumptions about the profile were made:

  1. The observed profile (Figure 7, right) is obtained from the ideal profile (Figure 7, left) by the addition of Gaussian noise to each of the pixels in the latter. Thus, for example, the gray level of a pixel lying between Ps and Pe was assumed to be drawn from the distribution:

$$G\left( x \mid B, \sigma_B^2 \right) = \frac{1}{\sqrt{2\pi\sigma_B^2}} \exp\left( -\frac{1}{2\sigma_B^2} \left( x - B \right)^2 \right) \qquad \mathrm{(E6)}$$

where σ_B² is the variance of x in the interval R, Ps < R ≤ Pe.

  2. The gray level of an arbitrary pixel along a particular axis is independent of the gray levels of the other pixels on the line.

Operating under these assumptions, the author could write the joint distribution of all the pixel values along a particular axis as:

$$P\left( X \mid \theta \right) = \prod_{j=1}^{P_s} \frac{1}{\sqrt{2\pi\sigma_{A_1}^2}} \exp\left( -\frac{1}{2\sigma_{A_1}^2}\left( x_j - A_1 \right)^2 \right) \prod_{j=P_s+1}^{P_e} \frac{1}{\sqrt{2\pi\sigma_B^2}} \exp\left( -\frac{1}{2\sigma_B^2}\left( x_j - B \right)^2 \right) \prod_{j=P_e+1}^{Len} \frac{1}{\sqrt{2\pi\sigma_{A_2}^2}} \exp\left( -\frac{1}{2\sigma_{A_2}^2}\left( x_j - A_2 \right)^2 \right) \qquad \mathrm{(E7)}$$

where θ = (Ps, Pe, A1, A2, B, σ_A1², σ_A2², σ_B²), and σ_A1², σ_A2² and σ_B² are the variances of x in the three intervals [1, Ps], [Ps + 1, Pe] and [Pe + 1, Len], respectively.

The goal now is to estimate Ps and Pe using the observed pixel values along the chosen axis (the author used the Maximum Likelihood Estimate, MLE).

By taking the logarithm of both sides of (E7), one can obtain the likelihood function:

$$L(\theta) = \frac{1}{\sigma_{A_1}^2} \sum_{j=1}^{P_s} \left( x_j - A_1 \right)^2 + \frac{1}{\sigma_B^2} \sum_{j=P_s+1}^{P_e} \left( x_j - B \right)^2 + \frac{1}{\sigma_{A_2}^2} \sum_{j=P_e+1}^{Len} \left( x_j - A_2 \right)^2 + P_s \log \sigma_{A_1}^2 + \left( P_e - P_s \right) \log \sigma_B^2 + \left( Len - P_e \right) \log \sigma_{A_2}^2 \qquad \mathrm{(E8)}$$

The parameters could then be estimated iteratively [15].

The initial estimates of A1, σ_A1², A2, σ_A2², B and σ_B² were obtained as follows:

  • A1 and σ_A1² were estimated using the gray values of the first NA1 pixels along the axis,

  • A2 and σ_A2² were estimated using the gray values of the pixels from Len − NA2 to Len,

  • B and σ_B² were estimated using the gray values of the pixels between Len/2 − NB and Len/2 + NB,

  • The values of NA1, NA2 and NB were fixed for the system, and the initial values of Ps and Pe were set to Len/2 − 10 and Len/2 + 10, respectively.
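The sketch below shows how these initial estimates could be computed from a one-dimensional gray-level profile; NA1, NA2 and NB are placeholder window sizes, and the iterative MLE refinement of [15] is not reproduced.

```python
import numpy as np

def initial_estimates(profile, n_a1=10, n_a2=10, n_b=10):
    """Initial estimates of A1, A2, B, their variances and of Ps, Pe
    from a 1-D gray-level profile, following the initialization described above."""
    x = np.asarray(profile, dtype=float)
    length = len(x)

    a1, var_a1 = x[:n_a1].mean(), x[:n_a1].var()                    # left background
    a2, var_a2 = x[length - n_a2:].mean(), x[length - n_a2:].var()  # right background
    mid = x[length // 2 - n_b: length // 2 + n_b]
    b, var_b = mid.mean(), mid.var()                                # object region

    ps, pe = length // 2 - 10, length // 2 + 10                     # initial end points
    return {"A1": a1, "varA1": var_a1, "A2": a2, "varA2": var_a2,
            "B": b, "varB": var_b, "Ps": ps, "Pe": pe}
```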

3.1.2. The windowing technique

This technique was developed to locate the end points Ps and Pe from the gray scale profile in Figure 7. A heuristic method was adopted to locate these points. A window of length wlen was moved over the profile, one pixel at a time, starting from the left-most pixel.

Let W_i, 0 ≤ i ≤ N, refer to the sequence of pixels covered by the window after the i-th move, with W_N indicating the final position. For each position W_i, the author computed four values, Maxval(W_i), Maxindex(W_i), Minval(W_i) and Minindex(W_i), as:

$$\mathrm{Maxval}(W_i) = \max_{j \in W_i} G(j) \qquad \mathrm{(E9)}$$

$$\mathrm{Maxindex}(W_i) = \arg\max_{j \in W_i} G(j) \qquad \mathrm{(E10)}$$

$$\mathrm{Minval}(W_i) = \min_{j \in W_i} G(j) \qquad \mathrm{(E11)}$$

$$\mathrm{Minindex}(W_i) = \arg\min_{j \in W_i} G(j) \qquad \mathrm{(E12)}$$

Ps and Pe could then be obtained by locating the position W_i where Maxval(W_i) − Minval(W_i) was the maximum. This indicated a sharp change in the gray scale of the profile.
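A direct implementation of this heuristic could look as follows; wlen is a placeholder window length, and the split into two half-profiles for finding Ps and Pe separately is an assumption made for the sake of the example.

```python
import numpy as np

def strongest_transition(profile, wlen=10):
    """Slide a window of length wlen over a 1-D gray-level profile and return the
    index (inside the best window) where the gray level changes most sharply."""
    x = np.asarray(profile, dtype=float)
    best_spread, best_index = -1.0, 0
    for start in range(len(x) - wlen + 1):
        window = x[start:start + wlen]
        spread = window.max() - window.min()       # Maxval - Minval, cf. (E9)-(E12)
        if spread > best_spread:
            best_spread = spread
            # Use the midpoint between the min and max positions as the edge location.
            best_index = start + (int(window.argmin()) + int(window.argmax())) // 2
    return best_index

def endpoints(profile, wlen=10):
    """Estimate Ps and Pe by searching for the strongest transition in each half."""
    half = len(profile) // 2
    ps = strongest_transition(profile[:half], wlen)
    pe = half + strongest_transition(profile[half:], wlen)
    return ps, pe
```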

3.1.3. F&K technique

Fotak and Karlovčec [16] presented a different method of feature extraction. They decided to use mathematical graphs on the two-dimensional hand image. The hand image was normalized using basic morphological operators and edge detection. They created a binary image from an image captured with an ordinary document scanner. On the binary image, the pixel values were analyzed to define the locations of characteristic points. They extracted 31 points, shown in Figure 8.

Figure 8.

Hand shape and the characteristic hand points defined in [16]

For the hand placement on the y-axis, a referential point on the top of the middle finger was used. The location of that point was determined using the horizontal line y1. Using that line, the authors defined 6 points that represent the characteristic points of the index, middle and ring fingers. Using lines y2 and y3 they extracted enough characteristic points for four fingers. The thumb has to be processed in a different manner: first, the right-most point of the thumb has to be identified; using two vertical lines the edges of the thumb are found; and by analyzing the points on those lines and their midpoints, the top of the thumb can be extracted. An example of extracting the top of the thumb is shown in Figure 9.

Figure 9.

Extracting characteristic points of the thumb

In order to get enough information for their process, each hand had to be scanned four times. For each characteristic point the authors constructed a complete graph. An example of the characteristic points from four scans and the corresponding complete graph of one point are shown in Figures 10 and 11, respectively.

Figure 10.

Characteristic points of the four scanning of the hand

Figure 11.

The complete graph of one characteristic point

The number of edges in a complete graph is well known. In order to construct a minimum spanning tree, this graph needs to be a weighted graph; the weights are the distances between the two graph vertices connected by an edge, measured using the Euclidean distance. Finally, Prim's algorithm was used to construct the minimum spanning tree of one characteristic point. The same procedure was applied to each of the 31 points. An example of the minimum spanning tree of one characteristic point and all minimum spanning trees are shown in Figures 12 and 13, respectively.

Figure 12.

Minimum spanning tree of the graph from Figure 11

Figure 13.

All minimum spanning trees of one user

The verification process compares the minimum spanning tree of each point with the location of the corresponding point in the currently captured image. The results of the system, FAR = 1.21% and FRR = 7.75%, are very promising for future development.
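The graph construction described above (a complete graph per characteristic point, Euclidean edge weights, reduced to a minimum spanning tree) can be sketched compactly in Python; here SciPy's generic minimum-spanning-tree routine stands in for a hand-written Prim's algorithm (both yield an equivalent tree), and the point coordinates are placeholders for the four scans of one characteristic point.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_of_point(scans):
    """Complete graph over the observations of one characteristic point
    (Euclidean edge weights), reduced to its minimum spanning tree."""
    pts = np.asarray(scans, dtype=float)       # shape (4, 2): one (x, y) per scan
    weights = squareform(pdist(pts))           # full distance matrix = complete graph
    return minimum_spanning_tree(weights)      # sparse matrix holding the MST edges

# Placeholder coordinates of one characteristic point in four scans of the hand.
tree = mst_of_point([[120, 40], [122, 43], [119, 41], [121, 39]])
print(tree.toarray())   # non-zero entries are the kept edges and their lengths
```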


4. New trends in hand geometry and hand shape biometrics

So far we have described the basics of hand geometry biometrics. In this section we mention some new trends and recent research in this field. Reading this section requires a good understanding of hand geometry biometrics and of the extraction and verification methods mentioned earlier. We will not describe everything in detail, but rather mention some achievements produced in the last few years.

Hand geometry has been contact-based from its beginnings, and still is in almost all commercial systems. Since the field has evolved over the last 30 years, one can categorize it as in [17]:

  • Constrained and contact-based

  • Unconstrained and contact-based

While the first category requires a flat platform and pegs or pins to restrict the hand's degrees of freedom, the second one is peg- and pin-free, although it still requires a platform on which to place the hand (e.g. a scanner). The main papers of this category were described earlier in this chapter.

The second category gives users more freedom in the process of image acquisition and is considered an evolutionary step forward from constrained contact-based systems. Some newer works in this field are [18] and [19]. In [18] the authors presented a method based on three keys. The system uses a Natural Reference System (NRS) defined on the hand's layout; therefore, neither a fixed hand pose nor a pre-fixed position is required in the registration process. Hand features were obtained through the polar representation of the hand's contour. Their system uses both the right and left hand, which allowed them to consider distance measures for direct and crossed hands. The authors of the second paper [19] used 15 geometric features to analyze the effect of changing the image resolution on a biometric system based on hand geometry. The images were reduced from an initial 120 dpi down to 24 dpi. They used two databases: one acquired images of the hand from underneath, whereas the second acquired images from above the hand. Accordingly, they used two classifiers: a multiclass support vector machine (multiclass SVM) and a neural network with error-correcting output codes.

There are many different verification approaches in contact-based hand geometry systems. So far, GMMs and SVMs give the best results, but they are far from satisfying for commercial use.

Due to user acceptability, contact-less biometrics is becoming more important. In this approach neither pegs nor a platform are required for hand image acquisition. Papers in this field are relatively new compared to those on the contact-based approach, so only the new trends in contact-less hand geometry biometrics are presented here.

The most commonly used verification methods in this approach are k-Nearest Neighbors (k-NN) and SVM. These methods are also the most competitive in the existing literature.

In the last few years, the literature on this problem has been growing rapidly. SVM is the most commonly used verification and identification method. The authors in [20] acquired the hand image with a static video camera. Using a decision tree they segmented the hand and then measured local feature points extracted along the fingers and wrists. Identification was based on matching the geometry measurements of a query image against a database of recorded measurements using SVM. Another use of SVM can be found in [21]. The authors also presented a biometric identification system based on geometrical features of the human hand. The right-hand images were acquired using a classic webcam. Depending on the illumination, binary images were constructed and the geometrical features (30-40 finger widths) were obtained from them; SVM was used as the verifier. Kumar and Zhang used SVM in their hybrid recognition system, which uses feature-level fusion of hand shape and palm texture [22]. They extracted features from a single image acquired with a digital camera. Their results showed that only a small subset of hand features is necessary in practice for building an accurate model for identification. The comparison and combination of the proposed features was evaluated on diverse classification schemes: naïve Bayes (normal, estimated, multinomial), decision trees (C4.5, LMT), k-NN, SVM and FFN.
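As a generic illustration of this kind of verifier, a minimal scikit-learn sketch that trains an SVM on geometric hand features could look as follows; the feature matrix and labels are synthetic placeholders, not data or parameters from [20]-[22].

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Synthetic placeholder data: 40 users, 10 samples each, 30 geometric features.
rng = np.random.default_rng(0)
centers = rng.uniform(20, 120, size=(40, 30))
X = np.vstack([c + rng.normal(0, 1.5, size=(10, 30)) for c in centers])
y = np.repeat(np.arange(40), 10)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Feature scaling followed by an RBF-kernel SVM used as the identification classifier.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_train, y_train)
print("identification accuracy:", model.score(X_test, y_test))
```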

A hybrid system fusing the palmprint and hand geometry of a human hand based on morphology was presented in [23]. The authors utilized image morphology and the concept of the Voronoi diagram to cut the image of the whole front of the palm into several irregular blocks in accordance with the hand geometry. Statistical characteristics of the gray levels in the blocks were employed as characteristic values. In the recognition phase an SVM was used.

Besides SVM, which is the most competitive method in contact-less hand geometry verification and identification, the literature contains other very promising methods such as neural networks [24], a new feature called 'SurfaceCode' [25] and template distance matching [17].

The mentioned methods are not the only ones, but they have the smallest Equal Error Rates and are therefore the most promising methods for the future development of contact-less hand geometry biometric systems.


5. The hand recognition technology

Hand features, described earlier in the chapter, are used in devices for personal verification and identification. One of the leading commercial companies in this field is Schlage. In their devices a CCD digital camera is used for acquiring the hand image; this image has a size of 32,000 pixels. One of their devices is shown in Figure 14.

Figure 14.

Schlage HandPunch 4000 [26]

The system presented in Figure 14 consists of a light source, a camera, mirrors and a flat surface with 5 pegs. The user places the hand facing down on a flat plate on which five pins serve as a control mechanism for the proper positioning of the right hand. The device is connected to a computer through an application which shows a live image of the top side of the hand as well as a side view of the hand. The GUI helps in image acquisition, while the mirror in the device is used to obtain the side view of the hand. This gives a partially three-dimensional image of the hand. The device captures two hand images. After the user places the hand on the device, the hand is captured; the location and size of the image are determined by segmentation of the light reflected from the dark mask. A second image is captured with the same camera but using the mirror, for measuring the hand thickness. Because it uses only the binary image and the reflected background, the system is not capable of capturing scars, pores or tattoos. On the other hand, big rings, bandages or gloves can have a great impact on the image and could lead to false rejection of the hand.

The captured hand silhouette is used to calculate the length, width and thickness of the four fingers (the thumb is not included). The system makes 90 measurements, which are stored in a 9-byte template. For template matching the Euclidean distance is used. The acquisition procedure takes 30 seconds to complete, and during that period the user has to place the hand on the device four times. An internal processor generates a template which is the mean of all readings taken during this process. An image captured with this device can be seen in Figure 15.

Figure 15.

Hand silhouette captured with a Schlage device


6. Conclusion

Hand recognition biometrics is probably the most developed and widely applied biometric technique, and it has found its application in many organizations. This is due to its user friendliness. Moreover, hand recognition is a simple technique which is very easy to use and does not require much memory space. Hand geometry is largely invariant to environmental impacts and has an acceptable level of privacy violation. For image capturing one can use classic CCD cameras, which are easy to use (it is easy to obtain a hand image) and have a low price.

The biggest disadvantages of hand geometry lie in the following facts. The size of the hand restricts such biometric systems to a smaller number of applications. Out of a hundred randomly chosen persons, at least two will have a similar hand geometry. A hand injury can have a great impact on the recognition system. Measurements have to be taken several times, since the acquisition process cannot always obtain all the information needed.

It is also obvious that this technique is easy to forge by finding the most appropriate hand (one has to find a hand that is "close enough"). Nevertheless, technology based on the hand image is the most common in modern biometric systems.

In this chapter we presented the basics of hand geometry and hand shape biometrics. Researchers in the field of biometrics have found that the human hand, especially the human palm, contains characteristics that can be used for personal identification. These characteristics mainly include the thickness of the palm area and the width, thickness and length of the fingers. Hand recognition biometrics is probably the most developed and widely applied biometric technique and has found its application in many organizations.

References

  1. Ernst R.H. Hand ID System. US Patent; 1971.
  2. Jacoby O.H., Giordano A.J., Fioretti W.H. Personal Identification Apparatus. US Patent; 1971.
  3. Lay H.C. Hand Shape Recognition. US Patent; 1971.
  4. Sidlauskas D.P. 3D Hand Profile Identification Apparatus. US Patent 4736203; 1988.
  5. Van Tilborg H.C.E., Jajodia S., editors. Encyclopedia of Cryptography and Security, 2nd ed. New York: Springer Science+Business Media, LLC; 2011.
  6. Fotak T. Razvoj biometrijskih tehnika [Development of biometric techniques]. BSc thesis. University of Zagreb, Faculty of Organization and Informatics; 2008.
  7. Bulatov Y., Jambawalikar S., Kumar P., Sethia S. Hand Recognition System Using Geometric Classifiers. DIMACS Workshop on Computational Geometry, 14-15 November 2002, Piscataway, NJ; 2002.
  8. Jain A., Bolle R., Pankanti S., editors. Biometrics: Personal Identification in Networked Society. Norwell: Kluwer Academic Publishers; 1999.
  9. Jain A., Ross A., Pankanti S. A prototype hand geometry-based verification system. In: Proceedings of the 2nd International Conference on Audio- and Video-based Biometric Person Authentication (AVBPA), Washington DC; 1999.
  10. Sanchez-Reillo R., Sanchez-Avila C., Gonzalez-Marcos A. Biometric identification through hand geometry measurements. IEEE Transactions on Pattern Analysis and Machine Intelligence 2000: 1168-1171.
  11. Jain A., Hong L., Prabhakar S. Biometrics: promising frontiers for the emerging identification market. Communications of the ACM 2000: 91-98.
  12. Holmes J.P., Maxwell R.L., Wright L.J. A performance evaluation of biometric identification devices. Technical Report SAND91-0276, Sandia National Laboratories; 1990.
  13. Jain A., Duta N. Deformable matching of hand shapes for verification. In: Proceedings of the IEEE International Conference on Image Processing, Kobe, Japan; 1999.
  14. Lay H.C. Hand shape recognition. Optics and Laser Technology 2000.
  15. Ross A. A Prototype Hand Geometry-based Verification System. MS Project Report; 1999.
  16. Fotak T., Karlovčec M. Personal authentication using minimum spanning trees on two-dimensional hand image. Varaždin: FOI; 2009.
  17. De Santos Sierra A., Sanchez-Avila C., Bailador del Pozo G., Guerra-Casanova J. Unconstrained and Contactless Hand Geometry Biometrics. Sensors 2011; 11: 10143-10164.
  18. Adán M., Adán A., Vázquez A.S., Torres R. Image and Vision Computing 2008; 26(4): 451-465.
  19. Ferrer M.A., Fabregas J., Faundez M., Alonso J.B., Travieso C.M. In: Proceedings of the 43rd Annual 2009 International Carnahan Conference on Security Technology, Zurich; 2009.
  20. Jiang X., Xu W., Sweeney L., Li Y., Gross R., Yurovsky D. New directions in contact free hand recognition. In: Proceedings of the IEEE International Conference on Image Processing, San Antonio, TX; 2007.
  21. Ferrer M.A., Alonso J.B., Travieso C.M. Comparing infrared and visible illumination for contactless hand based biometric scheme. In: Proceedings of the 42nd Annual IEEE International Carnahan Conference on Security Technology, Prague; 2008.
  22. Kumar A., Zhang D. IEEE Transactions on Image Processing 2006.
  23. Wang W.C., Chen W.S., Shih S.W. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei; 2009.
  24. Rahman A., Anwar F., Azad S. A Simple and Effective Technique for Human Verification with Hand Geometry. In: Proceedings of the International Conference on Computer and Communication Engineering, Kuala Lumpur; 2008.
  25. Kanhangad V., Kumar A., Zhang D. Human Hand Identification with 3D Hand Pose Variations. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, San Francisco, CA; 2010.
  26. Schlage. HandPunch 4000: Biometrics. http://w3.securitytechnologies.com/products/biometrics/time_attendance/HandPunch/Pages/details.aspx?InfoID=18 (accessed 20 May 2012).
