
A Survey on Methods of Image Processing and Recognition for Personal Identification

Written By

Ryszard S. Choras

Reviewed: 01 March 2018 Published: 29 August 2018

DOI: 10.5772/intechopen.76116

From the Edited Volume

Machine Learning and Biometrics

Edited by Jucheng Yang, Dong Sun Park, Sook Yoon, Yarui Chen and Chuanlei Zhang


Abstract

The network of blood vessels possesses several properties that make it a good biometric feature for personal identification: (1) it is difficult to damage or modify; (2) it is difficult to simulate using a fake template; and (3) vein information can attest to the liveness of the person. In recognizing the network of blood vessels, we encounter two main difficulties: the first concerns the enhancement of the image of blood vessels obtained from a camera working in visible and/or infrared light, and the second concerns the process of feature extraction and the methods of classification. In the first part, this chapter presents the basic methods of preprocessing biometric images. In the second part, we discuss the process of feature extraction, with particular emphasis on feature extraction from images depicting the network of blood vessels. This covers texture analysis using the co-occurrence matrix, Gabor filtering, moments, and topological features based on cross points. In the third part, we present methods of processing images of the blood vessel network of the dorsal part of the hand and the wrist. We also discuss the process of reducing the dimensionality of a feature vector using the principal component analysis method.

Keywords

  • biometrics
  • vein patterns
  • feature extraction
  • co-occurrence matrix
  • Gabor’s filters
  • classification

1. Introduction

Biometrics is a powerful field of science for identifying a person using their physiological and behavioral features [1, 2]: the automatic recognition of people based on behavioral or physiological characteristics. During recognition, users are assigned to predefined classes: we extract the essential features of the object and use these features to classify it.

Biometric systems in general perform two tasks: identification and verification (recognition) of people (Figure 1). Verification (recognition) boils down to confirming that a person is a specific individual from a limited set of people whose biometric data are known. Identification consists of determining the feature vector of the person being identified and trying to match this vector against the feature vectors in a database containing records (feature vectors) of many people. As a result, we obtain a list of the most similar individuals in the database. Identification is the much more difficult task [3, 4].

Figure 1.

Identification and verification process.

Images play an important role in the identification process of people. Image processing and recognition are fields that use complex signal and image processing algorithms.

The image in digital form is stored as a two-dimensional array. Formally,

$$D = \{(x, y) : x \in M,\ y \in N\} \tag{1}$$

and

$$F = \{f(x, y) : (x, y) \in D \ \text{and} \ f(x, y) \in \{0, 1, \ldots, G-1\}\} \tag{2}$$

where $M = \{1, 2, \ldots, m\}$, $N = \{1, 2, \ldots, n\}$, and $G-1$ is the maximum gray/color value of each resolution cell.

The components of an image processing system are presented in Figure 2.

Figure 2.

Schematic diagram of the image processing and recognition system for personal identification.

Processing generally comprises the steps of image acquisition, selection of the desired color space, image quality improvement, image segmentation, and feature extraction for recognition. The recognition process involves several stages: feature extraction and dimensionality reduction, which selects the best set of features and rejects irrelevant ones. The resulting feature vector is the basis for classification.

The image is usually obtained using a CCD camera or an NIR camera. It can be a color image (three color components) or a grayscale image. Usually, the color space (24-bit RGB) is converted to grayscale (8 bits).

Below, some steps of the image processing system shown in Figure 2 are explained in more detail [5].

Image processing operations can be divided into (Figure 3):

  • Processing of single points of the image.

  • Operations that use pixel group processing.

Figure 3.

Image processing operations.

The first group includes operations related to histogram modification, while the second group includes operations related to edge detection and various types of image filtering.

Transforming the brightness scale of image elements enables:

  • Extending the brightness range when it does not cover the entire scale available for the image (the effect of increased contrast)

  • Emphasizing certain brightness ranges and suppressing others

  • Modifying the brightness of image elements so that all brightness levels occur with uniform frequency

In practice, transformation T can be a logarithmic transformation, exponential transformation, etc. (Figure 4).

Figure 4.

The original fingerprint image (a), the result of logarithmic transformation (b), and the exponential transformation (c).

If $h(g)$ represents the number of pixels in an image with intensity $g$, i.e., $f(x,y) = g$, then the probability density function is defined as $\mathrm{prob}(f(x,y) = g) = \frac{h(g)}{MN}$ for $g = 0, 1, \ldots, G-1$, and the cumulative distribution function is defined as $c(g) = \sum_{i=0}^{g} \mathrm{prob}(f(x,y) = i)$ for $g = 0, 1, \ldots, G-1$.

The gray levels are modified as [5, 6]

$$\bar{g} = (\max - \min) \cdot c(g) + \min \tag{3}$$

where $\max$ and $\min$ are, respectively, the maximum and minimum values of the image gray level [6] (Figures 5 and 6).
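As an illustration, the following sketch implements histogram equalization according to Eq. (3), assuming an 8-bit grayscale image stored as a NumPy array (the function name `equalize` and the 256-level default are illustrative, not from the text):

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization per Eq. (3): gray levels are remapped
    through the cumulative distribution c(g) and stretched to the
    [min, max] range of the input. Assumes an 8-bit grayscale image."""
    h, _ = np.histogram(img, bins=levels, range=(0, levels))
    prob = h / img.size                # prob(f(x, y) = g) = h(g) / (M N)
    c = np.cumsum(prob)                # cumulative distribution c(g)
    lut = (int(img.max()) - int(img.min())) * c + int(img.min())  # Eq. (3)
    return np.rint(lut)[img].astype(img.dtype)
```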

Figure 5.

Histogram of the original fingerprint image (a) and histogram-enhanced images after logarithmic transformation (b), exponential transformation (c), equalization (d), and CLAHE (contrast limited adaptive histogram equalization) (e).

Figure 6.

The original fingerprint image (a), enhanced image (b), and stretched image (c).

One of the methods of eliminating noise (of the "salt and pepper" type) and other image distortions is median filtering (MF). Median filtering is a nonlinear operation, a fact that complicates the mathematical analysis of its properties. It is implemented by moving a window (mask) along the lines of the digital image and replacing the value of the central window element with the median of the elements inside the window. MF preserves sharp changes in brightness while being highly effective at eliminating impulsive noise [5].

The 2D MF for an image $f(x,y)$ is defined as

$$\hat{f}(x, y) = \mathrm{median}_{A_1}\{f(x, y)\} = \mathrm{median}\{f(x+r, y+s) : (r, s) \in A_1\} \tag{4}$$

where $A_1$ is the MF window.
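A direct (unoptimized) implementation of Eq. (4) might look as follows; the window size `w` and the reflective border handling are assumptions of this sketch:

```python
import numpy as np

def median_filter(img, w=3):
    """2D median filtering per Eq. (4): each pixel is replaced by the
    median of the (w x w) window A1 centred on it. Image borders are
    handled here by reflection (an assumed convention)."""
    pad = w // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            out[x, y] = np.median(padded[x:x + w, y:y + w])
    return out
```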

The effect of median filtering is illustrated in Figure 7.

Figure 7.

Median filtering: original image (a), image with noise (b), image with "salt and pepper" noise (c); (d), (e), and (f) the corresponding images after MF.

Edges carry useful information about object boundaries which can be used for further analysis. Edge detectors can be grouped into two classes: (a) local techniques which use operators on local image neighborhoods and (b) global techniques.

The gradient estimate is computed as

$$\hat{\nabla f} = \left[\left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2\right]^{1/2} \tag{5}$$

and can be expressed as (Table 1)

$$\hat{\nabla f} = \left[(w_1^t f_4)^2 + (w_2^t f_4)^2\right]^{1/2} \tag{6}$$

or

$$\hat{\nabla f} = \left[(w_1^t f_8)^2 + (w_2^t f_8)^2\right]^{1/2} \tag{7}$$
Edge detector operator: partial derivatives along the x and y axes; weight vectors; kernel size.

Differential (2×2 kernel):
$f_x = [f(x, y+1) + f(x+1, y+1)] - [f(x, y) + f(x+1, y)]$
$f_y = [f(x+1, y) + f(x+1, y+1)] - [f(x, y) + f(x, y+1)]$
$w_1 = \begin{bmatrix} -1 & 1 \\ -1 & 1 \end{bmatrix}; \quad w_2 = \begin{bmatrix} -1 & -1 \\ 1 & 1 \end{bmatrix}$

Roberts edge detector (2×2 kernel):
$f_x = f(x, y) - f(x+1, y+1)$
$f_y = f(x+1, y) - f(x, y+1)$
$w_1 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}; \quad w_2 = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$

Max. difference (2×2 kernel, no weight vectors):
$f = \max\{f(x,y), f(x+1,y), f(x,y+1), f(x+1,y+1)\} - \min\{f(x,y), f(x+1,y), f(x,y+1), f(x+1,y+1)\}$

Prewitt edge detector (3×3 kernel):
$f_x = [f(x-1,y-1) + f(x,y-1) + f(x+1,y-1)] - [f(x-1,y+1) + f(x,y+1) + f(x+1,y+1)]$
$f_y = [f(x+1,y-1) + f(x+1,y) + f(x+1,y+1)] - [f(x-1,y-1) + f(x-1,y) + f(x-1,y+1)]$
$w_1 = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}; \quad w_2 = \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}$

Sobel edge detector (3×3 kernel):
$f_x = [f(x-1,y-1) + 2f(x,y-1) + f(x+1,y-1)] - [f(x-1,y+1) + 2f(x,y+1) + f(x+1,y+1)]$
$f_y = [f(x+1,y-1) + 2f(x+1,y) + f(x+1,y+1)] - [f(x-1,y-1) + 2f(x-1,y) + f(x-1,y+1)]$
$w_1 = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}; \quad w_2 = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$

Table 1.

Differential gradient operators.

where $f_4$ and $f_8$ denote the vectors of pixel values from the 2×2 and 3×3 neighborhoods, respectively.

Another popular operator, not shown in Table 1, is the Canny edge detector, implemented in accordance with Figure 8 [7].

Figure 8.

Canny edge detector.

Examples of applications of edge detection operators are shown in Figure 9.

Figure 9.

Original image after edge detection: Roberts (a), Prewitt (b), Sobel (c), and Canny (d).
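For illustration, a minimal sketch of the gradient-magnitude computation of Eqs. (5)-(7) with the Sobel kernels from Table 1, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_magnitude(img):
    """Gradient magnitude per Eqs. (5)-(7), using the Sobel weight
    kernels w1 (horizontal derivative) and w2 (vertical derivative)
    from Table 1."""
    w1 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    w2 = w1.T
    fx = convolve(img.astype(float), w1)
    fy = convolve(img.astype(float), w2)
    return np.sqrt(fx ** 2 + fy ** 2)
```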

Let $f(x,y)$ be the brightness function of the analyzed image; $X$ a finite subset of the plane on which $f(x,y)$ is defined; $S = \{S_1, S_2, \ldots, S_K\}$ a division of $X$ into $K$ non-empty subsets $S_i$, $i = 1, 2, \ldots, K$; and $Reg$ a rule defined on the set $S$ that takes the value true if and only if every pair of points from a given subset $S_i$ satisfies a certain homogeneity criterion.

The segmentation of the image $f(x,y)$ according to the $Reg$ rule is the division $S = \{S_1, S_2, \ldots, S_K\}$ satisfying the following conditions:

$$\text{a. } \bigcup_{i=1}^{K} S_i = X; \quad \text{b. } S_i \cap S_j = \emptyset,\ i \neq j; \quad \text{c. } Reg(S_i) = \text{true}\ \forall i; \quad \text{d. } Reg(S_i \cup S_j) = \text{false},\ i \neq j. \tag{8}$$

The $Reg$ rule specifies a certain homogeneity criterion and depends on the function $f(x,y)$. We consider segmentation as

$$Seg : f(x, y) \rightarrow s(i, j) \tag{9a}$$

$$s(i, j) = \lambda_i \ \text{for} \ (x, y) \in S_i, \ i = 1, 2, \ldots, K \tag{9b}$$

where $f(x,y)$ and $s(i,j)$ are functions that define the input image and the segmented image, respectively, while $\lambda_i$ is the label (name) of area $S_i$.
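As a minimal concrete instance of Eqs. (8) and (9), global thresholding uses the homogeneity rule "all pixels of a region lie on the same side of a threshold t"; the sketch below (the threshold value `t` is an assumed input) splits X into K = 2 labeled regions:

```python
import numpy as np

def threshold_segmentation(img, t):
    """A two-region instance of Eqs. (8)-(9): Reg(S_i) is true when all
    pixels of S_i lie on the same side of the threshold t, so X splits
    into K = 2 disjoint regions whose labels lambda_i are 0 and 1."""
    return (img >= t).astype(np.uint8)   # s(i, j): the label image
```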


2. Feature extraction

Methods for feature extraction on biometric traits can be categorized into geometrical analysis and textural analysis (Table 2).

| Biometric physiological modality | Geometrical features | Texture features |
| --- | --- | --- |
| Fingerprint | Minutiae, singular points, delta points, triangulation methods, crossing number | Analysis of the texture pattern composed of ridges and valleys; spatial distribution of minutiae points |
| Palmprint | Principal lines, line edge map, wrinkles, palmar friction ridges, shape-oriented features | Local line binary pattern, co-occurrence matrix |
| Finger knuckle print | Shape-oriented features: lines, curves, contours | Curvelet, co-occurrence matrix, wavelets |
| Hand geometry | Shape-oriented features, finger length and width | — |
| Face | Spatial relationship among eyes, lips, nose, chin | Gabor filtering, LBP |
| Ear | Force field transformation, 2D and 3D shape descriptors, moment invariants | — |
| Iris | — | Phase-based method, Gabor filtering |
| Periocular | Geometry of eyelids, eye folds, eye corners | LBP, histogram of oriented gradients, SIFT (scale-invariant feature transform) |
| Retina | Minutiae, singular points, crossing number | Gabor filtering |
| Vein (hand vein, finger vein, forearm vein) | Bifurcation points, ending points | Gabor filtering, Riesz transform, wavelet, curvelet, Radon transform |
Table 2.

Biometric feature extraction methods.

A texture image can be seen as an image area containing repetitive pixel intensity patterns arranged in a certain structural manner. The concept of texture has no formal mathematical definition, but there are a number of methods for extracting texture features, which can be roughly divided into model-based (fractal and stochastic methods), statistical, and signal processing-based approaches.

Methods using signal processing algorithms (in the frequency domain and/or space-frequency domain) are widely used in transform-based texture analysis, e.g., Fourier transform, Gabor transform, Riesz transform, Radon transform, and wavelet transform.

One of the popular representations of texture features is the co-occurrence matrix proposed by Haralick et al. [8, 9, 10]. The gray-level co-occurrence matrix (GLCM) $C_d(k,l)$ counts the co-occurrences of pixels with gray values $k$ and $l$ at a given displacement $d = (\Delta x, \Delta y)$; statistical measures are then extracted from this matrix. The element of the co-occurrence matrix is defined as

$$c(k,l) = \sum_{\substack{(x,y) \in D \\ (x+\Delta x,\, y+\Delta y) \in D}} \begin{cases} 1 & \text{if } f(x,y)=k \text{ and } f(x+\Delta x, y+\Delta y)=l \\ 0 & \text{otherwise} \end{cases} \;+\; \sum_{\substack{(x,y) \in D \\ (x+\Delta x,\, y+\Delta y) \in D}} \begin{cases} 1 & \text{if } f(x,y)=l \text{ and } f(x+\Delta x, y+\Delta y)=k \\ 0 & \text{otherwise} \end{cases} \tag{10}$$

These features provide information about the texture and are as follows (11):

  • Element difference moment of order $p$: $\sum_k \sum_l (k-l)^p C_d(k,l)$. When $p = 2$, it is called the contrast.

  • Entropy: $\mathrm{Entropy} = -\sum_k \sum_l C_d(k,l) \log C_d(k,l)$.

  • Energy: $\mathrm{Energy} = \sum_k \sum_l C_d(k,l)^2$.

  • Inverse difference moment: $\mathrm{IDM} = \sum_k \sum_l \frac{1}{1+(k-l)^2}\, C_d(k,l)$.

  • Correlation: $\mathrm{Corr} = \frac{\sum_k \sum_l k\, l\, C_d(k,l) - \mu_x \mu_y}{\sigma_x \sigma_y}$.

The distance $d$ is most often represented in polar coordinates as a discrete distance and an orientation angle. In practice, we use four angles, namely, 0°, 45°, 90°, and 135° (Figure 10).

Figure 10.

The gray-level co-occurrence matrices.
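A straightforward (unoptimized) sketch of Eqs. (10) and (11) follows; it assumes an unsigned-integer grayscale image and non-negative displacement components, and the function names are illustrative:

```python
import numpy as np

def glcm(img, dx, dy, levels=256):
    """Symmetric co-occurrence matrix per Eq. (10) for displacement
    (dx, dy), dx, dy >= 0; counting both (k,l) and (l,k) makes C_d
    symmetric. The matrix is normalized to sum to 1."""
    C = np.zeros((levels, levels), dtype=float)
    rows, cols = img.shape
    for x in range(rows - dx):
        for y in range(cols - dy):
            k, l = img[x, y], img[x + dx, y + dy]
            C[k, l] += 1
            C[l, k] += 1
    return C / C.sum()

def haralick_features(C):
    """Contrast, entropy, energy, IDM, and correlation per Eq. (11);
    the correlation is computed in the equivalent covariance form."""
    k, l = np.indices(C.shape)
    mu_x, mu_y = (k * C).sum(), (l * C).sum()
    sd_x = np.sqrt(((k - mu_x) ** 2 * C).sum())
    sd_y = np.sqrt(((l - mu_y) ** 2 * C).sum())
    return {
        'contrast': ((k - l) ** 2 * C).sum(),
        'entropy': -(C[C > 0] * np.log(C[C > 0])).sum(),
        'energy': (C ** 2).sum(),
        'idm': (C / (1.0 + (k - l) ** 2)).sum(),
        'corr': ((k - mu_x) * (l - mu_y) * C).sum() / (sd_x * sd_y),
    }
```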

Mathematically, the Gabor filter is defined as [11]

$$Gab_{\omega,\theta}(x,y) = \frac{1}{2\pi\sigma_x \sigma_y} \exp\left\{-\left(\frac{(x\cos\theta + y\sin\theta)^2}{2\sigma_x^2} + \frac{(-x\sin\theta + y\cos\theta)^2}{2\sigma_y^2}\right)\right\} \left[\exp\{i(\omega x\cos\theta + \omega y\sin\theta)\} - \exp\left\{-\frac{\omega^2\sigma^2}{2}\right\}\right] \tag{12}$$

Typically, a Gabor filter bank is created by varying the frequency, orientation, and variance parameters (Figure 11).

Figure 11.

2D Gabor’s filters in spatial domain: (a) real and (b) imaginary components.

Gabor features are obtained by convolving the image $f(x,y)$ with the $Gab_{\omega,\theta}(x,y)$ filter:

$$G_{\omega,\theta}(x,y) = f(x,y) * Gab_{\omega,\theta}(x,y) \tag{13}$$

where $*$ is the convolution operator [11, 12, 13].
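The following sketch builds a 3 × 6 bank matching the scales and orientations used in Figures 14-16 and applies Eq. (13); the isotropic variance (sigma_x = sigma_y) and the choice omega = pi/scale are assumptions of this sketch, not prescribed by the text:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(omega, theta, sigma, size=31):
    """Complex 2D Gabor kernel per Eq. (12), simplified to an isotropic
    envelope (sigma_x = sigma_y = sigma); the subtracted term removes
    the DC component."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    carrier = np.exp(1j * omega * xr) - np.exp(-(omega ** 2) * (sigma ** 2) / 2)
    return envelope * carrier

def gabor_responses(img, scales=(2, 4, 8),
                    thetas=np.deg2rad([0, 30, 60, 90, 120, 150])):
    """Convolve the image with the 3 x 6 bank (Eq. (13)) and return the
    real parts of the responses, as visualized in Figures 14-16."""
    img = img.astype(float)
    return [convolve(img, gabor_kernel(np.pi / s, t, sigma=s).real)
            for s in scales for t in thetas]
```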

Moment-based features can be successfully used as elements of a feature vector in biometrics using blood vessel network [13, 14].

The geometric moments of order $p+q$ of the image $f(x,y)$ are determined by

$$m_{pq} = \sum_x \sum_y f(x,y)\, h_{pq}(x,y) \tag{14}$$

where $h_{pq}(x,y)$ is a polynomial in which $x$ appears with degree $p$ and $y$ with degree $q$. If $h_{pq}(x,y) = x^p y^q$, then we obtain the geometrical moments of the image $f(x,y)$.

The infinite set of moments $\{m_{pq}\}$, $p, q = 0, 1, \ldots$, uniquely specifies $f(x,y)$, and vice versa.

Central moments are defined by

$$\mu_{pq} = \sum_x \sum_y (x - \bar{x})^p (y - \bar{y})^q f(x,y) \tag{15}$$

where $\bar{x} = \frac{m_{10}}{m_{00}}$ and $\bar{y} = \frac{m_{01}}{m_{00}}$.

Normalized central moments are obtained as

$$\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\gamma}} \tag{16}$$

where $\gamma = \frac{p+q}{2} + 1$ for $p + q = 2, 3, \ldots$.

We usually use the first seven combinations of central moments up to order 3, known in the literature as Hu moments [15].

The basic set of geometrical moments is non-orthogonal, which makes feature selection difficult.
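The moment definitions of Eqs. (14)-(16) translate directly into code; this sketch treats pixel coordinates as the array indices of a NumPy image:

```python
import numpy as np

def geometric_moment(img, p, q):
    """m_pq per Eq. (14) with h_pq(x, y) = x^p * y^q."""
    x, y = np.indices(img.shape).astype(float)
    return (x ** p * y ** q * img).sum()

def central_moment(img, p, q):
    """mu_pq per Eq. (15), centred on the image centroid."""
    m00 = geometric_moment(img, 0, 0)
    xb = geometric_moment(img, 1, 0) / m00
    yb = geometric_moment(img, 0, 1) / m00
    x, y = np.indices(img.shape).astype(float)
    return ((x - xb) ** p * (y - yb) ** q * img).sum()

def normalized_moment(img, p, q):
    """eta_pq per Eq. (16), gamma = (p+q)/2 + 1 for p+q >= 2."""
    gamma = 0.5 * (p + q) + 1.0
    return central_moment(img, p, q) / central_moment(img, 0, 0) ** gamma
```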

Zernike moments are orthogonal and invariant to rotation, translation, and scale change. The complex set of Zernike moments is determined by [16]

$$A_{nm} = \frac{n+1}{\pi} \sum_{x=1}^{M} \sum_{y=1}^{N} Z_{nm}^{*}(\rho, \theta)\, f(x,y) \tag{17}$$

where $Z_{nm}(\rho, \theta) = R_{nm}(\rho)\, e^{jm\theta}$ and $R_{nm}(\rho) = \sum_{s=0}^{(n-|m|)/2} (-1)^s \frac{(n-s)!}{s!\left(\frac{n+|m|}{2}-s\right)!\left(\frac{n-|m|}{2}-s\right)!}\, \rho^{n-2s}$.

When calculating Zernike moments, the size of the image determines the disk size, and the disk center is taken as the origin. Considering moments up to order 7, we obtain 20 Zernike moments.
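A compact sketch of Eq. (17) follows; the mapping of the pixel grid onto the unit disk and the use of the conjugate kernel are common conventions assumed here:

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """A_nm per Eq. (17); assumes n - |m| is even and |m| <= n.
    Pixels are mapped onto the unit disk centred on the image centre,
    R_nm is the radial polynomial, Z_nm = R_nm * exp(j*m*theta)."""
    rows, cols = img.shape
    y, x = np.indices((rows, cols)).astype(float)
    xn = (2 * x - cols + 1) / cols           # map columns to [-1, 1]
    yn = (2 * y - rows + 1) / rows           # map rows to [-1, 1]
    rho = np.hypot(xn, yn)
    theta = np.arctan2(yn, xn)
    mask = rho <= 1.0                        # keep only the unit disk
    R = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** s * factorial(n - s) /
             (factorial(s) * factorial((n + abs(m)) // 2 - s)
              * factorial((n - abs(m)) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    Z_conj = R * np.exp(-1j * m * theta)     # conjugate kernel Z*_nm
    return (n + 1) / np.pi * (img * Z_conj * mask).sum()
```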

In the case of biometric data using images of retinal blood vessels and conjunctival blood vessels, one of the stages of creating a vector of features is to determine geometrical features based on the topological properties of the image [5, 17].

The number of connected points around the point $f_0$ is determined by

$$N_c^4 = \sum_{k \in S} \left( f_k - f_k f_{k+1} f_{k+2} \right) \tag{18}$$

where the superscript 4 denotes the four-element neighborhood of the image point, $f_k$ takes the value 0 or 1, and $S = \{1, 3, 5, 7\}$ is the set of neighbor indices [17]. In the case where $k \geq 9$, its value is defined as $k - 8$.

If $N_c^4 = 3$, $f_0$ is a bifurcation point, and if $N_c^4 = 4$, $f_0$ is a cross point.
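A minimal sketch of Eq. (18) for a single 3 × 3 binary window (taken from a thinned vein skeleton) might look as follows; the neighbor ordering around the central pixel, starting to its right, is an assumed convention:

```python
def connectivity_number(window):
    """N_c^4 per Eq. (18) for a binary (0/1) 3x3 NumPy window.
    f1..f8 are the neighbours of the central pixel window[1, 1],
    enumerated around it starting to the right; indices wrap so
    that f9 = f1 and f10 = f2 (k >= 9 maps to k - 8)."""
    f = [window[1, 2], window[0, 2], window[0, 1], window[0, 0],
         window[1, 0], window[2, 0], window[2, 1], window[2, 2]]
    f = f + f[:2]                            # wrap-around: f9, f10
    # sum over S = {1, 3, 5, 7} (0-based indices 0, 2, 4, 6)
    return sum(f[k] - f[k] * f[k + 1] * f[k + 2] for k in (0, 2, 4, 6))
```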

The feature vector defining the topology of blood vessels is made up of the number of bifurcation points, number of crossing points, coordinates of bifurcation points, and coordinates of crossing points.

By using the relationship between the characteristic points of the user blood vessel image and blood vessel image of template, we can calculate the matching score results.


3. Vein biometrics: feature extraction from hand dorsal and wrist images

One of the most promising and intensively developed biometric methods is the method using the network of blood vessels. The pattern of blood vessels is unique for every human being, even in the case of twins, and it is also stable over time [18]. Biometrics based on the network of blood vessels has significant advantages over other biometric methods, namely [1, 4, 18]:

  • It allows identification of living people only: the NIR camera records the vein image thanks to the absorption of near-infrared light by deoxygenated hemoglobin, which is present only in a living organism [19, 26].

  • The network of blood vessels is inside the body and is practically impossible to reproduce outside of it, which results in a very high level of safety.

Usually, we use the network of blood vessels associated with the following parts of the body:

  • Eye. This applies first of all to the retinal blood vessels, but also to the blood vessels of the conjunctiva.

  • Hand. In this case, we are talking about the network of blood vessels of the finger, palm, hand dorsum, wrist, and forearm [20, 24, 25].

Figure 12 shows the networks of blood vessels used in biometrics.

Figure 12.

The networks of blood vessels: Retina (a), conjunctiva (b), finger (c), palm (d), hand dorsal (e), and wrist (f).

We will consider images from Figure 12(e) and (f), which can be obtained in one process of acquiring biometric patterns. In the literature on the subject, the analysis of this type of images for biometrics is referred to as dorsal vein biometrics and wrist vein biometrics [27, 28].

3.1. Vein biometrics

In the process of identifying people on the basis of dorsal vein images, we use a feature vector constructed from two parts: features calculated on the basis of the co-occurrence matrix and features calculated using the Gabor filtering operation [21, 22, 23].

We consider the dorsal vein images shown in Figure 13.

Figure 13.

Dorsal vein images.

We analyze the co-occurrence matrix for two distances, $d_1$ and $d_2$, at the angles 0°, 45°, 90°, and 135°. The five features calculated for each distance-angle pair are shown in Table 3.

The first five columns refer to Figure 13a, the last five to Figure 13b.

| | IDM | Contrast | Energy | Entropy | Corr. | IDM | Contrast | Energy | Entropy | Corr. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| d1, 0° | 0.210 | 50.890 | 3.218E-4 | 8.368 | 2.271E-4 | 0.255 | 141.318 | 0.007 | 7.662 | 2.926E-4 |
| d1, 45° | 0.173 | 67.564 | 2.622E-4 | 8.507 | 2.289E-4 | 0.218 | 198.051 | 0.006 | 7.794 | 2.932E-4 |
| d1, 90° | 0.244 | 31.936 | 3.479E-4 | 8.218 | 2.268E-4 | 0.306 | 86.913 | 0.007 | 7.464 | 2.934E-4 |
| d1, 135° | 0.146 | 80.621 | 2.375E-4 | 8.608 | 2.286E-4 | 0.214 | 199.180 | 0.006 | 7.803 | 2.931E-4 |
| d2, 0° | 0.111 | 179.516 | 1.944E-4 | 8.804 | 2.265E-4 | 0.184 | 378.481 | 0.005 | 7.991 | 2.928E-4 |
| d2, 45° | 0.102 | 206.593 | 1.757E-4 | 8.861 | 2.302E-4 | 0.161 | 456.028 | 0.004 | 8.065 | 3.010E-4 |
| d2, 90° | 0.155 | 88.528 | 2.386E-4 | 8.581 | 2.273E-4 | 0.226 | 198.554 | 0.006 | 7.772 | 2.963E-4 |
| d2, 135° | 0.078 | 288.156 | 1.626E-4 | 8.932 | 2.286E-4 | 0.161 | 459.783 | 0.004 | 8.075 | 3.002E-4 |

Table 3.

Features calculated on the basis of co-occurrence matrix.

As a result, on the basis of the co-occurrence matrix, we obtain 40 features (5 features × 4 angles × 2 distances).

The second part of the feature vector is obtained by implementing an input image convolution operation with the bank of Gabor filters.

For each image, a filtering operation is carried out in accordance with Eq. (13) (Figures 14-16).

Figure 14.

Real part of Gabor’s filter responses of a hand dorsal image with Figure 13a. Rows correspond to scale (2, 4, 8), and columns to orientation (0°,30°,60°,90°,120°,150°).

Figure 15.

Real part of Gabor’s filter responses of a hand dorsal image with Figure 13b. Rows correspond to scale (2, 4, 8), and columns to orientation (0°,30°,60°,90°,120°,150°).

Figure 16.

Real part of Gabor’s filter responses of a wrist image with Figure 12f. Rows correspond to scale (2, 4, 8), and columns to orientation (0°,30°,60°,90°,120°,150°).

In the case of biometric identification based on texture features obtained using a Gabor filter bank, we must address the very high dimensionality of the Gabor feature vector.

3.2. Reduction of dimension of the feature vector by the PCA method

For a 128 × 128 image and a 3 × 6 Gabor filter bank, the feature vector has a dimension of 128 × 128 × 3 × 6 = 294,912. The features are strongly correlated with each other; after down-sampling (by a factor of 8), we obtain a vector of 36,864 elements, or 2,304 elements per image.

In order to reduce information redundancy, we use the principal component analysis (PCA) method, in some studies also called the discrete Karhunen-Loeve transform [29, 30].

The principal component analysis (PCA) method reduces the amount of data analyzed by subjecting them to linear transformation to a new coordinate system, resulting in new independent variables called the principal components.

The principal features of the PCA method are represented by eigenvectors. The eigenvectors of the covariance matrix are calculated based on the image training set and represent the principal components of the training image set.

We used two databases of blood vessel images created by the author: a database of dorsal vein images and a database of wrist vein images, each containing 42 images collected during sessions with students and 58 images found in web resources. Each database contained 100 images.

The training set consisted of 50 images (50% of the images from the student base and 50% of the images from web sources).

The PCA algorithm proceeds as follows:

  • Learning/training phase

The image $G_{\omega,\theta}(x,y)$ has M × N pixels and is converted into a vector of size 1 × MN. Images from the training set are arranged in a matrix $T$ (Figure 17):

$$T = [G_1, G_2, \ldots, G_q] \tag{19}$$

where $q$ is the number of images in the training set.

Figure 17.

Image processing using PCA.

We calculate the mean image of all the images from the training database:

$$\Psi = \frac{1}{q} \sum_{i=1}^{q} G_i \tag{20}$$

Then, we calculate the difference between each image from the training database and the mean image:

$$\Phi_i = G_i - \Psi \tag{21}$$

The covariance matrix is defined as

$$C = \frac{1}{q} \sum_{i=1}^{q} \Phi_i \Phi_i^t = A A^t \tag{22}$$

where

$$A = [\Phi_1, \Phi_2, \ldots, \Phi_q] \tag{23}$$

and matrix A has a dimension of MN × q.

The covariance matrix has a dimension of MN × MN.

Then, we calculate the eigenvalues and eigenvectors of the covariance matrix:

$$C v_i = \lambda_i v_i, \quad i = 1, \ldots, q \tag{24}$$

Then, we order the eigenvectors according to their decreasing eigenvalues and choose the k principal components corresponding to the k largest eigenvalues.
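The training phase of Eqs. (19)-(24) can be sketched as follows; to avoid forming the huge MN × MN covariance matrix, this sketch computes eigenvectors from the small q × q matrix A^t A (a standard implementation trick, not part of the text):

```python
import numpy as np

def pca_train(T, k):
    """Training phase per Eqs. (19)-(24). T is a q x (M*N) matrix with
    one vectorized training image per row. Eigenvectors are obtained
    from the small q x q matrix A^t A instead of the MN x MN covariance
    matrix C = A A^t -- an assumed efficiency shortcut."""
    q = T.shape[0]
    psi = T.mean(axis=0)                    # Eq. (20): mean image
    A = (T - psi).T                         # Eqs. (21), (23): MN x q
    lam, u = np.linalg.eigh(A.T @ A / q)    # small surrogate of Eq. (24)
    order = np.argsort(lam)[::-1]           # decreasing eigenvalues
    lam, u = lam[order], u[:, order]
    v = A @ u[:, :k]                        # lift back to image space
    v /= np.linalg.norm(v, axis=0)          # k unit-norm eigenvectors
    return psi, v, lam
```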

  • Test/recognition phase

The new image $\tilde{G}$ is projected onto the eigenvectors. The $k$ principal components of the $\tilde{G}$ image are defined as

$$w = v^t (\tilde{G} - \Psi) \tag{25}$$

where $v = [v_1, v_2, \ldots, v_k]$.

The approximated image is calculated as

$$\bar{G} = v w + \Psi \tag{26}$$

We choose the value of $k$ according to the dependence

$$\mathrm{inf}(k) = \frac{\sum_{i=1}^{k} \lambda_i}{\sum_{i=1}^{q} \lambda_i} \tag{27}$$

where k is the predefined number of eigenvectors and q the total number of eigenvectors.

A high value of $\mathrm{inf}(k)$ means that a large amount of the input information is retained, e.g., $\mathrm{inf}(k) \geq 0.99$ means that we retain 99% of the information.
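The test-phase projection, reconstruction, and variance criterion of Eqs. (25)-(27) then reduce to a few lines (assuming the eigenvalues are sorted in decreasing order, as in the training sketch above):

```python
import numpy as np

def pca_project(G, psi, v):
    """Eq. (25): principal components w of a new vectorized image G."""
    return v.T @ (G - psi)

def pca_reconstruct(w, psi, v):
    """Eq. (26): approximated image from its k principal components."""
    return v @ w + psi

def inf_k(lam, k):
    """Eq. (27): fraction of total variance kept by the k leading
    eigenvalues (lam must be sorted in decreasing order)."""
    return lam[:k].sum() / lam.sum()
```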

The variance of the first eigenvector is about 60% of the variance of the data set, the variance of the first 30 eigenvectors is about 85% of the variance of the data set, and 45 or more eigenvectors account for over 90% of the variance of the data set (Figure 18).

Figure 18.

Variance as a function of the number of eigenvectors.

By increasing the number of eigenvectors, we increase the recognition efficiency.

We define the feature vector as follows:

$$FeatVect = [FV_1, FV_2] \tag{28}$$

where $FV_1$ is the Gabor feature vector and $FV_2$ is the co-occurrence feature vector.

The quality of biometric systems is measured by two parameters: the false acceptance rate (FAR) and the false rejection rate (FRR). FAR describes the situation when a biometric input is incorrectly accepted, while FRR describes the rejection of a user who should have been correctly verified.
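For a fixed decision threshold on the matching score, FAR and FRR can be estimated from sets of impostor and genuine comparison scores; the sketch below assumes higher scores mean better matches, and the score arrays are hypothetical inputs:

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """Estimate FAR and FRR at a given matching-score threshold,
    assuming higher scores indicate better matches. genuine_scores:
    same-person comparisons; impostor_scores: different-person ones."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    far = float((impostor >= threshold).mean())  # impostors accepted
    frr = float((genuine < threshold).mean())    # genuines rejected
    return far, frr
```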

The size of the $FV_1$ vector was set to 60 eigenvectors, and the FeatVect size is 100. For these parameters, the FRR is 1.16% and the FAR is 0.26%.


4. Conclusion

Recognition of people in biometric systems is based on the physiological or behavioral features that a person possesses.

In this chapter, we presented the image preprocessing operations used in static biometric systems (physiological modality). In particular, we discussed operations related to the transformation of the brightness scale of the image, modification of the brightness histogram, median filtering, edge detection, and image segmentation. In the course of these operations, we obtain an image enabling the extraction and measurement of features that serve as the basis for recognition.

Next, we discussed the feature extraction process, focusing on certain geometrical and texture features. We presented a representation of texture features based on parameters obtained from the co-occurrence matrix and on images after Gabor filtering with various scaling and orientation parameters. In terms of geometric features, we discussed moment-based features and geometrical features based on the topological properties of the image.

We also discussed the feature extraction process for images of the blood vessels of the hand dorsum and wrist. We presented features calculated on the basis of the co-occurrence matrix and texture characteristics obtained using a Gabor filter bank. The process of reducing the dimensionality of a feature vector using the PCA method was also considered.

The main contributions of this chapter are the following:

  • A set of information on image processing methods used in biometric systems

  • Presentation of methods for obtaining feature vectors that are the basis of the process of recognizing people

  • Explaining the problem of reducing the dimensionality of the feature vectors

  • Showing parameters that are the basis for recognizing people based on images of the blood vessels of the hand dorsal and wrist

The chapter can be the basis for further studies and works on the image processing in biometric systems, especially based on images of the blood vessel network.

References

  1. Li SZ, Jain AK, editors. Encyclopedia of Biometrics. New York: Springer Science+Business Media; 2015
  2. Ross A, Nandakumar K, Jain AK. Handbook of Multibiometrics. New York: Springer; 2006
  3. Maltoni D, Maio D, Jain AK, Prabhakar S. Handbook of Fingerprint Recognition. New York: Springer; 2003
  4. Jain AK, Flynn PJ, Ross A, editors. Handbook of Biometrics. New York: Springer; 2007
  5. Gonzalez RC, Woods RE. Digital Image Processing. Upper Saddle River: Pearson Prentice Hall; 2008
  6. Zuiderveld K. Contrast limited adaptive histogram equalization. In: Graphics Gems IV. Cambridge: Academic Press; 1994
  7. Canny J. A computational approach to edge detection. IEEE Transactions on PAMI. 1986;8(6):679-698
  8. Haralick R, Shanmugam K, Dinstein I. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics. 1973;SMC-3(6):610-621
  9. Daugman JG. High confidence visual recognition of persons by a test of statistical independence. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1993;15(11):1148-1161
  10. Haralick RM. Statistical and structural approaches to texture. Proceedings of the IEEE. 1979;67(5):786-804
  11. Gabor D. Theory of communication. Journal of the Institution of Electrical Engineers. 1946;93:429-459
  12. Zhang H et al. Finger vein recognition based on Gabor filter. In: Intelligence Science and Big Data Engineering; 2013. pp. 827-834
  13. Choras RS. Iris-based person identification using Gabor wavelets and moments. In: Proceedings of the 2009 International Conference on Biometrics and Kansei Engineering (ICBAKE); 2009. pp. 55-59
  14. Xueyan L, Shuxu G, Fengli G, Ye L. Vein pattern recognitions by moment invariants. In: Proceedings of the First International Conference on Bioinformatics and Biomedical Engineering; 2007. pp. 612-615
  15. Hu M. Pattern recognition by moment invariants. Proceedings of the IRE. 1961;49:1428
  16. Khotanzad A, Hong YH. Invariant image recognition by Zernike moments. IEEE Transactions on PAMI. 1990;12(5):489-498
  17. Zhang T, Suen C. A fast parallel algorithm for thinning digital patterns. Communications of the ACM. 1984;27(3):236-239
  18. Kirbas C, Quek F. Vessel extraction techniques and algorithms: A survey. In: Proceedings of the 3rd IEEE Symposium on Bioinformatics and Bioengineering; 2003
  19. Kono M et al. Near-infrared finger vein patterns for personal identification. Applied Optics. 2002;41(35):7429-7436
  20. Ding Y, Zhuang D, Wang K. A study of hand vein recognition method. In: Proceedings of the IEEE International Conference on Mechatronics & Automation; 2005. pp. 2106-2110
  21. Choras RS. Personal identification using forearm vein patterns. In: 2017 International Conference and Workshop on Bioinspired Intelligence (IWOBI); 2017. pp. 1-5
  22. Choras RS. Biometric personal authentication using images of forearm vein patterns. In: 2017 International Conference on Signals and Systems (ICSigSys); 2017. pp. 40-43
  23. Daugman JG. Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression. IEEE Transactions on Acoustics, Speech, and Signal Processing. 1988;36(7):1169-1179
  24. Kang BJ et al. Multimodal biometric method based on vein and geometry of a single finger. IET Computer Vision. 2010;4(3):209-217
  25. Kumar A, Prathyusha KV. Personal authentication using hand vein triangulation and knuckle shape. IEEE Transactions on Image Processing. 2009;18(9):2127-2136
  26. Lee EC et al. New finger biometric method using near infrared imaging. Sensors. 2011;11:2319-2333
  27. Wang Y, Li K, Cui J. Hand-dorsa vein recognition based on partition local binary pattern. In: IEEE 10th International Conference on Signal Processing (ICSP); 2010. pp. 1671-1674
  28. Tanaka T, Kubo N. Biometric authentication by hand vein patterns. In: Proceedings of the SICE Annual Conference; 2004. pp. 249-253
  29. Liu C, Wechsler H. Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition. IEEE Transactions on Image Processing. 2002;11(4):467-476
  30. Tao J, Jiang W, Gao Z, Chen S, Wang C. Palmprint recognition based on 2-dimension PCA. In: First International Conference on Innovative Computing, Information and Control; 2006. pp. 326-330
