The network of blood vessels possesses several properties that make it a good biometric feature for personal identification: (1) it is difficult to damage or modify; (2) it is difficult to simulate with a fake template; and (3) vein information can represent the liveness of the person. In the process of recognizing the network of blood vessels, we encounter two main difficulties: the first concerns the enhancement of the image of blood vessels obtained from a camera working in visible and/or infrared light, and the second concerns the process of feature extraction and the methods of classification. In the first part, this chapter presents the basic methods of preprocessing biometric images. In the second part, we discuss the process of feature extraction, with particular emphasis on feature extraction from images depicting the network of blood vessels. This covers texture analysis using the co-occurrence matrix, Gabor filtering, moments, and topological features based on cross points. In the third part, we present methods of processing images of the blood vessel network of the dorsal part of the hand and the wrist. We also discuss the process of reducing the dimensionality of a feature vector using the principal component analysis method.
- vein patterns
- feature extraction
- co-occurrence matrix
- Gabor filters
Biometrics is a powerful field of science for identifying a person using their physiological and behavioral features [1, 2]. Biometrics is the automatic recognition of people based on behavioral or physiological characteristics. During recognition, users are assigned to predefined classes: we extract the essential features of the object and use these features to classify it.
Biometric systems in general perform two tasks: identification and verification (recognition) of people (Figure 1). The process of verification (recognition) boils down to distinguishing a specific person from a limited number of people whose biometric data are known. The identification consists of determining the vector of features corresponding to the person being subjected to the identification process and trying to find a match between this vector and the feature vectors in the database containing records (feature vectors) concerning people. As a result, we get a list of the most similar individuals in the database. Identification is much more difficult [3, 4].
Images play an important role in the identification process of people. Image processing and recognition are fields that use complex signal and image processing algorithms.
The image in digital form is stored as a two-dimensional array. Formally, an image is a function

f(x, y), 0 ≤ x ≤ M − 1, 0 ≤ y ≤ N − 1,

where f(x, y) ∈ {0, 1, …, L − 1} is the gray level of the pixel at (x, y), M × N is the image size, and L is the number of gray levels.
The components of an image processing system are presented in Figure 2.
The processing generally comprises the steps of acquiring an image, selecting the desired color space, improving image quality, segmenting the image, and extracting features for recognition. The recognition process involves several stages: feature extraction and dimensionality reduction, which selects the best subset of features and rejects irrelevant ones. The resulting feature vector is the basis for classification.
The image is usually obtained using a CCD camera or an NIR camera. It can be a color image (three color components) or a grayscale image. Usually, the 24-bit RGB color space is converted to an 8-bit grayscale space.
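The RGB-to-grayscale conversion mentioned above can be sketched as follows (pure Python; the ITU-R BT.601 luminance weights used here are a common choice, though other weightings exist):

```python
def rgb_to_gray(pixel):
    """Convert one 24-bit RGB pixel to an 8-bit gray level
    using the ITU-R BT.601 luminance weights."""
    r, g, b = pixel
    return int(round(0.299 * r + 0.587 * g + 0.114 * b))
```

Applying this function to every pixel of a 24-bit color image yields the 8-bit grayscale image used in the subsequent preprocessing steps.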
Image processing operations can be divided into (Figure 3):

- Processing of single points of the image (point operations)
- Operations that use pixel group processing (neighborhood operations)
The first group includes operations related to histogram modification, while the second group includes operations related to edge detection and various types of image filtering.
Transforming the brightness scale of image elements enables:

- Stretching the brightness range when it does not cover the entire scale available for the image (increasing contrast)
- Emphasizing certain brightness ranges and suppressing others
- Modifying the brightness of image elements so that all brightness levels occur with approximately equal frequency
In practice, the transformation s = T(r) maps an input brightness level r into an output level s. If n_k represents the number of pixels in an image with intensity r_k, the normalized histogram is p(r_k) = n_k / n, where n is the total number of pixels, and histogram equalization maps each level through the cumulative distribution:

s_k = (L − 1) Σ_{j=0}^{k} p(r_j),

where L is the number of brightness levels.
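A minimal sketch of histogram equalization on a grayscale image stored as a list of rows (pure Python; rounding of the cumulative distribution is one common convention):

```python
def equalize_histogram(image, levels=256):
    """Histogram equalization: map each gray level through the
    normalized cumulative distribution, s_k = (L - 1) * CDF(r_k)."""
    flat = [p for row in image for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running / n)
    lut = [round((levels - 1) * c) for c in cdf]  # lookup table per level
    return [[lut[p] for p in row] for row in image]
```

The lookup table is built once per image, so the transformation itself is a single pass over the pixels.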
One of the methods of eliminating noise (of the "salt-and-pepper" type) and other image distortions is median filtering (MF). Median filtering is a nonlinear operation, a fact that complicates the mathematical analysis of its properties. It is implemented by moving a window (mask) along the lines of the digital image and replacing the value of the middle window element with the median of the elements inside the window.
y(m, n) = median{ x(m + i, n + j) : (i, j) ∈ W },

where W is the filter window centered at (m, n).
MF preserves sharp changes in brightness while efficiently eliminating impulsive noise (Figure 7).
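The sliding-window scheme described above can be sketched as follows (pure Python; the 3 × 3 window and the choice to leave border pixels unchanged are illustrative assumptions):

```python
def median_filter(image, size=3):
    """Median filtering: replace each interior pixel by the median of
    its size x size window; border pixels are kept unchanged here."""
    h, w, r = len(image), len(image[0]), size // 2
    out = [row[:] for row in image]
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = sorted(image[y + dy][x + dx]
                            for dy in range(-r, r + 1)
                            for dx in range(-r, r + 1))
            out[y][x] = window[len(window) // 2]
    return out
```

A single "salt" pixel surrounded by uniform neighbors is removed, while a genuine step edge survives, since the median of the window ignores isolated outliers.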
Edges carry useful information about object boundaries, which can be used for further analysis. Edge detectors can be grouped into two classes: (a) local techniques, which apply operators to local image neighborhoods, and (b) global techniques.
The gradient is estimated as

∇f = [G_x, G_y] = [∂f/∂x, ∂f/∂y],

with magnitude |∇f| = sqrt(G_x² + G_y²) and direction θ = arctan(G_y / G_x), and can be approximated by discrete convolution kernels (Table 1).
|Edge detector|Kernel G_x|Kernel G_y|
|---|---|---|
|Roberts|[1 0; 0 −1]|[0 1; −1 0]|
|Prewitt|[−1 0 1; −1 0 1; −1 0 1]|[−1 −1 −1; 0 0 0; 1 1 1]|
|Sobel|[−1 0 1; −2 0 2; −1 0 1]|[−1 −2 −1; 0 0 0; 1 2 1]|

where G_x and G_y are the responses obtained by convolving the kernels with the neighborhood pixels along the x and y directions.
Examples of applications of edge detection operators are shown in Figure 9.
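As a sketch of the gradient estimation with the Sobel kernels from Table 1 (pure Python; leaving the one-pixel border at zero is an illustrative convention):

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(image):
    """Approximate the gradient magnitude |G| = sqrt(Gx^2 + Gy^2)
    with the Sobel kernels; border pixels are left at zero."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    p = image[y + dy][x + dx]
                    gx += SOBEL_X[dy + 1][dx + 1] * p
                    gy += SOBEL_Y[dy + 1][dx + 1] * p
            out[y][x] = math.sqrt(gx * gx + gy * gy)
    return out
```

A vertical step edge produces a strong G_x response and zero G_y, so the magnitude peaks along the edge, as expected from the kernel definitions.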
Let f(x, y) be a function of the brightness of the analyzed image. The segmentation of the image according to the brightness criterion can be written as

g(x, y) = i for T_{i−1} ≤ f(x, y) < T_i,

where f and g are functions that define the input image and the segmented image, respectively, T_i are the brightness thresholds, and i is the label (name) of the area.
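For a single threshold, the labeling rule above reduces to binary segmentation, sketched here in pure Python:

```python
def threshold_segment(image, t):
    """Binary segmentation by thresholding: label 1 where
    f(x, y) > t, label 0 elsewhere."""
    return [[1 if p > t else 0 for p in row] for row in image]
```

With several thresholds, the same idea assigns one label per brightness interval, producing the multi-region segmented image g(x, y).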
2. Feature extraction
Methods for feature extraction on biometric traits can be categorized into geometrical analysis and textural analysis (Table 2).
|Biometric physiological modality|Geometrical features|Texture features|
|---|---|---|
|Fingerprint|Minutiae, singular points; spatial distribution of minutiae points|Analysis of the texture pattern composed of ridges and valleys|
|Palmprint|Principal lines, line edge map; palmar friction ridges|Local line binary pattern|
|Finger knuckle print|Shape-oriented features: lines, curves| |
|Hand geometry|Shape-oriented features; finger length and width| |
|Face|Spatial relationship among eyes, lips, nose, chin|Gabor filtering|
|Ear|2D and 3D shape descriptors|Force field transformation|
|Periocular|Geometry of eyelids, eye folds, eye corners|LBP; histogram of oriented gradients; SIFT (scale-invariant feature transform)|
|Retina|Minutiae, singular points| |
The texture image can be seen as an image area containing repetitive pixel intensity patterns arranged in a certain structural manner. The concept of texture has no formal mathematical definition, but there are a number of methods for extracting texture features, which can be roughly divided into model-based (fractal and stochastic), statistical, and signal-processing-based methods.
Methods using signal processing algorithms (in the frequency domain and/or space-frequency domain) are widely used in transform-based texture analysis, e.g., Fourier transform, Gabor transform, Riesz transform, Radon transform, and wavelet transform.
One of the popular representations of texture features is the co-occurrence matrix proposed by Haralick et al. [8, 9, 10]. The gray-level co-occurrence matrix (GLCM) counts the co-occurrences of pixels with gray values i and j at a given offset:

P(i, j) = the number of pixel pairs (x, y) and (x + dx, y + dy) such that g(x, y) = i and g(x + dx, y + dy) = j,

where the offset (dx, dy) is determined by a distance d and an orientation θ. The matrix is usually normalized so that its entries sum to 1.
From the normalized matrix, a set of statistical features is computed (typically energy, contrast, correlation, homogeneity, and entropy). These features provide information about the texture.
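A sketch of the GLCM and a few of the classic Haralick descriptors (pure Python; the particular subset of features computed here, energy, contrast, and entropy, is an illustrative choice):

```python
import math

def glcm(image, dx, dy, levels):
    """Gray-level co-occurrence matrix: P[i][j] counts pixel pairs
    g(x, y) = i and g(x+dx, y+dy) = j, then is normalized to sum to 1."""
    h, w = len(image), len(image[0])
    p = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                p[image[y][x]][image[y2][x2]] += 1
                total += 1
    return [[v / total for v in row] for row in p]

def haralick_features(p):
    """A few classic Haralick descriptors of a normalized GLCM."""
    n = len(p)
    energy = sum(v * v for row in p for v in row)
    contrast = sum((i - j) ** 2 * p[i][j]
                   for i in range(n) for j in range(n))
    entropy = -sum(v * math.log(v) for row in p for v in row if v > 0)
    return {"energy": energy, "contrast": contrast, "entropy": entropy}
```

A perfectly uniform texture concentrates all mass in one GLCM cell, giving maximal energy and zero contrast and entropy, which matches the intuition behind these descriptors.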
Mathematically, a Gabor filter is defined as a Gaussian envelope modulated by a complex sinusoid:

g(x, y) = exp(−(x′² + γ²y′²) / (2σ²)) · exp(j 2π f x′),

where x′ = x cos θ + y sin θ, y′ = −x sin θ + y cos θ, f is the spatial frequency, θ the orientation, σ the variance of the Gaussian envelope, and γ the spatial aspect ratio.

Typically, a Gabor filter bank is created by varying the frequency parameter, the orientation parameter, and the variance parameter (Figure 11).

Gabor features are obtained by convolving the input image with each filter of the bank.
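A sketch of building such a bank (pure Python; only the real part of the filter is generated, and a circular Gaussian envelope, γ = 1, is assumed for simplicity):

```python
import math

def gabor_kernel(size, f, theta, sigma):
    """Real part of a 2D Gabor filter: a Gaussian envelope modulated
    by a cosine of spatial frequency f along orientation theta."""
    r = size // 2
    kernel = []
    for y in range(-r, r + 1):
        row = []
        for x in range(-r, r + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            env = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            row.append(env * math.cos(2 * math.pi * f * xr))
        kernel.append(row)
    return kernel

def gabor_bank(size, freqs, orients, sigma):
    """Bank built by varying the frequency and orientation parameters."""
    return [gabor_kernel(size, f, t, sigma) for f in freqs for t in orients]
```

With 3 frequencies and 6 orientations, the bank contains 18 kernels, matching the 3 × 6 configuration discussed later for the vein images.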
The geometric moments of order (p + q) are determined by

m_pq = Σ_x Σ_y x^p y^q f(x, y), p, q = 0, 1, 2, …,

where x^p y^q is the basis polynomial. The image f(x, y) uniquely determines its infinite set of moments {m_pq}, and vice versa.
Central moments are defined by

μ_pq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q f(x, y),

where x̄ = m_10 / m_00 and ȳ = m_01 / m_00.

Normalized central moments are obtained as

η_pq = μ_pq / μ_00^γ, γ = (p + q)/2 + 1.
We usually use the first seven combinations of central moments up to order 3, known in the literature as Hu moments. The basic set of geometric moments is non-orthogonal, which makes the selection of features difficult.
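As a sketch of the moment machinery above, the first Hu invariant φ₁ = η₂₀ + η₀₂ can be computed directly from the definitions (pure Python; the example below only verifies translation invariance):

```python
def raw_moment(image, p, q):
    """Geometric moment m_pq = sum_x sum_y x^p y^q f(x, y)."""
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(image)
               for x, v in enumerate(row))

def hu_phi1(image):
    """First Hu invariant phi_1 = eta_20 + eta_02, built from central
    moments mu_pq normalized by mu_00^((p+q)/2 + 1) = mu_00^2 here."""
    m00 = raw_moment(image, 0, 0)
    xc = raw_moment(image, 1, 0) / m00   # centroid x
    yc = raw_moment(image, 0, 1) / m00   # centroid y
    mu20 = sum((x - xc) ** 2 * v
               for y, row in enumerate(image) for x, v in enumerate(row))
    mu02 = sum((y - yc) ** 2 * v
               for y, row in enumerate(image) for x, v in enumerate(row))
    return (mu20 + mu02) / (m00 ** 2)
```

Because the central moments are taken about the centroid, shifting the same shape inside the frame leaves φ₁ unchanged.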
Zernike moments are orthogonal and invariant to rotation and, after normalization, to translation and scale change. The complex set of Zernike moments is determined by

Z_nm = ((n + 1)/π) Σ_x Σ_y f(x, y) V*_nm(ρ, θ), x² + y² ≤ 1,

where n ≥ 0, |m| ≤ n, n − |m| is even, and V_nm(ρ, θ) are the Zernike polynomials.

When calculating Zernike moments, the size of the image determines the disk size, and the disk center is taken as the origin. Considering moments up to order 7, we get 20 Zernike moments.
In the case of biometric data using images of retinal blood vessels and conjunctival blood vessels, one of the stages of creating a vector of features is to determine geometrical features based on the topological properties of the image [5, 17].
The number of connected points around the point p is determined by the crossing number

CN(p) = (1/2) Σ_{k=1}^{8} |n_k − n_{k+1}|, with n_9 = n_1,

where n_k (k = 1, …, 8) denotes the k-th element of the eight-element neighborhood of the image point, taken in circular order, and assumes the value 0 or 1. If CN(p) = 3, p is a bifurcation point, and if CN(p) = 4, p is a cross point.
The feature vector defining the topology of blood vessels is made up of the number of bifurcation points, number of crossing points, coordinates of bifurcation points, and coordinates of crossing points.
By comparing the characteristic points of the user's blood vessel image with those of the template blood vessel image, we can calculate the matching score.
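The crossing-number rule can be sketched as follows (pure Python; it assumes a binarized and thinned vessel skeleton, with the eight neighbors visited in circular order):

```python
def crossing_number(skeleton, x, y):
    """Crossing number CN(p) = 1/2 * sum |n_k - n_{k+1}| over the eight
    neighbors taken in circular order; CN = 3 marks a bifurcation and
    CN = 4 a crossing point on a thinned, binary vessel image."""
    offs = [(-1, -1), (0, -1), (1, -1), (1, 0),
            (1, 1), (0, 1), (-1, 1), (-1, 0)]
    n = [skeleton[y + dy][x + dx] for dx, dy in offs]
    return sum(abs(n[i] - n[(i + 1) % 8]) for i in range(8)) // 2
```

Scanning every skeleton pixel with this function and recording the coordinates where CN equals 3 or 4 yields exactly the bifurcation and crossing points that make up the topological feature vector.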
3. Vein biometrics: feature extraction from dorsal hand and wrist images
One of the most promising and intensively developed biometric methods is the one using the network of blood vessels. The pattern of blood vessels is unique for every human being, even in the case of twins. It is also stable over time. Biometrics associated with the network of blood vessels has a significant advantage over other biometric methods, namely [1, 4, 18]:
The network of blood vessels is inside the body and is practically impossible to reproduce outside of it, which results in a very high level of safety.
Usually, we use the network of blood vessels associated with the following parts of the body:
Eye. This applies not only to the retinal blood vessels but also to the blood vessels of the conjunctiva.
Figure 12 shows the networks of blood vessels used in biometrics.
We will consider images from Figure 12(e) and (f), which can be obtained in one process of acquiring biometric patterns. In the literature on the subject, the analysis of this type of images for biometrics is referred to as
3.1. Vein biometrics
In the process of identifying people on the basis of dorsal vein images, we use a feature vector constructed from two parts: features calculated on the basis of the co-occurrence matrix and features calculated using Gabor filtration operation [21, 22, 23].
We consider the dorsal vein images shown in Figure 13.
We analyze the co-occurrence matrix for several values of the distance d and orientation θ; five features are calculated for each value of distance.
The feature values computed for the images in Figure 13a and Figure 13b are listed in the accompanying table.
As a result, on the basis of the co-occurrence matrix, we obtain 40 features.
The second part of the feature vector is obtained by implementing an input image convolution operation with the bank of Gabor filters.
In the case of biometric identification of people based on texture features obtained using a Gabor filter bank, we must solve the problem of the very large dimension of the Gabor feature vector.
3.2. Reduction of dimension of the feature vector by the PCA method
In the case of a 128 × 128 image and a 3 × 6 Gabor filter bank, the feature vector has a dimension of 128 × 128 × 3 × 6 = 294,912. The features are highly correlated with each other; after down-sampling (by a factor of 8), we get a vector of 36,864 elements, or 2304 elements per image.
The principal component analysis (PCA) method reduces the amount of data analyzed by subjecting them to linear transformation to a new coordinate system, resulting in new independent variables called the principal components.
The principal features of the PCA method are represented by eigenvectors. The eigenvectors of the covariance matrix are calculated based on the image training set and represent the principal components of the training image set.
We use two databases of blood vessel network images created by the authors, namely a database of dorsal vein images and a database of wrist vein images, each containing 42 images created as part of a session with students and 58 images found in the resources of www. Each database had 100 images.
The collection of training images consisted of 50 images (50% of images from the student base and 50% of images from web sources).
The PCA algorithm proceeds as follows:

Each training image of size N × N is written as a column vector x_i of dimension N².

We calculate the mean image of all M images from the training database:

x̄ = (1/M) Σ_{i=1}^{M} x_i

Then, we calculate the difference between each image from the training database and the mean image:

Φ_i = x_i − x̄

The covariance matrix is defined as

C = (1/M) Σ_{i=1}^{M} Φ_i Φ_i^T = A A^T, where A = [Φ_1 Φ_2 … Φ_M].

The covariance matrix has a dimension of N² × N².

Then, we calculate the eigenvalues λ_k and eigenvectors v_k of the covariance matrix:

C v_k = λ_k v_k

Then, we organize our eigenvectors according to their decreasing eigenvalues. We choose the K eigenvectors with the largest eigenvalues as the principal components.

A new image x is projected onto the eigenvectors to obtain its weights ω_k = v_k^T (x − x̄).

The approximated image is calculated as

x̂ = x̄ + Σ_{k=1}^{K} ω_k v_k

We choose the number K of eigenvectors so that the retained variance ratio

r = Σ_{i=1}^{K} λ_i / Σ_{i} λ_i

exceeds a predefined threshold, where K is the predefined number of eigenvectors and the λ_i are sorted in decreasing order.

A high value of r means that the first K principal components preserve most of the variance of the data set.
The variance of the first eigenvector is about 60% of the variance of the data set, the variance of the first 30 eigenvectors is about 85% of the variance of the data set, and 45 or more eigenvectors account for over 90% of the variance of the data set (Figure 18).
By increasing the number of eigenvectors, we increase the recognition efficiency.
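The mean-subtraction, covariance, and eigenvector steps above can be sketched as follows (pure Python on small vectors; power iteration is used here in place of a full eigendecomposition, and only the first principal component is extracted):

```python
def pca_first_component(data, iters=100):
    """Sketch of the PCA steps: subtract the mean, form the covariance
    matrix, and extract the dominant eigenvector by power iteration."""
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - mean[j] for j in range(d)] for row in data]
    cov = [[sum(c[i] * c[j] for c in centered) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v, mean

def project(sample, v, mean):
    """Coordinate of a sample along the first principal component."""
    return sum((s - m) * c for s, m, c in zip(sample, mean, v))
```

For real image data, each row of `data` would be a flattened image; repeating the extraction on the deflated covariance matrix yields the further components.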
We defined the feature vectors as follows: V_G is the Gabor feature vector, and V_C is the co-occurrence feature vector.
The quality of biometric systems is measured by two parameters: the false acceptance rate (FAR) and the false rejection rate (FRR).
The values of these rates depend on the chosen decision threshold.
Recognition of people in biometric systems is based on the physiological or behavioral features that a person possesses.
In this chapter, we presented the image preprocessing operations used in static biometric systems (physiological modality). In particular, we discussed operations related to the transformation of the brightness scale of the image, modification of the brightness histogram, median filtering, edge detection, and image segmentation. In the course of these operations, we obtain an image enabling the extraction and measurement of features that serve as the basis for recognition.
Next, we discuss the feature extraction process, focusing on certain geometrical features and texture features. We present a representation of texture features based on parameters obtained from the co-occurrence matrix and on images after Gabor filtering with various scale and orientation parameters. In terms of geometric features, we discuss moment-based features and geometrical features based on the topological properties of the image.
We provide and discuss the feature extraction process for images of the blood vessels of the dorsal hand and wrist. We present features calculated on the basis of the co-occurrence matrix and texture characteristics obtained using a Gabor filter bank. The process of reducing the dimensionality of a feature vector using the PCA method is also considered.
The main contributions of this chapter are the following:
A set of information on image processing methods used in biometric systems
Presentation of methods for obtaining feature vectors that are the basis of the process of recognizing people
Explaining the problem of reducing the dimensionality of the feature vectors
Showing parameters that are the basis for recognizing people based on images of the blood vessels of the dorsal hand and wrist
The chapter can be the basis for further studies and works on the image processing in biometric systems, especially based on images of the blood vessel network.
Li SZ, Jain AK, editors. Encyclopedia of Biometrics. New York: Springer Science + Business Media; 2015
Ross A, Nandakumar K, Jain AK. Handbook of Multibiometrics. New York: Springer; 2006
Maltoni D, Maio D, Jain AK, Prabhakar S. Handbook of Fingerprint Recognition. New York: Springer; 2003
Jain AK, Flynn PJ, Ross A, editors. Handbook of Biometrics. New York: Springer; 2007
Gonzales RC, Woods RE. Digital Image Processing. Upper Saddle River: Pearson Prentice Hall; 2008
Zuiderveld K. Contrast Limited Adaptive Histogram Equalization. Cambridge: Academic; 1994
Canny J. A computational approach to edge detection. IEEE Transactions on PAMI. 1986; 8(6):679-698
Haralick R, Shanmugam K, Dinstein I. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics. 1973; SMC-3(6):610-621
Daugman JG. High confidence visual recognition of persons by a test of statistical independence. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1993; 15(11):1148-1161
Haralick RM. Statistical and structural approaches to texture. Proceedings of the IEEE. 1979; 67(5):786-804
Gabor D. Theory of communication. Journal of Institution of Electrical Engineers. 1946; 93:429-459
Zhang H et al. Finger Vein Recognition Based on Gabor Filter. Intelligence Science and Big Data Engineering; 2013. pp. 827‐834
Choras RS. Iris-based person identification using Gabor wavelets and moments. In: Proceedings 2009 International Conference on Biometrics and Kansei Engineering ICBAKE; 2009. pp. 55‐59
Xueyan L, Shuxu G, Fengli G, Ye L. Vein pattern recognitions by moment invariants. In: Proceedings of the First International Conference on Bioinformatics and Biomedical Engineering; 2007. pp. 612‐615
Hu M. Pattern recognition by moment invariants. Proceedings of the IRE. 1961; 49:1428
Hong YH, Khotanzad A. Invariant image recognition by Zernike moments. IEEE Transactions on PAMI. 1990; 12(5):489-497
Zhang T, Suen C. A fast parallel algorithm for thinning digital patterns. Communications of the ACM. 1984; 27:236-239
Kirbas C, Quek K. Vessel extraction techniques and algorithm: A survey. In: Proceedings of the 3rd IEEE Symposium on Bioinformatics and Bioengineering; 2003
Kono M et al. Near-infrared finger vein patterns for personal identification. Applied Optics. 2002; 41(35):7429-7436
Ding Y, Zhuang D, Wang K. A study of hand vein recognition method. In: Proceedings of IEEE International Conference on Mechatronics & Automation; 2005. pp. 2106‐2110
Choras RS. Personal identification using forearm vein patterns. In: 2017 International Conference and Workshop on Bioinspired Intelligence (IWOBI); 2017. pp. 1‐5
Choras RS. Biometric personal authentication using images of forearm vein patterns. In: 2017 International Conference on Signals and Systems (ICSigSys); 2017. pp. 40-43
Daugman JG. Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression. IEEE Transactions on Acoustics, Speech, and Signal Processing. 1988; 36:1169-1179
Kang BJ et al. Multimodal biometric method based on vein and geometry of a single finger. IET Computer Vision. 2010; 4(3):209-217
Kumar A, Prathyusha KV. Personal authentication using hand vein triangulation and knuckle shape. IEEE Transactions on Image Processing. 2009; 18:2127-2136
Lee EC et al. New finger biometric method using near infrared imaging. Sensors. 2011; 11:2319-2333
Wang Y, Li K, Cui J. Hand-dorsa vein recognition based on partition local binary pattern. In: IEEE 10th International Conference on Signal Processing (ICSP); 2010. pp. 1671-1674
Tanaka T, Kubo N. Biometric authentication by hand vein patterns. In: Proceedings of SICE Annual Conference; 2004. pp. 249‐253
Liu CJ, Wechsler H. Gabor feature based classification using the enhanced fisher linear discriminant model for face recognition. IEEE Transactions on Image Processing. 2002; 11(4):467-476
Tao J, Jiang W, Gao Z, Chen S, Wang C. Palmprint recognition based on 2-dimension PCA. In: First International Conference on Innovative Computing, Information and Control; 2006; 1:326-330