Open access peer-reviewed chapter

Multimodal Biometrics for Person Authentication

Written By

Ryszard S. Choras

Submitted: 24 September 2018 Reviewed: 06 February 2019 Published: 14 March 2019

DOI: 10.5772/intechopen.85003

From the Edited Volume

Security and Privacy From a Legal, Ethical, and Technical Perspective

Edited by Christos Kalloniatis and Carlos Travieso-Gonzalez


Abstract

Unimodal biometric systems have limited effectiveness in identifying people, mainly due to their susceptibility to changes in individual biometric features and to presentation attacks. Identification of people using multimodal biometric systems attracts the attention of researchers because of advantages such as higher recognition accuracy and greater security compared with unimodal systems: to break into a multimodal biometric system, an intruder would have to defeat more than one unimodal subsystem. In multimodal biometric systems, the availability of several features makes the system more reliable; security and the confidentiality of user data are increased; decisions taken for the individual modalities are fused; if one modality is eliminated, the system can still ensure security using the remaining ones; and multimodal acquisition provides information on the "liveness" of the presented sample. In a multimodal system, a fusion of the feature vectors and/or decisions produced by each subsystem is carried out, and the final identification decision is made on the basis of the resulting feature vector. In this chapter, we consider a multimodal biometric system that uses three modalities: dorsal vein, palm print, and periocular.

Keywords

  • feature transform
  • multimodal biometric recognition
  • levels of fusion
  • dorsal vein
  • periocular
  • palm print
  • PCA

1. Introduction

Biometrics is a technology that uses physical and/or behavioral characteristics of people to identify them. Systems of this type implement two processes (Figure 1) [1]:

  1. Enrollment

  2. Authentication

Figure 1.

Biometric recognition system.

Physical features include fingerprints, hand geometry, palm print, facial image, iris, retina, and ear. Behavioral features include signature, lip motion, speech, typing dynamics, hand movements, and gait.

The characteristics of effective biometrics are:

  1. Unique features for each individual

  2. Traits that are invariant over time (e.g., not affected by aging)

  3. Features that are relatively easy to acquire (with low computational cost)

  4. Precise algorithms enabling classification

  5. Resistance to various types of attacks

  6. Low cost

  7. Ease of implementation

The security of a biometric system is usually assessed using several error indicators:

  • False match rate (FMR). A matching error, defined as the expected probability that an acquired sample will be falsely matched to a template in the database that does not belong to the test user. If this indicator is high, there is a risk that an unauthorized person will be recognized as a legitimate user.

  • False non-match rate (FNMR). The coefficient determining the probability that an acquired sample will not be matched to the enrolled template of the user from whom the sample was taken. In biometric verification (1:1) systems, this means that the sample was not matched to a specific template; in biometric identification (1:N) systems, it is the probability that the matching template will not be found in the database.

  • False rejection rate (FRR). The transaction-level counterpart of the FNMR. The difference between these indicators is that FNMR refers to a single comparison, whereas FRR refers to an authentication attempt in which one or more attempts to match a sample to a template from the database may occur. The FRR is referred to in the literature as the type I error.

  • False acceptance rate (FAR). The transaction-level counterpart of the FMR. The difference between FAR and FMR is the same as that between FRR and FNMR.

  • Equal error rate (EER). Defined as the intersection of the FAR and FRR curves in the graph of these errors as a function of the sensitivity threshold (t). It indicates the operating threshold at which the same proportion of people is incorrectly rejected and incorrectly accepted. The lower the EER value, the better the biometric system.

The FMR (FAR) and FNMR (FRR) parameters can also be represented by graphs (Figure 2):

  • The receiver operating characteristic (ROC) curve shows the dependence of FNMR on FMR and can be used to present the accuracy of the system.

  • The detection error trade-off (DET) curve shows the error rates on both axes, most often on a logarithmic scale. This curve can be plotted both for matching errors and for decision errors (Figure 2).

Figure 2.

The graph of FAR, FRR, and EER in receiver operating characteristic (ROC) curve.
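To make these definitions concrete, below is a minimal Python sketch, assuming similarity scores where a higher score means a better match, that estimates FAR(t), FRR(t), and the EER from sets of genuine and impostor scores; the function name and the synthetic score distributions are illustrative only.

```python
import numpy as np

def far_frr_eer(genuine, impostor):
    """Estimate FAR(t), FRR(t) and the EER from genuine and impostor
    similarity scores (higher score = better match)."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    # FAR(t): fraction of impostor scores accepted at threshold t.
    far = np.array([(impostor >= t).mean() for t in thresholds])
    # FRR(t): fraction of genuine scores rejected at threshold t.
    frr = np.array([(genuine < t).mean() for t in thresholds])
    # EER: operating point where the FAR and FRR curves cross.
    i = np.argmin(np.abs(far - frr))
    return far, frr, (far[i] + frr[i]) / 2.0, thresholds[i]

# Toy usage with synthetic score distributions.
rng = np.random.default_rng(0)
far, frr, eer, t_eer = far_frr_eer(rng.normal(0.8, 0.1, 500),
                                   rng.normal(0.4, 0.1, 500))
print(f"EER ~ {eer:.3f} at threshold t = {t_eer:.3f}")
```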

If only one biometric authentication system is used, the results obtained are not always good enough. Unimodal biometric systems using a single sensor have many limitations, such as lack of uniqueness, lack of universality, and noise in the acquired data, as a result of which they may be unable to provide the required level of identification/verification performance (Table 1). The overall reliability of such a system is bounded by the precision of that single biometric modality; multi-biometric systems address these limitations (Table 2).

  • Distortion of the input biometric data: Distorted biometric data may prevent correct alignment with database templates, as a result of which users are incorrectly rejected or identified.

  • Intra-class variations: Biometric data obtained from a person during authentication may differ from the data used to generate the template during enrollment, thus affecting the matching process. The biometric template should have a small intra-class variance.

  • Interclass similarities: Biometric features should differ significantly between people and should exhibit small between-class similarity in the feature space. There is an upper limit on the number of users who can be effectively distinguished by any biometric system: its capacity cannot be increased arbitrarily for a fixed set of feature vectors and a fixed matching algorithm. The biometric template should have large interclass variations.

  • Non-universality: Obtaining accurate (useful) biometric data from all users is not always possible.

  • Intruder attacks: Attacks of this type involve manipulating biometric features to avoid recognition. It is also possible to create artificial biometric patterns in order to assume the identity of another person.

Table 1.

Limitations of unimodal biometrics.

  • Recognition accuracy: The multi-biometric system ensures greater accuracy and reliability thanks to several independent biometric features that are difficult to attack.

  • Continuous monitoring: When one biometric modality is obstructed, the other modalities of the multi-biometric system still ensure correct user identification.

  • Privacy: Multi-biometric systems provide greater resistance to certain types of loopholes and attacks; it is difficult or impossible to steal the many biometric templates stored in the biometric database.

  • Biometric data enrollment: When biometric input data is unavailable for or unacceptable to one biometric subsystem, another modality may be used.

  • Resistance to spoof attacks: Usually an attacker is not able to produce many accurate spoofed biometric traits at once.

Table 2.

Advantages of multi-biometric systems.


2. Multi-biometric systems

2.1 Types of multi-biometric systems

The multi-biometric system can be (Figure 3): (a) a multi-sensor system that acquires data for a single biometric trait from several sensors, (b) a system applying multiple algorithms to a single biometric trait, (c) a system consolidating multiple instances of the same body trait, (d) a system using multiple templates of the same biometric method obtained with a single sensor, or (e) a multimodal system combining information about several biometric traits of an individual to establish identity [2, 3, 4].

Figure 3.

Types of multi-biometric systems.

2.2 Fusion levels

In multimodal biometric systems, there are several strategies (scenarios) for the fusion of biometric information; a code sketch of the feature-level and score-level variants follows Figure 4:

  • Sensor-level fusion. Data acquired from various sensors for a single biometric trait are combined into one vector.

  • Feature-level fusion. Feature vectors extracted from various biometric modalities, i.e., from several unimodal subsystems processing different body traits of the same person, are merged for further processing (Figure 4a).

  • Fusion at the decision/score level. Decisions developed on the basis of information from the different biometric modalities are merged, and the resultant vector defines two main classes, i.e., rejection or acceptance (Figure 4b).

  • Rank-level fusion. The classifier determines the rank of each enrolled biometric identity; a high rank is a good indicator of a good match (Figure 4b).

Figure 4.

Levels of fusion. (a) Feature level fusion and (b) score/rank level fusion.
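As referenced above, a compact Python sketch of the two most common variants, feature-level and score-level fusion, is given below; the function names are illustrative, and the vector sizes in the usage example anticipate the Gabor and LBP descriptors used later in this chapter.

```python
import numpy as np

def zscore(v):
    """Normalize a vector to zero mean and unit variance (cf. Eq. (6))."""
    v = np.asarray(v, dtype=float)
    return (v - v.mean()) / (v.std() + 1e-12)

def feature_level_fusion(*feature_vectors):
    """Concatenate per-modality feature vectors after normalization,
    so that no single modality dominates the fused vector."""
    return np.concatenate([zscore(v) for v in feature_vectors])

def score_level_fusion(scores, weights=None):
    """Weighted sum rule on match scores already mapped to [0, 1]."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.full(len(scores), 1.0 / len(scores))
    return float(np.dot(weights, scores))

# Usage: fuse hypothetical dorsal-vein, palm-print and periocular data.
fused = feature_level_fusion(np.random.rand(4050),   # e.g. Gabor features
                             np.random.rand(4050),
                             np.random.rand(2124))   # e.g. LBP histogram
final_score = score_level_fusion([0.91, 0.84, 0.78])
print(fused.shape, final_score)
```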

2.3 Related work

The fusion of biometric modalities at the different levels of a multi-biometric system has been studied extensively in the literature (Table 3). Even so, fusion at the level of feature vectors is discussed relatively rarely. Fusion at this level integrates the feature vectors corresponding to multiple information sources. Because feature vectors retain richer information about the biometric data than match scores or decisions do, fusion at the feature level can be expected to give better authentication results. However, fusion at this level is difficult to implement in practice because (i) the feature sets of different modalities may be incompatible, (ii) concatenating two feature vectors may produce a feature vector of very high dimensionality, and (iii) a more complex matcher is required.

  • Fingerprint and face; feature-level fusion: In [5], the extraction of face and fingerprint characteristics invariant to rotation and scaling using Zernike moments (ZM) was proposed. The fusion of facial and fingerprint features is realized on the basis of the ZM, and an RBF network implements the decision-making process. The accuracy rate is 96.55%; the authentication rates are FAR = 4.95% and FRR = 1.12% [5].

  • Fingerprint and face; score-level fusion: In [6], the authors presented a score-level fusion technique using SIFT features for the face and minutiae features for the fingerprint. Results: FAR = 1.98%, FRR = 3.18%, accuracy = 97.41% [6].

  • Fingerprint, finger knuckle print, finger vein, and finger shape; feature-level fusion: Multi-set canonical correlation analysis (MCCA) is used to fuse the multiple feature sets; the MCCA-based features achieve a recognition performance with EER = 2.3900e-04 [1]. In another approach, fingerprint codes and finger vein codes are generated with a unified Gabor filter, features are extracted using a supervised local canonical correlation analysis (SLPCCA), and an NN classifier is applied [7].

  • Fingerprint and iris; score-level fusion: In [8], the authors propose a frequency-domain approach that generates a unified homogeneous template from fingerprint and iris features. Scores generated from these templates are fused using the sum rule [8].

  • Palm print and hand shape; feature-level fusion: Information from the face image and the gait image is combined at the feature level. Facial features are obtained using the principal component analysis (PCA) method, and the gait energy image (GEI) is obtained by multiple discriminant analysis (MDA). The recognition rate is 91.3% [9].

  • Palm print and iris; feature-level fusion: In the system described in [10], texture parameters are extracted based on Gabor filters; fusion of the palm print and iris features is based on wavelets, and the decision is obtained using a kNN classifier. The recognition accuracy is 99.2% with FRR = 1.6% [10]. In [11], the fusion method for the phase information of the iris and palm utilizes a band-limited image product (BLIP) [11].

  • Finger knuckle and palm print; feature-level fusion: The feature extraction method for the palm print is monogenic binary coding; for inner knuckle print recognition, two algorithms, the ridgelet transform and SIFT, are proposed. The extracted feature vectors are classified using an SVM [12].

  • Palm print and face; feature-level fusion: PCA is used to extract features of the palm and face images. The fusion technique concatenates the feature vectors of the face and palm modalities into one fused vector, after which feature selection is performed [13].

  • Face and gait; feature-level fusion: The method is based on learning face and gait features in image transform spaces; two methods, PCA and LDA, are considered [14].

  • Face and iris; score-level fusion: A multi-biometric system using dual iris, visible, and thermal face traits is considered. 1D Log-Gabor filters and the Complex Gabor Jet Descriptor (CGJD) are used to extract feature vectors, and the authors propose a score-level fusion algorithm [15]. Ordinal measures and local binary pattern (LBP) methods are proposed to extract features from the iris and face regions, respectively [16].

  • Face and iris; feature-level fusion: Paper [17] presents the extraction of iris features based on 2D Gabor filters and of facial features using the PCA method [17].

  • Face and hand geometry; feature-level fusion: The 2D DCT is used to extract discriminant face features, which are concatenated with hand geometry features; the resultant feature vector is classified using an SVM [18].

  • Face and ear; score-level fusion: Match-score-level fusion is proposed in [19]; the authors use Dempster-Shafer decision theory for each modality. The recognition rate is 95.53% with 4.47% EER [19].

  • Ocular (iris and conjunctival vasculature); score-level fusion: In [4], the authors presented the fusion of iris and conjunctival vascular information, with a weighted fusion method for each modality. The fusion resulted in an EER of 2.83% [4].

  • Face, ear, and signature; rank-level fusion: In [20], the PCA and Fisher's linear discriminant (FLD) methods are applied in a face, ear, and signature multimodal biometric system. Local features are extracted from the face, ear, and signature data and matched using Euclidean distance; the system uses rank-level fusion [20].

Table 3.

Summary of works on multimodal biometric systems.


3. The proposed multi-biometric system

The multi-biometric system (dorsal vein + periocular + palm print) is presented in Figure 5.

Figure 5.

Considered multi-biometric system architecture.

In our proposed method, the first stage is a preprocessing block comprising noise elimination, ROI detection and normalization, and contrast normalization. For all three modalities, noise elimination for an image $f(x,y)$ is performed using a median filtering (2D MF) operation formulated as [21]:

$$\hat{f}(x,y) = \operatorname{median}_{A_1} f(x,y) = \operatorname{median}_{(r,s)\in A_1} f(x+r,\, y+s) \tag{1}$$

where $A_1$ is the MF window.
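A minimal sketch of Eq. (1), assuming SciPy is available and a 3 × 3 window for $A_1$:

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise(image, window=3):
    """Eq. (1): replace each pixel by the median of its A1 neighborhood,
    which suppresses impulse noise while preserving edges."""
    return median_filter(np.asarray(image), size=window)
```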

The next step in the preprocessing phase is ROI detection and normalization (Figure 6). This operation differs for dorsal vein, palm print, and periocular images. For dorsal vein images, we use the distance transform to detect the center of the dorsal image and build a square ROI around these center coordinates [22, 23]. The ROI design for palm print images is based on hand-specific points (finger valleys) and two angles [24]. The periocular region is detected based on the center of the iris: using a conventional iris detection algorithm, we determine the center of the iris and its diameter, and the periocular area is a rectangle centered at the iris center [25, 26].

Figure 6.

ROI area for dorsal vein images (a), palm print images (b), and periocular images (c).
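A sketch of the dorsal-vein branch of this step using OpenCV is given below; the Otsu binarization and the ROI half-size are assumptions, and the palm print and periocular branches follow the valley-point and iris-center procedures cited above.

```python
import cv2
import numpy as np

def dorsal_roi(gray, roi_half=75):
    """Locate the dorsal image center via the distance transform and cut
    a square ROI around it (cf. [22, 23]). 'gray' is an 8-bit image."""
    # Separate the hand from the background (Otsu threshold is assumed).
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # The distance-transform maximum is the point deepest inside the hand.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    cy, cx = np.unravel_index(np.argmax(dist), dist.shape)
    return gray[cy - roi_half:cy + roi_half, cx - roi_half:cx + roi_half]
```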

After ROI detection, we perform image size normalization and apply contrast normalization using the CLAHE algorithm. The image is divided into non-overlapping regions of equal size, and the histogram of each region is calculated. Next, a cutoff threshold for the histograms is determined, and each histogram is clipped so that its height does not exceed this threshold [21].
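A sketch of these two normalization steps with OpenCV; the clip limit and tile grid are assumed example values, and the 150 × 150 target size follows Figure 7.

```python
import cv2

# CLAHE: the image is tiled, each tile's histogram is clipped at a
# limit and equalized locally. Parameter values are assumptions.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def normalize_roi(gray_roi, size=(150, 150)):
    """Resize the ROI to a common size and normalize its contrast.
    'gray_roi' must be an 8-bit grayscale image."""
    return clahe.apply(cv2.resize(gray_roi, size))
```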

Sample input images after the normalization operations and the CLAHE algorithm are shown in Figure 7. The subsequent processing blocks comprise feature extraction, feature selection, fusion, and classification.

Figure 7.

Images after normalization (size 150 × 150 pixels) and after applying the CLAHE algorithm.

3.1 Gabor feature extraction

In biologically inspired vision models, receptive fields are the primary element of early visual processing in mammalian vision systems. Gabor functions are widely used in image feature analysis because they resemble the receptive field profiles of simple cells in the mammalian visual cortex; these fields are modeled using Gabor filters [27].

Imitating mammalian vision systems (or parts of them) in object recognition systems increases their efficiency and plausibility. Object recognition systems inspired by this biological approach use filter banks, in particular Gabor filters (Figure 8) [28, 29, 30, 31, 32].

Figure 8.

2D functions and 2D Gabor filter.

The 2D Gabor filter family can be represented as expressed in Eq. (2):

$$Gab_{\omega,\theta}(x,y) = \frac{1}{2\pi\sigma_x\sigma_y}\, G_\theta(x,y)\, S_{\omega,\theta}(x,y) \tag{2}$$

where

$$G_\theta(x,y) = e^{-\left(\frac{(x\cos\theta + y\sin\theta)^2}{2\sigma_x^2} + \frac{(-x\sin\theta + y\cos\theta)^2}{2\sigma_y^2}\right)}$$

and

$$S_{\omega,\theta}(x,y) = e^{i(\omega x\cos\theta + \omega y\sin\theta)} - e^{-\frac{\omega^2\sigma^2}{2}}.$$

For $\sigma_x = \sigma_y = \sigma$, $Gab_{\omega,\theta}(x,y)$ can be decomposed into a real part

$$\mathrm{R}\,Gab_{\omega,\theta}(x,y) = \frac{1}{2\pi\sigma^2}\, G_\theta(x,y)\, \mathrm{R}\,S_{\omega,\theta}(x,y)$$

and an imaginary part

$$\mathrm{I}\,Gab_{\omega,\theta}(x,y) = \frac{1}{2\pi\sigma^2}\, G_\theta(x,y)\, \mathrm{I}\,S_{\omega,\theta}(x,y).$$

Gabor response images are obtained by convolving the multiscale, multi-orientation Gabor filters $Gab_{\omega,\theta}(x,y)$ with the image $f(x,y)$:

$$G_{\omega,\theta}(x,y) = f(x,y) * Gab_{\omega,\theta}(x,y) = Mag_{\omega,\theta}(x,y)\, e^{i\,Ph_{\omega,\theta}(x,y)} \tag{3}$$

$$Mag_{\omega,\theta}(x,y) = \sqrt{\big(\mathrm{R}\,G_{\omega,\theta}(x,y)\big)^2 + \big(\mathrm{I}\,G_{\omega,\theta}(x,y)\big)^2},$$

$$Ph_{\omega,\theta}(x,y) = \arctan\frac{\mathrm{I}\,G_{\omega,\theta}(x,y)}{\mathrm{R}\,G_{\omega,\theta}(x,y)},$$

where $*$ is the convolution operator and $\mathrm{R}$, $\mathrm{I}$ denote the real and imaginary parts of the response.
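A minimal NumPy/SciPy sketch of Eqs. (2) and (3) is given below; the kernel support size is an assumption, and $\sigma_x = \sigma_y = \sigma$ as in the decomposition above.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(omega, theta, sigma, size=31):
    """Complex Gabor kernel following Eq. (2) with sigma_x = sigma_y."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    # Carrier minus the DC-compensation term, as in S_{omega,theta}.
    s = np.exp(1j * omega * xr) - np.exp(-(omega**2) * (sigma**2) / 2)
    return g * s

def gabor_response(image, omega, theta, sigma):
    """Magnitude and phase of the filter response, Eq. (3)."""
    resp = fftconvolve(image, gabor_kernel(omega, theta, sigma), mode='same')
    return np.abs(resp), np.angle(resp)
```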

The Gabor filter responses for palm print image and dorsal vein image are shown in Figures 9 and 10, respectively.

Figure 9.

Imaginary part of the Gabor filter responses of a palm print image.

Figure 10.

Imaginary part of the Gabor filter responses of a dorsal vein image.

3.2 Periocular feature extraction by LBP

The periocular area contains the iris, eye, eyelids, eyelashes, and partially the eyebrows. The local binary pattern (LBP) method, proposed by Ojala et al. [33] as a texture descriptor, can be used to describe the texture of the periocular area, with the feature vectors containing LBP features.

LBP divides the image into non-overlapping blocks of the same size, and local image features are calculated for each block separately. For the set of pixels belonging to a given block, the LBP values are calculated and a histogram is created. The feature vectors (histograms) of the blocks are concatenated to form a global feature vector of the entire image.

LBP analyzes the local neighborhood consisting of $g_p$ points located on a circle of radius R around the center point $g_c$ and checks whether the values of the $g_p$ points are greater or less than the value of $g_c$.

The LBP value of the point $g_c$ is specified as follows:

$$LBP_{P,R} = \sum_{p=0}^{P-1} S(g_p - g_c)\, 2^p \tag{4}$$

where $g_p$ and $g_c$ are the luminance values of the neighborhood points and the center point, respectively, and $S(x) = 1$ for $x \geq 0$ and $S(x) = 0$ otherwise.
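A direct NumPy sketch of Eq. (4) for the common LBP(8,1) case (eight neighbors on a circle of radius 1, approximated by the 3 × 3 neighborhood):

```python
import numpy as np

def lbp_8_1(image):
    """Eq. (4): compare the 8 neighbors g_p of each interior pixel with
    the center g_c and pack the comparison bits into one code in 0..255."""
    img = np.asarray(image, dtype=np.int32)
    c = img[1:-1, 1:-1]                      # center pixels g_c
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),   # neighbors g_p in a
               (1, 1), (1, 0), (1, -1), (0, -1)]     # fixed angular order
    code = np.zeros_like(c)
    for p, (dy, dx) in enumerate(offsets):
        gp = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code += (gp >= c).astype(np.int32) << p      # S(g_p - g_c) * 2^p
    return code
```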

The idea of this operator is presented in Figure 11.

Figure 11.

The basic idea of LBP approach.

For an image of size $M \times N$, the image descriptor is a histogram created from the LBP values:

$$H(k) = \sum_{i=1}^{M}\sum_{j=1}^{N} f\big(LBP_{P,R}(i,j),\, k\big), \quad k \in [0, K] \tag{5}$$

$$f(x,y) = \begin{cases} 1, & x = y \\ 0, & \text{otherwise} \end{cases}$$

where $k$ is one LBP pattern and $K$ is the maximal LBP pattern value (the number of histogram bins).

Using the LBP operator, we obtain $2^P$ different output values corresponding to the $2^P$ different binary patterns created by the P neighboring pixels. Certain binary patterns contain more information than others, so we can restrict attention to this subset of LBP values; the patterns in this subset are called uniform patterns. Thus we have the standard $LBP_{P,R}$ operator and the uniform $LBP_{P,R}^{u2}$ operator.

Typically, the image is divided into n blocks, and the histograms of the blocks are concatenated into the feature vector [34].

For P = 8, the histogram of the $LBP_{P,R}$ operator contains 256 bins, while the histogram of the $LBP_{P,R}^{u2}$ operator contains 59 bins (Figures 12 and 13).

Figure 12.

The original image (a) and image as a result of the LBP operator (b).

Figure 13.

The $LBP_{P,R}$ histogram (a), histograms of the n blocks (b), and the $LBP_{P,R}^{u2}$ histogram (c).
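Putting Eq. (5), the block division, and the uniform mapping together, a sketch using scikit-image follows; the 6 × 6 grid is an assumption chosen so that the descriptor length matches the 36 × 59 = 2124 features used in Section 4, and 'nri_uniform' is scikit-image's 59-value non-rotation-invariant uniform LBP mapping for P = 8.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def periocular_lbp_features(gray, grid=(6, 6)):
    """Block-wise uniform LBP descriptor: one 59-bin histogram per block
    (Eq. (5)), concatenated; a 6x6 grid gives 36 x 59 = 2124 features."""
    codes = local_binary_pattern(gray, P=8, R=1, method='nri_uniform')
    features = []
    for rows in np.array_split(codes, grid[0], axis=0):
        for block in np.array_split(rows, grid[1], axis=1):
            hist, _ = np.histogram(block, bins=59, range=(0, 59))
            features.append(hist / max(block.size, 1))  # per-block norm
    return np.concatenate(features)
```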


4. Feature reduction and data fusion

The multi-biometric system has been tested using parts of the following databases: the PolyU palmprint database [24], the IIITD periocular database [25], and the Bosphorus hand vein database [35]. We chose 20 subjects with 10 images per subject at random; of the 10 images per subject, 5 are used for training and 5 for testing.

In feature-level fusion, each individual modality generates a feature vector, and the fusion process combines these feature vectors into one vector. This is difficult to achieve in practice because combining fundamentally different feature vectors can result in a fused vector of very high dimensionality.

For the dorsal vein images and the palm print images, we perform the same image processing operations so that the feature vectors have the same sizes. The convolution of the multiscale, multi-orientation Gabor filters with the input image yields the Gabor response images. The feature vector has a very large size of M × N × k × l, where M × N is the image size, k is the number of scales, and l is the number of orientations. In our case, for both the dorsal vein and palm print images, we obtain a feature vector containing 150 × 150 × 3 × 6 = 405,000 items. The images subjected to Gabor filtering are therefore rescaled with a scale factor of 0.1, which yields a feature vector of 1 × 4050 elements.
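A sketch of this feature-vector construction with OpenCV is shown below; the Gabor wavelengths, kernel size, and sigma are assumptions, and only the 3 scales × 6 orientations layout and the 0.1 rescaling follow the text (the real-valued OpenCV kernel stands in for the complex filter of Eq. (2)).

```python
import cv2
import numpy as np

def gabor_feature_vector(gray, wavelengths=(4, 6, 8), n_orient=6,
                         scale_factor=0.1):
    """Filter a 150x150 ROI with 3 scales x 6 orientations, rescale each
    response by 0.1 and concatenate: 18 x (15 x 15) = 4050 elements."""
    parts = []
    for lam in wavelengths:                  # assumed wavelength values
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            kern = cv2.getGaborKernel((21, 21), sigma=lam / 2.0,
                                      theta=theta, lambd=lam, gamma=1.0)
            resp = np.abs(cv2.filter2D(gray.astype(np.float32), -1, kern))
            small = cv2.resize(resp, None, fx=scale_factor, fy=scale_factor)
            parts.append(small.ravel())
    return np.concatenate(parts)
```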

For the periocular images, the feature vector has a size of 36 × 59 = 2124 elements (36 blocks, each described by a 59-bin uniform LBP histogram).

Next, we reduce the dimensionality of these vectors using the PCA method (Figure 14 and Table 4) [5].

Figure 14.

Steps to image processing using PCA.

PCA algorithm

1. Organize the training set of images: $T = [\,G_1\; G_2\; \ldots\; G_q\,]$, where q is the number of images in the training set.

2. Calculate the average of the set T: $\Psi = \frac{1}{q}\sum_{i=1}^{q} G_i$.

3. Subtract the mean from each image: $\Phi_i = G_i - \Psi$.

4. Calculate the covariance matrix: $C = \frac{1}{q}\sum_{i=1}^{q} \Phi_i \Phi_i^{T} = A A^{T}$.

5. Compute the eigenvectors and corresponding eigenvalues: $C v_i = \lambda_i v_i$, $i = 1, \ldots, q$.

6. Pair the eigenvectors with their corresponding eigenvalues and order them from high to low. The approximated image is calculated as $\bar{G} = v\,w + \Psi$ for $v = [\,v_1\; v_2\; \ldots\; v_k\,]$, where $w = v^{T}\Phi$ are the projection weights of the image onto the k retained eigenvectors.

Table 4.

PCA algorithm.

The selected features are normalized to zero mean and unit variance:

$$\bar{f}_i = \frac{f_i - \mu_i}{\sigma_i} \tag{6}$$

where $\mu_i$ and $\sigma_i$ are the mean value and standard deviation of the i-th feature, and $\bar{f}_i$ is the normalized i-th feature.
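A sketch of the whole reduction-normalization-fusion pipeline, using scikit-learn's PCA in place of the explicit eigen-decomposition of Table 4; the function name and array layout are illustrative. Note that k cannot exceed the number of training samples (100 in the experiments above).

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_and_fuse(train_feats, test_feats, k=100):
    """Per-modality PCA (Table 4), z-score normalization (Eq. (6)) with
    statistics estimated on the training set, then concatenation into one
    fused vector per sample. train_feats/test_feats: one
    (n_samples, n_features) array per modality; k = kept eigenvectors."""
    fused_train, fused_test = [], []
    for tr, te in zip(train_feats, test_feats):
        pca = PCA(n_components=k).fit(tr)      # eigenvectors of C
        tr_k, te_k = pca.transform(tr), pca.transform(te)
        mu, sd = tr_k.mean(axis=0), tr_k.std(axis=0) + 1e-12
        fused_train.append((tr_k - mu) / sd)   # Eq. (6)
        fused_test.append((te_k - mu) / sd)
    return np.hstack(fused_train), np.hstack(fused_test)
```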

Table 5 shows the recognition performance depending on the number of selected eigenvectors.

Modality | k = 40 | k = 60 | k = 80 | k = 100
Dorsal vein | 88.0 | 89.3 | 91.4 | 92.6
Palm print | 88.7 | 89.3 | 90.6 | 92.8
Periocular | 86.0 | 86.8 | 89.0 | 89.2
Dorsal vein + palm print | 90.3 | 91.1 | 92.3 | 93.1
Dorsal vein + periocular | 91.1 | 92.0 | 92.4 | 92.8
Palm print + periocular | 90.7 | 91.4 | 91.8 | 92.1
Dorsal vein + periocular + palm print | 93.2 | 94.0 | 94.5 | 95.3

Table 5.

Recognition rates [%] for different modalities and numbers of selected eigenvectors k.


5. Conclusion

In this chapter, Gabor functions and LBP features are proposed for recognition in a multi-biometric system that uses three modalities: dorsal vein, periocular, and palm print. The dimensionality of the feature vectors from these modalities is reduced using the PCA method, and the reduced vectors are normalized and fused by concatenation. Based on the results, we suggest that a multi-biometric system fusing dorsal vein, periocular, and palm print images can offer a recognition rate that a unimodal biometric system cannot.

References

  1. Ross AA, Nandakumar K, Jain AK. Handbook of Multibiometrics. Boston, MA, USA: Kluwer; 2006
  2. Travieso CM, del Pozo-Baños M, Alonso JB. Fused intra-bimodal face verification approach based on scale-invariant feature transform and a vocabulary tree. Pattern Recognition Letters. 2014;36:254-260
  3. Travieso CM, Zhang J, Miller P, Alonso JB, Ferrer MA. Bimodal biometric verification based on face and lips. Neurocomputing. 2011;74(14-15):2407-2410
  4. Ross A. An introduction to multibiometrics. In: Proceedings of the 15th European Signal Processing Conference; 2007. pp. 20-24
  5. Long TB, Thai LH, Hanh T. Multimodal biometric person authentication using fingerprint, face features. In: Anthony P, Ishizuka M, Lukose D, editors. Trends in Artificial Intelligence (LNCS 7458). Springer; 2012. pp. 613-624
  6. Choi H, Choi K, Kim J. Mosaicing touchless and mirror-reflected fingerprint images. IEEE Transactions on Information Forensics and Security. 2010;5(1):52-61
  7. Ghouti L, Bahjat AA. Iris fusion for multibiometric systems. In: Proceedings of the IEEE International Symposium on Signal Processing and Information Technology; 2009. pp. 248-253
  8. Ross A, Jain A. Information fusion in biometrics. Pattern Recognition Letters. 2003;24(13):2115-2125
  9. Chen Y, Parziale G, Diaz-Santana E, Jain AK. 3D touchless fingerprints: Compatibility with legacy rolled images. In: Proceedings of the Biometrics Symposium: Special Session on Research at Biometric Consortium Conference; 2006. pp. 1-6
  10. Froba B, Rothe C, Kublbeck C. Evaluation of sensor calibration in a biometric person recognition framework based on sensor fusion. In: Proceedings of the 4th IEEE International Conference on Automatic Face & Gesture Recognition; 2000. pp. 512-517
  11. Meraoumia A, Chitroub S, Bouridane A. Multimodal biometric person recognition system based on fingerprint & finger-knuckle-print using correlation filter classifier. In: Proceedings of the IEEE International Conference on Communications; 2012. pp. 820-824
  12. Bhaskar B, Veluchamy S. Hand based multibiometric authentication using local feature extraction. In: Proceedings of the International Conference on Recent Trends in Information Technology; 2014. pp. 1-5
  13. Bokade GU, Sapkal AM. Feature level fusion of palm and face for secure recognition. International Journal of Electrical and Computer Engineering. 2012;4(2):157
  14. Hossain E, Chetty G. Multimodal face-gait fusion for biometric person authentication. In: Proceedings of the IFIP 9th International Conference on Embedded Ubiquitous Computing; 2011. pp. 332-337
  15. Ding Y, Zhuang D, Wang K. A study of hand vein recognition method. In: Proceedings of the IEEE International Conference on Mechatronics & Automation; 2005. pp. 2106-2110
  16. Miao D, Sun Z, Huang Y. Fusion of multibiometrics based on a new robust linear programming. In: Proceedings of the 22nd International Conference on Pattern Recognition; 2014. pp. 291-296
  17. Shah S, Ross A, Shah J, Crihalmeanu S. Fingerprint mosaicking using thin plate splines. In: Proceedings of the Biometric Consortium Conference; 2005. pp. 1-2
  18. El-Alfy E-SM, BinMakhashen GM. Improved personal identification using face and hand geometry fusion and support vector machines. In: Benlamri R, editor. Networked Digital Technologies. Vol. 294. Springer; 2012. pp. 253-261
  19. Jing X-Y, Yao Y-F, Zhang D, Yang J-Y, Li M. Face and palmprint pixel level fusion and kernel DCV-RBF classifier for small sample biometric recognition. Pattern Recognition. 2007;40(11):3209-3224
  20. Rattani A, Freni B, Marcialis GL, Roli F. Template update methods in adaptive biometric systems: A critical review. In: Tistarelli M, Nixon MS, editors. Advances in Biometrics (LNCS 5558). Springer; 2009. pp. 847-856
  21. Choras RS. A survey on methods of image processing and recognition for personal identification. In: Machine Learning and Biometrics. Rijeka, Croatia: InTech; 2018
  22. Tanaka T, Kubo N. Biometric authentication by hand vein patterns. In: Proceedings of the SICE Annual Conference; 2004. pp. 249-253
  23. Ferrer MA, Morales A, Travieso CM, Alonso JB. Low cost multimodal biometric identification system based on hand geometry, palm and finger print texture. In: 41st Annual IEEE International Carnahan Conference on Security Technology; 2007. pp. 52-58
  24. Zhang D, Kong A, You J, Wong M. Online palmprint identification. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2003;25(9):1041-1050
  25. Sharma A, Verma S, Vatsa M, Singh R. On cross spectral periocular recognition. In: Proceedings of the International Conference on Image Processing; 2014
  26. Woodard D, Pundlik S, Lyle J, Miller P. Periocular region appearance cues for biometric identification. In: Computer Vision and Pattern Recognition Workshops; 2010. pp. 162-169
  27. Gabor D. Theory of communication. Journal of the Institution of Electrical Engineers, Part III. 1946;93:429-457
  28. Wang N, Li Q, El-Latif AAA, Yan X, Niu X. A novel hybrid multibiometrics based on the fusion of dual iris, visible and thermal face images. In: Proceedings of the International Symposium on Biometrics and Security Technology; 2013. pp. 217-223
  29. Choras RS. Image feature extraction techniques and their applications for CBIR and biometrics systems. International Journal of Biology and Biomedical Engineering. 2007;1(1):6-16
  30. Choras RS. Iris-based person identification using Gabor wavelets and moments. In: Proceedings of the International Conference on Biometrics and Kansei Engineering; 2009. pp. 55-59
  31. Choras RS. Personal identification using forearm vein patterns. In: International Conference and Workshop on Bioinspired Intelligence; 2017. pp. 1-5
  32. Choras RS. Biometric personal authentication using images of forearm vein patterns. In: International Conference on Signals and Systems; 2017. pp. 40-43
  33. Ojala T, Pietikäinen M, Mäenpää T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2002;24(7):971-987
  34. Wang Y, Li K, Cui J. Hand-dorsa vein recognition based on partition local binary pattern. In: IEEE International Conference on Signal Processing; 2010. pp. 1671-1674
  35. Yüksel A, Akarun L, Sankur B. Biometric identification through hand vein patterns. In: ICPR'2010: International Conference on Pattern Recognition; Istanbul; 2010
