Open access peer-reviewed chapter

Novelty Detection‐Based Internal Fingerprint Segmentation in Optical Coherence Tomography Images

Written By

Rethabile Khutlang, Pheeha Machaka, Ann Singh and Fulufhelo Nelwamondo

Submitted: 09 November 2016 Reviewed: 24 January 2017 Published: 09 August 2017

DOI: 10.5772/67594

From the Edited Volume

Computed Tomography - Advanced Applications

Edited by Ahmet Mesrur Halefoglu


Abstract

Biometric fingerprint scanners capture the features of the external skin as a 2‐D image. The performance of an automatic fingerprint identification system suffers first and foremost if the finger skin is wet or worn out, or if a fake fingerprint is used. We present a method for automatic segmentation of the papillary layer from images acquired using contact‐less 3‐D swept source optical coherence tomography (OCT). The papillary contour represents the internal fingerprint, which is embedded between the upper epidermis and the papillary layer and therefore does not suffer from the problems affecting the external finger. Speckle noise is first reduced in the slices composing the 3‐D image using non‐linear filters. Subsequently, the stratum corneum is used to extract the epidermis. The epidermis, with its depth known, is used as the target class of the ensuing novelty detection. The outliers resulting from novelty detection represent the papillary layer, and the contour of the papillary layer is segmented as the boundary between the target and rejection classes. Using a mixture of Gaussians novelty detection routine on images pre‐processed with a regularized anisotropic diffusion filter, the segmented papillary contours (the internal fingerprints) are consistent with those segmented manually, with a modified Williams index above 0.9400.

Keywords

  • biometrics
  • novelty detection
  • segmentation
  • internal fingerprint
  • optical coherence tomography (OCT)

1. Introduction

Biometric identification uses identifiers unique to individuals; because of that, it edges out token‐based and knowledge‐based identification systems on safety and reliability. Individuals can be described using biometric identifiers provided the identifiers are sufficiently different across a population. Usable biometric identifiers should be easy to acquire, and the acquired measurement should be in a form conducive to extracting descriptive features. Acquisition should preferably not be intrusive to an individual, and the identifiers should be extractable from the whole population using the identification system. Fingerprints are popular biometric identifiers for identifying and authenticating individuals. Of all identifiers, fingerprints perform competitively on the factors used to assess the suitability of any trait [1].

The external skin of the palm side of the finger consists of a series of ridges and furrows, whose pattern determines the fingerprint's uniqueness. Local ridge characteristics that occur at ridge bifurcations and endings contribute to this uniqueness. Old age, disease or manual labour can degrade the fingerprint's uniqueness if the ridges are worn.

The external finger skin is the interface between an individual and a fingerprint recognition system. The performance of such systems depends on the condition of the external finger skin; it suffers if the skin is scarred, wet or worn out, or if a fake fingerprint is used [2]. Furthermore, standard 2‐D readers capture the fingerprint through a glass surface, or on a fingerprint paper card in the case of ink‐based offline methods. To obtain the 2‐D fingerprint image, the subject has to press their finger against a surface. The performance of a fingerprint recognition system using such contact‐based acquisition is further degraded by the pressure exerted by the finger when making contact with the surface [3]. The pressure is usually non‐uniform and results in captured 2‐D images with non‐linear distortions. Lastly, 2‐D imaging and signal processing of the surface topology of a finger do not prevent such systems from being bypassed by a fake fingerprint, which has its third dimension in depth [4].

The internal structure of a finger, the papillary layer, can be used to represent a fingerprint and so alleviate the problems associated with the external skin's ridges and valleys. The papillary layer is in fact the source of the fingerprint structure: it is the blueprint of the visible fingerprint undulations. During development, the basal layer of the epidermis grows faster than the two layers beneath and on top of it, the dermis and the upper epidermis [5]. Pressure causes the basal layer to deform into a pattern of folds that stays with an individual for life. That pattern of folds forms an internal fingerprint; it is protected because it lies inside the skin and hence cannot be destroyed by superficial skin cuts. A fingerprint recognition system that uses such a fingerprint cannot be fooled by fake fingerprints, as fakes sit on the surface of the finger skin whereas the fingerprint at the papillary layer lies beneath the epidermis. Manapuram et al. [6] proposed using optical coherence tomography (OCT) to image the three‐dimensional structure of a finger to a depth that reaches the papillary layer. The internal fingerprint embedded between the upper epidermis and the papillary layer is then used for identification instead of the surface fingerprint. Additionally, since OCT is contact‐less, this fingerprint does not suffer the distortions caused by the finger making contact with a scanner.

A typical fingerprint recognition system is made up of sensing and acquisition, image enhancement, feature extraction, matching and decision‐making components. Since the internal fingerprint is the blueprint of the external skin ridges and folds, the same recognition pipeline applies. This chapter focuses on segmentation of the internal fingerprint, the papillary layer, from swept source optical coherence tomography (SS‐OCT) 3‐D images. Subsurface tissue has been imaged for biometrics using time domain OCT before [4, 6]; within the Fourier domain, swept source OCT is preferred over spectral domain OCT as it better minimizes dispersion and speckle effects [7, 8]. In Ref. [9], spectral domain OCT was used to extract intricate biometric details such as the distribution of sweat pores and the pattern of the capillary bed, while Zam et al. [10] used correlation mapping OCT.

Refs. [8] and [9] averaged XY cross‐sectional images over a pre‐determined depth range to project the 3‐D image onto a 2‐D image and segment the internal fingerprint. In Ref. [10], the fingertip curvature is removed by detecting points at the boundary between air and tissue; the points are then aligned to a straight line and concatenated to form a 2‐D fingerprint. Akbari and Sadr [11] used vertical gradient thresholding to segment the external fingerprint from an artificial dummy covering it, which amounted to an implicit segmentation of the internal fingerprint. Column accumulation functions are used to segment the internal fingerprint in Ref. [12]; in effect, they sample the three‐dimensional contour of the papillary layer.

This chapter presents a pipeline to segment the internal fingerprint. The method preserves the curvature of the papillary layer. First, the cross‐sectional images of the 3‐D OCT image are filtered to reduce the effects of speckle noise. Then, the multilevel Otsu thresholding method [13] is used to detect the outermost skin layer in each filtered slice. The stratum corneum, the outermost skin layer, is used to estimate the extent of the epidermis. The epidermis forms the target data of the ensuing novelty detection, which is applied to the image slices; the boundary with the outlier objects is the contour of the papillary layer, the internal fingerprint. The segmented internal fingerprint retains the 3‐D profile features that the 2‐D fingerprint pattern loses [14]. OCT also avoids the problem of typical 2‐D acquisition, where variation in the pressure exerted to acquire 2‐D fingerprints causes deviations in the features extracted from successive acquisitions of the same finger.


2. Materials and methods

A swept source OCT system (OCS1300SS, Thorlabs, USA) was used to capture the internal finger structure. The swept laser source had a central wavelength of 1325 nm, a spectral bandwidth of 100 nm, an average output power of 10.0 mW and an axial scan rate of 16 kHz. The OCT system had a maximum imaging depth of 3 mm. For a fixed position of the scanning beam, an A‐scan contains the scattering properties of the sample as a function of depth. A collection of A‐scans forms a cross‐sectional image (B‐scan), and a collection of B‐scans forms a volumetric image, as shown in Figure 1.

Figure 1.

3‐D OCT scan made up of 512 cross‐sectional images of a human fingertip.

2.1. 3‐D internal fingerprint segmentation

We use novelty detection machine learning techniques to separate the papillary junction from the upper epidermis. As can be seen in Figure 1, human finger skin shows distinct regions when imaged with OCT. The stratum corneum is clearly visible as a high‐intensity region owing to its tissue scattering properties. The rest of the epidermis forms a different class altogether (hereafter referred to as the upper epidermis), with its low, uniform‐intensity pixels sandwiched between the stratum corneum and the papillary junction. The papillary junction is a distinct high‐intensity region with the internal fingerprint undulations clearly visible. We use the stratum corneum (Section 2.2) only to locate the extent of the low‐intensity epidermis region, the upper epidermis. This upper epidermis forms the one class in novelty detection that can be characterized and learnt, the target class. Our hypothesis is that the papillary junction pixels, being different in texture, will be classified as outliers by a novelty detection routine trained on this low‐intensity upper epidermis region. The routines are trained using only such low‐intensity pixel values but are expected to classify all pixels lying deeper than the region used for training. Figure 2 shows such a low‐intensity upper epidermis region used as the training set for the novelty detection routines on a single cross‐sectional image. This band of training pixels is located using the stratum corneum edge. Pixels below this band (annotated in green) are passed to the novelty detection routines to be classified as target or outlier.

Figure 2.

A band located using the stratum corneum edge and used as a training set for novelty detection routines.

A 3‐D OCT scan is processed on a cross‐sectional basis to segment the internal fingerprint. First, the cross sections are filtered to reduce speckle noise. Then, a threshold is applied to detect the stratum corneum. The stratum corneum is used to locate the upper epidermis, which in turn is used to train novelty detection routines to find the undulations of the papillary junction. The undulations of the papillary junction form the internal fingerprint; a 3‐D fingerprint is obtained by concatenating the contours from the 2‐D cross‐sectional images. The workflow is illustrated in Figure 3. Below, we expand on the speckle noise reduction, stratum corneum edge detection and novelty detection methods that make up the papillary contour segmentation pipeline.

Figure 3.

The internal fingerprint segmentation workflow.

2.2. Stratum corneum detection

As a pre-processing step, cross-sectional images of the 3-D OCT scan are filtered using the anisotropic diffusion filter of Perona and Malik, formulated as partial differential equations (PDEs), owing to its soft-edge preservation characteristics [15]; the edges of the papillary junction undulations are soft. The Perona and Malik filter improves on the heat equation (equivalent to convolving the signal with Gaussians at each scale), with the signal as the initial datum u_0, by reformulating it as a non-linear equation of the porous medium type:

\frac{\partial u}{\partial t} = \mathrm{div}\left( g(|\nabla u|)\, \nabla u \right), \qquad u(0) = u_0   (E1)

In this equation, g is a smooth non‐increasing function. Its regularized version was also applied, as it is stable in the presence of speckle noise [16]. The stability introduced by Catté et al. [16] comes from replacing the gradient |∇u| with its smoothed estimate |∇(G_σ ∗ u)| in the Perona and Malik model (E1), where G_σ is a Gaussian function of standard deviation σ.

The two filters were compared to the total variation denoising technique, which poses denoising as an infinite‐dimensional minimization problem [17]. Speckle noise results from the coherent nature of laser radiation and the interferometric detection of the scattered light [11]. To minimize speckle noise, Ref. [14] applied a rotating kernel transformation, while Ref. [12] employed parallel processing to speed up median filtering.
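To make the pre-processing concrete, below is a minimal Python sketch of the regularized Perona and Malik diffusion of Eq. (E1) applied to one B-scan. The iteration count, time step, conductance constant kappa and smoothing scale sigma are illustrative assumptions rather than the chapter's settings, and the conductance g(s) = 1/(1 + s²/κ²) is one of the standard Perona and Malik choices.

import numpy as np
from scipy.ndimage import gaussian_filter

def regularized_perona_malik(img, n_iter=20, dt=0.15, kappa=30.0, sigma=1.0):
    # Regularized anisotropic diffusion (Catte et al. [16]): the conductance
    # g is evaluated on the Gaussian-smoothed gradient |grad(G_sigma * u)|.
    u = img.astype(np.float64)
    for _ in range(n_iter):
        gy, gx = np.gradient(gaussian_filter(u, sigma))
        g = 1.0 / (1.0 + (gx ** 2 + gy ** 2) / kappa ** 2)  # smooth, non-increasing
        uy, ux = np.gradient(u)
        # Explicit Euler step of Eq. (E1): du/dt = div(g * grad u)
        u += dt * (np.gradient(g * uy, axis=0) + np.gradient(g * ux, axis=1))
    return u

Conceptually, dropping the Gaussian smoothing recovers the original Perona and Malik filter; smoothing the gradient before evaluating the conductance is what stabilizes the filter against speckle.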

After the pre‐processing step, we segment the cross‐sectional images using the two‐threshold Otsu method to detect the stratum corneum edge. The filtered cross‐sectional image forms the input to this non‐parametric method of threshold selection. The filtered image pixels can be represented in L gray levels [1, 2, ..., L], where we assume there are two thresholds, 1 ≤ k_1 < k_2 < L, separating the image into three classes. Otsu [13] introduced the discriminant criterion measures used in discriminant analysis to evaluate how strong a threshold is. In this case, segmentation is seen as an optimization problem: searching for the two thresholds, k_1' and k_2', that maximize the discriminant criterion measure σ_B², a function of the two variables k_1 and k_2:

\sigma_B^2(k_1', k_2') = \max_{1 \le k_1 < k_2 < L} \sigma_B^2(k_1, k_2)   (E2)

Thresholds are selected in a sequential search using cumulative probabilities of class occurrence and class mean levels [13]. The higher of the two thresholds extracts the stratum corneum. The stratum corneum edge is represented by the topmost pixels of its perimeter.
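A sketch of this step, under the assumption that scikit-image's threshold_multiotsu is an acceptable stand-in for the two-threshold Otsu search; taking the topmost above-threshold pixel in each column as the edge follows the description above.

import numpy as np
from skimage.filters import threshold_multiotsu

def stratum_corneum_edge(denoised_im):
    # Two thresholds give three classes; the class above the higher
    # threshold is the bright stratum corneum.
    k1, k2 = threshold_multiotsu(denoised_im, classes=3)
    mask = denoised_im > k2
    # Topmost masked pixel per column approximates the exterior edge.
    edge = np.argmax(mask, axis=0)
    edge[~mask.any(axis=0)] = -1  # flag columns with no detection
    return edge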

2.3. Novelty detection to segment papillary layer

Novelty detection is used as the segmentation method because the 3‐D internal fingerprint is defined as the boundary between the upper epidermis pixels and the papillary layer pixels. Novelty detection applies to cases where the class of objects that are not of interest, the outliers, cannot be sufficiently modelled [18]. Only the upper epidermis pixels, the target class shown as a green band in Figure 2, can be sufficiently modelled in this case. Novelty detection routines are trained to recognize the upper epidermal layer, which is extracted using the detected stratum corneum edge. According to Ref. [8], the epidermis extends to an average of 0.34 mm in the palm finger region. Epidermal pixel values, from a depth of 10 to 20 pixels below the stratum corneum edge, are used as the objects of the novelty detection routines. An example training dataset extracted using this procedure is shown in Figure 2 as the band of pixels annotated in green. All pixels deeper than the green band constitute the test set, the input to a trained novelty detection routine. The aim is to train the novelty detection techniques to recognize only the upper epidermis layer; the papillary layer falls in the rejection region, with the papillary contour at the boundary between the target and rejection (outlier) classes.
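For illustration, the training band could be gathered as follows; the 10 to 20 pixel offsets come from the text, the single intensity feature per pixel matches the description in Section 3, and the array layout (rows indexing depth) is an assumption.

import numpy as np

def extract_training_band(denoised_im, edge, top=10, bottom=20):
    # Target-class objects: intensities 10-20 pixels below the stratum
    # corneum edge in every column (the green band of Figure 2).
    samples = []
    for col, row in enumerate(edge):
        if row < 0:
            continue  # skip columns with no detected edge
        samples.append(denoised_im[row + top:row + bottom + 1, col])
    return np.concatenate(samples).reshape(-1, 1)  # one feature: intensity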

Gaussian, mixture of Gaussians (MoG), k‐means and k‐nearest neighbour (kNN) routines are used. When the Gaussian routine is used, the target data are modelled as a single Gaussian distribution. To create a more robust description of the target objects, MoG uses n Gaussians [19], where n is determined empirically. The k‐means novelty detection technique describes the target dataset using k clusters, whose centres are placed using the standard k‐means clustering procedure [19]. The kNN routine labels test objects by comparing them to target objects using the Euclidean distance [20].

Training a novelty detection routine involves setting the percentage error the routine may make, that is, the fraction of target objects that may be misclassified as outliers on the training set. To train the Gaussian routine, the density estimate is avoided to minimize numerical instabilities, and just the Mahalanobis distance is used:

f(x) = (x - \mu)^T \Sigma^{-1} (x - \mu)   (E3)

f(x) is the distance of a test object x from the mean of the target data; the novelty detection routine is then defined as:

h(x) = \begin{cases} \text{target} & \text{if } f(x) \le \theta \\ \text{outlier} & \text{if } f(x) > \theta \end{cases}   (E4)

The mean μ and covariance matrix Σ are estimated on the training set, and the threshold θ is set from the percentage error the routine is allowed to make [21]. An object is labelled a target if its distance from the mean is below the threshold, and an outlier otherwise. Training the MoG and k‐means routines involves determining the number of Gaussians and clusters, respectively, that optimally represent the training data. For the kNN routine, k is optimized on the training dataset using the leave‐one‐out cross‐validation error.
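The chapter's experiments use Tax's data description toolbox [21]; as a self-contained illustration of Eqs. (E3) and (E4), here is a minimal Gaussian novelty detector in Python. Choosing theta as the (1 - error) quantile of the training-set Mahalanobis distances is our assumption for how the allowed percentage error sets the threshold.

import numpy as np

class GaussianNoveltyDetector:
    # Minimal one-class Gaussian model: Mahalanobis distance (E3) with a
    # threshold theta set by the allowed rejection rate on the target set (E4).
    def fit(self, X, fraction_reject=0.05):
        self.mu = X.mean(axis=0)
        self.cov_inv = np.linalg.pinv(np.atleast_2d(np.cov(X, rowvar=False)))
        d = self._mahalanobis(X)
        self.theta = np.quantile(d, 1.0 - fraction_reject)
        return self

    def _mahalanobis(self, X):
        diff = X - self.mu
        return np.einsum('ij,jk,ik->i', diff, self.cov_inv, diff)  # Eq. (E3)

    def predict(self, X):
        # 1 = target (upper epidermis), 0 = outlier (papillary layer): Eq. (E4)
        return (self._mahalanobis(X) <= self.theta).astype(int)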

The same percentage error is set during the training of the different routines so that their performance can be compared on a test set. The function derived using the set percentage error is used to label the pixels extracted below the section used for training as target or outlier on each cross section of the 3‐D image. For each cross‐sectional image of a 3‐D OCT scan, the pixels below the green band in Figure 2 are the objects to be classified. Pixels classified as targets are labelled one and those classified as outliers are labelled zero. The labels are translated back to the pixel coordinates, and the papillary contour is the edge between the zero‐ and one‐labelled regions of the image: the boundary between the target (upper epidermis) and rejection (papillary layer) classes. The 3‐D papillary contour is obtained by concatenating the detected 2‐D contours. The procedure is shown in Algorithm 1.
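Before the full procedure is listed, one plausible way to turn the per-pixel labels back into a papillary contour is to take the first outlier-labelled row in each column; this per-column reading of the boundary is our assumption.

import numpy as np

def papillary_contour(label_im):
    # label_im: 1 = target (upper epidermis), 0 = outlier (papillary layer).
    # The contour is the boundary between the regions: per column, the
    # first row labelled as an outlier.
    outlier = (label_im == 0)
    contour = np.argmax(outlier, axis=0)
    contour[~outlier.any(axis=0)] = -1  # no boundary found in this column
    return contour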

Algorithm 1. Algorithm to segment an internal fingerprint

input = 512 cross sections
output = []
for all i in length(input) do
    this_cross_sec = input(i)
    denoised_im = filter(this_cross_sec)
    c_s = stratum-corneum-detect(denoised_im)
    train_data = denoised_im(c_s + 10 : c_s + 20)
    trained_routine = novelty-detection-routine(train_data, fraction_reject)
    test_im = denoised_im(c_s + 21 : length(denoised_im))
    test_im_labels = map-labels(test_im * trained_routine)
    papillary_contour = perimeter-rejection-class(test_im_labels)
    output = concatenate(output, papillary_contour)
end for
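Under the assumptions of the sketches above, Algorithm 1 could be realized per slice as follows; the rejection fraction and the fixed 21-pixel test offset mirror the listing, and everything else is illustrative.

import numpy as np

def segment_internal_fingerprint(volume, fraction_reject=0.05):
    # volume: iterable of cross-sectional images (B-scans) of a 3-D OCT scan.
    contours = []
    for b_scan in volume:
        denoised = regularized_perona_malik(b_scan)
        edge = stratum_corneum_edge(denoised)
        detector = GaussianNoveltyDetector().fit(
            extract_training_band(denoised, edge), fraction_reject)
        labels = np.ones(denoised.shape, dtype=int)  # default: target
        for col, row in enumerate(edge):
            if row < 0:
                continue
            test = denoised[row + 21:, col].reshape(-1, 1)
            labels[row + 21:, col] = detector.predict(test)
        contours.append(papillary_contour(labels))
    return np.stack(contours)  # concatenated 2-D contours: the 3-D fingerprint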

2.4. Evaluation of the segmented papillary contour

Papillary contours were outlined manually in the slices composing the 3‐D image. The manual outlines represented the gold standard used to assess the performance of the different routines. The agreement between the gold standard papillary contours and those segmented automatically was assessed using the Hausdorff distance [22] and the modified Williams index (MWI) [23]. The MWI measures whether the contour produced by routine 0 agrees with the set of n manual contours as much as a manual contour agrees with another contour from the manual set; it is defined as:

\mathrm{MWI} = \frac{\frac{1}{n} \sum_{j=1}^{n} \frac{1}{D_{0j}}}{\frac{2}{n(n-1)} \sum_{j=1}^{n-1} \sum_{j'=j+1}^{n} \frac{1}{D_{jj'}}}   (E5)

where

D_{jj'} = \frac{1}{N} \sum_{i=1}^{N} H(x_{ij}, x_{ij'})   (E6)

D_{jj'} denotes the agreement between two observers j and j', x_{ij} denotes the contour of image i outlined by observer j, and H(x, y) is the Hausdorff distance between contours x and y.
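A sketch of the evaluation in Python, using SciPy's directed Hausdorff distance to form the symmetric H of Eq. (E6), and Eq. (E5) for the index; representing each contour as a (points × 2) coordinate array is an assumption.

import numpy as np
from itertools import combinations
from scipy.spatial.distance import directed_hausdorff

def hausdorff(x, y):
    # Symmetric Hausdorff distance between two contours (point sets).
    return max(directed_hausdorff(x, y)[0], directed_hausdorff(y, x)[0])

def modified_williams_index(auto, manual):
    # auto: N automatic contours; manual: n observers' lists of N contours.
    N = len(auto)
    D_0j = [np.mean([hausdorff(auto[i], obs[i]) for i in range(N)])
            for obs in manual]                    # Eq. (E6), routine 0 vs observer j
    D_jj = [np.mean([hausdorff(a[i], b[i]) for i in range(N)])
            for a, b in combinations(manual, 2)]  # Eq. (E6), observer pairs
    # Eq. (E5); the mean over pairs equals the 2/(n(n-1)) weighted sum.
    return np.mean([1.0 / d for d in D_0j]) / np.mean([1.0 / d for d in D_jj])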


3. Results

The two thresholds of the Otsu method were determined automatically for each cross section. The input to the two‐threshold Otsu segmentation method was the denoised cross‐sectional image; the output was the exterior edge of the stratum corneum. Each denoising technique pronounced the pixel intensities of the stratum corneum layer differently, yet the Otsu method was able to detect the change in gradient between the stratum corneum layer and air in each case. Figure 4 shows the detected edge for the three denoising routines that were used.

Figure 4.

A raw cross‐sectional image (a) together with the stratum corneum edge overlaid on its filtered version using anisotropic diffusion (b), its regularized version (c) and the total variation denoising technique (d).

The objects used to train the novelty detection routines had one feature: the pixel intensity value of the uint8 cross‐sectional image. The pixels were extracted starting at a depth of 10 pixels below the stratum corneum contour detected using the two‐threshold Otsu method. A set of 50 YZ cross‐sectional scans was used to study the agreement between papillary contours segmented by two researchers and by the novelty detection routines. The comparisons were made using the Hausdorff distance. T11 and T12 represent the first observer outlining the papillary contours for the first and second time, and T21 and T22 similarly represent the second observer. Gau, MoG, KM and kNN represent the Gaussian, mixture of Gaussians, k‐means and k‐nearest neighbour routines, respectively. The agreement between contours outlined by an observer and between observers is shown in Table 1.

             T11 and T12        T21 and T22        T11 and T21        T12 and T22
Mean ± Std   4.5424 ± 0.4363    4.4313 ± 0.3823    4.5054 ± 0.4062    4.4993 ± 0.4240

Table 1.

Comparison of manual papillary segmentation between two observers.

The Hausdorff distance over the 50 scans was used to determine both the number of Gaussians and the number of clusters to use. A plot of the Hausdorff distance as a function of the number of Gaussians or clusters is shown in Figure 5. Three Gaussians were used with the mixture of Gaussians, and three clusters were used with the k‐means routine. For the kNN routine, k was optimized using leave‐one‐out cross‐validation for each cross‐sectional image.

Figure 5.

Hausdorff distance as a function of the number of Gaussians for MoG and clusters for k‐means.

The Hausdorff distance was used to obtain the MWI for comparing computer‐generated contours to hand‐drawn ones. The MWI is the ratio between the average computer‐observer agreement and the average observer‐observer agreement. Contours outlined by two volunteers constituted four manual observations per object. Table 2 shows the agreement between the volunteers and novelty detection routines in segmenting anisotropic diffusion filtered scans and the MWI of the routines, together with the 95% confidence interval estimate for the MWI, assuming a standard normal distribution. Table 3 shows the evaluation of the novelty detection segmentation techniques with the regularized anisotropic diffusion filtered images as the input, while Table 4 shows that of total variation filtered images as input.

       T11 and AL         T21 and AL         MWI      Confidence interval
Gau    4.7554 ± 0.4355    4.7282 ± 0.3832    0.9450   (0.9416, 0.9485)
MoG    4.7262 ± 0.3823    4.7111 ± 0.4111    0.9498   (0.9465, 0.9530)
KM     4.7627 ± 0.4917    4.7920 ± 0.4914    0.9386   (0.9353, 0.9420)
kNN    4.7664 ± 0.4053    4.7118 ± 0.4271    0.9455   (0.9421, 0.9489)

Table 2.

Comparison between manual papillary segmentation and segmentation using novelty detection techniques on anisotropic filtered images.

       T11 and AL         T21 and AL         MWI      Confidence interval
Gau    4.7113 ± 0.4091    4.7302 ± 0.4106    0.9504   (0.9474, 0.9535)
MoG    4.6806 ± 0.3934    4.7088 ± 0.3598    0.9548   (0.9518, 0.9578)
KM     4.7350 ± 0.4038    4.7435 ± 0.4028    0.9473   (0.9442, 0.9505)
kNN    4.7295 ± 0.3667    4.7631 ± 0.3678    0.9474   (0.9445, 0.9502)

Table 3.

Comparison between manual papillary segmentation and segmentation using novelty detection techniques on regularized anisotropic filtered images.

       T11 and AL         T21 and AL         MWI      Confidence interval
Gau    4.9777 ± 0.4550    5.0235 ± 0.4336    0.8967   (0.8937, 0.8997)
MoG    5.0263 ± 0.3451    5.0317 ± 0.3532    0.8902   (0.8874, 0.8931)
KM     5.1393 ± 0.4133    5.1580 ± 0.4228    0.8703   (0.8674, 0.8733)
kNN    4.9324 ± 0.3660    4.8878 ± 0.3632    0.9138   (0.9109, 0.9167)

Table 4.

Comparison between manual papillary segmentation and segmentation using novelty detection techniques on total variation filtered images.

Mixture of Gaussians had the best overall performance in segmenting the papillary contour across the three denoising techniques. For each denoising technique, the papillary contour segmented using the MoG has been overlaid and compared with a manual papillary outline for that cross section in Figure 6. Novelty detection techniques generally struggled to segment the papillary contour from scans pre‐processed using the total variation denoising technique. The 3‐D papillary contour segmented using the worst performing novelty detection routine, k‐means, on total variation denoised scans is shown in Figure 7, side by side with the same scan segmented using the MoG routine when pre‐processed using the regularized anisotropic diffusion filter, the best performing workflow.

Figure 6.

A raw cross‐sectional image with papillary contour manually outlined (a) together with the overlaid papillary contour segmented using the MoG routine when that cross section is pre‐processed using anisotropic diffusion (b), its regularized version (c) and the total variation denoising technique (d).

Figure 7.

A 3‐D papillary contour segmented using the k‐means routine on a scan pre‐processed using the total variation technique (a), side by side with the same scan pre‐processed with regularized anisotropic diffusion and segmented using MoG (b).


4. Discussion

The aim of our chapter was to segment the internal fingerprint from swept source optical coherence tomography 3‐D scans. This forms one component of an internal fingerprint recognition system that uses an OCT scanner as a contact‐less acquisition device. The system is composed of sensing, 3‐D internal fingerprint segmentation, feature extraction, matching and decision‐making components. We use an SS‐OCT system as the sensing component. The feature extraction and subsequent components have not been implemented.

No evaluation of internal fingerprint segmentation was found in the literature for direct comparison with our results. An edge‐based segmentation technique was applied to detect an artificial layer glued to a fingertip in Ref. [11]. They used a vertical gradient edge detection technique, where the dummy fingerprint was glued to the fingertip and the method marked out the border between the real skin and the artificial layer. However, the segmented border was only evaluated subjectively.

Both Refs. [8] and [9] averaged XY cross‐sectional images of a 3‐D OCT scan to obtain the 2‐D internal fingerprint image. Liu and Buma [9] removed the overall fingertip curvature without affecting the fine‐scale undulations of the friction ridges by fitting a third‐order polynomial to the stratum corneum edge. We deduce that their resultant 2‐D internal fingerprint was more accurate than that of Ref. [8], because the rectangle selected for averaging cut more evenly across the papillary region of the straightened fingertip than of a fingertip retaining its original curvature. Neither performed a quantitative evaluation of their XY averaging segmentation technique.

Sousedik et al. [12] segment an OCT scan of a fingertip into two 3‐D fingerprint layers: the surface fingerprint and an internal one embedded between the upper epidermis and the papillary layers. However, they do not produce a continuous 3‐D internal fingerprint layer but rather a scattered point cloud. This is the fundamental difference between their method and the one presented in this chapter, which produces a continuous 3‐D surface of the internal fingerprint. They do not quantitatively evaluate the internal fingerprint segmentation but rather the ability to detect the two fingerprint layers; they achieved 90% detection in scans without anomalies related to subject finger motion.

We quantitatively evaluated the performance of the proposed internal fingerprint segmentation method [24]. Beyond the evaluation of the proposed segmentation procedure, using the internal fingerprint has benefits over the surface fingerprint. It is more difficult to spoof automated fingerprint identification systems that use an internal fingerprint. Substrates used to make fake fingerprints (gelatin, silicone, waxes of different concentrations) have varying scattering properties. Even those with scattering properties close to those of human fingers (polydimethylsiloxane mixed with titanium oxide [9]) have a problem in that OCT will reveal a stratum corneum layer of abnormal thickness. The proposed segmentation method uses the fact that the epidermis extends to an average of 0.34 mm in the palm finger region [9]; it can be adapted to raise an alert if an internal fingerprint is not detected within the expected range. Furthermore, using the internal fingerprint minimizes the impact of fingerprint surface cuts, though deep cuts will still present a problem.

An advantage of extracting features from a 3‐D internal fingerprint is that the 3‐D profile carries information that the 2‐D (flat) fingerprint pattern loses and that can also be used to uniquely identify an individual [14]. An automated fingerprint identification system that uses OCT as a sensor retains all of the 2‐D morphological features along with the 3‐D profile, which should increase discrimination performance. Performance comparisons will be made by extending 2‐D minutiae features into 3‐D space to include height and angle information and by using finger surface codes as 3‐D features [25]. Figure 8 shows the internal and external fingerprints of the same finger; the external one is a conventional 2‐D optical scan cropped to correspond to the OCT‐scanned internal one. We propose that the eventual recognition performance of the system being developed will improve over conventional systems, as physical deterioration of an external fingerprint will be countered by using the internal fingerprint.

Figure 8.

The image on the left is a conventional 2‐D external fingerprint that has been cropped to an area corresponding to that scanned by the OCT (right image) of the same finger.

Sousedik et al. [12] used a support vector machine to detect the layer corresponding to a fake fingerprint glued to a fingertip, using the overall energy of the layers as a feature vector. They did not use machine learning for matching different fingertips but rather for detecting artefacts glued to fingertips. Kumar and Kwong [25] implemented matching of 3‐D surface fingerprints obtained using a single camera, using both the finger surface code and 3‐D minutiae as features. They obtained better matching performance when combining 3‐D features with 2‐D features than when using 2‐D features alone. This suggests that 3‐D fingerprint features will be effective in the matching and decision‐making components of the internal fingerprint‐based identification system.

With human observers used as the gold standard, the mixture of Gaussians performed best in segmenting the 3‐D papillary contour, the internal fingerprint, according to the Hausdorff distance. On the selected pre‐processing technique, the regularized anisotropic diffusion filter, its average Hausdorff distance was the lowest. Its standard deviation was also the lowest, which suggests that the mixture of Gaussians stably produces contours consistent with those of the human volunteers. The average Hausdorff distance of the method is higher than that among the human volunteers, as the method does not outperform humans. To make the compared novelty detection techniques comparable, they were trained with the same allowed percentage error on the target data.

Other novelty detection routines can be used in the proposed segmentation workflow. There is little difference in the segmentation performance between the routines that were tested. We have not implemented 3‐D internal fingerprint recognition routines to assess the impact of the difference in segmentation performance on the eventual recognition performance of the entire system. The impact of segmentation performance of different novelty detection routines on the overall system recognition performance might not be significant, in which case the novelty detection routines could be ranked by other desirable aspects, for instance speed.

Novelty detection techniques performed poorest, according to the Hausdorff distance, when segmenting OCT scans pre‐processed with the total variation denoising filter. On visual inspection, such filtered cross‐sectional images had high‐ and low‐intensity specks across the epidermal and papillary layers, and these specks negatively affected the performance of the novelty detection routines. The kNN routine, which computes the distance to the optimized k nearest neighbours, performed best on these images. Scans pre‐processed with the anisotropic diffusion filter also had such specks, albeit at a lower level than observed with the total variation filter; this may explain the superior performance obtained with its regularized version. A quantitative performance evaluation of the different filtering techniques was not explicitly carried out; doing so would allow the OCT speckle noise reduction techniques to be ranked by performance.

When the upper limit of the MWI confidence interval is greater than one, an automatically detected contour agrees with the set of manual contours at least as well as the manual segmentations agree with one another. The upper limit obtained with the best segmentation pipeline, using a regularized anisotropic diffusion filter and a mixture of Gaussians routine, was not greater than one. To improve novelty detection, texture features could be explored; the epidermis and the dermis layers have different textures. Moreover, neighbourhood information was not incorporated in novelty detection, as individual pixel values were used as features. A nine‐pixel neighbourhood could be used as a feature set instead of a single pixel, which might better represent spatial information. Even though the computer‐observer agreement was not at least as good as the observer‐observer agreement, the confidence interval was narrow, suggesting that the variability was not erratic and that the method is stable.

Unrolling algorithms can be used to convert the segmented 3‐D internal fingerprints to their 2‐D equivalent fingerprints. The 2‐D equivalent fingerprints will not have the distortions caused by pressure exerted when capturing conventional 2‐D fingerprints. Alternatively, 3‐D profile features can be extracted from the segmented 3‐D internal fingerprints. Such features can be used for matching instead of features extracted from traditional contact‐based scanners that often suffer non‐linear distortions.


5. Conclusion

The proposed workflow automatically segments the contours of the papillary layer, the internal fingerprint. The 3‐D OCT scan is processed on a per cross‐sectional image basis. First, the slices are filtered by a regularized anisotropic diffusion filter to reduce the effects of speckle noise. Then, the Otsu method, with multilevel image thresholds determined automatically for each slice, is used to detect the stratum corneum. The stratum corneum is used as a marker to extract a set of epidermal layer pixels that serve as training data in the ensuing novelty detection. The mixture of Gaussians mapping is used to label the pixels lying deeper than those used for training on each cross section. The contour of the papillary layer is the internal fingerprint, and the 3‐D internal fingerprint image is obtained by concatenating the 2‐D papillary contours.

Laser radiation loses focus towards the extremes of a fingertip because of the fingertip's curvature. As a result, skin penetration weakens, and the weakly reflected light translates into low pixel intensity values. This affects stratum corneum detection, and at times the Otsu segmentation method returns a non‐continuous stratum corneum edge. Stratum corneum edge detection will be improved using regression methods. The left topmost and right topmost extreme pixels of the returned connected components constituting the stratum corneum can be stitched together. Sweeping an image from left to right, the right topmost extreme point of one component will be connected to the closest (in Euclidean distance) left topmost extreme pixel of the next using regression methods. A linear least squares fit will not always produce a single connected object when fed with two disjoint edges in an image; hence, non‐linear regression routines will be investigated.
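As a rough illustration of the proposed stitching, a single polynomial fitted through two disjoint edge components could bridge the gap between them; the cubic degree and the (column, row) point representation are assumptions, and the chapter leaves the choice of non-linear regressor open.

import numpy as np

def stitch_edge_segments(left_seg, right_seg, degree=3):
    # left_seg, right_seg: (column, row) points of two disjoint stratum
    # corneum components. Fit one polynomial through both and evaluate it
    # across the gap to join them into a single connected edge.
    pts = np.vstack([left_seg, right_seg])
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], degree)
    gap_cols = np.arange(left_seg[:, 0].max() + 1, right_seg[:, 0].min())
    gap_rows = np.round(np.polyval(coeffs, gap_cols)).astype(int)
    return np.column_stack([gap_cols, gap_rows])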

The epidermis extends to an average of 0.34 mm in the finger region [8]. The 10‐pixel depth at which extraction of the epidermis layer began was determined empirically. The aim was to exclude the stratum corneum when extracting the pixels that form the training set of the novelty detection routines. The routines were then given pixel values deeper than the extracted training band to classify, with the undulations of the papillary contour expected to form the border between the target and rejection regions.

An automated segmentation workflow has been established for the 3‐D internal fingerprint. The segmented internal fingerprint does not suffer problems associated with the external fingerprint. The method preserves the 3‐D profile features that are lost with the 2‐D fingerprints’ pattern. It has the potential to segment the papillary contours as well as a manual segmentation, with post‐processing improvements.

References

  1. Jain, A., Ross, A., Prabhakar, S.: An introduction to biometric recognition. IEEE Transactions on Circuits and Systems for Video Technology, 2004; 14(1): 4–20
  2. Shiratsuki, A., Sano, E., Shikai, M., et al.: Novel optical fingerprint sensor utilizing optical characteristics of skin tissue under fingerprints. Biomedical Optics, International Society for Optics and Photonics, 2005; 80–87
  3. Maltoni, D., Cappelli, R.: Advances in fingerprint modeling. Image and Vision Computing, 2009; 27(3): 258–268
  4. Cheng, Y., Larin, K. V.: In vivo two‐ and three‐dimensional imaging of artificial and real fingerprints with optical coherence tomography. IEEE Photonics Technology Letters, 2007; 19(20): 1634–1636
  5. Kücken, M., Newell, A. C.: A model for fingerprint formation. EPL (Europhysics Letters), 2004; 68(1): 141
  6. Manapuram, R. K., Ghosn, M., Larin, K. V.: Identification of artificial fingerprints using optical coherence tomography technique. Asian Journal of Physics, 2006; 15: 15–27
  7. Dubey, S. K., Mehta, D. S., Anand, A., et al.: Simultaneous topography and tomography of latent fingerprints using full‐field swept‐source optical coherence tomography. Journal of Optics A: Pure and Applied Optics, 2008; 10(1): 015307
  8. Bossen, A., Lehmann, R., Meier, C.: Internal fingerprint identification with optical coherence tomography. IEEE Photonics Technology Letters, 2010; 22(7): 507–509
  9. Liu, M., Buma, T.: Biometric mapping of fingertip eccrine glands with optical coherence tomography. IEEE Photonics Technology Letters, 2010; 22(22): 1677–1679
  10. Zam, A., Dsouza, R., Subhash, H. M., et al.: Feasibility of correlation mapping optical coherence tomography (cmOCT) for anti-spoof sub-surface fingerprinting. Journal of Biophotonics, 2013; 6(9): 663–667
  11. Akbari, N., Sadr, A.: Automation of fingerprint recognition using OCT fingerprint images. Journal of Signal and Information Processing, 2012; 3: 117
  12. Sousedik, C., Breithaupt, R., Busch, C.: Volumetric fingerprint data analysis using optical coherence tomography. In Proceedings of the Biometrics Special Interest Group (BIOSIG), Darmstadt, September 2013, pp. 1–6
  13. Otsu, N.: A threshold selection method from gray‐level histograms. Automatica, 1975; 11(285–296): 23–27
  14. Chang, S., Flueraru, C., Larin, K., et al.: Fingerprint spoof detection by NIR optical analysis. INTECH Open Access Publisher, Ottawa, 2011
  15. Perona, P., Malik, J.: Scale‐space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990; 12(7): 629–639
  16. Catté, F., Lions, P. L., Morel, J. M., et al.: Image selective smoothing and edge detection by nonlinear diffusion. SIAM Journal on Numerical Analysis, 1992; 29(1): 182–193
  17. Getreuer, P.: Rudin‐Osher‐Fatemi total variation denoising using split Bregman. Image Processing On Line, 2012; 10: 74–95
  18. Bishop, C. M.: Novelty detection and neural network validation. IEE Proceedings - Vision, Image and Signal Processing, 1994; 141(4): 217–222
  19. Bishop, C. M.: Neural networks for pattern recognition. Oxford University Press, Oxford, 1995
  20. Duda, R. O., Hart, P. E., Stork, D. G.: Pattern classification. John Wiley & Sons, 2012
  21. Tax, D. M. J.: DDtools, the Data Description Toolbox for MATLAB. Delft University of Technology, Delft, 2005
  22. Huttenlocher, D. P., Klanderman, G., Rucklidge, W. J.: Comparing images using the Hausdorff distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993; 15(9): 850–863
  23. Chalana, V., Kim, Y.: A methodology for evaluation of boundary detection algorithms on medical images. IEEE Transactions on Medical Imaging, 1997; 16(5): 642–652
  24. Khutlang, R., Nelwamondo, F. V.: Novelty detection‐based internal fingerprint segmentation in optical coherence tomography images. In 2014 Second International Symposium on Computing and Networking (CANDAR), Shizuoka, December 2014, pp. 556–559
  25. Kumar, A., Kwong, C.: Towards contactless, low‐cost and accurate 3D fingerprint identification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015; 37(3): 681–696
