Open access peer-reviewed chapter

An Efficient Block-Based Algorithm for Hair Removal in Dermoscopic Images

Written By

Ihab Zaqout

Submitted: 16 June 2018 Reviewed: 06 July 2018 Published: 12 February 2020

DOI: 10.5772/intechopen.80024

From the Edited Volume

Computer Methods and Programs in Biomedical Signal and Image Processing

Edited by Lulu Wang


Abstract

Hair occlusion in dermoscopy images affects the diagnostic operation on the skin lesion. Segmentation and classification of skin lesions are the two major steps of the diagnostic operation required by dermatologists. We propose a new algorithm for hair removal in dermoscopy images that includes two main stages: hair detection and inpainting. In hair detection, a morphological bottom-hat operation is implemented on the Y-channel image of the YIQ color space, followed by a binarization operation. In inpainting, the repaired Y-channel is partitioned into 256 non-overlapped blocks and, for each block, white pixels are replaced by locating the highest peak of a histogram function and applying a morphological close operation. Our proposed algorithm reports a true positive rate (sensitivity) of 97.36%, a false positive rate (fall-out) of 4.25%, and a true negative rate (specificity) of 95.75%. The diagnostic accuracy achieved is recorded at a high level of 95.78%.

Keywords

  • dermoscopy image
  • melanoma
  • hair detection
  • hair removal
  • inpainting
  • skin lesion

1. Introduction

Melanoma, also called malignant melanoma, is a type of cancer that develops from the pigment-containing cells known as melanocytes. Melanomas most commonly occur in the skin but may rarely arise in the mouth, intestines, or eye. Malignant melanoma is the most aggressive and dangerous skin cancer. It develops in the cells that give the skin its color (melanocytes) and has a strong tendency to spread to other parts of the body. Cure rates depend greatly on the stage at which the melanoma is discovered. If melanoma is recognized and treated early, it is almost always curable; if it is not, the cancer can advance and spread to other parts of the body, where it becomes difficult to treat and can be fatal. While it is not the most common of the skin cancers, it causes the most deaths.

The timeline of melanoma is summarized in Table 1, highlighting particularly significant discoveries and advances in the treatment of the disease. Malignant melanoma is the most serious form of skin cancer, and early detection and diagnosis prevent its advancement to later stages. Computational algorithms based on image processing techniques have been developed to help dermatologists in the early diagnosis of skin lesions, including the Menzies method, the seven-point checklist, the CASH (Color, Architecture, Symmetry, and Homogeneity) algorithm, and the widely used ABCD/ABCDE (Asymmetry, Border, Color, and Dermatoscopic structures) method [6, 7, 8, 9]. Several methods for evaluating melanocytic lesions by dermoscopy are described in detail in Section 3.

Year/period | Key developments
Prior to 1750 | Hippocrates, a Greek physician and one of the most prominent figures in the history of medicine, is thought to be the first to record a description of melanoma, which he names using the Greek words melas, meaning dark, and oma, meaning tumor [1]. Various references to "lethal black tumors with metastases and black liquid in the body" are discussed in detail by [2].
1750s–1830s | John Hunter is recorded as the first to operate on a patient, and Laennec is the first to recognize melanoma as a disease separate from others. Carswell introduces the term melanoma [1].
1840s–1900s | Knowledge of treatment advances. Surgical anesthesia is adopted [2], and guidelines for the surgical treatment of melanoma are consolidated. Advanced melanoma is recognized as untreatable [1].
Twentieth century | The etiology and hereditary contribution in melanoma are discovered. Traits such as skin, hair, and eye color are found to influence melanoma development [1]. Driver genetic mutations in melanoma are identified [2].
1970s–1990s | A growing number of studies in this period suggest that sun exposure plays a critical role in the development of some melanomas. In the 1980s, the public health community and advocacy groups begin warning the public about the potential dangers of sun exposure. Dermoscopy becomes available in the 1990s [3].
Present time | Today, melanoma is treated by surgery, immunotherapy, targeted therapy, chemotherapy, and radiation therapy [4]. Melanoma is more common in regions that are predominantly Caucasian [5].

Table 1.

Timeline of melanoma.

Considerable effort has been devoted by researchers to building effective and reliable computerized diagnostic techniques for skin lesions, yet very little research has focused on the hair removal problem. Clearly, the human body may be largely covered by hair, which has a range of different textures, orientations, and colors, and can therefore partially or totally obscure the appearance of skin lesions, as shown in Figure 1.

Figure 1.

Sample of digital dermoscopic images with hair pixels, collected from the PH2 dataset [10].

Hair removal is an important step in dermoscopy images for classifying the skin lesion correctly into benign, suspicious, or malignant. Various techniques applied to remove hairs automatically from dermoscopic images are discussed in detail in [11, 12]. The rest of this chapter is organized as follows: Section 2 gives an overview of related work. Section 3 reviews common methods used for the diagnosis of skin lesions, while Section 4 describes the proposed technique. The implementation is presented and discussed in Section 5, followed by some remarks and future work.


2. Related work

A pixel-based interpolation technique was proposed by Satheesha et al. [13] to locate a quadratic curve, which detects curved hairs in the binary image mask for removal and replacement. Gabor filtering and PDE-based image reconstruction were proposed by Nasonova et al. [14] for the hair removal problem. For edge sharpening, they used a warping algorithm that moves pixels from the neighborhood of the blurred edge closer to the edge while preserving the overall luminosity and texture patterns of the skin lesions. In [15], two fundamental steps are proposed to automatically detect and remove hairs from dermoscopy images: generation of a binary image mask by isolating hairs and ruler markings, followed by inpainting. From the RGB dermoscopy image, the red channel is used for noise removal, followed by generation of the binary image mask via an adaptive Canny edge detector. A repair step based on polygon inpainting is then applied to the white regions of the generated mask.

The work of [16] relied on two classes of images: gray-scale and RGB. In gray-scale images, based on an edge property, a circular mask is used to remove the non-skin pixels, followed by a repair operation accomplished through a normalization of pixel values. In RGB images, based on histogram values, the frequency of occurrence of each bin is measured, followed by the calculation of the minimum distance among neighborhood pixels. An algorithm was presented by Abbas et al. [17] for automatically detecting and repairing hair occlusion in dermoscopy images. In the detection stage, hairs are segmented using MF-FDOG and thresholding, with morphological edge-based methods applied for refinement. In the repair stage, the fast marching technique is applied to inpaint the image without loss of the texture patterns of the skin lesions. MRF-based multi-label optimization and dual-channel quaternion tubularness filters are proposed by Mirzaalian et al. [18] for hair enhancement in dermoscopy images. Their method was validated and compared with other methods in terms of hair segmentation accuracy, image inpainting quality, and image classification accuracy. In [19], hairs are removed by detecting hair pixels in a binary image mask and replacing them through pixel interpolation implemented via the Generalized Radon Transform (GRT); the Radon transform was chosen to locate quadratic curves characterized by rational angle and scaling.

An effective artifact detection method proposed by Okuboyejo et al. [20] consists of two stages: fast image restoration (FIR) by means of the Canny algorithm, and a line segment detection (LSD) operation. To remove artifacts from dermoscopic images, the fast marching method (FMM) was applied at each stage while preserving morphological features during artifact removal. A threshold-set model for digital hair removal from dermoscopic images was proposed by Okuboyejo et al. and Koehoorn et al. [21, 22]. They proposed a gap-detection algorithm to find hairs at every threshold and merge the results into a single mask image; to locate hairs in the generated mask, morphological filters and medial descriptors are combined. The work of [23] automatically detects and removes hairs and ruler markings from dermoscopy images. In the detection stage, they used curvilinear structure detection and modeling, followed by a feature-guided exemplar-based inpainting stage. Extensions to the fast marching method are introduced by Hearn [24] with the aim of enhancing the segmentation of medical image data. The proposed algorithm limits the occurrence of bleeding across boundaries and includes automatic starting point selection and statistical region combination.

Two hair removal approaches are presented by Sultana et al. [25]. The first method is based on a simple morphological closing operation with a disk-shaped structuring element, while the second combines the top-hat transform with bicubic interpolation. An effective hair removal algorithm for dermoscopic imagery is implemented by Bibiloni et al. [26]. They utilized soft color morphology operators that are able to cope with color images; the hair removal filter is essentially composed of a morphological curvilinear object detector and a morphological-based inpainting algorithm. A simple approach to automatic hair, and consequently noise, removal was discussed by Acebuque-Salido and Ruiz [27]. The process starts with a median filter on each channel of the RGB color space, a bottom-hat filter, a binary conversion, dilation and morphological opening, and then the removal of small connected components. The detected hair regions are then filled in using harmonic inpainting. Their experiments were carried out on the PH2 dataset and compared to the DullRazor method. Furthermore, they generated synthetic hair on skin images and measured the reconstruction quality using the peak signal-to-noise ratio. In the work of Al-abayechi et al. [28], hair was removed and reflective light was reduced using morphological operations and a median filter.

An algorithm for automated hair detection was applied by Chakraborti et al. [29] to 50 dermoscopic melanoma images. They used an adaptive Canny edge-detection method, followed by morphological filtering and an arithmetic addition operation. Their method produced a 6.57% segmentation error (SE), a 96.28% true detection rate (TDR), and a 3.47% false positive rate (FPR). The algorithm proposed by Toossi et al. [30] is divided into two phases: detection and removal. In detection, light and dark hairs and ruler markings are segmented via an adaptive Canny edge detector and refined by morphological operators. In removal, the hairs are repaired using multi-resolution coherence transport inpainting.

In addition to the above-mentioned hair removal methods, several aspects are captured in Table 2.

Method | Hair detector | Inpainting method | # Test images
DullRazor [31] | Generalized morphological closing | Bilinear interpolation | 5
E-shaver [32] | Prewitt edge detector | Color averaging | 5
Fiorese et al. [33] | Top-hat operator | PDE-based [34] | 20
Huang et al. [35] | Multiscale matched filters | Median filtering | 20
Xie et al. [36] | Top-hat operator | Anisotropic diffusion [37] | 40
Abbas et al. [11] | Derivative of Gaussian | Coherence transport [38] | 100
Koehoorn et al. [21, 22] | Multiscale skeletons and morphological operators | Fast marching [39] | ≅300
Our method | Bottom-hat operator | Block-based histogram function & morphological close | 200

Table 2.

Comparison of existing digital hair removal methods.


3. An overview of common methods

The purpose of this section is to review state-of-the-art melanoma diagnosis methods and technologies that have the potential to reduce melanoma mortality. Current approaches to melanoma detection range from population-based educational campaigns and screening to the use of algorithm-driven imaging technologies and measures that identify markers of change. Each of the following methods is used for the dermoscopic differentiation between benign melanocytic lesions and melanoma based on its own set of features.

3.1 Menzies method

The Menzies technique is a simplified dermoscopy method for diagnosing melanoma [40, 41]. In its original evaluation, it had a sensitivity of 92% and a specificity of 71% for the diagnosis of melanoma. The method uses so-called "negative" and "positive" features. For a melanoma to be diagnosed, neither of the two "negative features" should be present, and at least 1 of the 9 "positive features" must be found. A lesion is suspicious of melanoma if it has more than one color and is asymmetric in pattern. Suspect lesions showing any of the nine positive features for melanoma are considered to be melanoma unless proven otherwise.

  • Negative features (benign lesions): symmetrical pattern (colors and structure) and single color.

  • Positive features (melanoma): Blue-white veil, multiple brown dots, pseudopods, radial streaming, scar-like depigmentation, multiple (5–6) colors, multiple blue/gray dots, and broadened network.

3.2 Seven-point checklist method

In recent years, several analytic techniques based on scored algorithms have been introduced both to simplify dermoscopic learning and to improve early melanoma detection. The seven-point checklist, published in 1998, represents one of the most recent and best validated dermoscopic algorithms due to its high sensitivity and specificity, even when used by non-experts. The seven criteria were initially tested on 342 melanocytic lesions (117 melanomas and 225 atypical nevi) and were chosen for their frequent association with melanoma [41, 42, 43]. Three of them were defined as major criteria (atypical network (score 2), blue-white veil (score 2), and atypical vascular pattern (score 2)), while the remaining four were considered minor (irregular streaks (score 1), irregular dots/globules (score 1), irregular blotches (score 1), and regression structures (score 1)). The seven-point checklist is used for the dermoscopic differentiation between benign melanocytic lesions and melanoma (scores in brackets); a total score of at least 3 indicates melanoma.
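To make the scoring concrete, the following MATLAB sketch sums the major (2 points each) and minor (1 point each) criteria for a hypothetical lesion; the flag variables and their values are illustrative only, not part of the original checklist description.

```matlab
% Hypothetical example: flags set by a dermatologist (1 = criterion present).
atypicalNetwork = 1; blueWhiteVeil = 0; atypicalVascularPattern = 0;   % major criteria, 2 points each
irregularStreaks = 1; irregularDotsGlobules = 0;                        % minor criteria, 1 point each
irregularBlotches = 0; regressionStructures = 0;

sevenPointScore = 2 * (atypicalNetwork + blueWhiteVeil + atypicalVascularPattern) ...
                + (irregularStreaks + irregularDotsGlobules + irregularBlotches + regressionStructures);

isMelanomaSuspect = sevenPointScore >= 3;   % a total score of at least 3 indicates melanoma
```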

3.3 CASH method

The CASH algorithm [44] is used for the dermatoscopic differentiation between benign melanocytic lesions and melanoma (scores in brackets), as shown in Table 3. The scores are added up to give a total CASH score (2–17): a CASH score of 7 or less is likely benign; otherwise, the lesion is suspicious of melanoma.

Criteria | Low | Medium | High
Colors: few vs. many (white, black, red, blue, light brown, dark brown; score: 1 point/color) | 1–2 colors (1–2 points) | 3–4 colors (3–4 points) | 5–6 colors (5–6 points)
Architecture: order vs. disorder (score: 0–2 points) | None or mild disorder (no points) | Moderate disorder (1 point) | Marked disorder (2 points)
Symmetry vs. asymmetry: contour, colors, and structures (score: 0–2 points) | Symmetry in 2 axes (no points) | Symmetry in 1 axis (1 point) | No symmetry (2 points)
Homogeneity vs. heterogeneity: dots/globules, blotches, pigment network, blue-white veil, polymorphous vessels, regression, streaks (score: 1 point/structure) | One structure (1 point) | Two types of structure (2 points) | ≥3 structures (3–7 points)

Table 3.

Suspicion for melanoma.
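A minimal sketch of the CASH arithmetic, using hypothetical sub-scores chosen according to Table 3; the variable names and values are illustrative assumptions.

```matlab
% Hypothetical example sub-scores following Table 3.
colorScore        = 4;   % 1 point per color present (range 1-6)
architectureScore = 1;   % 0 = none/mild, 1 = moderate, 2 = marked disorder
symmetryScore     = 2;   % 0 = symmetric in 2 axes, 1 = symmetric in 1 axis, 2 = no symmetry
homogeneityScore  = 3;   % 1 point per dermoscopic structure (range 1-7)

cashScore = colorScore + architectureScore + symmetryScore + homogeneityScore;  % total range 2-17
isSuspicious = cashScore > 7;   % a score of 7 or less is likely benign
```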

3.4 The ABCD method

The ABCD rule of dermoscopy was the first dermoscopy algorithm created to help differentiate benign from malignant melanocytic tumors. This algorithm, described by Stolz, was developed to quantitatively address the key question in dermoscopy of whether a melanocytic skin lesion under scrutiny is benign, suspicious (borderline), or malignant. Based on only four dermatoscopic criteria, the method is relatively easy to learn and apply. The ABCD dermoscopy method has been studied extensively, and it has been shown to improve the diagnostic performance of clinicians evaluating pigmented skin lesions. Some consider this method to be particularly suitable for clinicians with limited dermoscopy experience. The criteria that make up the ABCD rule of dermoscopy are asymmetry, border, color, and differential structures. To use these criteria, a scoring framework was developed to compute the total dermoscopy score (TDS) using a linear equation. With this TDS, a grading of lesions is possible with respect to the likelihood that the lesion under investigation is malignant [41, 42]. The likelihood of melanoma is based on adding up the weighted scores of the different features, as shown in Table 4.

Criteria | Score | Weight | Result
Asymmetry (perpendicular axes: contour, colors, and structures) | 0–2 | 1.3 | 0–2.6
Borders (8 segments: abrupt ending of pigment pattern) | 0–8 | 0.1 | 0–0.8
Colors (white, black, red, blue-gray, light brown (tan), dark brown) | 1–6 | 0.5 | 0.5–3.0
Dermatoscopic structures or differential structural components (pigment network, aggregated globules, branched streaks, structureless areas, dots) | 1–5 | 0.5 | 0.5–2.5
Total score: benign < 4.76, suspicious 4.76–5.45, melanoma > 5.45

Table 4.

Total dermoscopic score of ABCD rule.
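A short sketch of the TDS computation with the weights of Table 4; the example criterion scores are hypothetical.

```matlab
% Hypothetical example scores following Table 4.
A = 2;   % asymmetry (0-2)
B = 5;   % borders: number of segments with abrupt ending of the pigment pattern (0-8)
C = 3;   % number of colors present (1-6)
D = 4;   % number of differential structural components (1-5)

TDS = 1.3*A + 0.1*B + 0.5*C + 0.5*D;   % total dermoscopy score

if TDS < 4.76
    verdict = 'benign';
elseif TDS <= 5.45
    verdict = 'suspicious';
else
    verdict = 'melanoma';
end
```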

3.5 CHAOS and clues

A modified form of pattern analysis [45] looks for CHAOS (asymmetry of structure and/or color) and at least one clue in order to diagnose malignancy. It can be applied to melanocytic and non-melanocytic lesions. Patterns are described by multiple elements of the same type: lines, dots, clods, circles, pseudopods (a line with a bulbous end), and structureless regions. Structureless zones are characterized by their color: black, dark brown, light brown, gray, blue, orange, yellow, white, red, or purple.

  • A single pattern or a single color is a symmetrical structure, that is, benign.

  • Two patterns can appear with one pattern inside the other, or the two patterns may be regularly distributed; such lesions have a symmetrical structure. Alternatively, the two patterns can be distributed asymmetrically.

  • Multiple patterns/colors may result in symmetrical structure if forming concentric zones. Otherwise, they result in an asymmetrical structure.

Asymmetrical patterns should prompt searching for particular clues to malignancy. The clues to malignancy are: thick reticular lines, gray or blue structures of any kind, pseudopods or radial lines at the periphery, black dots in the periphery, eccentric structureless area of any color, polymorphous vascular pattern, white lines, parallel lines on ridges, and large polygons.

3.6 The BLINCK algorithm

The BLINCK algorithm was devised to recognize malignant lesions, especially nodular melanoma, as this tumor often lacks conventional dermatoscopic features. It can also be used for non-melanocytic lesions [46]. Table 5 summarizes the evaluation procedure of the BLINCK algorithm. Clues to malignancy are: atypical network, segmental streaks, irregular black dots/globules/clods, eccentric structureless zone, irregular blue or gray color, polymorphous/arborising/glomerular vessels, and parallel ridge pattern or diffuse irregular brown/black pigmentation in an acral lesion. A score of ≥2 requires biopsy.

Benign | If not, then consider the following: |
Lonely | "Ugly duckling" lesion | Score 1
Irregular | Asymmetrical pigmentation pattern or >1 color | Score 1
Nervous | Nervous patient / changing lesion | Score 1
Known | Known clues to malignancy | Score 1

Table 5.

Evaluation operation of BLINCK algorithm.
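A minimal sketch of the BLINCK scoring, with hypothetical flags following Table 5; the variable names and values are illustrative only.

```matlab
% Hypothetical BLINCK flags (1 = criterion applies), following Table 5.
lonely    = 1;   % "ugly duckling" lesion
irregular = 0;   % asymmetrical pigmentation pattern or more than one color
nervous   = 1;   % nervous patient or changing lesion
known     = 0;   % known clues to malignancy present

blinckScore = lonely + irregular + nervous + known;
needsBiopsy = blinckScore >= 2;   % a score of 2 or more requires biopsy
```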

3.7 TADA

TADA is an acronym for Triage Amalgamated Dermoscopic Algorithm. TADA does not require a specific diagnosis to be made in order to decide whether a lesion should be excised or referred to a specialist [47]. TADA is reported to have a sensitivity of 94.8% and a specificity of 72.3% for malignant skin lesions. The first step is to determine whether the lesion has features of angioma, dermatofibroma, and/or seborrhoeic keratosis. If yes, exclude it from further analysis. If no, is there any architectural disorder? If there is architectural disorder, does the lesion have one or more of the following six predictive features: starburst pattern, blue-black or gray structures, shiny white structures, negative network, ulcer/erosion, and/or vessels? If yes, consider excision or referral. If no, the lesion is likely to be benign. If in any doubt, follow up or refer.


4. The proposed technique

In this research, we propose a novel technique to remove hair pixels from dermoscopic images. The YIQ color space (luminance (Y), hue (I), and saturation (Q); the first component, luminance, represents gray-scale information, while the last two components carry the chrominance, or color, information), also known as the National Television System Committee (NTSC) color space (the analogue television system used in North America and Japan), is chosen because the hair pixels are well represented by the luminance (Y-channel) image alone, for example, compared with RGB, as shown in Figure 2. In addition to the Red, Green, and Blue (RGB) color space, the Hue, Saturation, and Value (HSV) and YCbCr (Y is the brightness (luma), Cb is blue minus luma (B-Y), and Cr is red minus luma (R-Y)) color spaces present the hair pixels in more than one channel as well. This complicates the hair removal task and may affect the performance.

Figure 2.

A digital dermoscopic image presented in RGB (a-d) and YIQ (e-h) color spaces.

The Y-channel image is partitioned into 256 non-overlapped blocks. During the experimental studies, several block sizes were tested, such as 4×4, 8×8, 16×16, and so on; we concluded that a block size of 16×16 produced better results for the inpainting stage than the other block sizes. For each block, morphological operators and histogram analysis are applied to detect hair pixels and to perform the inpainting operation that replaces hair pixels with non-hair skin pixels. This section describes the proposed algorithm for the automatic hair detection and inpainting operations. To achieve the aims of this research, Figure 3 describes the overall workflow, and each step is described in the following subsections.

Figure 3.

Flowchart of the proposed method.

4.1 Color space conversion

As depicted in Figure 4, the input image is converted from the RGB color space into the YIQ color space.

Figure 4.

RGB (a) and YIQ (b) color spaces.
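For reference, a minimal MATLAB (Image Processing Toolbox) sketch of this step is shown below; the file name is a hypothetical PH2 image and is only illustrative.

```matlab
rgbImage = im2double(imread('IMD003.bmp'));  % hypothetical PH2 image file name
yiqImage = rgb2ntsc(rgbImage);               % convert RGB to NTSC/YIQ
Y = yiqImage(:, :, 1);                       % luminance channel used for hair detection
```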

4.2 Hair detection

To detect hair pixels, a morphological "bottom-hat" operation is applied to the Y-channel image, returning the morphological closing of the image (dilation followed by erosion) minus the image itself, which highlights dark hair on a light background as shown in Figure 5. Because closing brightens the dark, thin structures in an image but does not significantly alter areas that are already bright, the only regions left after subtracting the original are those that were originally dark but surrounded by brighter pixels. In general, bottom-hat filtering produces highlighted regions that follow the true shape of the hair more faithfully, which is the main motivation for using it.

Figure 5.

Hair detection. (a) Y-channel image. (b) Result of bottom-hat operation.
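A corresponding one-step sketch, assuming the Y variable from the previous conversion; the disk-shaped structuring element and its radius are assumptions for illustration, not values reported in this chapter.

```matlab
se = strel('disk', 5);         % assumed disk-shaped structuring element
bottomHat = imbothat(Y, se);   % closing(Y) - Y: highlights dark hairs on lighter skin
```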

4.3 Binary image conversion

Figure 6 shows the result of the binarization operation applied to the output of the bottom-hat operation on the Y-channel image.

Figure 6.

Result of the binarization operation.
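A minimal sketch of this step, assuming the bottomHat image from the previous sketch; the use of Otsu's threshold (graythresh) is an assumption, since the chapter does not state which thresholding rule is applied.

```matlab
level = graythresh(bottomHat);        % Otsu threshold (assumed)
hairMask = im2bw(bottomHat, level);   % binary hair mask (white = candidate hair pixels)
```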

4.4 Inpainting operation

Divide the Y-channel to be repaired and the binarized image into 256 non-overlapped blocks. During the experimental studies, several block sizes were tested, such as 4×4, 8×8, 16×16, and so on; a block size of 16×16 produced better results for the inpainting stage than the other block sizes.

  • For each block do

    • Apply a histogram function using 32 bins. The histogram function is imhist from the Image Processing Toolbox in MATLAB. The first parameter is the 16×16 sub-image and the second parameter is the number of bins, which is 32. Based on the experimental studies, several numbers of bins were tested, and 32 bins were found sufficient to partition the intensity range [0, 1] into 32 intervals of size 0.0313 each. Furthermore, no improvement was observed when the number of bins was increased beyond 32.

    • Find the bin number that contains maximum occurrences (highest peak) of gray scale pixels in each sub-image or block.

    • Find locations of white pixels in binary subimage.

    • Let a = interval lower value and b = interval upper value.

    • For each white pixel do

      1. Generate a random number r in [0,1].

      2. Replace the pixel in the Y-channel by using Eq. (1):

    repaired pixel value = a + (b − a) × r    (1)

    where the purpose of having r is to keep a dynamic change in the repaired pixel value among all repaired pixels in each block.

    • End

    • Perform the morphological “close” operation (dilation followed by erosion) on repaired Y-channel image as depicted in Figure 7(b).

  • End

Figure 7.

Repaired Y-channel. (a) Y′-channel before close operation and (b) Y″-channel after close operation.
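The following MATLAB sketch summarizes the block-wise loop above, assuming the Y and hairMask variables from the earlier sketches. It interprets the 256 non-overlapped blocks as a 16×16 grid over the image, derives each bin's bounds from the stated 1/32 bin width, and uses a structuring element of assumed size for the final close; these choices are illustrative, not values fixed by the chapter.

```matlab
% Minimal sketch of the block-based inpainting stage.
% Y        : luminance channel (double, values in [0, 1])
% hairMask : binary hair mask from the bottom-hat + binarization steps
nBlocksPerSide = 16;                      % 16 x 16 grid -> 256 blocks (assumed interpretation)
nBins = 32;                               % histogram bins, as stated in the text
[rows, cols] = size(Y);
rowEdges = round(linspace(1, rows + 1, nBlocksPerSide + 1));
colEdges = round(linspace(1, cols + 1, nBlocksPerSide + 1));

Yrepaired = Y;
for i = 1:nBlocksPerSide
    for j = 1:nBlocksPerSide
        r = rowEdges(i):rowEdges(i + 1) - 1;
        c = colEdges(j):colEdges(j + 1) - 1;
        blockY    = Yrepaired(r, c);
        blockMask = hairMask(r, c);

        counts = imhist(blockY, nBins);    % 32-bin histogram of the block
        [~, peakBin] = max(counts);        % bin with the highest peak
        a = (peakBin - 1) / nBins;         % lower bound of the peak interval (width 1/32)
        b = peakBin / nBins;               % upper bound of the peak interval

        idx = find(blockMask);             % locations of white (hair) pixels
        blockY(idx) = a + (b - a) .* rand(size(idx));   % Eq. (1): random value inside the peak bin
        Yrepaired(r, c) = blockY;
    end
end

% Morphological close on the repaired Y-channel, as in Figure 7(b) (structuring element assumed).
Yrepaired = imclose(Yrepaired, strel('disk', 3));
```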

4.5 Repaired RGB image

The image resulting from the inpainting process described in the previous subsection is then used as input to the conversion back to the RGB color space, as depicted in Figure 8.

Figure 8.

The repaired RGB image.
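A closing sketch of this conversion, assuming the yiqImage and Yrepaired variables from the earlier sketches.

```matlab
yiqImage(:, :, 1) = Yrepaired;      % replace the luminance channel with the repaired one
repairedRGB = ntsc2rgb(yiqImage);   % back to RGB for display and further analysis
imshow(repairedRGB);
```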


5. Results and discussions

The experiments were executed on an Intel Core i3-2330M processor @ 2.20 GHz with 4 GB of RAM. The operating system is 64-bit Windows 7 Ultimate, and the software used for the implementation is MATLAB R2013a. The proposed methodology is tested on the PH2 dataset [10], which consists of 200 8-bit RGB dermoscopic images of melanocytic lesions with a resolution of 768×560 pixels. The dermoscopic images were obtained at the Dermatology Service of Hospital Pedro Hispano, Portugal, under the same conditions through the Tuebinger Mole Analyzer system using a magnification of 20×. The strength of the proposed algorithm is the detection and removal of thin/thick and light/dark hair from dermoscopic images while preserving the texture pattern, shape, and colors of the skin lesion. Furthermore, if a dermoscopic image does not contain hair, the algorithm preserves its features. Figure 9 depicts a sample of results: five input images are shown in the first row and their corresponding output images, produced by the proposed algorithm, appear in the last row, with the arrows representing the intermediate processing steps.

Figure 9.

Sample of results.

Statistical analysis based on the metrics of sensitivity, specificity, and diagnostic accuracy was used to determine the performance of the hair detection and inpainting operations. To estimate the accuracy of the proposed algorithm and to quantify the automatic hair detection error, quantitative evaluations were performed using the following statistical metrics: sensitivity or true detection rate (TDR), specificity or true negative rate (TNR), fall-out or false positive rate (FPR), and diagnostic accuracy (DA). TDR measures the rate of pixels classified as hair by both the automatic algorithm and the medical expert, and TNR measures the rate of pixels classified as non-hair by both the automatic segmentation and the medical expert. Our proposed algorithm reports a true positive rate (sensitivity) of 97.36%, a false positive rate (fall-out) of 4.25%, and a true negative rate (specificity) of 95.75%; the diagnostic accuracy achieved is a high 95.78%. These metrics are calculated using Eqs. (2)–(5) as follows:

sensitivity (TDR) = TP / (TP + FN) × 100    (2)
specificity (TNR) = TN / (TN + FP) × 100    (3)
fall-out (FPR) = FP / (FP + TN) × 100    (4)
diagnostic accuracy (DA) = (TP + TN) / (TP + FN + FP + TN) × 100    (5)

where TP, FP, FN, and TN stand for the number of true positive, false positive, false negative, and true negative, respectively. The quantitative results of the proposed algorithm are summarized in Table 6. They were calculated as follows:

  • False negative (FN): find the differences between the repaired Y-channel (Y″) and the original Y-channel, apply a binarization operation, and then count the white pixels. The results of this sequence of operations are depicted in Figure 10.

  • True positive (TP): apply the binarization operation to the hair-segmented image (Y′) to obtain the hair-segmented binary image (BW). For visualization, the white (hair) pixels are shown in red and the non-hair pixels in black on a white background, as shown in Figure 11(a,c). The white pixels that exist in BW but not in the images shown in Figure 10(d,h) are counted and preserved in separate images as true positive pixels, shown in Figure 11(b,d).

  • True negative (TN): performing the complement operation on the hair-segmented binary image (shown in Figure 11(a,c)) yields the images shown in Figure 12(a,b), respectively. TN is the count of the white pixels in the complement image.

  • False positive (FP): the count of the remaining white pixels.

# Hair pixels | Predicted: Class = Yes | Predicted: Class = No
Actual: Class = Yes | TP (1,924,779) | FN (52,256)
Actual: Class = No | FP (3,664,600) | TN (82,521,688)

Table 6.

Performance evaluation (confusion matrix).
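As a quick consistency check, the following MATLAB snippet plugs the counts of Table 6 into Eqs. (2)–(5) and reproduces the reported percentages.

```matlab
% Confusion-matrix counts from Table 6.
TP = 1924779; FN = 52256; FP = 3664600; TN = 82521688;

TDR = TP / (TP + FN) * 100;                    % sensitivity / true detection rate
TNR = TN / (TN + FP) * 100;                    % specificity / true negative rate
FPR = FP / (FP + TN) * 100;                    % fall-out / false positive rate
DA  = (TP + TN) / (TP + FN + FP + TN) * 100;   % diagnostic accuracy

fprintf('TDR = %.2f%%, TNR = %.2f%%, FPR = %.2f%%, DA = %.2f%%\n', TDR, TNR, FPR, DA);
% Prints: TDR = 97.36%, TNR = 95.75%, FPR = 4.25%, DA = 95.78%
```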

Figure 10.

False negative calculation. (a, e) Y-channel. (b, f) Repaired Y-Channel (Y″). (c, g) Differences between (a, b) and (e, f) illustrated by red dots. (d, h) Y-channel with false negative pixels represented by red dots.

Figure 11.

True positive calculation. (a, c) Hair segmented binary image. (b, d) Truly classified hair pixels.

Figure 12.

Results of complement operation performed on binarized images.

Unfortunately, we could not find a common database that can be shared with other researchers, and there is no related work that used the PH2 dataset [10] against which the proposed algorithm could be compared directly. However, Table 7 compares the proposed hair detection algorithm with some other methods.

Artifact detection method | TDR (%) | TNR (%) | FPR (%) | DA (%) | # test images
The proposed algorithm | 97.36 | 95.75 | 4.25 | 95.78 | 200
Multi-resolution [30] | 93.24 | 88.3 | — | — | 50
Top-hat operator [36] | 72.5 | — | — | — | 40
DullRazor [31] | 70.2 | 33.4 | — | 48.6 | 50
Fast image restoration (FIR) + line segment detection (LSD) [20] | 98.27 | 93.75 | — | 96.10 | 299

Table 7.

Comparison of the hair detection algorithms.


6. Conclusion and future work

In this study, a fast and effective method is proposed for the removal of hair occlusion in dermoscopic images. The implementation of the hair removal process is divided into two main stages: hair detection and inpainting. In hair detection, a morphological bottom-hat operation is implemented on the Y-channel image of the YIQ color space, followed by a binarization operation. In inpainting, the repaired Y-channel is partitioned into 256 non-overlapped blocks and, for each block, white pixels are replaced by locating the highest peak of a histogram function and applying a morphological close operation.

Our results indicate high accuracy, and the proposed method can serve dermatologists as a pre-processing stage before lesion segmentation and classification. The proposed algorithm reports a true positive rate (sensitivity) of 97.36%, a false positive rate (fall-out) of 4.25%, and a true negative rate (specificity) of 95.75%, with a diagnostic accuracy of 95.78%.

The following opportunities are suggested for future work:

  • Establish a common dataset that can be shared among researchers.

  • Other artifacts such as air bubbles can be added for further studies.


Acknowledgments

The author would like to thank Dr. Saeb Zaqout for his valuable comments and constructive criticism of this manuscript.
