Open access peer-reviewed chapter

Diagnosis of Skin Lesions Based on Dermoscopic Images Using Image Processing Techniques

Written By

Ihab Zaqout

Submitted: 16 June 2018 Reviewed: 14 June 2019 Published: 15 July 2019

DOI: 10.5772/intechopen.88065

From the Edited Volume

Pattern Recognition - Selected Methods and Applications

Edited by Andrzej Zak


Abstract

Great effort has been put into the development of diagnosis methods for the most dangerous type of skin disease: melanoma. This paper aims to develop a prototype capable of segmenting and classifying skin lesions in dermoscopy images based on the ABCD rule. The proposed work is divided into four distinct stages: (1) pre-processing, which consists of filtering and contrast-enhancement techniques; (2) segmentation, in which thresholding is applied and statistical properties are computed to localize the lesion; (3) feature extraction, in which asymmetry is calculated by averaging the results of two methods (entropy and bi-fold), border irregularity is calculated by accumulating the statistical scores of the eight segments of the segmented lesion, color is scored according to the presence of six candidate colors (white, black, red, light-brown, dark-brown, and blue-gray), and diameter is measured by converting the number of pixels along the greatest diameter into millimeters (mm); and (4) classification, in which the four extracted feature scores are multiplied by their weights and summed to yield a total dermoscopy score (TDS), from which the lesion is classified as benign, suspicious, or malignant. The prototype is implemented in MATLAB, and the dataset used consists of 200 dermoscopic images from Hospital Pedro Hispano, Matosinhos. The achieved results show acceptable performance rates: an accuracy of 90%, a sensitivity of 85%, and a specificity of 92.22%.

Keywords

  • dermoscopy
  • ABCD rule
  • segmentation
  • feature extraction
  • classification

1. Introduction

Melanoma, also known as malignant melanoma, is the most dangerous type of skin cancer; it develops from the pigment-containing cells known as melanocytes. Melanomas sometimes develop from a mole, with concerning changes including an increase in size, irregular edges, change in color, itchiness, or skin breakdown [1]. Melanomas may rarely occur in the mouth, intestines, or eye, but they typically occur in the skin [1, 2]. In men, they most commonly occur on the back, while in women, they are most common on the legs [2].

The authors of [2] also note that exposure to ultraviolet (UV) light, from either the sun or other sources such as tanning devices, is the primary cause of melanoma, while about 25% of cases develop from moles. Worldwide, in 2012, melanoma occurred in 232,000 people and caused 55,000 deaths. North America, Europe, Australia, and New Zealand have the highest rates of melanoma in the world, while it is less common in Latin America, Asia, and Africa.

One of the methods widely used by dermatologists to distinguish cancerous skin (melanoma) from normal skin is the ABCD rule. It can be easily learned and rapidly calculated, and it has been proven to be a reliable method that provides a more objective and reproducible diagnosis of melanoma [3, 4, 5]. To calculate the ABCD score, the asymmetry, border, color, and diameter criteria are estimated semi-quantitatively. Each criterion is then multiplied by a given weight factor to yield a total dermoscopy score (TDS). TDS values less than 4.75 indicate a benign melanocytic lesion, values between 4.8 and 5.45 indicate a suspicious lesion, and values of 5.45 or greater are highly suggestive of melanoma.

To calculate asymmetry, the melanocytic lesion is bisected by two perpendicular (90°) axes. If the lesion dermoscopically shows asymmetric contours with respect to shape about both axes, the asymmetry score is 2. If there is asymmetry about one axis only, the score is 1. If asymmetry is absent about both axes, the score is 0. The border score is calculated by dividing the lesion into eighths; within each eighth, a sharp, abrupt cut-off of the pigment pattern at the periphery receives a score of 1, otherwise it receives a score of 0. The color score is calculated by counting how many of six different colors are present: white, red, light brown, dark brown, blue-gray, and black. The diameter of melanomas is usually greater than 6 mm.
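As a worked example, using the standard weight factors of the ABCD rule (1.3 for asymmetry, 0.1 for border, 0.5 for color, and 0.5 for diameter), a lesion scored A = 2, B = 4, C = 3, and D = 1 yields TDS = 2 × 1.3 + 4 × 0.1 + 3 × 0.5 + 1 × 0.5 = 5.0, which falls in the suspicious range.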

The proposed work relies on extracting and selecting specific features that can be used to distinguish malignant, suspicious, and benign lesions, providing an automated cancer diagnosis based on image processing techniques. More details on the image processing techniques used in this research can be found in [6].

To achieve the aim of this research, four stages are implemented sequentially:

  • Pre-processing stage: consists of filtering and contrast-enhancement techniques to remove unwanted structures (e.g., hair) that might corrupt the image. This stage also aims to eliminate background noise and improve the image quality so that the focal areas of the image can be determined.

  • Segmentation stage: thresholding is applied for binarization, and statistical properties such as the area and the center of mass are computed to localize the lesion.

  • Feature extraction/selection stage: this stage quantifies the ABCD rule. The asymmetry feature is calculated by averaging the results produced by two methods: entropy and bi-fold. Border irregularity is measured by partitioning the skin lesion into eight equal segments, computing the perimeter and area of each segment, and then accumulating their scores. The color feature is calculated from the presence of six candidate colors: white, black, red, light-brown, dark-brown, and blue-gray. The diameter is measured by converting the number of pixels along the greatest diameter into millimeters (mm).

  • Classification stage: the four extracted feature scores are multiplied by their weights and summed to yield a total dermoscopy score (TDS). Based on the TDS, the lesion is classified as benign, suspicious, or malignant.

Section 2 describes an overview of several systems proposed in the literature. Section 3 describes the research methodology. The experimental study and discussion are described in Section 4. Section 5 concludes this paper with some remarks on future work.


2. Related works

Several diagnostic systems for melanoma detection have been proposed. Some systems try to imitate the performance of dermatologists by detecting and extracting several dermoscopic features, which can then be used to score a lesion in a way similar to the one adopted by dermatologists. In [7], the general clinical principles of early melanoma detection are reviewed, providing the clinician with an up-to-date understanding of management strategies for patients with numerous or atypical nevi. Many researchers have applied image processing and computer vision techniques to skin cancer detection. The features most commonly used for skin lesion segmentation in the literature are shape, color, texture, and luminance.

Three methods of segmentation are discussed in [8]: Otsu's method, gradient vector flow (GVF), and color-based K-means clustering; feature extraction is based on the so-called ABCD rule of dermatoscopy. In [9], watershed segmentation is the proposed scheme for image segmentation, border detection, and decisions related to the structural nature of the lesion. Surveys of the segmentation methods applied to the skin lesion segmentation problem using image processing techniques are given in [10, 11, 12].

Based on a qualitative assessment of asymmetry (of boundary, color, and mass distribution), size functions (SFs) and a support vector machine (SVM) are used to implement a new automatic classifier of melanocytic lesions [13]. An automatic identification of asymmetry in digital images containing melanocytic skin lesions, using the Stolz strategy based on the ABCD rule, is proposed in [14]. A survey on asymmetry analysis of malignant melanoma using image processing techniques to identify the asymmetry of melanoma skin lesions is presented in [15].

Several researchers have proposed image analysis tools that check the various melanoma parameters (asymmetry, border, color, and diameter) in terms of texture, size, and shape analysis for the segmentation and feature extraction stages. The extracted feature parameters are then used to classify an image as normal skin or a melanoma lesion [16, 17, 18, 19, 20, 21, 22].

In [23], a bag-of-features approach is applied to malignant melanoma detection based on epiluminescence microscopy imaging (low-power microscopy (×50–100), commonly a television microscope applied to a glass slide covering mineral oil on the surface of a skin lesion, used to determine malignancy in pigmented lesions). Each skin lesion is represented by a histogram of code words, or clusters, identified from a training dataset, and classification results are obtained with naive Bayes and support vector machine classifiers. Other work utilized the bag-of-features model for the detection of melanomas in dermoscopy images and aimed at identifying the role of different local texture and color descriptors [24]; the reported results show a sensitivity of 93% and a specificity of 85%.

The extracted features of segmented lesions are used as inputs to the input layer of an artificial neural network (ANN), and different ANN configurations have been implemented for classification [25, 26, 27, 28]. In [25], a Dermlite® DL1 dermatoscope was attached to an iPhone. A new method called elliptical symmetry was proposed for quantifying asymmetry, and Gaussian smoothing and lacunarity analysis were proposed to measure border irregularity: in Gaussian smoothing, the contour was smoothed and compared with the perimeter of the original lesion, while lacunarity was used to analyze the borders of the image. Finally, the extracted features were fed to the input layer of a multi-stage neural network classifier. In [26], a 2-D wavelet transform is the feature extraction method, and the resulting features are given as input to an artificial neural network classifier. An unsupervised approach for lesion segmentation is proposed in [27]: iterative thresholding is applied to initialize a level set automatically, and the accuracy of the detected border is compared with the Growcut and mean-shift algorithms. Four features relying on visual diagnosis, asymmetry (A), border (B), color (C), and diameter (D), are computed and used to construct a classification module based on an artificial neural network for the recognition of malignant melanoma. The authors of [28] used a hybrid algorithm combining a region-oriented and a thresholding method to segment the lesion; a multilayer perceptron with one hidden layer and one output neuron was chosen as the basis for all the network configurations examined.

As described in [29], the general approach used by a CAD system consists of describing the skin lesion by means of a set of textural and geometric shape features known as the ABCD rule (asymmetry, border, color, and diameter). The WEKA software was used to apply 13 different classification techniques, and K-fold cross-validation was used to obtain the classification accuracy.

A different approach, proposed in [30] and named the Modified Texture Distinctiveness Lesion Segmentation (M-TDLS) algorithm, segments the skin lesion in two steps: TD metric calculation and region classification. The RGB image is converted into the XYZ color space, and the TD metric is calculated to find the dissimilarity between two texture distributions.

In [31], two different systems for the detection of melanomas in dermoscopy images are addressed: the first uses global methods to classify skin lesions, whereas the second uses local features and a bag-of-features classifier.


3. Research methodology

This section describes the four main stages: preprocessing, segmentation, feature extraction, and classification. We start by reading an RGB image, such as the example shown in Figure 1.

Figure 1.

Input image [32].

3.1 Preprocessing stage

The preprocessing stage consists of four sequential steps described as follows:

3.1.1 Step 1

For each channel of the RGB image, 2-D median filtering with a 5 × 5 mask is applied for noise reduction; the associated results are depicted in Figure 2.

Figure 2.

The implementation of 2-D median filtering. R, G, B-channels, respectively.
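This step can be sketched in MATLAB as follows (the file name is illustrative; PH2 images are read as 8-bit RGB):

```matlab
% Step 1: 5-by-5 median filtering applied to each RGB channel separately.
rgb = im2double(imread('lesion.bmp'));               % illustrative file name
filtered = rgb;
for c = 1:3
    filtered(:,:,c) = medfilt2(rgb(:,:,c), [5 5]);   % 2-D median filter, 5 x 5 mask
end
```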

3.1.2 Step 2

For hair removal, two morphological operations are applied to the grayscale image f: a dilation followed by an erosion with a small shape or template called a structuring element s (denoted by f ⊕ s and f ⊖ s, respectively). The results are depicted in Figure 3.

Figure 3.

Hair removal operations. R, G, B-channels, respectively.
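Because a dilation followed by an erosion with the same structuring element is a morphological closing, the step can be sketched as follows, continuing the Step 1 sketch; the disk-shaped structuring element and its radius are assumptions, since the chapter does not specify them:

```matlab
% Step 2: hair removal by morphological closing (dilation followed by erosion).
se = strel('disk', 5);                         % assumed structuring element
closed = filtered;
for c = 1:3
    dilated = imdilate(filtered(:,:,c), se);   % f (+) s
    closed(:,:,c) = imerode(dilated, se);      % (f (+) s) (-) s, i.e. closing
end
```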

3.1.3 Step 3

Brightness enhancement operation is applied separately on R, G, and B images. Figure 4 shows the result of the brightness enhancement operation.

Figure 4.

Brightness enhancement. R, G, B-channels, respectively.
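The chapter does not name the exact enhancement operator; the sketch below, continuing the previous step, uses MATLAB's imadjust contrast stretching purely as a stand-in:

```matlab
% Step 3: per-channel brightness/contrast enhancement (operator assumed).
enhanced = closed;
for c = 1:3
    enhanced(:,:,c) = imadjust(closed(:,:,c));   % stretch intensities across [0, 1]
end
```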

3.1.4 Step 4

Based on our experimental studies, the B channel is chosen because it provides better segmentation results than the others. Therefore, the third (B) channel is converted into binary form using Otsu's method, and the white and black pixels are then swapped so that the pigmented skin lesion appears as the foreground. The results of this step are depicted in Figure 5.

Figure 5.

Binarized images: (a) implementation of Otsu's method and (b) the complement image.
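Continuing the running sketch, the binarization and inversion can be written as:

```matlab
% Step 4: binarize the blue channel with Otsu's method, then invert so that
% the pigmented lesion becomes the white (foreground) region.
blue  = enhanced(:,:,3);
level = graythresh(blue);          % Otsu threshold in [0, 1]
bw    = im2bw(blue, level);        % lesion pixels come out dark (0)
bw    = imcomplement(bw);          % lesion = 1, background = 0
```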

3.2 Segmentation stage

For each candidate region, statistical properties such as the center of mass (xc, yc) and the area A are calculated. Based on the size of the region and its overlap with the center of the image, the region of interest (ROI) is identified, as depicted in Figure 6.

Figure 6.

ROI segmentation.
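A sketch of the region selection, assuming bw is the binary image from Section 3.1.4; the exact selection rule (a large region whose center of mass lies near the image center) is paraphrased from the text, and the small-object threshold is an assumption:

```matlab
% Keep the candidate region that is large and whose centre of mass lies
% closest to the centre of the image.
bw    = bwareaopen(bw, 100);                         % drop tiny speckles (assumed size)
stats = regionprops(bw, 'Area', 'Centroid');         % area A and centre of mass (xc, yc)
ctr   = fliplr(size(bw)) / 2;                        % image centre in (x, y) coordinates
dist  = arrayfun(@(s) norm(s.Centroid - ctr), stats);
[~, idx] = max([stats.Area]' ./ (1 + dist(:)));      % favour large, central regions
roi   = (bwlabel(bw) == idx);                        % binary mask of the ROI
```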

3.3 Features extraction stage

This section presents and discusses in detail the methods used to extract the four features, asymmetry (A), border irregularity (B), color (C), and diameter (D), from the segmented lesion. According to the ABCD rule, each extracted feature plays a distinctive role and contributes with its associated weight to the total dermoscopy score (TDS).

3.3.1 Asymmetry

To calculate asymmetry, the skin lesion is first converted into grayscale. Second, it is rotated and then partitioned vertically and horizontally into two equal halves. Finally, two methods, called entropy and bi-fold, are implemented, and their average value is assigned as the asymmetry score of the segmented lesion.

Compared with Figure 6, the ROI is rotated by θ° to align the (x, y) coordinates with the principal axes through the centroid, as shown in Figure 7. The orientation angle θ is defined as the angle between the x-axis and the axis around which the object can be rotated with minimum inertia:

Figure 7.

Alignment operation.

$$\theta = \tfrac{1}{2}\,\arctan\!\left(\frac{2\,m_{1,1}}{m_{2,0} - m_{0,2}}\right) \tag{E1}$$

where m1,1, m2,0, and m0,2 are the second order moments or moment of inertia defined as:

$$m_{p,q} = \sum_{(x,y)\,\in\,\text{ROI}} (x - x_0)^p\,(y - y_0)^q \tag{E2}$$

where (x0, y0) is the centroid.
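In MATLAB, the orientation angle of Eq. (E1) is available directly from regionprops, so the alignment can be sketched as follows, continuing the segmentation sketch:

```matlab
% Rotate the ROI so that its principal (major) axis is horizontal.
s       = regionprops(roi, 'Orientation');
theta   = s(1).Orientation;             % angle (degrees) between x-axis and major axis
aligned = imrotate(roi, -theta);        % counter-rotate to align with the x-axis
```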

Figure 8 shows the result of the partition operation of the ROI over its closest line of symmetry (i.e., centroid) into two equal parts vertically and horizontally.

Figure 8.

Vertical and horizontal bisections.

The asymmetry feature plays an important role in melanoma diagnosis and for this reason we have suggested two methods for implementation:

3.3.1.1 Entropy function

Entropy is a statistical measure of randomness that can be used to characterize the texture of a grayscale image, as described by the following:

$$E = -\sum_{i} p_i \log_2 p_i \tag{E3}$$

where p_i are the normalized histogram counts of the intensity values.

To find the similarity between the two parts (left vs. right and upper vs. lower) of the segmented lesion, their entropies are calculated as follows:

E4

The same process is repeated to find E(U, L). The entropy asymmetry is then calculated as follows:

E5

where TE is the entropy threshold value.
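A sketch of the entropy comparison, continuing the previous steps (rgb is the preprocessed image, roi the lesion mask, theta the orientation angle, and aligned the rotated mask); the threshold value T_E and the 0–2 scoring rule are assumptions, since the chapter does not state them explicitly:

```matlab
% Entropy-based asymmetry: compare the entropies of the two halves of the
% aligned grayscale lesion along each principal axis.
gray    = rgb2gray(rgb);
grayROI = imrotate(gray .* double(roi), -theta);       % aligned grayscale lesion
c       = regionprops(aligned, 'Centroid');
xc = round(c(1).Centroid(1));  yc = round(c(1).Centroid(2));
eLR = abs(entropy(grayROI(:, 1:xc)) - entropy(grayROI(:, xc+1:end)));  % left vs. right
eUL = abs(entropy(grayROI(1:yc, :)) - entropy(grayROI(yc+1:end, :)));  % upper vs. lower
TE  = 0.1;                                             % assumed entropy threshold
asymEntropy = (eLR > TE) + (eUL > TE);                 % 0, 1 or 2 (assumed scoring)
```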

3.3.1.2 Bi-fold method

Symmetry is assessed by overlapping the two vertical (left vs. right) and horizontal (upper vs. lower) parts along the principal axes of inertia. The non-overlapping area is then compared with the total area of the lesion as follows:

$$d = \frac{\Delta A}{A} \tag{E6}$$

where ∆A is the non-overlapping area between the original and reflected masks and A is the area of the original mask. The result of the non-overlapping operation between the left and right halves is depicted in Figure 9a, and the result of the non-overlapping operation between the upper and lower halves is depicted in Figure 9b. Hence, the overlapping asymmetry is calculated as follows:

Figure 9.

Non-overlapping area: (a) left-right folding and (b) upper-lower folding.

E7

where TO is an overlapping threshold value.

The overall asymmetry score (AsymScore) of the skin lesion is calculated as:

$$\text{AsymScore} = \frac{\text{Asym}_{\text{entropy}} + \text{Asym}_{\text{bifold}}}{2} \tag{E8}$$
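A sketch of the bi-fold overlap and the combined asymmetry score, continuing the entropy sketch; the re-centring with circshift assumes the lesion is not close to the image border, and the threshold T_O and the 0–2 scoring rule are assumed values:

```matlab
% Bi-fold asymmetry: fold the aligned mask about its centroid axes and
% measure the non-overlapping area relative to the lesion area (Eq. E6).
[rows, cols] = size(aligned);
shift   = [round(rows/2) - yc, round(cols/2) - xc];
centred = circshift(aligned, shift);                 % put the centroid at the image centre
A   = nnz(centred);                                  % lesion area in pixels
dLR = nnz(xor(centred, fliplr(centred))) / A;        % delta-A / A, left vs. right folding
dUL = nnz(xor(centred, flipud(centred))) / A;        % delta-A / A, upper vs. lower folding
TO  = 0.15;                                          % assumed overlap threshold
asymBifold = (dLR > TO) + (dUL > TO);                % 0, 1 or 2 (assumed scoring)
asymScore  = (asymEntropy + asymBifold) / 2;         % Eq. (E8): average of the two methods
```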

3.3.2 Border irregularity

From the binarized ROI (see Figure 7), the border irregularity index or compactness index is calculated as follows:

$$CI = \frac{P^2}{4\pi A} \tag{E9}$$

where P is the perimeter and A is the area of the lesion.

Among the available edge detection methods, the Sobel method is selected because it is relatively inexpensive in terms of computation; on the other hand, the gradient approximation that it produces is relatively crude, in particular for high-frequency variations in the image. As shown in Figure 10, the lesion's boundary image is partitioned into eight equal segments, and the compactness index is computed for each segment.

Figure 10.

The result of the partitioning process.

The border irregularity index (BIScore) is calculated as follows:

E10
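The chapter computes a compactness index for each of the eight boundary segments but does not state the exact per-segment scoring or how the segment areas are taken; the sketch below, continuing from the alignment step, therefore splits the boundary into eight equal angular sectors around the centroid and uses an assumed per-sector rule:

```matlab
% Border irregularity: split the lesion boundary into eight angular sectors
% and accumulate a per-sector compactness-based score (Eq. E9 per sector).
B        = bwperim(aligned);                            % lesion boundary pixels
s        = regionprops(aligned, 'Centroid', 'Area');
[by, bx] = find(B);
ang      = atan2(by - s(1).Centroid(2), bx - s(1).Centroid(1));
sector   = min(floor((ang + pi) / (pi/4)) + 1, 8);      % sector index 1..8
biScore  = 0;
for k = 1:8
    Pk  = nnz(sector == k);                             % boundary length in sector k
    Ak  = s(1).Area / 8;                                % sector area (equal split assumed)
    CIk = Pk^2 / (4*pi*Ak);                             % compactness index of sector k
    biScore = biScore + (CIk > 1.2);                    % assumed irregularity threshold
end
```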

3.3.3 Color feature

The lesion is examined for the presence of white, black, red, light-brown, dark-brown, and blue-gray colors in the true-color image. Assume that Figure 11 presents the lesion to be examined for the appearance of the six candidate colors. The color score is incremented by 1 if the distance between an examined pixel value in the lesion and a color reference is less than or equal to the pre-calculated threshold value.

Figure 11.

The examined lesion.

Six RGB codes are chosen as reference points for each color, as shown in Table 1.

White | Black | Red | Light-brown | Dark-brown | Blue-gray
(1, 1, 1) | (0, 0, 0) | (1, 0, 0) | (0.7843, 0.5882, 0.3922) | (0.5882, 0.3922, 0.3922) | (0.5882, 0.4902, 0.5882)
(0.9608, 0.9608, 0.9608) | (0.0392, 0.0392, 0.0392) | (1, 0.1961, 0.1961) | (0.7843, 0.3922, 0) | (0.4902, 0.2941, 0.2941) | (0.4902, 0.4902, 0.5882)
(0.9216, 0.9216, 0.9216) | (0.0784, 0.0784, 0.0784) | (0.7843, 0, 0) | (0.7843, 0.3922, 0.1961) | (0.3922, 0.1961, 0.1961) | (0.3922, 0.3922, 0.4902)
(0.8824, 0.8824, 0.8824) | (0.1176, 0.1176, 0.1176) | (0.7843, 0.1961, 0.1961) | (0.5882, 0.3922, 0.1961) | (0.3922, 0.1961, 0) | (0.3922, 0.4902, 0.5882)
(0.8431, 0.8431, 0.8431) | (0.1569, 0.1569, 0.1569) | (0.5882, 0, 0) | (0.5882, 0.3922, 0) | (0.3922, 0, 0) | (0.1961, 0.3922, 0.5882)
(0.8039, 0.8039, 0.8039) | (0.1961, 0.1961, 0.1961) | (0.5882, 0.1961, 0.1961) | (0.5882, 0.1961, 0) | (0.1961, 0, 0) | (0, 0.3922, 0.5882)

Table 1.

RGB codes.

The distance between each pixel in the lesion and a color reference is calculated using the following Euclidean distance:

$$D_k(i,j) = \sqrt{\big(R(i,j)-R_k\big)^2 + \big(G(i,j)-G_k\big)^2 + \big(B(i,j)-B_k\big)^2} \tag{E11}$$

where k = 1, 2, …, 6 and (i, j) is the pixel’s position in the lesion.

The presence of a color in the lesion depends on the comparison between Dk and the corresponding threshold value. For each color, a threshold value Tk is calculated as the distance between the highest and smallest reference points. For example, the threshold value for white (i.e., T1) is calculated as follows:

$$T_1 = \sqrt{(1-0.8039)^2 + (1-0.8039)^2 + (1-0.8039)^2} \approx 0.34 \tag{E12}$$

The same process is repeated for the other colors to calculate their threshold values. The color score (ColorScore) is incremented by 1 if Dk ≤ Tk.
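A sketch of the color test for one of the six colors (white), continuing the segmentation sketch; only the white column of Table 1 is typed out, the remaining five colors follow the same pattern, and the use of the first reference shade as the comparison point is an assumption:

```matlab
% Colour score contribution of the white reference colour (Eqs. E11 and E12).
refWhite = [1      1      1     ;
            0.9608 0.9608 0.9608;
            0.9216 0.9216 0.9216;
            0.8824 0.8824 0.8824;
            0.8431 0.8431 0.8431;
            0.8039 0.8039 0.8039];
T1   = norm(refWhite(1,:) - refWhite(end,:));            % Eq. (E12): about 0.34
pix  = reshape(rgb(repmat(roi, [1 1 3])), [], 3);        % lesion pixels as an N-by-3 matrix
D1   = sqrt(sum((pix - repmat(refWhite(1,:), size(pix,1), 1)).^2, 2));   % Eq. (E11)
colorScore = double(any(D1 <= T1));                      % +1 if white is present in the lesion
```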

3.3.4 Diameter

The number of pixels along the greatest diameter (major axis length) of the segmented lesion is converted into the millimeter scale as follows:

$$d_{\text{mm}} = \frac{d_{\text{pixels}}}{\text{dpi}} \times 25.4 \tag{E13}$$

where dpi is the dots-per-inch, which equals 96. Then, the diameter score (DMScore) is calculated as follows:

E14
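The pixel-to-millimeter conversion of Eq. (E13) can be sketched as follows, continuing the segmentation sketch; the mapping from millimeters to the diameter score is not reproduced here because the chapter does not state it:

```matlab
% Diameter: convert the major-axis length from pixels to millimetres (Eq. E13).
s   = regionprops(roi, 'MajorAxisLength');
dpx = s(1).MajorAxisLength;            % greatest diameter in pixels
dmm = (dpx / 96) * 25.4;               % pixels -> inches (dpi = 96) -> millimetres
```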

Finally, the calculated values of the four extracted features are multiplied by their weights to obtain the total dermoscopy score (TDS), which is calculated by the following equation:

$$\text{TDS} = 1.3\,A + 0.1\,B + 0.5\,C + 0.5\,D \tag{E15}$$

3.4 Classification stage

Based on the TDS value, the lesion is classified according to the following criteria:

$$\text{class} = \begin{cases}\text{benign}, & \text{TDS} < 4.75\\[2pt] \text{suspicious}, & 4.75 \le \text{TDS} \le 5.45\\[2pt] \text{malignant}, & \text{TDS} > 5.45\end{cases} \tag{E16}$$
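Putting the four scores together, the scoring and the three-way decision can be sketched as below; the weights are the standard ABCD (Stolz) weights, the cut-off values are those quoted in Section 1, and dmScore stands in for the diameter score of Section 3.3.4, whose exact mapping is not reproduced here:

```matlab
% Total dermoscopy score (Eq. E15) and classification (Eq. E16).
dmScore = 3;                                   % placeholder diameter score (Eq. E14 not reproduced)
TDS = 1.3*asymScore + 0.1*biScore + 0.5*colorScore + 0.5*dmScore;
if TDS < 4.75
    label = 'benign';
elseif TDS <= 5.45
    label = 'suspicious';
else
    label = 'malignant';
end
```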

An example of the whole process, with the results printed in the bottom-left corner of the image, is illustrated in Figure 12.

Figure 12.

An example of the implementation of the proposed methodology.


4. Experimental results and discussion

4.1 Hardware and software specifications

The experiments were executed on an Intel Core i3-2330M processor @ 2.20 GHz with 4 GB of RAM. The operating system is 64-bit Windows 7 Ultimate, and the software used for the implementation is MATLAB R2013a.

4.2 Dataset

The performance of our research is tested on the PH2 dataset [32]. It consists of 200 8-bit RGB dermoscopic images of melanocytic lesions with a resolution of 768 × 560 pixels. The database contains 80 common nevi, 80 atypical nevi, and 40 melanomas. The dermoscopic images were obtained at the Dermatology Service of Hospital Pedro Hispano (Matosinhos, Portugal) under the same conditions through the Tuebinger Mole Analyzer system using a magnification of 20×.

4.3 Implementation of ABCD rule

The ABCD rule is implemented on the PH2 dataset, and a random selection of successful segmentation and classification results is presented in Figure 13. For each image, the segmented lesion is surrounded by a solid blue line, and the calculated TDS value and the classification result are printed in the bottom-left corner.

Figure 13.

Results sample.

4.4 Discussion

The results of this research are compared with the results of [31] in terms of accuracy, sensitivity, and specificity. The running time for the diagnosis of 200 8-bit RGB images is 1670 s, i.e., an average of 8.35 s per examined lesion.

The performance of the proposed work is evaluated using the confusion matrix, one of the well-known evaluation tools, as described in Table 2. It presents the correct and incorrect classification counts resulting from the implementation of the ABCD rule on the PH2 dataset, which contains 80 common nevi, 80 atypical nevi, and 40 melanomas.

Actual class \ Predicted class | Benign (B) | Suspicious (S) | High suspicious (H)
Benign (B) | TP(B) = 73 | E(BS) = 5 | E(BH) = 2
Suspicious (S) | E(SB) = 11 | TP(S) = 63 | E(SH) = 6
High suspicious (H) | E(HB) = 2 | E(HS) = 4 | TP(H) = 34

Table 2.

The ABCD rule performance confusion matrix.

Table 3 summarizes the calculated values of true positive (TP), false negative (FN), false positive (FP), and true negative (TN) of the three classes, benign (B), suspicious (S), and high suspicious (H).

Measure | Benign (B) | Suspicious (S) | High suspicious (H)
TP | 73 | 63 | 34
FN | 7 | 17 | 6
FP | 13 | 9 | 8
TN | 107 | 111 | 152

Table 3.

Summary of correct and wrong classifications.

The accuracy, sensitivity, and specificity formulas are described in the following equations, respectively. Table 4 summarizes the achieved performance of the three classes.

Class | B | S | H | Average (our work) | Average [28]
Accuracy (%) | 90.00 | 87.00 | 93.00 | 90.00 |
Sensitivity (%) | 91.25 | 78.75 | 85.00 | 85.00 | 98.00
Specificity (%) | 89.17 | 92.50 | 95.00 | 92.22 | 77.50

Table 4.

Benchmarking results of the proposed work applied to the PH2 database.

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{E17}$$
$$\text{Sensitivity} = \frac{TP}{TP + FN} \tag{E18}$$
$$\text{Specificity} = \frac{TN}{TN + FP} \tag{E19}$$
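The entries of Table 4 can be reproduced from the per-class counts of Table 3; the benign class is shown below, and the suspicious and highly suspicious classes follow the same pattern:

```matlab
% Per-class performance for the benign class (counts taken from Table 3).
TP = 73; FN = 7; FP = 13; TN = 107;
accuracy    = (TP + TN) / (TP + TN + FP + FN) * 100;   % 90.00 %
sensitivity = TP / (TP + FN) * 100;                    % 91.25 %
specificity = TN / (TN + FP) * 100;                    % 89.17 %
```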

5. Conclusion and future work

In this work, we have developed an automatic diagnostic system using image processing techniques for the preliminary diagnosis of melanoma, based on the well-proven and commonly used ABCD medical procedure. The proposed work uses different image processing capabilities to achieve a fast, affordable, easily available, and highly accurate melanoma diagnosis. The overall process includes multiple modules handling the various steps: noise removal, contrast enhancement, lesion segmentation, feature extraction, and classification (diagnosis).

The achieved results show acceptable performance rates: an accuracy of 90%, a sensitivity of 85%, and a specificity of 92.22%.

The following opportunities are suggested for future work:

  • Increase the size of the dataset, and

  • Attach the proposed system to various mobile devices.


Acknowledgments

The author would like to thank Dr. Saeb Zaqout for his valuable comments and constructive criticism of this manuscript.

References

  1. Melanoma Treatment for Health Professionals (PDQ). National Cancer Institute [Internet]. 2015. Available from: http://www.cancer.gov/types/skin/hp/melanoma-treatment-pdq [Accessed: 29 May 2016]
  2. Stewart BW, Wild CP. World cancer report 2014. World Health Organization, International Agency for Research on Cancer; IARC Publications; 2014
  3. Stolz W, Riemann A, Cognetta A, Braun-Falco O. ABCD rule of dermatoscopy: A new practical method for early recognition of malignant melanoma. European Journal of Dermatology. 1994;4(7):521-527
  4. Ahnlide I, Bjellerup M, Nilsson F, Nielsen K. Validity of ABCD rule of dermoscopy in clinical practice. Journal of Acta Dermato-Venereologica. 2016;96:367-372. DOI: 10.2340/00015555-2239
  5. Laura RP, Deysi NC, Diego PP-R, José LN, Lizza AT. Computerized medical diagnosis of melanocytic lesions based on the ABCD approach. CLEI Electronic Journal. 2016;19(2):1-22. DOI: 10.19153/cleiej.19.2.5
  6. Parker LR. Algorithms for Image Processing and Computer Vision. 2nd ed. Indianapolis, Indiana, USA: Wiley Publishing; 2011
  7. Goodson AG, Grossman D. Strategies for early melanoma detection: Approaches to the patient with nevi. Journal of the American Academy of Dermatology. 2009;60(5):719-735. DOI: 10.1016/j.jaad.2008.10.065
  8. Bhuiyan MA, Azad I, Kal-Uddin M. Image processing for skin cancer features extraction. International Journal of Scientific and Engineering Research. 2013;4(2):1-6
  9. Jain JW, Ramteke NS. ABCD rule based automatic computer-aided skin cancer detection using MATLAB. International Journal on Computer Technology and Applications. 2013;4(4):691-697
  10. Malini M, Saranya DAA. Review of segmentation techniques on melanoma detection. International Journal of Advanced Research in Computer Science and Software Engineering. 2015;5(4):1043-1047
  11. Maglogianni I, Kosmopoulos DI. Computational vision systems for the detection of malignant melanoma. Oncology Reports. 2006;15:1027-1032
  12. Deserno TM, Ali A. A systematic review of automated melanoma detection in dermatoscopic images and its ground truth data. In: Proceedings of SPIE Medical Imaging, International Society for Optics and Photonics; 2012. p. 8318
  13. D'Amico M, Stanganelli I. Qualitative asymmetry measure for melanoma detection. In: IEEE International Symposium on Biomedical Imaging; 15-18 April 2004; Arlington, VA, USA; 2004. 2: pp. 1155-1158
  14. Cudek P, Grzymała-Busse W, Hippe ZS. Further research on automatic estimation of asymmetry of melanocytic skin lesions. In: Human Computer Systems Interaction: Backgrounds and Applications 2, Advances in Intelligent and Soft Computing Series. Vol. 99. Berlin, Heidelberg: Springer; 2004. pp. 125-129. DOI: 10.1007/978-3-642-23172-8
  15. Ravichandran KS, Premaladha J. Asymmetry analysis of malignant melanoma using image processing: A survey. Journal of Artificial Intelligence. 2014;7(2):45-53. DOI: 10.3923/jai.2014.45.53
  16. Pise N, Jagtap V, Jain S. Computer aided melanoma skin cancer detection using image processing. In: International Conference on Intelligent Computing, Communication & Convergence (ICCC-2015), Elsevier, Procedia Computer Science; December 2015; Bhubaneswar, Odisha, India; 2015. 48: pp. 735-740
  17. Ananthi B, Balamohan S, Hemalatha M. Melanoma detection using RGB color model in medical imaging. Middle-East Journal of Scientific Research. 2014;21(11):1982-1987. DOI: 10.5829/idosi.mejsr.2014.21.11.21494
  18. Iqbal S, Sophia J, Divyashree A, Mundas M, Vidya R. Implementation of supervised learning for melanoma detection using image processing. International Journal of Research in Engineering and Technology. 2015;4(6):325-329. DOI: 10.15623/ijret.2015.0406055
  19. Grammatikopoulos G, Hatzigaidas A, Papastergiou A, Lazaridis P, Zaharis Z, Kampitaki D, et al. Automated malignant melanoma detection using MATLAB. In: Proceedings of the 5th WSEAS International Conference on Data Networks, Communications & Computers; 16-17 October 2006; Bucharest, Romania; 2006. pp. 91-94
  20. Iqbal S, Sophia M, Divyashree JA, Mundas M, Vidya R. Implementation of Stolz's algorithm for melanoma detection. International Advanced Research Journal in Science, Engineering and Technology. 2015;2(6):9-12. DOI: 10.14445/23488549/IJECE-V4I4P105
  21. Turkar V, Shetty P. Melanoma decision support system for dermatologist. In: International Conference on Recent Trends in Information Technology and Computer Science (IRCTITCS); 2011. pp. 28-30
  22. Jaiswar S, Kadri M, Gatty V. Skin cancer detection using digital image processing. International Journal of Scientific Engineering and Research (IJSER). 2015;3(6):138-140
  23. Situ N, Yuan X, Chen J, Zouridakis G. Malignant melanoma detection by Bag-of-features classification. In: 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society; 20-25 August 2008; Vancouver, Canada; 2008. pp. 3110-3113
  24. Catarina B, Jorge SM, Teresa M. Bag-of-features classification model for the diagnose of melanoma in dermoscopy images using color and texture descriptors. In: 10th International Conference on Image Analysis and Recognition; 26-28 June 2013; Póvoa do Varzim, Portugal; 2013. pp. 547-555
  25. Cheerla N, Frazier D. Automatic melanoma detection using multi-stage neural networks. International Journal of Innovative Research in Science, Engineering and Technology. 2014;3(2):9164-9183
  26. Aswin RB, Abdul Jaleel J, Salim S. Implementation of ANN classifier using MATLAB for skin cancer detection. In: International Conference on Mobility in Computing-ICMiC13; 17-18 December 2013; India; 2013. pp. 87-94
  27. Messadi M, Cherifi H, Bessaid A-H. Segmentation and ABCD rule extraction for skin tumors classification. Journal of Convergence Information Technology (JCIT). 2014;9(2):21-34
  28. Tomatis S, Carraral M, Bono A, Bartoli C, Lualdi M, Tragni G, et al. Automated melanoma detection with a novel multispectral imaging system: Results of a prospective study. Institute of Physics Publishing, Physics in Medicine and Biology. 2005;50(8):1675-1687. DOI: 10.1088/0031-9155/50/8/004
  29. Santiago-Montero R, Hernandez D. Border and asymmetry measuring of skin lesion for diagnostic of melanoma using a perimeter ratio. Asian Journal of Computer Science and Information Technology. 2016;6(2):7-13. DOI: 10.15520/ajcsit.v6i2.41
  30. Akila IS, Sumathi V. Detection of melanoma skin cancer using segmentation and classification algorithms. In: Proceedings on National Conference on Information and Communication Technologies (IJCA 2015); 2015. 2: pp. 1-4
  31. Barata C, Ruela M, Francisco M, Mendonça T, Marques J. Two systems for the detection of melanomas in dermoscopy images using texture and color features. IEEE Systems Journal. 2013;99:1-15. DOI: 10.1109/JSYST.2013.2271540
  32. PH2 dataset [Internet]. 2016. Available from: https://www.dropbox.com/s/k88qukc20ljnbuo/PH2Dataset.rar [Accessed: 12 May 2016]
