Preventing Diabetic Retinopathy: Red Lesions Detection in Early Stages

Diabetic retinopathy is the most common diabetic eye disease and a leading cause of blindness in industrialized countries. For example, several studies have provided data on its prevalence in several countries, such as Spain (values between 26.11% and 44.7%, depending on the area and the type of diabetes) Fernandez-Vigo et al. (1993); RA et al. (2010), Australia (overall, 15.3% of patients with diabetes had retinopathy) Tapp et al. (2003) or the USA (40.3% crude prevalence among diabetic patients) Kempen et al. (2004). Moreover, several worldwide programs, like the Aravind Eye Care System (http://www.aravind.org/Default.aspx), promote the sharing of knowledge and resources to prevent blindness, such as that caused by diabetic retinopathy.

Diabetic retinopathy is caused by changes in the blood vessels of the retina. In most cases, its detection at an early stage would allow for a treatment with a high healing rate. This is why screening processes are a very valuable method for the prevention of this pathology. Typically, a large number of fundus images (photographs of the back of the inside of the eye) have to be analyzed, as diabetic patients have both eyes examined at least once a year. These photographs are then examined by an ophthalmologist (eye doctor), who determines whether any visible signs of the disease are present. To effectively manage all this information and the workload it produces, automatic techniques for analyzing the images would represent a major improvement, since manual analysis of this amount of information is a very complex and error-prone process. These techniques must be robust, sensitive and specific to be implemented in real-life screening applications. One important symptom of diabetic retinopathy is the development of red lesions, such as microaneurysms, and white lesions, such as exudates and cotton-wool spots. Here only red lesions will be located, since they are among the first absolute signs of diabetic retinopathy.
Several studies have been presented dealing with the problem of hard exudate detection. (Leistritz & Schweitzer, 1994) showed that using size, shape, texture, etc. in isolation is insufficient to detect hard exudates accurately. This observation is also taken into account in this work. Several other attempts have been made to detect hard exudates using histogram segmentation. If the background color of a retinal image is sufficiently uniform, a simple and effective method to separate exudates from the background is to select proper thresholds (Leistritz & Schweitzer, 1994; Philips et al., 1993; Ward et al., 1989). In (Goldbaum et al., 1990) the Mahalanobis distance is used as the classifier criterion, but results were inconclusive. Many other approaches can be found in the literature, such as those based on mathematical morphology (Cree et al., 1997; Spencer et al., 1996) or on neural networks (Gardner et al., 1996), with reported results such as a sensitivity of 85% and a specificity of 76% (Hipwell et al., 2000), a sensitivity of 77.5% and a specificity of 88.7% (Sinthanayothin et al., 2002), or a sensitivity of 93.1% and a specificity of 71.4% (Larsen et al., 2003), the last obtained using a commercially available automatic red lesion detection system. More recently, García et al. (García et al., 2009) developed several techniques to deal with the problem of feature detection for diabetic retinopathy diagnosis and screening, using neural networks such as the multilayer perceptron classifier (García et al., 2008) or a radial basis function fed with the output of a logistic regression process, and obtained sensitivities ranging from 86.1% to 92.1% and positive predictive values from 71.4% to 86.4%.
In our work, an algorithm for the detection of red lesions in digital color fundus photographs is proposed. The method performs in three stages. In the first stage, candidate red lesion points are obtained by using a set of correlation filters working at different resolutions, allowing the detection of a wider set of points. Then, in the second stage, a region growing segmentation process rejects the points from the prior stage whose size does not fit the red lesion pattern. Finally, in the third stage, several tests are applied to the output of the second stage: a shape test to remove non-circular areas, an intensity test to remove areas belonging to the fundus of the retina, a correlation-based test, and a test to remove the points which fall inside the vessels (only lesions outside the vessels are considered). Evaluation was performed on a test set composed of images representative of those normally found in a screening set. Moreover, comparisons with manually obtained results from clinical experts are performed, to establish the accuracy and reliability of the method.

Red lesion detection algorithm
In this work an algorithm for the detection of red lesions in digital color fundus photographs is proposed. The method performs in three stages, as depicted in Fig. 2. In the first stage, working on the green channel of the input (color) image, several correlation filters are applied, each with a different resolution, detecting this way a wider spectrum of features and producing the initial set of candidate lesions. The output of this stage serves as input for the second stage, where a region growing process refines the red lesion candidate set, obtaining a more accurate one. Finally, in the third stage the previous set of candidates is filtered by means of four filtering processes: a shape (circularity) filter, which rejects regions with low circularity; an intensity filter, which removes the candidates that do not fulfill the intensity criteria; a correlation mean filter, which removes candidate areas that are outliers in the correlation filter response; and a final filter which, using the creases of the image, removes the candidates lying inside or near the vascular tree.

Fig. 2. General schema of the proposed methodology. Once the image is acquired, the correlation filters are applied to get an initial set of candidate lesions. These candidates are analyzed, first preprocessing them using a region growing algorithm and then applying several filters in order to obtain the final lesion set.

Correlation filters set
The goal of this stage is to obtain the set of candidate red lesions. Since the regions corresponding to lesions show a wide variety of sizes, three correlation filters with different resolutions, F_1, F_2 and F_3, are applied to the image. As proposed in Niemeijer et al. (2005), since red lesions have the highest contrast in the green plane Hoover & Goldbaum (2000), instead of applying the correlation to the gray-level version of the image, the green plane I_g is used as the input. In Figure 3 the histogram of the three RGB channels is shown. As depicted, the red channel pixel distribution is very scattered, with low frequencies except at the highest intensity values. The blue channel is biased towards low values, with a very short range. Finally, the green channel shows a wide range of values within an acceptable frequency range.

Red lesions present the following common characteristics. Shape: red dots have an approximately circular shape, although their borders are very irregular; in Figure 6 the histogram of the radii of the red lesions is depicted, assuming a perfectly circular shape. Color and intensity: as their name points out, red lesions are mainly red colored, with a color similar to that of the blood vessels, and are much darker than the surrounding retinal tissues.
Figure 7(a) shows the aspect of a typical red lesion in the green channel of the original image, with the profile through the center of the lesion depicted in Figure 7(b).
In order to design the shapes and sizes of the filter kernels, red lesions from the images in the test set were isolated and analyzed. As can be observed in Figure 7(b), a red lesion template can be forged using a Gaussian pattern.
Assuming symmetry in the red lesion, the Gaussian pattern can be described as a one-dimensional function whose variable is the distance (or radius) to the center of the lesion, as shown in Equation 1.
where parameters a and b control the shape of the generated Gaussian curve. In Figure 8 a two-dimensional Gaussian template is depicted.
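Equation 1 itself did not survive extraction; a plausible reconstruction, consistent with the surrounding description (a one-dimensional Gaussian dip in the radius r, with a controlling the width, b the depth, and the background fixed at the mean green-channel intensity), would be:

```latex
% Hypothetical reconstruction of Equation 1 -- the exact form is not in the text.
% G(r): template intensity at distance r from the lesion center,
% \bar{I}_g: mean green-channel intensity, b: depth, a: width parameter.
G(r) = \bar{I}_g - b\, e^{-r^2/a^2}
```

This matches the later parameter list, where b is explicitly called the depth of the Gaussian and the kernel background is set to the mean green channel value.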
But, since the red lesions have a circular shape, a second, more "ideal" template was also correlated with the red lesions, and the results from both templates were compared. The kernels of this second set of filters were again designed to be square-shaped, this time with a circle inside simulating the red lesion. Figure 9 shows one of the kernels used in this approach.
Our first results were obtained by correlating a single filter with the images. This filter was designed with the following parameters: a radius of 4 pixels, a size of 19 × 19 pixels, a depth (b in the equation which models the Gaussian shape) of 30 units (taken from the maximum depth observed in the red dots marked by the clinician in the test images) and, for the intensity, the mean green channel value of the analyzed image. With this approach, however, poor results were obtained, with a high number of false positives (230). Analyzing the results, we concluded that the variability in the size of the red lesions led to these poor results. To deal with this size variability, a multiscale solution was designed. In our case, after testing several filter configurations, the best results were obtained using three filters with the following parameters:

Radius: 3, 5 and 10 pixels, selected to cover a wide spectrum of red dot radii.

Size: 15 × 15, 21 × 21 and 43 × 43 pixels, to cover most of the red dot sizes detected by the clinician.

Depth: 30 units.

Intensity: the mean green channel value of the analyzed image.
The three images resulting from the correlation with the three filters, C_1, C_2 and C_3, each of size M × N, are combined using Equation 2, so that the output of this stage, R′, is an image where each pixel holds the maximum value obtained by the correlations at its location. Then, a threshold τ is applied to the output to discard pixels with low correlation values, and finally connected regions are built from the values above that threshold. The result of this process is the set of candidate red lesion regions.
The threshold τ was set to 0.43, the lowest value obtained from the profiles of the red lesions, as shown in Figure 7(b).
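This stage can be sketched in Python as follows. This is not the authors' code: the kernel parameters follow the text, but the choice of a normalised (Pearson) correlation is our assumption, made so that the 0.43 threshold is meaningful as a correlation coefficient, and the input is a synthetic green-channel patch rather than a real fundus image.

```python
import numpy as np

def gaussian_kernel(size, radius, depth, bg):
    # Square kernel: flat background at the image's mean green value with a
    # Gaussian dip of the given depth, simulating a red lesion profile.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    return bg - depth * np.exp(-(x**2 + y**2) / radius**2)

def ncc(window, kernel):
    # Pearson correlation coefficient between an image window and a kernel.
    w = window - window.mean()
    k = kernel - kernel.mean()
    denom = np.sqrt((w * w).sum() * (k * k).sum())
    return float((w * k).sum() / denom) if denom > 0 else 0.0

def max_correlation(green, kernels):
    # Equation 2: at each pixel keep the maximum response over all kernels
    # (positions where a kernel does not fit entirely are left at zero).
    h, w = green.shape
    out = np.zeros((h, w))
    for kern in kernels:
        half = kern.shape[0] // 2
        for i in range(half, h - half):
            for j in range(half, w - half):
                win = green[i - half:i + half + 1, j - half:j + half + 1]
                out[i, j] = max(out[i, j], ncc(win, kern))
    return out

# Synthetic example: a dark Gaussian "lesion" on a flat background.
green = np.full((41, 41), 100.0)
yy, xx = np.mgrid[-20:21, -20:21]
green -= 30.0 * np.exp(-(xx**2 + yy**2) / 4.0**2)
kernels = [gaussian_kernel(15, 3, 30.0, 100.0),
           gaussian_kernel(21, 5, 30.0, 100.0)]
r_prime = max_correlation(green, kernels)
candidates = r_prime > 0.43   # threshold tau from the text
```

The per-pixel maximum is what makes the stage multiscale: a small lesion answers best to the small kernel and a large one to the large kernel, but both end up above the threshold in R′.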
Figure 10 shows the forged red dot template.
Both correlation kernels were tested on the image test set, which contains images from patients with red lesions and images from healthy patients (no red lesions); results are included in Table 1. These results showed an extremely good sensitivity but a relatively high average of false positives per image. Although this phase is meant to detect all the true lesions, while the successive ones aim to classify them as true or false, an improvement was made in order to lower the false positive rate. Using the same test images, the results obtained with the circular kernel template allowed the correlation value threshold to be increased to 0.5, thus reducing the number of false positive cases.

Table 1. Red lesion candidate detection sensitivity and average false positives per image obtained using three Gaussian templates (first and second rows) and using three circular kernel templates (third and fourth rows).
Despite the meaningful reduction of false positives, many areas not belonging to red lesions are still obtained in this stage (Figure 11). These areas must, of course, be discarded from the result. This is the objective of the second stage of the algorithm.

Region growing algorithm
To remove false positives from the output of stage 1, a region growing algorithm is used. The regions which contain fewer pixels than a given threshold will be removed, since they correspond to noise and false features detected as lesions. In a region growing algorithm, the first step consists of finding the seeds for the growing regions. Here, the point with the highest correlation value in each region output from stage one is taken as the seed (which will be the center of the lesion, if the region comes from a lesion). Once the seed has been found, the threshold t for the growing process is determined following Equation 3, as in Niemeijer et al. (2005).
where i_seed is the intensity at the starting seed position, i_bg is the intensity of the same pixel in the image resulting from applying a median filter with a kernel size of 41 × 41 to I_g, and α ∈ [0, 1].
Here α = 0.5 is used, as proposed in Niemeijer et al. (2005). Since the illumination of the retina is not uniform across its area, using the local background estimate i_bg provides our algorithm with a more robust behaviour against these photometric changes.
Figure 12 depicts the different results obtained using a simple median filter and using the estimated values obtained with Equation 3, which clearly improve the results.

Fig. 12. Results obtained using Equation 3. 12(a): profile of a lesion and several growing thresholds (median filter and the filter from Eq. 3). 12(b): results from the different growing thresholds; pixels below the estimated threshold (Eq. 3) are depicted in green, and pixels between this threshold and the median filter threshold are depicted in red.
Growing starts at the seed pixel and stops when no more connected pixels below the threshold can be found. An illustration of the region growing algorithm is shown in Figure 13, and Figure 14 shows the evolution of several regions after the region growing algorithm is performed.
The grown objects together form the final candidate object set. If a region's size is above 200 pixels it is discarded, since it is considered a background or vessel area. This threshold value was set empirically by analyzing the sizes of the red lesions in the test set, composed of 100 images. Figure 15 depicts the result of this stage applied to the image from Figure 11. The image in Figure 15 still shows some regions that are not red lesions, which should be removed from the results. This goal is fulfilled by stage three of the algorithm, discussed in the next section.
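A minimal sketch of this stage, under stated assumptions: the text omits Equation 3 itself, so `grow_threshold` encodes our reading of it (a value between the seed intensity and the local background estimate, halfway for α = 0.5), growing is implemented as a 4-connected flood fill, and i_bg is passed in directly rather than computed with a 41 × 41 median filter.

```python
def grow_threshold(i_seed, i_bg, alpha=0.5):
    # Our reading of Equation 3: a threshold between the seed intensity
    # and the local background i_bg (in the chapter, taken from a 41x41
    # median filter of the green plane); alpha = 0.5 places it halfway.
    return i_seed + alpha * (i_bg - i_seed)

def grow_region(green, seed, threshold, max_size=200):
    # 4-connected flood fill from the seed over pixels darker than the
    # threshold; regions above max_size pixels are discarded (returned
    # empty), as they correspond to background or vessel areas.
    h, w = len(green), len(green[0])
    region, stack = set(), [seed]
    while stack:
        i, j = stack.pop()
        if (i, j) in region or not (0 <= i < h and 0 <= j < w):
            continue
        if green[i][j] >= threshold:
            continue
        region.add((i, j))
        if len(region) > max_size:
            return set()
        stack += [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return region

# Toy example: a 3x3 dark spot (intensity 60) on a background of 100.
img = [[100.0] * 20 for _ in range(20)]
for i in range(5, 8):
    for j in range(5, 8):
        img[i][j] = 60.0
t = grow_threshold(60.0, 100.0)      # halfway between seed and background
lesion = grow_region(img, (6, 6), t)
```

Here the growth stops exactly at the spot's border, since the background intensity lies above the computed threshold.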

Feature based filtering process
After the candidate regions have been detected and the region growing algorithm performed, in this last stage the candidates least adjusted to the red lesion profile are removed. To do so, several high-level, knowledge-based filters have been designed to improve the detection results. The improvement consists of removing false positives while preserving the true positives.

Shape filtering
The first filtering process consists of removing the regions which do not fit the regular shape of red lesions, which is circular. In this step, the degree of circularity of each region is measured and, using a threshold, non-circular shapes are deleted from the set of candidate regions. Circularity C is computed by means of Equation 4.
where p represents the perimeter of the candidate region and a represents its area.
To deal with the irregular shapes of the candidate regions, which make the computation of perimeter and area more difficult, a morphological closing operator Jähne (2005) is first applied to the images. This way, inner holes and small irregularities resulting from the region growing stage are removed.
To obtain the perimeter values, chaincode algorithms Russ (1995) were discarded in favor of a simpler method: counting the pixels of the region which are neighbors of a pixel outside the region. Although this method tends to overestimate the perimeter of figures, this is not an important problem for our algorithm, since false positives and red lesions are equally affected by the overestimation. To prove this statement, perimeter values were obtained both using chaincodes and using our method, with the results depicted in Figure 16. In this graphic, circularity values obtained using chaincodes for the perimeters (c1) are shown on the horizontal axis, and values obtained using the frontier-pixel method are shown on the vertical axis (c2). Although c2 > c1 for all values, this does not affect the filtering process (whichever axis is considered, most of the false positives fall clearly below the threshold).
Once the circularity degree has been computed, Equation 5 is applied in order to keep only circular-like regions.
The value U t = 0.375 was determined empirically by evaluating the set of test images (Figure 18). Figure 17 shows several examples of candidate regions removed by this filter.
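The shape filter above can be sketched as follows. Equations 4 and 5 are not reproduced in the extracted text, so we assume the standard isoperimetric circularity C = 4πa/p² (which matches the stated variables p and a, and equals 1 for a perfect circle); the perimeter estimate is the frontier-pixel count described above.

```python
import math

def perimeter_pixels(region):
    # The chapter's simpler perimeter estimate: count region pixels with
    # at least one 4-neighbour outside the region (tends to overestimate,
    # which affects lesions and false positives alike).
    return sum(1 for (i, j) in region
               if any((i + di, j + dj) not in region
                      for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))))

def circularity(region):
    # Assumed form of Equation 4: C = 4*pi*a / p^2, where a is the area
    # (pixel count) and p the perimeter; 1 for a perfect circle, smaller
    # for elongated or irregular shapes.
    a, p = len(region), perimeter_pixels(region)
    return 4.0 * math.pi * a / (p * p)

def passes_shape_filter(region, u_t=0.375):
    # Equation 5: keep only regions whose circularity reaches U_t.
    return circularity(region) >= u_t

square = {(i, j) for i in range(5) for j in range(5)}   # compact blob
line = {(0, j) for j in range(50)}                      # vessel-like sliver
```

A compact blob passes the 0.375 threshold comfortably, while a long thin sliver, the typical vessel-fragment false positive, is rejected.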
In Figure 19 the result obtained from this filter is depicted, with the accepted regions marked as blue and green areas. Table 3 shows the results obtained by applying the circularity filter to the output of the region growing stage.
Images type        Red Lesions    No Red Lesions
Sensitivity        85%            N/A
False positives    36             7

Table 3. Results obtained applying the shape filter to the output from the stage described in section 3.

Intensity filtering
The second filtering process performed in this third stage is an intensity-based filter. Since red lesions are dark structures, lighter structures can be removed from the set of candidate regions. Since the illumination is not constant across images, a robust threshold is needed, and it cannot be a constant. From the set of test images, a functional threshold was designed, depending only on the mean value of the green plane of the image.
For each of the images, an expert clinician set the threshold so that the number of false positives was zero, while minimizing the number of false negatives. Equation 6, the intensity threshold I, was obtained by a minimum square error fit of a straight line to the set of values obtained from this manual validation.
where Ī_g represents the mean value of the green plane of the image. Candidate regions above the threshold are removed, as depicted in Figure 20.
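A sketch of this filter follows. The chapter does not give the fitted coefficients of Equation 6, so the slope and offset below are purely illustrative placeholders, not the study's fitted line.

```python
def intensity_threshold(mean_green, slope=0.9, offset=0.0):
    # Equation 6 is a straight line fitted (least squares) to the
    # expert-chosen thresholds; the fitted coefficients are not given in
    # the text, so slope and offset here are illustrative placeholders.
    return slope * mean_green + offset

def passes_intensity_filter(region_mean, mean_green):
    # Candidate regions whose mean intensity lies above the threshold are
    # removed: red lesions are darker than the retinal background.
    return region_mean <= intensity_threshold(mean_green)
```

Making the threshold a function of the image's mean green value is what gives the filter robustness across images with different global illumination.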
In Figure 21 the result from this process is shown, with candidate regions marked as green and blue areas.
Table 4 shows the intensity filtering results obtained.

Table 4. Results obtained applying the intensity filter to the output from the stage described in section 3.

Correlation filtering
The third filter is correlation-based. Taking as its basis the correlation image R′ from the first stage, this filter removes every region whose mean value in the correlation image is not above a given threshold. After testing a set of values on the test images, this threshold was empirically set to 0.4 (Figure 22).
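This test reduces to averaging R′ over the region's pixels, which can be sketched as (the R′ values below are made up for illustration):

```python
def passes_correlation_filter(r_prime, region, tau_mean=0.4):
    # Keep a candidate only if the mean of the stage-1 correlation image
    # R' over the region's pixels exceeds the empirical threshold 0.4.
    vals = [r_prime[i][j] for (i, j) in region]
    return sum(vals) / len(vals) > tau_mean

r_prime = [[0.0] * 4 for _ in range(4)]
r_prime[1][1], r_prime[1][2] = 0.7, 0.5   # strong, lesion-like response
```

A region sitting on a strong correlation peak is kept, while one over a flat, low-response area is rejected as an outlier of the filter response.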
In Figure 24 the accepted and rejected regions are depicted. In Table 5 the performance of the correlation filtering is shown.
Images type        Red Lesions    No Red Lesions
Sensitivity        81%            N/A
False positives    22             4

Table 5. Results obtained applying the correlation filtering to the output from the stage described in section 3.

Creases-based filtering
Finally, in the last of the filtering processes, the creases computed from the original digital retinal image are used as landmarks (Figure 25 shows the cropped creases image of Figure 11(a)).
Regions intersected by creases are removed if their mean value is over 0.25. This filter removes the regions inside or near the vascular tree, which must be discarded in the analysis of red lesions. Figure 26 shows an example of a candidate region removed by the creases filter.
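Reading "mean value" as the mean of the creases image over the region's pixels (our interpretation), the test can be sketched as (the crease values below are made up for illustration):

```python
def passes_creases_filter(creases, region, tau=0.25):
    # Remove a candidate when the mean of the creases image over its
    # pixels exceeds 0.25, i.e. when the region lies on or near the
    # vessel tree that the creases trace.
    vals = [creases[i][j] for (i, j) in region]
    return sum(vals) / len(vals) <= tau

creases = [[0.0] * 4 for _ in range(4)]
creases[2][0], creases[2][1] = 0.9, 0.8   # a crease (vessel centreline)
```

A candidate overlapping the crease is discarded as a vessel fragment, while a candidate away from any crease survives to the final lesion set.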
Figure 27 represents the variation of the false positive and sensitivity rates as the mean value used for region removal in this filter varies.
Figure 28 depicts the output of this filter, which is also the final result of the algorithm. Again, red lesions are marked as green and blue areas.
The effectiveness of the crease-based filtering is described in Table 6.
Images type        Red Lesions    No Red Lesions
Sensitivity        85%            N/A
False positives    36             7

Table 6. Crease-based filtering results.
Finally, the complete algorithm with all its stages is depicted in Figure 29.

Validation and results
To validate the algorithm described in the previous sections, an experiment was designed in collaboration with the ophthalmologists of the Complejo Hospitalario Universitario de Santiago (CHUS) and the Instituto Tecnológico de Oftalmología de Santiago (ITO). From the set of 75 images, captured using a Canon CR5 non-mydriatic 3CCD camera at a 45° field of view, the clinician manually marked the red lesions detected in 50 of the images. Then, the same images were input to the system and the obtained results were compared. The remaining 25 images, without lesions, were also input to the system, in order to evaluate its response to images from healthy patients and to count the number of false positives. Figure 30 shows an image analyzed by the clinician, with the red lesions marked as yellow areas. To visually compare the results obtained with the described algorithm, the red lesions also detected by the system (true positives) are surrounded by a blue circle, the false positives are marked as green circles, and the false negatives are marked as red circles.
Results obtained with the whole test set of 75 images are reported in Table 7, for the images with (first column) and without (second column) red lesions. The first and second rows show the total number of features detected (not applicable for the manual process in the case of images without lesions), for the manual segmentation and for the algorithm respectively. The third row shows the number of false positives (N/A in the case of images without red lesions). The fourth row contains the number of successfully recovered lesions, and finally the fifth row displays the number of false negatives (N/A in the case of images without red lesions).
From the results in Table 7, it can be seen that 70.7% of the lesions detected by the system correspond to true lesions, with a sensitivity of 78.5%, which, compared with the results reviewed in the introduction, indicates that our system could be a really helpful tool in a real screening system, reducing the workload of expert clinicians by pre-filtering the patients who present some kind of lesion.

Table 7. Numbers obtained in the evaluation of the system for the images with (first column) and without (second column) red lesions, with rows as described above.
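The two figures quoted reduce to the usual counts compared against the clinician's marking; a tiny helper (with made-up counts, not the study's data) makes the definitions explicit:

```python
def detection_metrics(true_pos, false_pos, false_neg):
    # Sensitivity: fraction of clinician-marked lesions the system
    # recovers.  Positive predictive value: fraction of the system's
    # detections that are real lesions.
    sensitivity = true_pos / (true_pos + false_neg)
    ppv = true_pos / (true_pos + false_pos)
    return sensitivity, ppv

sens, ppv = detection_metrics(true_pos=8, false_pos=2, false_neg=2)
```

Reporting both matters in screening: sensitivity bounds how many lesions are missed, while the positive predictive value bounds how much reviewer time false alarms consume.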

Conclusions
In this work, a system to assist in the detection of red lesions in digital retinal images has been presented and validated through the whole process. The system performs in three stages. In the first stage, candidate red lesion areas are detected by means of a set of correlation filters adapted to different resolutions, so that features of different sizes can be detected. Then, in the second stage, a region growing algorithm together with a matched threshold allows the rejection of candidates which do not fit the size of red lesions. Finally, in the third stage, false positives are removed by filtering the output with four matched filters, which analyze several kinds of high-level knowledge about the candidate regions: shape (searching for circularity), intensity (searching for dark areas), correlation response (with a correlation mean filter) and, finally, position, discarding regions inside the vascular tree by means of the crease lines.
The whole algorithm has proven to be robust and accurate, with a sensitivity of 78.5%, although a bigger set of images and further validation are needed.

Fig. 1. Digital color retinal image used as input for the diagnosis.

Figure 4 shows an example of the results obtained by an expert clinician working in each of the RGB channels. Red lesions present three common characteristics, described in the section on the correlation filters set.

Fig. 5. Histogram of the areas of the red dots. The size of the red lesions is never higher than 200 pixels.

Fig. 6. Histogram of the radii of the red lesions. The radius is always between 2 and 7 pixels long.

Fig. 7. Green channel view of a red lesion. The pixels belonging to the lesion are darker than the background. Top: lines mark the borders of the lesion, located in the centered square. Bottom: profile through the center of a red lesion.

Fig. 8. Example of one of the three Gaussian kernels with which the image is correlated.

Fig. 9. Example of one of the three circular kernels with which the image is correlated.

Fig. 10. Example of an artificial red lesion created using a Gaussian template.
Fig. 11. 11(a) Digital color retinal image used as input for the diagnosis. 11(b) Cropped region of the output image after applying the correlation filters, Equation 2 and the threshold to the image from Figure 11(a). Candidate red lesions are marked in red, green and blue.

Fig. 13. Illustration of the region growing algorithm. Starting at point (3,3), the algorithm evolves until no new pixels can be added to the region.

Fig. 14. Region growing examples. In red, the lesions marked by the clinician are shown; in green, the output of the candidate region detection is depicted; and in yellow, the regions where both stages overlap.

Fig. 15. Cropped region of the output image after applying the region growing process to the image in Figure 11. Regions are marked as green and blue areas.
Fig. 16. Circularity values compared. Red squares represent false positives, and blue squares represent true red dots.

Fig. 17. Regions removed by the circularity filter. The upper row shows the area in the retina, and the bottom row shows the removed candidate regions.

Fig. 18. Sensitivity and false positive values depending on the circularity threshold used.

Fig. 20. Regions removed by the intensity filter. The upper row shows the area in the retina, and the bottom row shows the removed candidate regions.
In Figure 23, examples of a candidate lesion removed, 23(a), and a candidate region preserved, 23(b), by the mean correlation filter are depicted. In 23(a), the first row shows the original area in the retina (left) and the candidate lesion (right), and the second row shows the correlation values. In 23(b) the same figures are presented with the same interpretation, but in this case the correlation values lead to the preservation of the candidate region.

Fig. 22. Sensitivity and false positive values depending on the correlation threshold used.

Fig. 23. Example of a candidate lesion removed, 23(a), and a candidate region preserved, 23(b), by the mean correlation filter.

Fig. 24. Cropped region of the output image after applying the correlation filtering process. Regions are marked as green and blue areas.

Fig. 25. Creases of the cropped region of the image in Figure 11(a), used to remove regions inside or near vessels.

Fig. 27. Sensitivity and false positive values depending on the creases threshold used.

Fig. 29. Representation of the whole algorithm, with the different stages where the images are processed.

Fig. 30. Comparison of the manual and automatic red lesion detection in the image from Figure 11(a). True positives are surrounded by a green circle, false positives are marked as blue circles, and false negatives are marked as red circles.
Table 1 shows the early results obtained for images from patients with red lesions.

Table 2 describes the quality improvement achieved in this stage, compared with the false positive average obtained in section 3.

Table 2. Results obtained applying the region growing algorithm.