Research Article
Automatic Detection of Hard Exudates in Color Retinal
Images Using Dynamic Threshold and SVM Classification:
Algorithm Development and Evaluation
Correspondence should be addressed to Shengchun Long; [email protected] and Xiaoxiao Huang; [email protected]
Received 12 March 2018; Revised 1 December 2018; Accepted 6 January 2019; Published 23 January 2019
Copyright © 2019 Shengchun Long et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Diabetic retinopathy (DR) is one of the most common causes of visual impairment. Automatic detection of hard exudates (HE) from retinal photographs is an important step in the detection of DR. However, most existing algorithms for HE detection are complex and inefficient. We have developed and evaluated an automatic retinal image processing algorithm for HE detection using a dynamic threshold and fuzzy C-means clustering (FCM) followed by a support vector machine (SVM) for classification. The proposed algorithm consists of four main stages: (i) image preprocessing; (ii) localization of the optic disc (OD); (iii) determination of candidate HE using a dynamic threshold in combination with a global threshold based on FCM; and (iv) extraction of eight texture features from the candidate HE regions, which were then fed into an SVM classifier for automatic HE classification. The proposed algorithm was trained and cross-validated (10-fold) at the pixel level on the publicly available e-ophtha EX database (47 images), achieving an overall average sensitivity, PPV, and F-score of 76.5%, 82.7%, and 76.7%, respectively. It was tested on the independent DIARETDB1 database (89 images) with an overall average sensitivity, specificity, and accuracy of 97.5%, 97.8%, and 97.7%, respectively. In summary, the satisfactory evaluation results on both retinal imaging databases demonstrate the effectiveness of our proposed algorithm for automatic HE detection using a dynamic threshold and FCM followed by an SVM for classification.
Figure 2: Flow chart of our proposed algorithm for automatic detection of HE.
to many factors, including the noise introduced during the image acquisition process, improper reflection of the camera flash, and retinal pigmentation. Additionally, uneven illumination increases the intensity level near the OD and decreases it in regions away from the OD. All these factors have a significant impact on HE detection.

In our algorithm, color intensity normalization and contrast enhancement of the fundus photographs were performed, with the retinal images rescaled to 512 × 512 pixels. As proposed by Sánchez et al. [19], color normalization was performed by enhancing the luminance plane of the YIQ color model instead of enhancing each color plane of RGB. The modified luminance is

$$Y_{mod} = aY - bI - cQ \quad (1)$$

The modified YIQ image was then converted back to the RGB color model, as shown in the first three images in Figure 3. The empirical values of 1.8, 0.9, and 0.9 were used for parameters a, b, and c, respectively, with which satisfactory results were achieved when the images were converted back to the RGB color model, producing greater contrast between the HE and the background for the next step of HE detection.
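A minimal Python sketch of this normalization step is given below, assuming the standard NTSC RGB/YIQ conversion matrices; the function name and the final clipping to [0, 255] are our own choices.

```python
import numpy as np

# Standard NTSC RGB <-> YIQ conversion matrices.
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)

def normalize_color(rgb, a=1.8, b=0.9, c=0.9):
    """Color normalization sketch following Eq. (1): the luminance plane
    of the YIQ model is modified as Y_mod = a*Y - b*I - c*Q, and the
    image is then converted back to RGB."""
    rgb = rgb.astype(np.float64) / 255.0
    yiq = rgb @ RGB2YIQ.T
    y, i, q = yiq[..., 0], yiq[..., 1], yiq[..., 2]
    yiq[..., 0] = a * y - b * i - c * q            # Eq. (1)
    out = np.clip(yiq @ YIQ2RGB.T, 0.0, 1.0)       # back to RGB, clipped
    return (out * 255).astype(np.uint8)
```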
Figure 3: Example of retinal fundus image preprocessing. (a) Original retinal image. (b) Color normalized YIQ plane image. (c) Enhanced
RGB plane image. (d) Green channel image after CLAHE (zoom into the blood vessels with brighter strip). (e) Green channel image after
morphological opening. (f) Mean filtering of (e).
It has been observed that the OD appears with the most contrast in the green channel of RGB retinal images when compared with the red and blue channels [20]. Additionally, as the red channel is often saturated and the blue channel is the darkest channel and does not contain much information, only the green channel image was used for HE detection. Furthermore, in order to remove the bright strips down the central length of the blood vessels, the green plane of the image after contrast limited adaptive histogram equalization (CLAHE) was filtered by applying a morphological opening with a three-pixel-diameter disc [21]. Next, the illumination equalization method in [22] was used to correct shade as follows:

$$I_{ie} = I - I_{bg} + u \quad (2)$$

where a mean filter of size 51 × 51 was applied to the green channel image I to generate a background image $I_{bg}$, which was then subtracted from I to correct for shade variations. Finally, the average intensity u of the green channel image I was added to keep the gray range the same as in I. Example images from this process are shown in Figure 3.
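This preprocessing chain can be sketched as follows in Python with OpenCV. The filter and disc sizes follow the text; the CLAHE clip limit and tile grid are our own assumptions.

```python
import cv2
import numpy as np

def preprocess_green(img_bgr):
    """Preprocessing sketch: CLAHE on the green channel, a morphological
    opening with a small disc to suppress the bright central-vessel strips,
    and shade correction following Eq. (2)."""
    green = img_bgr[:, :, 1]                        # green channel (OpenCV stores BGR)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # assumed settings
    g = clahe.apply(green)
    disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))  # ~3-pixel-diameter disc
    g = cv2.morphologyEx(g, cv2.MORPH_OPEN, disc).astype(np.float64)
    bg = cv2.blur(g, (51, 51))                      # background image I_bg (51x51 mean)
    ie = g - bg + g.mean()                          # Eq. (2): I_ie = I - I_bg + u
    return np.clip(ie, 0, 255).astype(np.uint8)
```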
2.2.2. Optic Disc Detection and Masking. OD localization is an essential stage in our proposed algorithm because the OD has similar properties to exudates in terms of color and brightness. The OD is a bright yellow disc in the retina from which the retinal blood vessels emerge. Therefore, the disc should be masked from the fundus image before further HE detection.

OD localization is relatively simple and fast in normal retinal images because the OD is where the largest cluster of the brightest pixels is; however, it becomes more challenging in images where the area of bright lesions is also large or where the OD is obscured by retinal blood vessels, for example, when there is a large hemorrhage on the disc [6]. In our proposed algorithm, image brightness information and retinal vasculature features were used for OD localization [23], which involved three steps: retinal blood vessel extraction, localization of the OD center, and OD segmentation.

Retinal Blood Vessels Extraction. In general, retinal blood vessels in the green channel of fundus images do not have enough contrast with the surrounding background. CLAHE [24] was applied as an enhancement method to solve this problem. Next, mean filtering with a 9 × 9 pixel kernel was used to blur the image and reduce noise. The vessel difference image $I_{bv}$ was obtained by subtracting the blurred image from the CLAHE-enhanced image, and the binary retinal blood vessel image $I_{BV}$ was obtained by applying a thresholding operator [25] to $I_{bv}$. This process is shown in Figure 4, where two example images with different illumination conditions are given.
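A sketch of this vessel extraction step is shown below. Because vessels are darker than their surroundings, we take the difference in the direction that leaves vessel pixels positive before Otsu thresholding; this directionality is our reading of the text.

```python
import cv2

def extract_vessels(green):
    """Vessel-extraction sketch: CLAHE enhancement, 9x9 mean blur, and
    Otsu thresholding [25] of the difference image."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)
    blurred = cv2.blur(enhanced, (9, 9))
    # Vessels are darker than their surroundings, so the blurred-minus-enhanced
    # difference is positive on vessels (saturating uint8 subtraction).
    i_bv = cv2.subtract(blurred, enhanced)
    _, vessel_map = cv2.threshold(i_bv, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return vessel_map    # binary vessel image I_BV
```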
Figure 4: Examples of retinal blood vessel extraction on two retinal images with different illumination conditions. (a)+(d) Green channel images after CLAHE. (b)+(e) Mean filtering of (a) and (d), respectively. (c)+(f) Extracted retinal blood vessels.
Figure 5: Illustration of the process for localizing the center of the optic disc. (a) Green channel image. The green boxes indicate the size of
the mean filter. (b) The red boxes show the size of the window used to calculate the local average intensity of 𝐼𝐵𝑉 .
The Center of Optic Disc Localization. Retinal blood vessels originate from the OD and spread outwards to the retina and the macular region; the vessels are generally aligned vertically in the vicinity of the OD [26]. In order to capture this vessel position information, a mean filter of size 61 × 61 was applied to the green channel image I to generate an average intensity image $I_G$, and $\bar{I}_{BV}$ (the local average intensity of $I_{BV}$) was computed as the average of the pixels within an N × M window, as illustrated in Figure 5. In this study, the window size N was between 50 and 60 pixels, and M was between 20 and 25 pixels. Next, in order to combine the brightness features and the blood vessel position information from the green channel image, each pixel $I_{OD}(r, c)$ was computed as follows:

$$I_{OD}(r, c) = \bar{I}_{BV}(r, c) - 1.2 \cdot I_G(r, c) \quad (3)$$

The image $I_{OD}$ was then traversed, with the minimum point identified as the center of the OD, as shown in Figure 6(a).
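The following sketch illustrates Eq. (3). Normalizing both maps to [0, 1] before combining them, and the particular N and M values within the stated ranges, are our own assumptions.

```python
import cv2
import numpy as np

def locate_od_center(green, vessels, n=55, m=22):
    """OD-center sketch following Eq. (3). green: uint8 green channel;
    vessels: binary vessel map I_BV. The N (height) x M (width) window is
    tall, matching the vertical vessel alignment near the OD."""
    i_g = cv2.blur(green.astype(np.float64), (61, 61))        # average intensity I_G
    bv = cv2.blur((vessels > 0).astype(np.float64), (m, n))   # local vessel density
    i_od = bv / bv.max() - 1.2 * i_g / i_g.max()              # Eq. (3), on normalized maps
    r, c = np.unravel_index(np.argmin(i_od), i_od.shape)      # minimum point = OD center
    return r, c
```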
Figure 6: Demonstration of optic disc (OD) localization, segmentation, and masking. (a) Localization of OD center. (b) Segmentation of
OD. (c) Masking OD.
Optic Disc Segmentation. To detect the OD boundary, an m × n region of interest (ROI) was defined around the localized OD center, where m and n were one-ninth of the respective dimensions of the image. Since the OD in retinal images has a circular boundary [27], a circular Hough transform was applied to segment the OD boundary [23, 28, 29]. The Hough transform is a widely used technique in computer vision and pattern recognition for detecting geometrical features that can be defined through parametric equations, such as straight lines and circles. The OD segmentation obtained by applying the Hough transform is shown in Figure 6(b). Lastly, the segmented OD was masked to avoid interference with the subsequent HE detection, as shown in Figure 6(c).
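A sketch of this step with OpenCV's circular Hough transform is given below. The ROI size (one-ninth of each image dimension) follows the text; the Hough accumulator parameters and radius range are our own assumptions.

```python
import cv2

def segment_od(green, center):
    """OD-boundary sketch: circular Hough transform on an ROI around
    the detected OD center."""
    h, w = green.shape
    m, n = h // 9, w // 9                     # ROI extents from the text
    r0, c0 = center
    roi = green[max(r0 - m // 2, 0):r0 + m // 2,
                max(c0 - n // 2, 0):c0 + n // 2]
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=max(roi.shape),  # expect a single disc
                               param1=100, param2=20,
                               minRadius=m // 6, maxRadius=m // 2)
    return circles    # (x, y, radius) candidates within the ROI, or None
```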
results for different subimage sizes. Taking both the running
2.2.3. Detection of Hard Exudates. There were two main time and accuracy of local threshold into consideration,
procedures. FCM clustering was firstly used to get the the size of 30×40 pixels was selected as the most suitable
local dynamic threshold of each subimage, which was then subimages size.
combined with global threshold matrix to segment color
retinal images. Next, an SVM classification was applied to Feature Extraction for Hard Exudates Detection. In order
distinguish exudates and nonexudates regions. to further segment the exudates regions from the exudates
candidates, some significant features that were commonly
Retinal Image Segmentation Using FCM. The following used by eye care practitioners to visually distinguish HE
describes the image segmentation process using the dynamic from other types of lesions were extracted from each region
threshold in combination with global threshold based on and used as inputs of SVM. The key features included the
FCM clustering: following:
(1) The retinal image was divided into a series of subim-
ages (K subimages), and FCM algorithm was used to assign (i) Mean green channel intensity (f1): a mean filter of
pixels in each subimage to different categories by using size 3×3 was applied to the green channel image. This
fuzzy memberships. FCM is an iterative optimization that feature indicates the gray-scale intensity for all pixels.
minimized the cost function defined as follows: Again, only the features from the green channel were
𝑛 𝑐 extracted.
𝑚 2
𝐽 (𝑈, 𝑉) = ∑ ∑ (𝑢𝑘𝑖 ) 𝑥𝑖 − V𝑘 (4) (ii) Gray intensity (f2): it was the gray-scale value of each
𝑖=1 𝑘=1 pixel.
where 𝑢𝑘𝑖 represents the membership of pixel 𝑥𝑖 in the kth (iii) Mean hue (f3), mean saturation (f4), and mean value
cluster and V𝑘 represents the clustering center of the kth (f5) of retinal image in HSV color model: a mean
cluster. Considering that the gray-scale value was used as filter of size 3×3 was, respectively, applied to the
the only feature for clustering, the midpoint of the clustering three channel image 𝐼ℎ , 𝐼𝑠 , 𝐼V . Because exudates are the
center line was used as the threshold in the segmentation bright lesions on the surface of retina, the information
sense, where the mean of the two clustering centers was about saturation and brightness (f4 and f5) of retinal
obtained as the threshold of the subimage; image is also important.
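The sketch below illustrates steps (1)-(5) in Python with NumPy and SciPy. The two-cluster FCM updates are the standard ones for the cost function in Eq. (4); treating exudates as the pixels brighter than T in step (5), and assuming image dimensions compatible with the 30 × 40 tiling, are our own choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def fcm_threshold(values, m=2.0, n_iter=50, seed=0):
    """Two-cluster fuzzy C-means on gray values (Eq. (4)); returns the
    midpoint of the two cluster centers as the threshold."""
    x = values.ravel().astype(np.float64)
    u = np.random.default_rng(seed).random((2, x.size))
    u /= u.sum(axis=0)                                 # memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        v = um @ x / um.sum(axis=1)                    # cluster centers v_k
        d = np.abs(x[None, :] - v[:, None]) + 1e-9     # |x_i - v_k|
        u = d ** (-2.0 / (m - 1.0))                    # standard FCM update
        u /= u.sum(axis=0)
    return v.mean()                                    # midpoint of the two centers

def segment_exudates(gray, sub=(30, 40), k=0.1):
    """Combined-threshold segmentation sketch (steps (1)-(5), Eq. (5))."""
    h, w = gray.shape
    S = np.full((h, w), fcm_threshold(gray))           # global threshold matrix S
    rows, cols = h // sub[0], w // sub[1]
    local = np.array([[fcm_threshold(gray[i*sub[0]:(i+1)*sub[0],
                                          j*sub[1]:(j+1)*sub[1]])
                       for j in range(cols)] for i in range(rows)])
    D = zoom(local, (h / rows, w / cols), order=1)     # interpolate to full size
    D = uniform_filter(D, size=10)                     # 10x10 mean filter
    T = k * S + (1 - k) * D                            # Eq. (5)
    return gray > T                                    # bright pixels above threshold
```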
Feature Extraction for Hard Exudates Detection. In order to further separate the exudate regions from the exudate candidates, significant features commonly used by eye care practitioners to visually distinguish HE from other types of lesions were extracted from each region and used as inputs to the SVM. The key features were the following (a code sketch of their computation follows the list):

(i) Mean green channel intensity (f1): a mean filter of size 3 × 3 was applied to the green channel image. This feature indicates the local gray-scale intensity at each pixel; again, only features from the green channel were extracted.

(ii) Gray intensity (f2): the gray-scale value of each pixel.

(iii) Mean hue (f3), mean saturation (f4), and mean value (f5) of the retinal image in the HSV color model: a mean filter of size 3 × 3 was applied to each of the three channel images $I_h$, $I_s$, and $I_v$. Because exudates are bright lesions on the surface of the retina, the saturation and brightness information (f4 and f5) of the retinal image is also important.

(iv) Energy (f6): the sum of the squared intensities of all pixel values in the eight-connected neighborhood.

(v) Standard deviation (SD) of the green channel image (f7): a morphological opening was applied to the green channel image to preserve foreground regions that have a similar shape to the structuring element, or that completely contain it, while eliminating all other foreground regions; the local SD of the result was then taken as the feature.

(vi) Mean gradient magnitude (f8): the magnitude of the directional change in intensity at edge pixels. It helps in distinguishing strong from blurry edges and in differentiating between exudates and other bright lesions [3].
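A sketch of these eight feature maps in Python with OpenCV is given below. The 3 × 3 windows follow the text; the 5 × 5 disc and window used for the opening and the local standard deviation (f7) are our own assumptions.

```python
import cv2
import numpy as np

def pixel_features(img_bgr):
    """Per-pixel feature maps f1-f8 described above, stacked along the
    last axis. A sketch, not the authors' exact implementation."""
    green = img_bgr[:, :, 1].astype(np.float64)
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    f1 = cv2.blur(green, (3, 3))                                # mean green intensity
    f2 = gray                                                   # gray-scale value
    f3, f4, f5 = (cv2.blur(hsv[:, :, i], (3, 3)) for i in range(3))  # mean H, S, V
    f6 = cv2.boxFilter(gray ** 2, -1, (3, 3), normalize=False)  # energy over 8-neighborhood
    disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.morphologyEx(green, cv2.MORPH_OPEN, disc)      # morphological opening
    mu = cv2.blur(opened, (5, 5))
    f7 = np.sqrt(np.maximum(cv2.blur(opened ** 2, (5, 5)) - mu ** 2, 0))  # local SD
    gx = cv2.Sobel(green, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(green, cv2.CV_64F, 0, 1)
    f8 = np.sqrt(gx ** 2 + gy ** 2)                             # gradient magnitude
    return np.stack([f1, f2, f3, f4, f5, f6, f7, f8], axis=-1)
```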
Figure 7: FCM clustering segmentation with different subimage sizes. (a) Original retinal image. (b) Zooming into the exudates region. (c)
Segmentation result of original image using FCM. (d) Segmentation result with the subimage size of 15×20 pixels. (e) With the subimage size
of 30×40 pixels. (f) With the subimage size of 60×80 pixels. Note: the background is filled with black.
Figure 8: Flow chart of the SVM classification algorithm.
In comparison with other published algorithms, where dozens of features were used [3, 9, 11], only eight key features were extracted in this study to reduce processing time while maintaining the accuracy of HE extraction.

SVM Classification. The flow chart of the SVM classification algorithm is shown in Figure 8. Briefly, the features extracted from the test images were fed into the trained SVM classifier, which outputs a binary matrix representing the classification results. In this study, the SVM was applied with a radial basis function (RBF) kernel, whose two parameters (C and $\gamma$) were obtained by grid search.

For training and cross-validation purposes, a few small regions (about 1-10 regions per image, each between 50 and 250 pixels in size) were manually selected from each of the 47 ground truth images of the e-ophtha EX dataset as training samples. These selected regions were divided into exudate regions and nonexudate regions. Using the e-ophtha EX dataset, 10-fold cross-validation was applied to evaluate the SVM classifier at the pixel level. The database was randomly split into 10 mutually exclusive subsets (folds) $D_1, D_2, D_3, \ldots, D_{10}$ of approximately equal size. The classifier was trained on 42 selected training images and tested on the remaining 5 images, outputting a binary matrix representing the classification result. This procedure was repeated 10 times.

For each training image, a certain number of pixels (ranging from 50 to 250) were manually selected to construct the training vector set. Each pixel constitutes a feature vector of the eight key features; $x_i$ represents an input sample feature vector:

$$x_i = (f1, f2, f3, \ldots, f8) \quad (6)$$
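A minimal scikit-learn sketch of this training step is shown below; the particular grid of C and γ values and the feature standardization are our own assumptions.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_svm(X_train, y_train):
    """SVM training sketch. X_train holds the 8-dimensional feature
    vectors of Eq. (6); y_train holds the -1/+1 labels defined in
    Eq. (7) below. C and gamma of the RBF kernel are chosen by grid search."""
    param_grid = {"svc__C": [0.1, 1, 10, 100],
                  "svc__gamma": [0.01, 0.1, 1, 10]}
    pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    search = GridSearchCV(pipe, param_grid, cv=5)
    search.fit(X_train, y_train)
    return search.best_estimator_
```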
The acquired training sample set $(x_j, y_j)$ was input to train the SVM, where $y_j$ is the category flag:

$$y_j = \begin{cases} -1, & x_j \in A \\ +1, & x_j \in B \end{cases} \quad (7)$$

with $j \in \{1, 2, \ldots, W\}$, where W is the number of sample feature vectors, and A and B represent the HE and non-HE regions, respectively. In this study, around 7200 training vectors (pixels) from the 42 training images were manually selected by an operator (W = 7200).

The 10-fold cross-validation procedure was repeated five times by five different operators, each manually selecting regions from the training images and running the above procedure, in order to evaluate the reliability of the algorithm.
2.3. Ensemble Evaluation Criteria. The evaluation criteria for HE identification were defined at two levels, pixel level and image level, depending on which database was used. The pixel-level determination was based on whether each pixel of the classification result for the e-ophtha EX dataset contains exudates in comparison with the precisely labelled ground truth. The image-level HE detection was based on the presence or absence of HE in the classification result, determining whether a retinal image in DIARETDB1 contains exudates.

2.3.1. Pixel-Level Evaluation on e-Ophtha EX Database. The evaluation could classically be performed by counting the number of pixels that were correctly classified. However, this approach is inappropriate for evaluating exudate segmentation, because the contours of exudates do not match perfectly between different observers, resulting in weak agreement on exudate boundaries. In this study, a hybrid validation method was therefore used, in which a minimal overlap ratio between ground truth and candidates was required.

Given the segmented exudate connected-component set $\{D_1, D_2, \ldots, D_N\}$ and the ground truth exudate component set $\{G_1, G_2, \ldots, G_M\}$, we have the following.
A pixel was considered a true positive (TP) if it belongs to

$$\{D \cap G\} \cup \left\{D_i \,\Big|\, \frac{|D_i \cap G|}{|D_i|} > \sigma \right\} \cup \left\{G_j \,\Big|\, \frac{|G_j \cap D|}{|G_j|} > \sigma \right\} \quad (8)$$

where $|\cdot|$ is the cardinality of a set and $\sigma$ is a parameter ranging from 0 to 1; $\sigma$ was set to 0.2, as used by Zhang et al. [12]. A pixel was considered a false positive (FP) if it belongs to

$$\{D_i \mid D_i \cap G = \varnothing\} \cup \left\{D_i \cap \bar{G} \,\Big|\, \frac{|D_i \cap G|}{|D_i|} \le \sigma \right\} \quad (9)$$

or a false negative (FN) if it belongs to

$$\{G_j \mid G_j \cap D = \varnothing\} \cup \left\{G_j \cap \bar{D} \,\Big|\, \frac{|G_j \cap D|}{|G_j|} \le \sigma \right\} \quad (10)$$

The remaining pixels were considered true negative (TN) pixels.

In this study, the four classes were clearly unbalanced: TP, FN, and FP were negligible in practice with respect to TN, so computing the specificity, i.e., TN/(FP+TN), or a receiver operating characteristic (ROC) curve is not appropriate. Sensitivity ($S = TP/(TP + FN)$), positive predictive value ($PPV = TP/(TP + FP)$), and F-score ($(2 \times S \times PPV)/(S + PPV)$) were therefore used to measure HE detection performance. The PPV combines both TP and FP, indicating the proportion of detected exudate pixels that were annotated as exudate pixels by the specialists.
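The hybrid pixel-level evaluation of Eqs. (8)-(10) can be sketched as follows; pred and gt are boolean masks, and reading the second sets in Eqs. (9) and (10) as complements ($D_i \cap \bar{G}$ and $G_j \cap \bar{D}$) is our interpretation.

```python
import numpy as np
from scipy.ndimage import label

def pixel_level_scores(pred, gt, sigma=0.2):
    """Hybrid pixel-level evaluation sketch (Eqs. (8)-(10)).
    pred, gt: boolean masks of detected and ground-truth exudate pixels."""
    tp = np.zeros_like(pred, dtype=bool)
    lab_d, n_d = label(pred)                 # detected components D_i
    for i in range(1, n_d + 1):
        d_i = lab_d == i
        if (d_i & gt).sum() / d_i.sum() > sigma:
            tp |= d_i                        # whole component counts as TP
        else:
            tp |= d_i & gt                   # only the overlap D_i ∩ G counts
    lab_g, n_g = label(gt)                   # ground-truth components G_j
    for j in range(1, n_g + 1):
        g_j = lab_g == j
        if (g_j & pred).sum() / g_j.sum() > sigma:
            tp |= g_j
    TP = tp.sum()
    FP = (pred & ~tp).sum()                  # Eq. (9)
    FN = (gt & ~tp).sum()                    # Eq. (10)
    sens = TP / (TP + FN)
    ppv = TP / (TP + FP)
    return sens, ppv, 2 * sens * ppv / (sens + ppv)   # S, PPV, F-score
```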
2.3.2. Image-Level Evaluation on DIARETDB1 Database. From a clinical point of view, it is also useful to evaluate the presence of exudates at the image level, especially for DR screening applications. In order to evaluate the robustness of our algorithm, it was independently tested on the 89 images of the DIARETDB1 database, which have been labelled with ground truth at the image level, to determine whether each testing image contains exudates. As shown in Figure 9, each image was labelled by four specialists; if the ground truth confidence level was greater than or equal to 75%, the image was diagnosed with HE. At the image level, if the image according to our algorithm and the ground truth both contain exudate regions, the classification result for this retinal image was counted as a TP. Matlab functionality for computing the performance measures is publicly available on the DIARETDB1 web page [18]. For example, the processed image in Figure 9(d) was fed as an input into the evaluation protocol to obtain the evaluation outcomes (TP, TN, FP, FN). Three evaluation parameters, the sensitivity, specificity, and accuracy, were then used to determine the overall performance of HE detection. They are calculated as follows:

$$Accuracy = \frac{TN + TP}{TP + FP + TN + FN}, \quad Specificity = \frac{TN}{TN + FP}, \quad Sensitivity = \frac{TP}{TP + FN} \quad (11)$$
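For completeness, the metrics of Eq. (11) translate directly into code, with the counts taken at the image level (one outcome per test image):

```python
def image_level_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, and accuracy of Eq. (11) from
    image-level counts over the whole test database."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tn + tp) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```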
2.4. Data Statistical Analysis. For the 10-fold cross-validation using the e-ophtha EX database, the sensitivity, PPV, and F-score were calculated for each image, and their mean and standard deviation (SD) across all the images were computed. Their SD across the five repeats performed by the five different operators was also calculated to demonstrate the reliability of our algorithm, and an ANOVA was performed to check the repeatability between the five repeats. For the independent test on the DIARETDB1 database, the overall mean sensitivity, specificity, and accuracy were calculated from all 89 images and compared with other published results using the same database.
Figure 9: Example of one retinal image in DIARETDB1 database. (a) Original image. (b) Exudates regions labelled by four specialists (color
decodes the ground truth confidence). (c) The exudates regions where the ground truth confidence level ≥ 75%. (d) Segmentation result of
our algorithm.
3. Results

3.1. 10-Fold Cross-Validation Results on the e-Ophtha EX Database. Statistical analysis showed no significant difference between the five repeat measurements for the evaluation parameters (all p>0.8). As shown in Figure 10(a), the overall mean and SD of sensitivity, PPV, and F-score across all the images in the e-ophtha EX database were 76.5%±15.1%, 82.6%±16.7%, and 76.7%±12.7%, respectively. The measurement repeatability (SD of the five measurements) of sensitivity, PPV, and F-score for each individual image is shown in Figure 10(b). It ranged from 0.3% to 16%, indicating that the HE detection algorithm proposed in this study is sufficiently stable.

Table 1 also shows that our algorithm achieved a higher PPV than other published results using pixel-level evaluation on the same database, indicating that our method distinguishes HE from other bright lesions more effectively. To visualize the HE detection on different retinal images, three examples are provided in Figure 11. The exudate regions (the left three subfigures) were cropped from the original retinal images. Figure 11(a4, b4, c4) shows the validation results at the pixel level with σ = 0.2, where the green, red, blue, and black pixels are the TP, FN, FP, and TN pixels, respectively. It can be seen that most of the large exudates were identified successfully. Some FPs (wrongly detected HEs) could be caused by the presence of other bright lesions, such as cotton wool spots and drusen. Some small HE pixels were missed by our proposed algorithm because of their low contrast.

Table 1: Overall performance comparison of our proposed algorithm with published studies for HE detection on the e-ophtha EX dataset.

Methods                          Sensitivity  PPV     F-score
Zhang et al. (2014) [12]         74%          72%     73%
Welfer et al. (2010) [38]        79%          55%     69%
Imani et al. (2016) [39]         80.32%       77.28%  -
Liu et al. (2017) [30]           76%          75%     76%
Kusakunniran et al. (2018) [40]  56.4%        -       -
Our proposed algorithm           76.5%        82.7%   76.7%

3.2. Validation Results on DIARETDB1 Database. Table 2 lists the overall evaluation performance of our proposed algorithm using image-level evaluation on the DIARETDB1 database. The overall mean sensitivity, specificity, and accuracy were 97.5%, 97.8%, and 97.7%, respectively, which compare well with other published results. Some example images from the DIARETDB1 database are shown in Figure 12 to demonstrate whether an image has been correctly or wrongly detected with exudates.
Figure 10: Data statistical analysis. (a) The overall mean and standard deviation of sensitivity, PPV, and F-score for all the images in the
e-ophtha EX database. They are given separately for the five repeat measurements by five different operators. (b) The repeatability (standard
deviation of 5 repeat measurements by 5 operators on each image) of sensitivity, PPV, and F-score of our algorithm on the e-ophtha EX
database.
Figure 11: Examples of pixel-level validation on three retinal images. (a1, b1, c1) Exudate regions cropped from the original retinal fundus images. (a2, b2, c2) The ground truth images in the e-ophtha EX dataset. (a3, b3, c3) The segmentation results of our algorithm. (a4, b4, c4) The results of pixel-level validation; the green, blue, red, and black pixels are the TP, FN, FP, and TN pixels, respectively.
Figure 12: Example images showing the results of HE detection in the DIARETDB1 database. (a) HE was correctly detected. (b) Some small
HE was missed in this image. (c) Failed to detect HE correctly.
Table 2: Overall performance comparison of our proposed algorithm with published algorithms for HE detection on the DIARETDB1 database.

Methods                          Sensitivity  Specificity  Accuracy
Harangi et al. (2014) [11]       92%          68%          82%
Haloi et al. (2015) [9]          96.54%       93.15%       -
Imani et al. (2016) [39]         89.01%       99.93%       -
Liu et al. (2017) [30]           83%          75%          79%
Rekhi et al. (2017) [31]         91.67%       92.68%       92.13%
Fraz et al. (2017) [10]          92.42%       81.25%       87.72%
Kusakunniran et al. (2018) [40]  89.1%        99.7%        96.2%
Our proposed algorithm           97.5%        97.8%        97.7%
4. Discussion and Conclusion

We have developed and evaluated an automatic retinal image processing algorithm to detect HEs using dynamic thresholding, FCM, and SVM. The color retinal images were segmented using a dynamic threshold in combination with the global threshold, and the segmented regions were classified into two disjoint classes (exudate and nonexudate pixels) using the SVM. The algorithm was tested on two publicly available databases (DIARETDB1 and e-ophtha EX), and the evaluation results quantitatively demonstrated that our proposed algorithm is reliable in terms of repeatability and also achieves high accuracy for HE detection.

It is known that the OD has similar properties to exudates in terms of color and brightness; masking or removing the OD from the fundus image before further processing is therefore important and improves the HE detection accuracy [10, 30, 31]. This study has presented a method for OD localization that combines brightness information and retinal vasculature features. Our method is inspired by Medhi et al. [23], who used a vertical Sobel mask and considered the OD to be the region with the maximum value of edge pixels. Unlike other methods with more complicated processes [29, 32], we only need to traverse the entire image twice, finding the pixel with the largest gray-scale value and the most densely distributed blood vessels, which achieves fast localization of the OD. Rahebi et al. [32] applied the firefly algorithm and reported a success rate of 94.38% for OD localization in the DIARETDB1 database; using the same database, our method achieved an accuracy of 89.9%. Although our OD detection was slightly less accurate than theirs, our method is much simpler and faster. More importantly, it is well suited to serving as an intermediate step for HE detection, and its relatively high accuracy is comparable with that of many more complex algorithms aimed specifically at OD detection.

FCM has been used in other exudate segmentation algorithms [13, 33]. Sopharak et al. [34] proposed an FCM-based method to determine whether a pixel contains exudates, but they only achieved a moderately acceptable segmentation result, with a sensitivity of 80% on the DIARETDB1 database. A global threshold is commonly used for image segmentation; however, using global information only may miss the details of small HEs. If the gray-scale value of the background were constant, segmentation with a global threshold would achieve satisfactory results, but in many cases the contrast between object and background changes across regions, the gray-scale value of the background varies, and the segmentation outcome is poor. In other fields, it has been shown that using a dynamic threshold in combination with the global threshold can significantly improve segmentation results; for instance, combined thresholds have been applied successfully to human skin detection in color images and to melasma image segmentation, where good segmentation results were achieved [35, 36]. The key advantage of combining the image's global information with local details is that it overcomes the problems associated with using a local threshold alone. After employing this combined approach, satisfactory evaluation results were achieved in this study (97.5% sensitivity on the DIARETDB1 database and 76.5% sensitivity on the e-ophtha EX database). It is noted that only one feature (the gray-scale value of the retinal image) was input into the FCM; more input features, and FCM clustering combined with morphological techniques, could be considered in the future to achieve higher accuracy.

An SVM classifier was selected in this study to distinguish true exudate regions from nonexudate regions. One key reason is that the sample sizes of the retinal image databases used in this paper are not large; the SVM was expected to give a better classification result because it can model the nonlinear relationship between data and features better than other classifiers [16].
Secondly, the SVM has a rapid training phase [17]. Akram et al. [3] proposed a hybrid classifier combining a GMM and an SVM for exudate detection; however, training the GMM and finding its optimal parameters was complicated. In this study, the combined approach using FCM and SVM required less computational expense, and only eight key features were used, compared with other algorithms that use dozens of features [9, 11]. The distinguishing appearance of HE relative to other lesions, namely sharper margins and a bright yellow color, allowed these eight representative features to achieve an efficient process while maintaining the accuracy of HE extraction. Jaya et al. [37] proposed an expert decision-making system using a fuzzy support vector machine (FSVM) classifier to detect hard exudates, with color and texture features extracted from the images as inputs to the FSVM. However, when a single classifier must detect HE without candidate HE regions being extracted in advance, its computational complexity increases greatly, resulting in low detection efficiency.

One limitation of our algorithm is that its performance depends on the OD detection and retinal blood vessel removal. Since the OD detection applied in this study was quite simple, the performance of our method could be further improved by improving the robustness of OD localization and blood vessel detection. Secondly, when the retinal image quality was very poor, for example, when the whole image was very dark with large artificial shadows (e.g., image029 and image047 in the DIARETDB1 database) or when the contrast between HE and the background was not strong enough (e.g., image044 and image052 in the DIARETDB1 database), the HE detection result was poor. In addition, some big and bright cotton wool spots were wrongly detected as HE, and some small HE were missed. In future studies, we will improve the algorithms to achieve more effective detection. Furthermore, we suggest that more evaluations be carried out with the proposed algorithms on other clinically available data; such tests could contribute to further improvements, resulting in more robust and more accurate detection. In summary, the satisfactory evaluation results on both retinal imaging databases demonstrated the effectiveness of employing dynamic thresholding, fuzzy C-means, and SVM in our proposed automatic HE detection method, providing scientific evidence that it has potential for clinical DR diagnosis.

Data Availability

The DIARETDB1 and e-ophtha EX databases used to support this study are freely available databases of retinal images at http://www.it.lut.fi/project/imageret/diaretdb1/ and http://www.eophtha.com, which have been cited. The processed data from the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Authors' Contributions

Shengchun Long and Xiaoxiao Huang conceived and designed the experiments. Xiaoxiao Huang performed the experiments. Zhiqing Chen and Xiaoxiao Huang analyzed the results. All authors reviewed the manuscript. Shahina Pardhan and Dingchang Zheng approved the final version. Zhiqing Chen, Shahina Pardhan, and Dingchang Zheng contributed equally to this work.

Acknowledgments

The authors thank the e-ophtha EX dataset (http://www.eophtha.com) and the DIARETDB1 database (http://www.it.lut.fi/project/imageret/diaretdb1/) for providing the fundus images for this work.

References

[1] Y. Zheng, M. He, and N. Congdon, “The worldwide epidemic of diabetic retinopathy,” Indian Journal of Ophthalmology, vol. 60, pp. 428–431, 2012.
[2] U. R. Acharya, E. Y. Ng, J. H. Tan, S. V. Sree, and K. H. Ng, “An integrated index for the identification of diabetic retinopathy stages using texture parameters,” Journal of Medical Systems, vol. 36, pp. 2011–2020, 2012.
[3] M. U. Akram, A. Tariq, S. A. Khan, and M. Y. Javed, “Automated detection of exudates and macula for grading of diabetic macular edema,” Computer Methods and Programs in Biomedicine, vol. 114, no. 2, pp. 141–152, 2014.
[4] Early Treatment Diabetic Retinopathy Study Research Group, “Grading diabetic retinopathy from stereoscopic color fundus photographs–an extension of the modified Airlie House classification,” Ophthalmology, vol. 98, no. 5, pp. 786–806, 1991.
[5] A. Fagot-Campagna, I. Romon, N. Poutignat, and J. Bloch, “Non-insulin treated diabetes: relationship between disease management and quality of care,” La Revue du Praticien, vol. 57, pp. 2209–2216, 2007.
[6] H. Li and O. Chutatape, “Fundus image features extraction,” in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 4, pp. 3071–3073, 2000.
[7] Z. Liu, C. Opas, and S. M. Krishnan, “Automatic image analysis of fundus photograph,” in Proceedings of the 19th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 2, pp. 524–525, November 1997.
[8] H. Li, “Model-based approach for automated feature extraction in color fundus images,” in Proceedings of the 9th International Conference on Computer Vision, vol. 1, Nice, France, 2003.
[9] M. Haloi, S. Dandapat, and R. Sinha, “A gaussian scale space approach for exudates detection, classification and severity prediction,” Computer Science, vol. 56, pp. 3–6, 2015.
[10] M. M. Fraz, W. Jahangir, S. Zahid, M. M. Hamayun, and S. A. Barman, “Multiscale segmentation of exudates in retinal images using contextual cues and ensemble classification,” Biomedical Signal Processing and Control, vol. 35, pp. 50–62, 2017.
[11] B. Harangi and A. Hajdu, “Automatic exudate detection by fusing multiple active contours and regionwise classification,” Computers in Biology and Medicine, vol. 54, pp. 156–171, 2014.
[12] X. Zhang, G. Thibault, E. Decencière et al., “Exudate detection in color retinal images for mass screening of diabetic retinopathy,” Medical Image Analysis, vol. 18, no. 7, pp. 1026–1043, 2014.
[13] A. Osareh, M. Mirmehdi, B. Thomas, and R. Markham, “Automatic recognition of exudative maculopathy using fuzzy c-means clustering and neural networks,” in Proceedings of the Medical Image Understanding and Analysis Conference, vol. 3, pp. 49–52, 2001.
[14] R. F. Moghaddam and M. Cheriet, “A multi-scale framework for adaptive binarization of degraded document images,” Pattern Recognition, vol. 43, no. 6, pp. 2186–2198, 2010.
[15] E. Espinoza, G. Martinez, J.-G. Frerichs, and T. Scheper, “Cell cluster segmentation based on global and local thresholding for in-situ microscopy,” in Proceedings of the 2006 3rd IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 542–545, April 2006.
[16] A. Osareh, M. Mirmehdi, B. T. Thomas, and R. Markham, “Comparative exudate classification using support vector machines and neural networks,” in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2002, Springer, Berlin, Heidelberg, 2002.
[17] C. J. C. Burges, “A tutorial on support vector machines for pattern recognition,” Data Mining and Knowledge Discovery, vol. 2, pp. 121–167, 1998.
[18] T. Kauppi, V. Kalesnykiene, J.-K. Kamarainen et al., “The DIARETDB1 diabetic retinopathy database and evaluation protocol,” in Proceedings of the 18th British Machine Vision Conference (BMVC ’07), pp. 1–10, September 2007.
[19] C. I. Sánchez et al., “A novel automatic image processing algorithm for detection of hard exudates based on retinal image analysis,” Medical Engineering & Physics, vol. 30, p. 350, 2008.
[20] R. J. Winder, P. J. Morrow, I. N. McRitchie, J. R. Bailie, and P. M. Hart, “Algorithms for digital image processing in diabetic retinopathy,” Computerized Medical Imaging and Graphics, vol. 33, no. 8, p. 608, 2009.
[21] D. Marin, A. Aquino, M. E. Gegundez-Arias, and J. M. Bravo, “A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features,” IEEE Transactions on Medical Imaging, vol. 30, pp. 146–158, 2011.
[22] A. Hoover and M. Goldbaum, “Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels,” IEEE Transactions on Medical Imaging, vol. 22, no. 8, pp. 951–958, 2003.
[23] J. P. Medhi and S. Dandapat, “An effective fovea detection and automatic assessment of diabetic maculopathy in color fundus images,” Computers in Biology and Medicine, vol. 74, pp. 30–44, 2016.
[24] K. Zuiderveld, “Contrast limited adaptive histogram equalization,” in Graphics Gems IV, P. Heckbert, Ed., Academic Press, Boston, MA, USA, 1994.
[25] N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, pp. 62–66, 1979.
[26] J. M. Provis, P. L. Penfold, E. E. Cornish, T. M. Sandercoe, and M. C. Madigan, “Anatomy and development of the macula: specialisation and the vulnerability to macular degeneration,” Clinical and Experimental Optometry, vol. 88, no. 5, pp. 269–281, 2005.
[27] C. Kimme, D. Ballard, and J. Sklansky, “Finding circles by an array of accumulators,” Communications of the ACM, vol. 18, no. 2, pp. 120–122, 1975.
[28] T. Chen, Y. Luo, F. Xiao, D. Shi, and S. Zhang, “Uneven clustering algorithm based on clustering optimization for wireless sensor networks,” Computer Science, vol. 41, no. 6A, pp. 289–292, 2014 (in Chinese).
[29] H. K. Hsiao, C. C. Liu, C. Y. Yu, S. W. Kuo, and S. S. Yu, “A novel optic disc detection scheme on retinal images,” Expert Systems with Applications, vol. 39, pp. 10600–10606, 2012.
[30] Q. Liu, B. Zou, J. Chen et al., “A location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images,” Computerized Medical Imaging and Graphics, vol. 55, pp. 78–86, 2017.
[31] R. S. Rekhi, A. Issac, M. K. Dutta, and C. M. Travieso, “Automated classification of exudates from digital fundus images,” in Proceedings of the International Conference and Workshop on Bioinspired Intelligence, pp. 1–6, 2017.
[32] J. Rahebi and F. Hardalaç, “A new approach to optic disc detection in human retinal images using the firefly algorithm,” Medical & Biological Engineering & Computing, pp. 453–461, 2016.
[33] X. Y. Wang and J. Bu, “A fast and robust image segmentation using FCM with spatial information,” Digital Signal Processing, vol. 20, 2010.
[34] A. Sopharak, B. Uyyanonvara, and S. Barman, “Automatic exudate detection from non-dilated diabetic retinopathy retinal images using fuzzy c-means clustering,” Sensors, vol. 9, no. 3, pp. 2148–2161, 2009.
[35] P. Yogarajah, J. Condell, K. Curran, A. Cheddad, and P. McKevitt, “A dynamic threshold approach for skin segmentation in color images,” International Journal of Biometrics, vol. 4, pp. 38–55, 2010.
[36] Y. Liang et al., “Hybrid threshold optimization between global image and local regions in image segmentation for melasma severity assessment,” Multidimensional Systems and Signal Processing, vol. 7, pp. 1–8, 2015.
[37] T. Jaya, J. Dheeba, and N. A. Singh, “Detection of hard exudates in colour fundus images using fuzzy support vector machine-based expert system,” Journal of Digital Imaging, vol. 28, no. 6, pp. 761–768, 2015.
[38] D. Welfer, J. Scharcanski, and D. R. Marinho, “A coarse-to-fine strategy for automatically detecting exudates in color eye fundus images,” Computerized Medical Imaging and Graphics, vol. 34, no. 3, pp. 228–235, 2010.
[39] E. Imani and H. R. Pourreza, “A novel method for retinal exudate segmentation using signal separation algorithm,” Computer Methods and Programs in Biomedicine, 2016.
[40] W. Kusakunniran, Q. Wu, P. Ritthipravat, and J. Zhang, “Hard exudates segmentation based on learned initial seeds and iterative graph cut,” Computer Methods and Programs in Biomedicine, vol. 158, pp. 173–183, 2018.