An Efficient Multi-Level Pre-Processing Algorithm for the Enhancement
ORIGINAL ARTICLE
Abstract
In this paper, a multi-level algorithm for pre-processing of dermoscopy images is proposed, which helps in improving the
quality of the raw images, making them suitable for skin lesion detection. This multi-level pre-processing method has a positive
impact on automated skin lesion segmentation using Regularized Extreme Learning Machine. Raw images are subjected to
de-noising, illumination correction, contrast enhancement, sharpening, reflection removal, and virtual shaving before the skin
lesion segmentation. The Non-Local Means (NLM) filter with the lowest Blind/Referenceless Image Spatial Quality Evaluator
(BRISQUE) score exhibits better de-noising of dermoscopy images. To suppress uneven illumination, gamma correction is
applied to the denoised image. The Robust Image Contrast Enhancement (RICE) algorithm is used for contrast enhance-
ment, and produces enhanced images with better structural preservation and negligible loss of information. Unsharp masking
for sharpening exhibits low BRISQUE scores for better sharpening of fine details in an image. Output images produced by
the phase congruency–based method in virtual shaving show high similarity with ground truth images as the hair is removed
completely from the input images. The scores obtained at each stage of the pre-processing framework show that the performance is
superior compared to all the existing methods, both qualitatively and quantitatively, in terms of uniform contrast, preservation
of information content, removal of undesired information, and elimination of artifacts in melanoma images. The output of the
proposed system is assessed qualitatively and quantitatively with and without pre-processing of dermoscopy images. From
the overall evaluation results, it is found that the segmentation of skin lesion is more efficient using Regularized Extreme
Learning Machine if the multi-level pre-processing steps are used in proper sequence.
Keywords Non-Local Means Filter · Robust Image Contrast Enhancement · Unsharp masking · Dermoscopy · Phase
congruency
1 Introduction
Noise present in the dermoscopy images may get amplified during sharpening and contrast enhancement. The amplified noise may adversely affect the performance of edge-based segmentation algorithms used to extract the borders of the skin lesions. Hence, de-noising is a vital step in the automated analysis of dermoscopy images.

Mostly, skin lesions are darker than the background. However, due to uneven illumination, some portions of the image may appear darker than the background, and those darker regions may get falsely segmented along with the lesions. Therefore, contrast enhancement and sharpening are indispensable in the automated analysis of dermoscopy images. Specular reflection is another concern that may deteriorate the visual quality of melanoma images; hence, reflection removal is needed to eliminate the background reflections in input images. Hairs are also present in dermoscopy images. Being dark, they may get falsely segmented along with the lesion if intensity-based segmentation methods are adopted, so hairs need to be removed prior to the segmentation of lesions. The process of removing hairs from dermoscopy images is usually termed virtual shaving.

In this paper, a new six-stage pre-processing algorithm is introduced to improve the segmentation accuracy of skin lesions in dermoscopy images. For de-noising the input image, the Non-Local Means (NLM) filter is employed, which preserves the detailed information of an image. Gamma correction is applied at the second stage so that uniform illumination is achieved. An algorithm termed Robust Image Contrast Enhancement (RICE) is employed for contrast enhancement; this method prevents over-enhancement of the image. For sharpening, the unsharp masking technique is applied to sharpen the edge pixels. For reflection removal, a transmittance estimation-based strategy is adopted; as a result, the undesired information is removed, thereby improving the visual quality. Under virtual shaving, a phase congruency–based method is adopted for removing the hairs without losing the image content. The technique implemented in each stage performs efficiently, so that a quality image is produced at the pre-processed output for melanoma segmentation. The output of the proposed system is evaluated subjectively against ground truth images and objectively using quality metrics such as the Disk Similarity Index (DSI), Jaccard Index (JI), Total Segmentation Coefficient (TSC), and Intersection over Union (IoU). The results reveal that the multi-level pre-processing algorithm improves the segmentation of skin lesions using the Regularized Extreme Learning Machine (RELM).

2 Literature survey

To enhance the dermoscopy image, Madhan Kumar et al. [4] presented a two-step pre-processing technique to remove noise, fine hairs, and air bubbles. Accordingly, the contrast of an input image is enhanced by histogram equalization, and the reduction of impulsive noise, hair structures, and air bubbles is achieved by applying the median filter. Although it preserves the edges, the fine image details are lost when the window size of the filter is increased above 3 × 3.

Furthermore, Jaworek et al. [5] proposed a novel method to reduce the border irregularity in dermoscopy images. The authors highlighted a two-step pre-processing algorithm which includes black frame removal, hair detection, and inpainting. Initially, each row of an image is scanned in four directions and the rows with 50% black pixels are removed from the input image. Next, the black top-hat transform is applied to remove the dark, thick hairs from the frame-removed image. Here, the black top-hat transform fails to detect local structures such as dots or globules in melanoma images. Moreover, Restrepo et al. [6] introduced a contrast enhancement technique based on the most discriminant projection of the color map in skin lesion images. This method overcomes the non-uniform illumination and color correction problems while detecting the melanoma. Since the color projection is calculated for all directions, it increases the complexity of the algorithm. In addition, a five-step pre-processing framework is proposed by Mishra et al. [7], which includes elimination of lighting effects, color correction, contrast enhancement, image smoothing, and hair removal to improve the visual quality of the image. Here, the authors highlighted the problems in skin lesion detection, such as poor contrast, skin tone variation, artifacts, and non-uniform illumination in dermoscopy images.

Furthermore, Cherepkova et al. [8] proposed an enhancement and color correction method for original dermoscopy images. Accordingly, the enhancement is achieved in six steps, namely retinex, spatiotemporal retinex-inspired envelope with stochastic sampling, automatic white balance (AWB), contrast enhancement, automatic enhancement, and histogram equalization. The authors reported improved sensitivity and accuracy by an average of 4 to 8% and 3 to 5%, respectively. Due to over-exposure in visual adjustment, fine image details are lost and the color is only partly corrected. Although AWB provides a good color correction, some deviations in visual quality occur due to errors in temperature estimation. Also, a two-phase pre-processing algorithm for dermoscopy image enhancement is proposed by Jayalakshmi et al. [9]. Accordingly, a median filter is applied to remove artifacts and K-means clustering is used to eliminate the outlier pixels. The presented results show an accuracy of 92.8% with a sensitivity of 93% and a specificity of 90% on the Danderm database. Furthermore, a three-step framework was proposed to improve the contrast of dermoscopy images in [10]. Initially, a median filter is employed to reduce noise in the raw input images. Next, the morphological operators such as erosion and dilation are implemented to remove the
artifacts like hairs in the filtered image. Finally, intensity value mapping is applied to enhance the contrast. Through median filtering, a 5 × 5 window is used to remove image details that are 2 pixels wide. Pankaj et al. [11] introduced a reformed contrast enhancement technique using Krill Herd (KH) optimization. Here, a new reformed histogram is obtained with a peak cut-off. The global histogram equalization helps in the enhancement of medical images like X-ray, MRI, and CT scans. In this approach, the efficiency is tested through metrics like Structural Similarity Index Matrix (SSIM), End-Point Intersection over union (EPI), Delta E (DE), and Region Error Change (REC). Jeevakala et al. [12] discussed a sharpening enhancement technique for MR images. A Laplacian Pyramid and singular value decomposition are implemented to decompose the multi-scale images into coarse and difference sub-bands. Here, the weighted sum of the singular matrix and its global histogram equalization increases the contrast in multi-scale images.

Though a lot of literature has been enumerated on the pre-processing of dermoscopy images, some limitations are identified as follows:

In order to overcome the above issues and enhance the spatial quality for skin lesion segmentation in dermoscopy images, a pre-processing module comprising de-noising, illumination correction, contrast enhancement, sharpening, reflection removal, and hair removal is introduced in this work. In the de-noising phase, the NLM filter with a suitable DoS value is chosen to preserve the fine details of dermoscopic images. Also, in the contrast enhancement phase, the RICE algorithm is introduced to avoid non-uniform enhancement by maintaining the mean brightness. In addition, reflection removal is proposed to remove undesired information by separating the background image layer from the reflection layer of the dermoscopy image to be analyzed. Thus, by optimizing the smoothing parameter (SP) and rate control parameter (RCP) values in the reflection removal process, the visual quality of the image is also preserved. Moreover, a phase congruency method with an ideal threshold value preserves the image content in virtual shaving of hairs. The rest of the paper is organized as follows: Section 3 explains the pipeline of the dermoscopy pre-processing method in detail. Section 4 describes the results and discussion. Finally, Section 5 draws the conclusion.
[Figure: block diagram of the proposed pipeline, ending with sharpening (unsharp masking), reflection removal (reflection suppression), virtual shaving (phase congruency), and skin lesion segmentation (RELM), which produces the segmented image.]
$$Y(m, n) = \sum_{i=-R_1}^{+R_1} \sum_{j=-R_1}^{+R_1} W\big[X(m, n),\, X(m+i, n+j)\big]\, X(m+i, n+j), \quad 1 \le m \le M,\ 1 \le n \le N \tag{1}$$
where M and N indicate the number of rows and columns in the input image. The weights W(m, n) are based on the similarity of the neighborhoods of pixels m and n. The similarity is estimated as:

$$W\big[X(m,n), X(m+i,n+j)\big] = e^{-\dfrac{\sum_{p=-R_2}^{+R_2} h_g \big[X(m+p,\,n+p) - X((m+i)+p,\,(n+j)+p)\big]^2}{\xi^2}} \tag{2}$$

The variable h_g is a normalizing constant that penalizes large neighborhood differences. The weights are bounded and normalized such that:

$$0 \le W\big[X(m,n), X(m+i,n+j)\big] \le 1 \quad \text{and} \quad \sum_{i=-R_1}^{+R_1}\sum_{j=-R_1}^{+R_1} W\big[X(m,n), X(m+i,n+j)\big] = 1 \tag{3}$$

After normalization of the weights, the weights corresponding to the pixels that are closely similar to the pixel to be denoised get penalized more. To rectify this inadvertent problem, the weight corresponding to the self-similarity is replaced by the highest weight value just below it. Therefore, the weight W[X(m, n), X(m + i, n + j)] at i = 0 and j = 0 is expressed as:

$$W\big[X(m,n), X(m,n)\big] = \max\Big\{ W\big[X(m,n), X(m+i,n+j)\big] \ \forall\, i \ne 0 \,\&\, j \ne 0,\ -R_1 \le i \le +R_1,\ -R_1 \le j \le +R_1 \Big\} \tag{4}$$
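For concreteness, the following minimal NumPy sketch applies the weighting of (1)–(4) to a single pixel. It is an illustration rather than the authors' implementation: the Gaussian neighborhood weighting h_g is replaced by a uniform weight, ξ plays the role of the DoS parameter, and the window radii and the exact ordering of the normalization and the self-weight replacement are assumptions of the sketch.

```python
import numpy as np

def nlm_pixel(X, m, n, R1=10, R2=3, xi=10.0):
    """Denoise pixel (m, n) of a 2-D image X with the NLM weighting of Eqs. (1)-(4)."""
    pad = R1 + R2
    Xp = np.pad(X.astype(float), pad, mode="reflect")
    mc, nc = m + pad, n + pad
    ref = Xp[mc - R2:mc + R2 + 1, nc - R2:nc + R2 + 1]   # neighborhood of the pixel itself
    weights, values = [], []
    for i in range(-R1, R1 + 1):                          # search window of Eq. (1)
        for j in range(-R1, R1 + 1):
            patch = Xp[mc + i - R2:mc + i + R2 + 1, nc + j - R2:nc + j + R2 + 1]
            d2 = np.sum((ref - patch) ** 2)               # Eq. (2) with uniform h_g
            weights.append(np.exp(-d2 / xi ** 2))
            values.append(Xp[mc + i, nc + j])
    weights = np.asarray(weights)
    values = np.asarray(values)
    weights /= weights.sum()                              # normalization constraint, Eq. (3)
    centre = (2 * R1 + 1) * R1 + R1                       # flat index of i = 0, j = 0
    weights[centre] = np.delete(weights, centre).max()    # self-weight replacement, Eq. (4)
    weights /= weights.sum()                              # re-normalize after Eq. (4)
    return float(np.sum(weights * values))                # weighted average, Eq. (1)
```

A library alternative with the same structure is skimage.restoration.denoise_nl_means, whose h parameter controls the degree of smoothing in the same way as ξ does here.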
To suppress the uneven illumination in the denoised image Y, illumination correction is implemented for the dermoscopy images: gamma correction is applied to the illumination component in the HSV color space. Initially, the denoised input image in the RGB color space is converted to the HSV color space. Here, the hue and saturation components are kept intact, and the value component alone is decomposed using retinex decomposition. Later, the estimated illumination component is subjected to gamma correction to suppress the unevenness. The arbitrary parameter γ controls the effectiveness of the devignetting and is called the Devignetting Quality Parameter (DQP). In this work, the DQP value is varied between 0.25 and 2.5, and the best value is selected as 2.0. Then, the new value component is reconstructed from the decomposed reflectance component and the gamma-corrected illumination component. Finally, combining the hue, saturation, and new value components, an illumination-corrected image Yi is obtained by converting the resultant HSV image back to the RGB color space.
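A minimal sketch of this stage is given below, assuming a Gaussian-blurred V channel as the retinex-style illumination estimate (the paper's exact retinex decomposition is not reproduced) and the conventional power-law form for the gamma (DQP) mapping.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import color

def correct_illumination(rgb, dqp=2.0, sigma=30.0):
    """Gamma-correct only the estimated illumination of the HSV value channel."""
    hsv = color.rgb2hsv(rgb)                  # hue and saturation are kept intact
    v = hsv[..., 2]
    illum = gaussian_filter(v, sigma) + 1e-6  # stand-in for the retinex illumination estimate
    reflect = v / illum                       # retinex-style split: V = reflectance * illumination
    illum_gc = illum ** dqp                   # gamma correction with DQP = 2.0 (direction assumed)
    hsv[..., 2] = np.clip(reflect * illum_gc, 0.0, 1.0)
    return color.hsv2rgb(hsv)                 # convert back to RGB
```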
where Φ and ψ are the control parameters, selected based on saliency preservation, which is measured by a Quality assessment Metric of Contrast (QMC) [14]. Finally, the contrast-enhanced image Yc can be reconstructed using the histogram matching function Thm(·) [15]:

$$Y_c = T_{hm}\big(Y_i,\, \tilde{h}(\Phi, \psi)\big) \tag{7}$$
3.4 Sharpening

The principle of unsharp masking is based on estimating the difference between the input image and its Gaussian-filtered version [16]. A fraction of the high-frequency content is computed by subtracting the Gaussian-filtered image from the input image, and it is then added back to the input image to obtain the unsharp masking result. To perform the unsharp masking, the Gaussian filter kernel is used to compute the Gaussian filter mask H_G, given by:

$$H_G(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\left(\frac{x^2 + y^2}{2\sigma^2}\right)} \tag{8}$$
Selecting the dimension of the Gaussian mask and its standard deviation (SD) is important, since together they control the strength of the smoothing. Therefore, the SD is computed from the radius of the mask using the relation σ = (w − 1)/4. According to this relation, when the radius of the Gaussian mask increases, the SD also increases proportionally. Therefore, when both the SD and the dimension of the mask increase together, the degree of smoothing also increases significantly. The identity convolution mask H_0 can be calculated as:

$$H_0(x, y) = \begin{cases} 1, & x = 0 \ \&\ y = 0 \\ 0, & \text{otherwise} \end{cases}, \quad -w \le x \le +w \ \text{and} \ -w \le y \le +w \tag{9}$$

Finally, the sharpened image Y_s is obtained from the difference between the input image Y_c and its Gaussian-filtered output:

$$Y_s = Y_c \ast\ast H_0 + \lambda\big([H_0 - H_G] \ast\ast Y_c\big), \quad 0 \le \lambda \le 1 \tag{10}$$

The fraction of the difference between the input and the Gaussian-filtered image merged back into the input image is controlled by the manually selected parameter λ. This parameter is usually called the scale; the larger the value of λ, the sharper the output image.
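The sharpening step of (8)–(10) can be sketched as follows; the mask radius w and the scale λ are illustrative values, not the ones tuned in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def unsharp_mask(y_c, w=5, lam=0.6):
    """Sharpen a 2-D image y_c following Eqs. (8)-(10)."""
    sigma = (w - 1) / 4.0                                   # SD from the mask radius
    x, y = np.meshgrid(np.arange(-w, w + 1), np.arange(-w, w + 1))
    h_g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)  # Eq. (8)
    h_g /= h_g.sum()                                        # keep the mean brightness unchanged
    h_0 = np.zeros_like(h_g)
    h_0[w, w] = 1.0                                         # identity mask, Eq. (9)
    high_freq = convolve2d(y_c, h_0 - h_g, mode="same")     # input minus its Gaussian-filtered version
    return convolve2d(y_c, h_0, mode="same") + lam * high_freq   # Eq. (10)
```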
3.5 Reflection removal

where Y_s is the input RGB image, T indicates the transmittance layer, and R indicates the reflectance layer of the input image. The notation Γ indicates element-wise multiplication, and "∗∗" denotes the 2D convolution operation. W indicates the matrix that weighs the contribution of the transmittance layer at each pixel, and k is the blurring kernel. The weighing matrix W is expressed as:

$$W_{m,n} = w, \quad \forall m, n,\ 1 \le m \le M,\ 1 \le n \le N \tag{12}$$

To avoid losing the high-frequency components during reflectance removal, a Laplacian-based data fidelity term is applied to the sharpened image. The optimization problem developed for the reflection-removed image Y_r is described as:

$$Y_r = \arg\min_{T} \big\| L(T) - L(Y_s) \big\|_2^2 + \lambda\, C(T) \tag{13}$$

where λ is the regularization parameter; if the value of λ increases, more gradients are removed. The term C(T) encourages the smoothing of the image without disturbing the continuity of large structures.
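The optimization in (13) is solved in [17]; the sketch below only evaluates the cost for a candidate transmittance layer, using an ℓ1 gradient penalty as a stand-in for C(T), which is an assumption rather than the exact smoothing term of the reference.

```python
import numpy as np
from scipy.ndimage import laplace

def reflection_cost(T, y_s, lam=0.1):
    """Cost of Eq. (13) for a candidate transmittance layer T of the sharpened image y_s."""
    data_fidelity = np.sum((laplace(T) - laplace(y_s)) ** 2)   # Laplacian-based data term
    gy, gx = np.gradient(T)
    smoothness = np.sum(np.abs(gx) + np.abs(gy))               # stand-in for C(T)
    return data_fidelity + lam * smoothness
```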
3.6 Virtual shaving

$$\phi(m, n) = \frac{\sum_{s}\sum_{o} \big( w_o(m,n)\, A_{so}(m,n)\, \Delta\phi_{so}(m,n) - T \big)}{\sum_{s}\sum_{o} A_{so}(m,n) + \xi_s}, \quad 1 \le m \le M,\ 1 \le n \le N \tag{14}$$
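Assuming the oriented filter responses have already been computed (obtaining A_so and Δφ_so from a log-Gabor filter bank is outside this sketch), the measure in (14) reduces to the following array operation.

```python
import numpy as np

def phase_congruency_map(A, dphi, w, T=0.1, xi=1e-3):
    """Eq. (14): A and dphi are (S, O, M, N) amplitude and phase-deviation arrays over
    scales s and orientations o, w is an (O, M, N) orientation weighting, T is the noise
    threshold, and xi is the small constant in the denominator."""
    num = np.sum(w[None, ...] * A * dphi - T, axis=(0, 1))
    den = np.sum(A, axis=(0, 1)) + xi
    return num / den
```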
where the variable Φ_P' is the complement of Φ_P. Again, the phase angles in Φ_N1 are modified such that the angles greater than π/2 are brought into the range 0 to π/2, as given by:

$$\Phi_{N2}(m,n) = \Phi_{P2}(m,n)\big(\pi - \Phi_{N1}(m,n)\big) + \Phi_{P2}'(m,n)\,\Phi_{N1}(m,n), \quad 1 \le m \le M,\ 1 \le n \le N \tag{17}$$

The term Φ_P2 indicates the locations where Φ_N1 is greater than π/2, and the variable Φ_P2' is the complement of Φ_P2. The modified phase angles are then normalized as:

$$\Phi_R(m,n) = \frac{\tfrac{\pi}{2} - \Phi_{N2}(m,n)}{\tfrac{\pi}{2}}, \quad 1 \le m \le M,\ 1 \le n \le N \tag{18}$$

Later, the phase values Φ_R are converted to binary with a threshold t:

$$\Phi_b(m,n) = \begin{cases} 1, & \text{if } \Phi_R(m,n) < t \\ 0, & \text{otherwise} \end{cases}, \quad 1 \le m \le M,\ 1 \le n \le N \tag{19}$$

The binary phase image Φ_b is then dilated with a disk-shaped structural element SE. The dilation makes the objects in the binary image visible by filling the small holes in them. Hence, the dilated phase image Φ_D is given by:

$$\Phi_D = \Phi_b \oplus SE \tag{20}$$

where SE is the structural element described as:

$$SE = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix} \tag{21}$$

Then, the connected components P are found on the dilated binary phase image Φ_D, and the eccentricity is calculated for each of the connected regions. Hair-like structures are elliptical structures with eccentricity close to 1:

$$H_i = \begin{cases} 1, & \text{if } E_i < t_b \\ 0, & \text{otherwise} \end{cases}, \quad 1 \le i \le P \tag{22}$$

The regions without hairs are indicated by H_i, and the threshold t_b is arbitrarily selected as 0.6. The resulting virtually shaved image Y_v for the RGB channels, obtained after region filling of the hair regions, is given by:

$$Y_{vR} = \Psi\big(Y_{rR}, H_i\big), \quad Y_{vG} = \Psi\big(Y_{rG}, H_i\big), \quad Y_{vB} = \Psi\big(Y_{rB}, H_i\big), \quad 1 \le i \le P \tag{23}$$

where Ψ indicates the region filling operator and Y_r is the reflection-removed image.
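The post-processing of (19)–(23) maps directly onto standard morphology routines; the sketch below uses scikit-image, with biharmonic inpainting as a stand-in for the region filling operator Ψ and hair regions taken as the connected components with eccentricity ≥ t_b (the complement of H_i in (22)).

```python
import numpy as np
from skimage import measure, morphology, restoration

def virtual_shave(phi_r, y_r, t=0.85, t_b=0.6):
    """Remove hair-like structures from the reflection-removed RGB image y_r using the
    normalized phase map phi_r, following Eqs. (19)-(23)."""
    phi_b = phi_r < t                                         # Eq. (19)
    se = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool)
    phi_d = morphology.binary_dilation(phi_b, se)             # Eqs. (20)-(21)
    labels = measure.label(phi_d)
    hair_mask = np.zeros_like(phi_d)
    for region in measure.regionprops(labels):
        if region.eccentricity >= t_b:                        # hair-like: eccentricity close to 1
            hair_mask[labels == region.label] = True
    # Eq. (23): fill the hair regions channel-wise (Psi approximated by inpainting)
    channels = [restoration.inpaint_biharmonic(y_r[..., c], hair_mask) for c in range(3)]
    return np.stack(channels, axis=-1)
```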
Fig. 3 Segmented output. a Pre-processed gray scale image. b Segmented skin lesion

3.7 Segmentation

Lesion segmentation means separating the lesion region from the normal skin region.
It is a crucial step in the analysis of dermoscopy images to identify the various global morphological features of the lesion. RELM with ridge regression is employed for the segmentation of skin lesions in the proposed system. Based on the ridge regression model, stable and better regularization can be achieved by adding 1/C to the diagonal elements of P^T P while estimating the output weight β. Thus, the RELM regression becomes:

$$P^{+} = \big(P^{T} P + I/C\big)^{-1} P^{T} \tag{24}$$

where T stands for the target estimation and P is the hidden neuron matrix. Also, (24) and (25), aimed at optimizing ||Pβ − T||² + (1/C)||β||², show that a smaller output weight β plays a vital role in the better generalization of RELM. The procedure of RELM is given in three steps.

Step 1: Randomly estimate the hidden neuron parameters, weight w and bias b.

Step 2: Estimate the hidden layer matrix P using:

$$P = \begin{bmatrix} P_1 \\ \vdots \\ P_N \end{bmatrix} = \begin{bmatrix} P(x_1) \\ \vdots \\ P(x_N) \end{bmatrix} = \begin{bmatrix} G(w_1, b_1, x_1) & \cdots & G(w_L, b_L, x_1) \\ \vdots & \ddots & \vdots \\ G(w_1, b_1, x_N) & \cdots & G(w_L, b_L, x_N) \end{bmatrix} \tag{27}$$

Step 3: Calculate the output weight β using:

$$\beta = H^{+} T \tag{28}$$

where H^+ is derived from (24) and (25). Since the hidden neuron parameters are randomly chosen, a fast learning speed is achieved in RELM. Due to this randomness, the Extreme Learning Machine (ELM) and other Artificial Neural Network (ANN) algorithms have high variance and prediction error. In this case, ridge regression is quite beneficial in reducing the variance and prediction error due to the smaller value of the output weight β. Also, the over-fitting problem is addressed with the regularization parameter C in RELM, which produces better and more consistent performance than other segmentation algorithms. Figure 3 shows the skin lesion segmented from the gray-scale pre-processed image.
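A compact NumPy sketch of the three RELM steps and the ridge solution of (24) and (28) is given below; the sigmoid activation, the number of hidden neurons L, and the value of C are illustrative assumptions. In the segmentation setting, X would hold per-pixel features of the pre-processed image and T the corresponding lesion/background labels (an assumption about the exact inputs).

```python
import numpy as np

def relm_train(X, T, L=100, C=1e3, seed=0):
    """Train an RELM: X is (N, d) features, T is (N, k) targets (Eqs. (24), (27), (28))."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], L))                  # Step 1: random input weights w
    b = rng.standard_normal(L)                                # and biases b
    P = 1.0 / (1.0 + np.exp(-(X @ W + b)))                    # Step 2: hidden layer matrix P, Eq. (27)
    beta = np.linalg.solve(P.T @ P + np.eye(L) / C, P.T @ T)  # Step 3: (P^T P + I/C)^-1 P^T T
    return W, b, beta

def relm_predict(X, W, b, beta):
    P = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return P @ beta
```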
4 Results and discussion

The dermoscopy images are collected from the data archive of the International Skin Imaging Collaboration (ISIC) [18]. The archive comprises a total of 900 dermoscopy images. The test data of the ISIC Melanoma Challenge 2016 is used in our experiment. The data comprise 379 images, of which 273 contain melanoma and 106 are normal lesions. Images with malignant lesions are labeled after performing a biopsy. All images comprising benign lesions are labeled after a histopathological examination and prolonged longitudinal follow-up. The associated ground truth segmentations, contoured by expert dermatologists, are also provided in the archive.

4.2 Validation of NLM filter

The influence of the DoS on the de-noising quality of the NLM filter is analyzed subjectively and objectively in this section. Under objective evaluation, the BRISQUE score is evaluated. The test images are filtered by the NLM filter with DoS values varying from 1 to 15, and the results for some DoS values are shown in Fig. 4. As the value of the DoS varies from 1 to 15, the smoothing effect
Fig. 8 Variation of the objective quality metrics with respect to DQP. a AGIC vs DQP. b LOE vs DQP. c SFF vs DQP. d VSI vs DQP
DQP is greater than 1. In Fig. 8c and d, when DQP changes from 0 to 1, both SFF and VSI increase and reach their maximum at DQP = 1. When DQP increases above 1, SFF and VSI start decreasing, and above 2.2 the slope of SFF and VSI increases. This analysis of AGIC, LOE, SFF, and VSI with respect to DQP indicates the optimum value of DQP suitable for dermatological images.

The proposed devignetting algorithm is compared, both qualitatively and objectively, against three different algorithms, namely gamma correction (GC) [21], variation-based fusion (VF) [22], and sigmoid transform (ST) [23]. The images obtained by applying the different devignetting algorithms are depicted in Fig. 7. An ideal devignetting technique should make the background illumination uniform throughout the image surface without intolerably scaling down or boosting the mean brightness. Figure 9a depicts the input image for the GC algorithm. In the output images of the GC algorithm (Fig. 9b), the background illumination appears to be almost uniform; however, it blurs the structures present in the dermoscopy images. The VF algorithm introduces processing-induced color artifacts, as seen in Fig. 9c, and produces output images that are unnatural in appearance. Output images of the ST in Fig. 9d look significantly darker than the corresponding input images, and the background illumination remains uneven in the dermatological photographs. But in Fig. 9e, a uniform background illumination is noticed throughout the image surface. Moreover, the mean brightness is not down-scaled or boosted. The structures present in the output images remain sharper, appear natural, and do not show any processing-introduced color artifacts. With respect to the subjective quality of the devignetted images, the proposed devignetting algorithm is superior to the ST, VF, and GC methods. The qualitative evaluation is repeated for a hundred test images and it is found that the proposed algorithm is consistently better than its alternatives on all test images.

The obtained numerical values of AGIC, LOE, SFF, and VSI and the computational time for the different schemes ST, VF, GC, and the proposed algorithm are presented in Tables 2, 3, 4, 5, and 6, respectively. As given in Table 2, the minimum value of AGIC indicates that the background illumination in the output images of the proposed method is uniform. Furthermore, in Table 3 the low values of LOE justify that the output image of the proposed algorithm is natural in appearance. In addition, the highest value of SFF in Table 4 indicates that the color as well as structural distortions are negligible in the output image of the proposed algorithm. Moreover, the higher value of VSI shown in Table 5 justifies that the visual saliency maps of the output images are identical to the visual saliency maps of the corresponding input images; therefore, negligible loss of salient information in the proposed algorithm is guaranteed. Finally, from Table 6 it is evident that the proposed algorithm is computationally faster than the other methods. All these results emphasize the dominance of the proposed scheme in terms of uniformity in background illumination, information preservation, and computational speed.

Table 2 AGIC score for different schemes in illumination correction
Method     Image 1    Image 2    Image 3    Summary of 100 images
ST         0.5202     0.5012     0.5988     0.5401 ± 0.0517
VF         0.3880     0.3628     0.4278     0.3929 ± 0.0328
GC         0.1939     0.1560     0.2964     0.2154 ± 0.0726
Proposed   0.1828     0.1547     0.2310     0.1895 ± 0.0386

Table 3 LOE score for different schemes in illumination correction
Method     Image 1    Image 2    Image 3    Summary of 100 images
ST         1383       1587       1472       1480 ± 102.2758
VF         637.4233   964.4730   753.8274   785.2412 ± 165.7724
GC         2498       2478       2476       2484 ± 12.1655
Proposed   96.1730    377.4392   225.2537   232.9553 ± 140.7912

Table 4 SFF score for different schemes in illumination correction
Method     Image 1    Image 2    Image 3    Summary of 100 images
ST         0.8900     0.8836     0.8874     0.8870 ± 0.0032
VF         0.9369     0.9022     0.9249     0.9213 ± 0.0176
GC         0.8849     0.8693     0.8721     0.8754 ± 0.0083
Proposed   0.9834     0.9855     0.9661     0.9783 ± 0.0106

Table 5 VSI score for different schemes in illumination correction
Method     Image 1    Image 2    Image 3    Summary of 100 images
ST         0.8282     0.7805     0.8457     0.8181 ± 0.0337
VF         0.8938     0.8886     0.8815     0.8880 ± 0.0062
GC         0.8302     0.7782     0.8428     0.8171 ± 0.0342
Proposed   0.9927     0.9896     0.9886     0.9903 ± 0.0021

Table 6 Computational time for different schemes in illumination correction
Method     Image 1 (s)   Image 2 (s)   Image 3 (s)   Summary of 100 images (s)
ST         0.108442      0.057055      0.060224      0.0752 ± 0.0288
VF         108.666069    94.76729      105.5663      102.9999 ± 7.2961
GC         0.124003      0.096456      0.079563      0.1000 ± 0.0224
Proposed   2.070523      2.435887      1.575264      2.0272 ± 0.4319

4.4 Validation of the RICE algorithm

Contrast enhancement is done to increase the gray-level difference between the lesion and the background. Objective evaluation is done with the help of quality metrics like SFF, VSI, PCQI, and OCM. The different techniques considered for comparing the performance of contrast enhancement are CLAHE [21], CVC [24], and LDR [25].

While evaluating the performance of the RICE algorithm, a set of low-contrast dermoscopy images is used. The output images produced by the different contrast enhancement techniques are depicted in Fig. 10. An ideal enhancement algorithm increases the gray-scale difference without changing the mean brightness of the image. In Fig. 10b and d, both the CLAHE and LDR algorithms over-enhance the image. Similarly, in Fig. 10c, multiple illumination artifacts are visible in the background region after enhancement by the CVC algorithm. In contrast, the proposed RICE algorithm effectively enhances the images without affecting the mean brightness of the dermoscopy images, as shown in Fig. 10e. Hence, based on the subjective analysis, it is concluded that the RICE algorithm can efficiently enhance the dermoscopy images.

The SFF, VSI, PCQI, and OCM values for the output images produced by the different schemes CLAHE, CVC, LDR, and RICE are presented in Tables 7, 8, 9, and 10. The higher value of SFF for the RICE algorithm reflects the lesser structural distortions present in the output. Likewise, the higher VSI score of the proposed algorithm indicates that the visual saliency map of the output image is identical to that of the input image. Similarly, the high PCQI scores of the RICE algorithm indicate the proper enhancement of the dermoscopy images. The low OCM score of the proposed result indicates negligible noise amplification during enhancement. Considering factors like enhancement in contrast, visual saliency, feature preservation, and information fidelity together, the RICE algorithm offers better performance compared to the other algorithms.

Table 7 SFF scores for different schemes in contrast enhancement
Method     Image 1    Image 2    Image 3    Summary of 100 images
CLAHE      0.5943     0.5473     0.7069     0.6162 ± 0.0820
CVC        0.9362     0.9438     0.9608     0.9469 ± 0.0126
LDR        0.9786     0.9479     0.9780     0.9682 ± 0.0176
RICE       0.9965     0.9964     0.9945     0.9958 ± 0.0011

Table 8 VSI scores for different schemes in contrast enhancement
Method     Image 1    Image 2    Image 3    Summary of 100 images
CLAHE      0.9191     0.8768     0.9040     0.9000 ± 0.0214
CVC        0.9566     0.9162     0.9517     0.9415 ± 0.0220
LDR        0.9790     0.9251     0.9663     0.9568 ± 0.0282
RICE       0.9958     0.9972     0.9954     0.9961 ± 0.0009
Fig. 12 Variation of BRISQUE and CPBD score for different values of the λ. a BRISQUE vs λ. b CPBD vs λ
Table 11 BRISQUE score obtained for unsharp masking and local Laplacian filter
Method                   Image 1   Image 2   Image 3   Summary of 100 images
Local Laplacian filter   22.7326   26.9124   31.5786   27.0745 ± 4.4252
Unsharp masking          5.8528    1.2185    13.7466   6.9393 ± 6.3343
Fig. 19 PSNR and SSIM plotted for different values of threshold. a PSNR versus threshold. b SSIM versus threshold
of the processed images, the range of threshold between 0.85 and 0.9 is observed to be ideal for dermatological photographs.

The variations of PSNR and SSIM for various values of the threshold are shown in Fig. 19. The PSNR and SSIM metrics are computed between the virtually shaved image and the ground truth image. Both PSNR and SSIM remain consistent for threshold values less than 0.6, but when the threshold increases beyond 0.6, both parameters exhibit a bell-shaped curve. PSNR reaches its maximum when the threshold is between 0.75 and 0.85, and SSIM reaches its maximum when the threshold is between 0.75 and 0.9. A higher value of PSNR and SSIM justifies that the virtually shaved image and the ground truth image are nearly identical. Hence, from the variations of PSNR and SSIM, it is concluded that the optimum range of the threshold for virtual shaving of dermoscopy images is between 0.75 and 0.9.
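The two metrics used in this sweep are available in scikit-image; the helper below scores one virtually shaved result against its hair-free ground truth, assuming gray-scale float images in [0, 1]. The loop sketches the threshold sweep with a hypothetical helper, virtual_shave_with_threshold, standing in for the full pipeline of Section 3.6.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score_virtual_shaving(shaved, ground_truth):
    """PSNR and SSIM between a virtually shaved image and its ground truth."""
    psnr = peak_signal_noise_ratio(ground_truth, shaved, data_range=1.0)
    ssim = structural_similarity(ground_truth, shaved, data_range=1.0)
    return psnr, ssim

# illustrative sweep over candidate phase-congruency thresholds:
# for t in np.arange(0.5, 0.95, 0.05):
#     print(t, score_virtual_shaving(virtual_shave_with_threshold(img, t), gt))
```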
4.8 Validation of RELM-based segmentation

In this section, different segmentation algorithms are applied to the dermoscopy images with and without pre-processing. The performance of the different algorithms is compared subjectively as well as objectively. Quality metrics like DSI, JI, TSC, and IoU [26] are used for the objective comparison. The different segmentation algorithms used are FCM [27], the isolate thresholding method (IT) [28], k-means [29], and RELM.

The outputs of the different segmentation algorithms without pre-processing are shown in Fig. 20a–f and Fig. 22a–f. Here, the skin lesion is not segmented accurately because of the existence of noise, non-uniform illumination, and hairs. The virtually shaved images with a threshold value of 0.85, along with the manually segmented ground truth and the outputs of the different segmentation algorithms, are depicted in Fig. 21 and Fig. 23. From the output results of FCM, IT, and k-means (Fig. 21c–e and Fig. 23c–e),
the algorithms fail to segment the skin lesions properly (Fig. 22). The output of RELM agrees with the manual segmentation and effectively segments the skin lesions in Fig. 21f and Fig. 23f. Thus, based on the subjective quality, it can be concluded that the RELM algorithm is able to segment skin lesions efficiently from the dermoscopy images.

The values of JI, DSI, TSC, and IoU calculated over 100 dermoscopy images for the different segmentation algorithms, with and without pre-processing, are tabulated in Tables 12, 13, 14, and 15, respectively.
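For binary masks the overlap metrics reduce to a few set operations; the sketch below computes JI, DSI, and IoU, assuming DSI denotes the Dice coefficient and noting that JI and IoU coincide for binary masks (TSC is omitted because its definition is not given here).

```python
import numpy as np

def overlap_metrics(seg, gt):
    """JI (Jaccard), DSI (Dice, assumed), and IoU between binary masks seg and gt."""
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    ji = inter / union                          # Jaccard index
    dsi = 2.0 * inter / (seg.sum() + gt.sum())  # Dice similarity
    iou = ji                                    # IoU equals the Jaccard index for binary masks
    return ji, dsi, iou
```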
Table 12 JI score for different segmentation algorithms with and without pre-processing
Method    Image 1   Image 2   Image 3   Summary of 100 images (without pre-processing)   Summary of 100 images (with pre-processing)   Difference
FCM       0.8313    0.7666    0.5907    0.7062 ± 0.1024    0.7295 ± 0.1245    0.0233 ± 0.0221
IT        0.8584    0.7726    0.5964    0.7112 ± 0.1126    0.7425 ± 0.1336    0.0313 ± 0.0210
k-means   0.7787    0.6502    0.0085    0.4372 ± 0.3878    0.4791 ± 0.4126    0.0419 ± 0.0248
RELM      0.8729    0.9504    0.6972    0.8971 ± 0.0134    0.9402 ± 0.1297    0.0431 ± 0.1163

Table 13 DSI score for different segmentation algorithms with and without pre-processing
Method    Image 1   Image 2   Image 3   Summary of 100 images (without pre-processing)   Summary of 100 images (with pre-processing)   Difference
FCM       0.9079    0.8679    0.7427    0.7865 ± 0.0456    0.8395 ± 0.0862    0.0530 ± 0.0406
IT        0.9238    0.8717    0.7472    0.7989 ± 0.0812    0.8476 ± 0.0907    0.0487 ± 0.0095
k-means   0.8756    0.7880    0.0169    0.4961 ± 0.3214    0.5602 ± 0.4725    0.0641 ± 0.1511
RELM      0.9322    0.9746    0.8216    0.8545 ± 0.0462    0.9895 ± 0.0790    0.1350 ± 0.0328

Table 14 TSC score for different segmentation algorithms with and without pre-processing
Method    Image 1   Image 2   Image 3   Summary of 100 images (without pre-processing)   Summary of 100 images (with pre-processing)   Difference
FCM       0.8310    0.7664    0.5909    0.7012 ± 0.1156    0.7294 ± 0.1242    0.0282 ± 0.0086
IT        0.8581    0.7725    0.5965    0.7202 ± 0.1004    0.7424 ± 0.1334    0.0222 ± 0.0330
k-means   0.7784    0.6500    0.0085    0.4241 ± 0.3941    0.4790 ± 0.4125    0.0549 ± 0.0184
RELM      0.9984    0.9536    0.7026    0.9328 ± 0.1361    0.9549 ± 0.1594    0.0221 ± 0.0233

Table 15 IoU score for different segmentation algorithms with and without pre-processing
Method    Image 1   Image 2   Image 3   Summary of 100 images (without pre-processing)   Summary of 100 images (with pre-processing)   Difference
FCM       0.8216    0.7543    0.5129    0.6894 ± 0.1210    0.6915 ± 0.1296    0.0021 ± 0.0086
IT        0.7773    0.7667    0.6328    0.7548 ± 0.1107    0.7886 ± 0.1274    0.0338 ± 0.0167
k-means   0.8114    0.7900    0.0122    0.5211 ± 0.2831    0.5390 ± 0.3025    0.0179 ± 0.0194
RELM      0.9115    0.9322    0.9001    0.9127 ± 0.1151    0.9338 ± 0.1372    0.0211 ± 0.0221
The skin lesions segmented manually by experts are used as the ground truth for the calculation of the different quality metrics. A high value of JI, DSI, and TSC indicates that the segmented lesions agree with the ground truth. Also, the IoU score ranges from 0 to 1, with higher values indicating better segmentation accuracy. The RELM algorithm exhibits the highest value for all four metrics, which indicates that the RELM algorithm has produced more accurate segmentation results compared to the other schemes. The objective evaluation results agree with the inferences drawn from the subjective evaluation regarding the similarity of the skin lesions segmented using the different algorithms with the ground truth.

The RELM algorithm is also used to segment the lesions from the pre-processed images. It exhibits JI, DSI, TSC, and IoU scores higher than FCM, IT, and k-means, which shows that the automated segmentation of the RELM algorithm agrees more closely with the manual segmentation of skin lesions in dermoscopy images.

5 Conclusion

In this paper, different enhancement techniques are introduced for the pre-processing of dermoscopy images. The optimization-based framework is tested with the ISIC (2016) data archive. Based on the results obtained with and without pre-processing before segmentation, it is concluded that the implementation of the pre-processing algorithm improves
the success rate in RGB images. The NLM filter has been found to preserve very fine details by removing the noise in skin lesion images. Also, the NLM filter exhibits the lowest BRISQUE score compared to the anisotropic diffusion filter and the bilateral filter. The proposed RICE algorithm for contrast enhancement is found to be superior to the existing methods, including CLAHE, LDR, and CVC, with better SFF, VSI, PCQI, and OCM scores. The enhancement of dermoscopy images is further improved by eliminating the undesired information due to reflection using the reflection removal method. Also, virtual shaving is included in our framework to remove the hairs without any loss of image content, with appreciably high PSNR and SSIM metrics. The values of quality evaluation metrics like PSNR and SSIM are appreciably high for output images produced by phase congruency–based virtual shaving when the value of the threshold is in the range of 0.85–0.9. Overall, the proposed system generates better results among all comparable methods in terms of qualitative and quantitative aspects. Therefore, the introduced pre-processing framework is more appropriate for low-quality melanoma images. From the scores of quality metrics like the Disk Similarity Index, Jaccard Index, and Total Segmentation Coefficient, it has been concluded that when the pre-processing steps are used in the proper sequence, segmentation using the Regularized Extreme Learning Machine is more efficient than other algorithms.

Declarations

Conflict of interest The authors declare no competing interests.

References

1. Siegel RL, Miller KD, Jemal A (2018) Cancer statistics. CA A Cancer J Clin 68(1):7–30
2. Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A (2018) Global cancer statistics: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA A Cancer J Clin 68(6):394–424
3. Bafounta ML, Beauchet A, Aegerter P, Saiag P (2001) Is dermoscopy (epiluminescence microscopy) useful for the diagnosis of melanoma? Results of a meta-analysis using techniques adapted to the evaluation of diagnostic tests. Arch Dermatol 137(10):1343–1350
4. Madhankumar K, Kumar P (2012) Characterization of skin lesions. In: International Conference on Pattern Recognition, Informatics and Medical Engineering. IEEE, pp 302–306
5. Jaworek-Korjakowska J (2015) Novel method for border irregularity assessment in dermoscopic color images. Comput Math Methods Med 2015:1–11
6. Ocampo-Blandón CF, Restrepo-Parra E, Riaño-Rojas JC, Jaramillo-Ayerbe PF (2016) Contrast enhancement by searching discriminant color projections in dermoscopy images. Revista Facultad Ingenieria, Univ Antioquia 79:192–200
7. Mishra NK, Celebi ME (2016) An overview of melanoma detection in dermoscopy images using image processing and machine learning. arXiv preprint arXiv:1601.07843
8. Cherepkova O, Hardeberg JY (2018) Enhancing dermoscopy images to improve melanoma detection. In: 2018 Colour and Visual Computing Symposium (CVCS), pp 1–6
9. Jayalakshmi D, Dheeba J (2020) Border detection in skin lesion images using an improved clustering algorithm. Int J e-Collab 16(4):15–29
10. Zghal NS, Derbel N (2020) Melanoma skin cancer detection based on image processing. Curr Med Imaging 16(1):50–58
11. Kandhway P, Bhandari AK, Singh A (2020) A novel reformed histogram equalization based medical image contrast enhancement using krill herd optimization. Biomed Signal Process Control 56:101677
12. Jeevakala S, Brintha Therese A (2018) Sharpening enhancement technique for MR images to enhance the segmentation. Biomed Signal Process Control 41:21–30
13. Heo Y-C, Kim K, Lee Y (2020) Image de-noising using Non-Local Means (NLM) approach in magnetic resonance (MR) imaging: a systematic review. Appl Sci 10(7028):1–16
14. Duan X (2019) A multiscale contrast enhancement for mammogram using dynamic unsharp masking in Laplacian Pyramid. IEEE Trans Radiat Plasma Med Sci 3(5):557–564
15. Gu K, Zhai G, Yang X, Zhang W, Chen CW (2015) Automatic contrast enhancement technology with saliency preservation. IEEE Trans Circuits Syst Video Technol 25(9):1480–1494
16. Rajchel M, Oszust M (2021) No-reference image quality assessment of authentically distorted images with global and local statistics. SIViP 15:83–91
17. Arvanitopoulos N, Achanta R, Susstrunk S (2017) Single image reflection suppression. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 4498–4506
18. Li J, Hu Q, Ai M (2020) RIFT: multi-modal image matching based on radiation-variation insensitive feature transform. IEEE Trans Image Process 29:3296–3310
19. Codella NC, Gutman D, Celebi ME, Helba B, Marchetti MA, Dusza SW, Kalloo A, Liopyris K, Mishra N, Kittler H, Halpern A (2018) Skin lesion analysis toward melanoma detection: a challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC). In: 2018 IEEE 15th International Symposium on Biomedical Imaging. IEEE, pp 168–172
20. Mishra D, Chaudhury S, Sarkar M, Soin AS, Sharma V (2018) Edge probability and pixel relativity-based speckle reducing anisotropic diffusion. IEEE Trans Image Process 27(2):649–664
21. Gavaskar RG, Chaudhury KN (2019) Fast adaptive bilateral filtering. IEEE Trans Image Process 28(2):779–790
22. Zhou M, Jin K, Wang S, Ye J, Qian D (2018) Color retinal image enhancement based on luminosity and contrast adjustment. IEEE Trans Biomed Eng 65(3):521–527
23. Shamsudeen FM, Raju G (2019) An objective function based technique for devignetting fundus imagery using MST. Inform Med Unlocked 14:82–91
24. Srinivas K, Bhandari AK (2020) Low light image enhancement with adaptive sigmoid transfer function. IET Image Proc 14(4):668–678
25. Celik T, Tjahjadi T (2011) Contextual and variational contrast enhancement. IEEE Trans Image Process 20(12):3431–3441
26. Liu J (2018) A cascaded deep convolutional neural network for joint segmentation and genotype prediction of brainstem gliomas. IEEE Trans Biomed Eng 65(9):1943–1952
27. Bai X et al (2019) Intuitionistic center-free FCM clustering for MR brain image segmentation. IEEE J Biomed Health Inform 23(5):2039–2051
28. Quan R et al (2019) A novel IGBT health evaluation method based on multi-label classification. IEEE Access 7:47294–47302
29. Jaisakthi SM et al (2018) Automated skin lesion segmentation of dermoscopic images using GrabCut and K-means algorithms. IET Comput Vision 12(8):1088–1095
30. Lee C, Lee C, Kim C (2013) Contrast enhancement based on layered difference representation of 2D histograms. IEEE Trans Image Process 22(12):5372–5384

B. Priestly Shan received his BE degree in Electronics and Communication Engineering from MS University in 2003 and a Masters in Communication Systems from Anna University in 2006. He received his PhD degree from Anna University of Technology, Coimbatore, in 2011. Currently, he is working as Pro-Vice Chancellor at Alliance University, Bangalore. His current research interests include biomedical image processing, pattern recognition, and deep learning.