

Medical & Biological Engineering & Computing
https://doi.org/10.1007/s11517-023-02897-w

ORIGINAL ARTICLE

An efficient multi-level pre-processing algorithm for the enhancement of dermoscopy images in melanoma detection

D. Jeba Derwin 1 (*) · O. Jeba Singh 1 · B. Priestly Shan 1 · K. Uma Maheswari 2 · D. Lavanya 2

1 Alliance University, Bangalore, Karnataka, India
2 SRM-TRP Engineering College, Tiruchirappalli, Tamil Nadu, India
* Corresponding author: D. Jeba Derwin

Received: 23 January 2022 / Accepted: 13 April 2023
© International Federation for Medical and Biological Engineering 2023

Abstract
In this paper, a multi-level algorithm for pre-processing of dermoscopy images is proposed, which helps in improving the quality of raw images and makes them suitable for skin lesion detection. This multi-level pre-processing method has a positive impact on automated skin lesion segmentation using the Regularized Extreme Learning Machine. Raw images are subjected to de-noising, illumination correction, contrast enhancement, sharpening, reflection removal, and virtual shaving before the skin lesion segmentation. The Non-Local Means (NLM) filter with the lowest Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) score exhibits better de-noising of dermoscopy images. To suppress uneven illumination, gamma correction is applied to the denoised image. The Robust Image Contrast Enhancement (RICE) algorithm is used for contrast enhancement and produces enhanced images with better structural preservation and negligible loss of information. Unsharp masking for sharpening exhibits low BRISQUE scores for better sharpening of fine details in an image. Output images produced by the phase congruency–based method in virtual shaving show high similarity with ground truth images, as the hair is removed completely from the input images. The scores obtained at each stage of the pre-processing framework show that the performance is superior to all the existing methods, both qualitatively and quantitatively, in terms of uniform contrast, preservation of information content, removal of undesired information, and elimination of artifacts in melanoma images. The output of the proposed system is assessed qualitatively and quantitatively with and without pre-processing of dermoscopy images. From the overall evaluation results, it is found that the segmentation of skin lesions is more efficient using the Regularized Extreme Learning Machine when the multi-level pre-processing steps are used in the proper sequence.

Keywords  Non-Local Means Filter · Robust Image Contrast Enhancement · Unsharp masking · Dermoscopy · Phase congruency

1 Introduction

Melanoma is the most common and deadliest skin cancer, with 91,000 new cases annually in the USA, and it causes more than 9000 deaths [1]. Globally, skin cancer is one of the life-threatening diseases in western countries. In Europe, more than 100,000 new melanoma cases, with 22,000 deaths, are reported yearly [2]. The statistics are all the more alarming because, unlike other types of cancer, melanoma has been steadily increasing over the past decades. Consequently, early detection of melanoma is a significant challenge in the diagnosis and treatment of skin cancer. Over recent years, high-resolution dermoscopy skin imaging has been used to visualize the deep skin structures. Although dermoscopy images are of high resolution, the visualization of images is still subjective due to poor contrast, skin tone variations, non-uniform illumination, and artifacts [3].


A small amount of noise present in the dermoscopy images may get amplified during sharpening and contrast enhancement. The amplified noise may adversely affect the performance of edge-based segmentation algorithms used to extract the borders of the skin lesions. Hence, de-noising is a vital step in the automated analysis of dermoscopy images.

Mostly, skin lesions are darker than the background. However, due to uneven illumination, some portions of the image may appear darker than the background. Those darker regions may get falsely segmented along with the lesions. Therefore, contrast enhancement and sharpening are indispensable in the automated analysis of dermoscopy images. Specular reflection is another concern that may deteriorate the visual quality of melanoma images. Hence, reflection removal is needed to eliminate the background reflections in input images. Hairs are also present in dermoscopy images. The hairs, being dark, may get falsely segmented along with the lesion if intensity-based segmentation methods are adopted, so hairs need to be removed prior to the segmentation of lesions. The process of removing hairs from dermoscopy images is usually termed virtual shaving.

In this paper, a new six-stage pre-processing algorithm is introduced to improve the segmentation accuracy of skin lesions in dermoscopy images. For de-noising the input image, the Non-Local Means (NLM) filter is employed; it ensures the preservation of detailed information in an image. Likewise, gamma correction is applied at the second stage so that uniform illumination is achieved. An algorithm termed Robust Image Contrast Enhancement (RICE) is employed for contrast enhancement; this method helps in preventing over-enhancement of the image contrast. For sharpening, the unsharp masking technique is applied to sharpen the edge pixels. For reflection removal, a transmittance estimation-based strategy is adopted; as a result, the undesired information is removed, thereby improving the visual quality. Under virtual shaving, a phase congruency–based method is adopted for removing the hairs without losing the image content. The implemented technique in each stage performs efficiently such that a quality image is achieved at the pre-processed output for melanoma segmentation. The output of the proposed system is evaluated subjectively with ground truth images and objectively using quality metrics like the Dice Similarity Index (DSI), Jaccard Index (JI), Total Segmentation Coefficient (TSC), and Intersection over Union (IoU). The results reveal that the multi-level pre-processing algorithm improves the segmentation of skin lesions using the Regularized Extreme Learning Machine (RELM).

2 Literature survey

To enhance the dermoscopy image, Madhan Kumar et al. [4] presented a pre-processing technique in two steps to remove the noise, fine hairs, and air bubbles. Accordingly, the contrast of an input image is enhanced by histogram equalization, and the reduction of impulsive noise, hair structures, and air bubbles is achieved by applying the median filter. Although it preserves the edges, the fine image details are lost when the window size of the filter is increased above 3 × 3.

Furthermore, Jaworek et al. [5] proposed a novel method to reduce the border irregularity in dermoscopy images. The authors highlighted a two-step pre-processing algorithm which includes black frame removal, hair detection, and inpainting. Initially, each row of an image is scanned in four directions and the rows with 50% black pixels are removed from the input image. Next, the black top-hat transform is applied to remove the dark thick hairs from the black-frame-removed image. Here, the black top-hat transform fails to detect local structures such as dots or globules in melanoma images. Moreover, Restrepo et al. [6] introduced a contrast enhancement technique based on the most discriminant projection of the color map in skin lesion images. This method overcomes the non-uniform illumination and color correction problems while detecting the melanoma. Since the color projection is calculated for all directions, it increases the complexity of the algorithm. In addition, a five-step pre-processing framework is proposed by Mishra et al. [7] which includes elimination of lighting effects, color correction, contrast enhancement, image smoothing, and hair removal to improve the visual quality of the image. Here, the authors highlighted the problems in skin lesion detection like poor contrast, skin tone variation, artifacts, and non-uniform illumination in dermoscopy images.

Furthermore, Cherepkova et al. [8] proposed an enhancement and color correction method for original dermoscopy images. Accordingly, the enhancement is achieved in six steps, namely retinex, spatiotemporal retinex-inspired envelope with stochastic sampling, automatic white balance (AWB), contrast enhancement, automatic enhancement, and histogram equalization. The authors reported improved sensitivity and accuracy by an average of 4 to 8% and 3 to 5%, respectively. Due to over-exposure in visual adjustment, fine image details are lost with partly corrected color. Although AWB provides a good color correction, some deviations in visual quality occur due to errors in temperature estimation. Also, a two-phase pre-processing algorithm for dermoscopy image enhancement is proposed by Jayalakshmi et al. [9]. Accordingly, a median filter is applied to remove the artifacts and K-means clustering is used to eliminate the outlier pixels. The presented result shows an accuracy of 92.8% with sensitivity of 93% and specificity of 90% on the Danderm database. Furthermore, a three-step framework was proposed to improve the contrast of the dermoscopy images in [10]. Initially, a median filter is employed to reduce noise in the raw input images.


Next, the morphological operators such as erosion and dilation are implemented to remove the artifacts like hairs in the filtered image. Finally, intensity value mapping is applied to enhance the contrast. Through median filtering, a 5 × 5 window is used to remove image details 2 pixels wide. Pankaj et al. [11] introduced a reformed contrast enhancement technique using Krill Herd (KH) optimization. Here, a new reformed histogram is obtained with a peak cut-off. The global histogram equalization helps in the enhancement of medical images like X-ray, MRI, and CT scans. In this approach, the efficiency is tested through metrics like the Structural Similarity Index Metric (SSIM), End-Point Intersection over union (EPI), Delta E (DE), and Region Error Change (REC). Jeevakala et al. [12] discussed a sharpening enhancement technique for MR images. A Laplacian Pyramid and singular value decomposition are implemented to decompose the multi-scale images into coarse and difference sub-bands. Here, the weighted sum of the singular matrix and its global histogram equalization increases the contrast in multi-scale images.

Though a lot of literature has been enumerated on the pre-processing of dermoscopy images, some limitations are identified as follows:

- Normally, median filters are used for de-noising in dermoscopy images. In such methods, when the filter size is increased above 3 × 3, fine details of the image are lost.
- The black-hat transform implemented for hair removal is unable to remove local structures like dots and globules.
- Automatic White Balance (AWB) causes over-exposure in visual adjustment, leading to loss of fine image content.
- Over-enhancement and multiple illumination artifacts are found in the Contrast Limited Adaptive Histogram Equalization (CLAHE), Contextual and Variational Contrast (CVC) enhancement, and Layered Difference Representation (LDR) algorithms.
- Moreover, in existing methods, the hairs are removed using median filters, which leads to loss of image information.

In order to overcome the above issues and enhance the spatial quality for skin lesion segmentation in dermoscopy images, a pre-processing module comprising de-noising, illumination correction, contrast enhancement, sharpening, reflection removal, and hair removal is introduced in this work. Under the de-noising phase, the NLM filter with a suitable DoS value is chosen to preserve the fine details of dermoscopy images. Also, in the contrast enhancement phase, the RICE algorithm is introduced to avoid non-uniform enhancement by maintaining the mean brightness. In addition, reflection removal is proposed to remove undesired information by separating the background image layer from the reflection layer of the dermoscopy image to be analyzed. Thus, by optimizing the smoothing parameter (SP) and rate control parameter (RCP) values in the reflection removal process, the visual quality of the image is also preserved. Moreover, a phase congruency method with an ideal threshold value preserves the image content in virtual shaving of hairs. The rest of the paper is organized as follows: Section 3 explains the pipeline of the dermoscopy pre-processing method in detail. Section 4 describes the results and discussion. Finally, Section 5 draws the conclusion.

3 Methodology

In this paper, a pre-processing methodology is introduced for dermoscopy images which can improve the visual quality of digital images to achieve an accurate segmentation. The schematic representation of the flow of work is depicted in Fig. 1.

Fig. 1  Schematic representation of the flow of work: input image → de-noising (NLM filter) → illumination correction (gamma correction) → contrast enhancement (RICE algorithm) → sharpening (unsharp masking) → reflection removal (reflection suppression) → virtual shaving (phase congruency) → skin lesion segmentation (RELM) → segmented image

3.1 De-noising

In the proposed method, the NLM filter is used to perform the objective of de-noising [13]. Therefore, for estimating the denoised pixel value Y(m,n) of an input image pixel X(m,n), a windowing technique is applied on each 3 × 3 block of the input dermoscopy images. Hence, Y(m,n) is computed as the weighted sum of the pixel values inside a block with radius R1 as:

$Y(m,n) = \sum_{i=-R_1}^{+R_1} \sum_{j=-R_1}^{+R_1} W\big[X(m,n),\, X(m+i,n+j)\big]\; X(m+i,n+j), \quad 1 \le m \le M,\ 1 \le n \le N$  (1)


where M and N indicate the number of rows and columns in the input image. The weights W(m,n) are based on the similarity of the neighborhood pixels m and n. The similarity is then estimated as:

$W\big[X(m,n),\, X(m+i,n+j)\big] = \exp\!\left(-\dfrac{\sum_{p=-R_2}^{+R_2} h_g\,\big[X(m+p,\,n+p) - X((m+i)+p,\,(n+j)+p)\big]^2}{\xi^2}\right)$  (2)

The variable $h_g$ is a normalizing constant. It penalizes the gray level difference of the pixels within the similarity block which are away from its center. Now, Eq. (2) is subjected to a normalization process,

$0 \le W\big[X(m,n),\, X(m+i,n+j)\big] \le 1 \quad \text{and} \quad \sum_{i=-R_1}^{+R_1} \sum_{j=-R_1}^{+R_1} W\big[X(m,n),\, X(m+i,n+j)\big] = 1$  (3)

After normalization of the weights, the weight corresponding to the pixels which are closely similar to the pixel to be denoised will get penalized more. Towards rectifying this inadvertent problem, the weight corresponding to the self-similarity is replaced by the highest value of weight just below it. Therefore, the weight W[X(m,n), X(m+i,n+j)] at i = 0 and j = 0 is expressed as:

$W\big[X(m,n),\, X(m+i,n+j)\big]\big|_{i=0,\,j=0} = \max\!\big(W\big[X(m,n),\, X(m+i,n+j)\big]\big), \quad \forall\, i \ne 0\ \&\ j \ne 0,\ -R_1 \le i \le +R_1,\ -R_1 \le j \le +R_1$  (4)

The variable ξ is an arbitrarily defined operational parameter of the NLM filter, called the "decay control parameter." It is otherwise called the "Degree of Smoothing (DoS)." To adaptively fix the value of the DoS (ξ) of the NLM filter, the strength of noise is estimated in the input image. In this paper, the value of DoS is taken as linearly proportional to the standard deviation (SD) of noise in the input image:

$\xi = \beta\, \hat{\sigma}_n$  (5)

where $\hat{\sigma}_n$ indicates the SD of the zero-mean additive Gaussian noise.
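The following is a minimal NumPy sketch of the de-noising step described by Eqs. (1)–(5), not the authors' implementation. The search radius R1, patch radius R2, and β value are illustrative, the per-pixel penalty h_g of Eq. (2) is simplified to a plain squared patch difference, and the noise SD estimate is a crude stand-in for σ̂n; in practice an optimized library routine such as scikit-image's denoise_nl_means provides an equivalent filter.

```python
import numpy as np

def nlm_denoise(X, R1=5, R2=1, beta=7.5):
    """Sketch of Eqs. (1)-(5): weighted average over a (2*R1+1)^2 search window,
    patch similarity over a (2*R2+1)^2 block, and DoS set proportionally to the
    estimated noise SD (Eq. 5).  Educational, not optimized."""
    X = X.astype(np.float64)
    M, N = X.shape
    sigma_n = np.std(X - np.roll(X, 1, axis=0))   # rough stand-in for sigma_n_hat
    xi = beta * sigma_n                           # Degree of Smoothing, Eq. (5)
    pad = R1 + R2
    Xp = np.pad(X, pad, mode="reflect")
    Y = np.zeros_like(X)
    for m in range(M):
        for n in range(N):
            mm, nn = m + pad, n + pad
            ref = Xp[mm - R2:mm + R2 + 1, nn - R2:nn + R2 + 1]
            w = np.zeros((2 * R1 + 1, 2 * R1 + 1))
            for i in range(-R1, R1 + 1):
                for j in range(-R1, R1 + 1):
                    blk = Xp[mm + i - R2:mm + i + R2 + 1, nn + j - R2:nn + j + R2 + 1]
                    d2 = np.sum((ref - blk) ** 2)        # simplified patch distance, cf. Eq. (2)
                    w[i + R1, j + R1] = np.exp(-d2 / (xi ** 2))
            # Replace the self-similarity weight by the largest remaining weight, Eq. (4)
            w[R1, R1] = 0.0
            w[R1, R1] = w.max()
            w /= w.sum()                                 # normalization, Eq. (3)
            win = Xp[mm - R1:mm + R1 + 1, nn - R1:nn + R1 + 1]
            Y[m, n] = np.sum(w * win)                    # Eq. (1)
    return Y
```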
3.2 Illumination correction

To suppress the uneven illumination in the denoised image Y, illumination correction is implemented on the dermoscopy images. Gamma correction is applied to the illumination component in the HSV color space. Initially, the denoised input image in the RGB color space is converted to the HSV color space. Here, the hue and saturation components are kept intact and the value component alone is decomposed using retinex decomposition. Later, the estimated illumination component is subjected to gamma correction to suppress the unevenness. The arbitrary parameter γ, which controls the effectiveness of the devignetting, is called the Devignetting Quality Parameter (DQP). In this work, the DQP value is varied between 0.25 and 2.5 and the best value is selected as 2.0. Then, the new value component is reconstructed from the decomposed reflectance component and the gamma-corrected illumination component. Finally, by combining the hue, saturation, and new value components together, an illumination-corrected image Yi is obtained by converting the resultant image from the HSV color space back to the RGB color space.
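A hedged sketch of this stage is shown below. It keeps hue and saturation untouched, splits the HSV value channel into an illumination estimate and a reflectance residual, gamma-corrects only the illumination, and recombines. The large-σ Gaussian blur used as the illumination estimate is a stand-in for the retinex decomposition in the paper, and the 1/DQP exponent is an assumed mapping chosen so that DQP = 1 leaves the image unchanged while DQP = 2 brightens dark corners, matching the behaviour reported in Section 4.3.

```python
import numpy as np
from skimage import color, filters

def correct_illumination(rgb, dqp=2.0, sigma=30):
    """Sketch of Section 3.2: gamma-correct only the illumination part of the
    HSV value channel.  The Gaussian blur stands in for the paper's retinex
    decomposition; sigma and the 1/DQP exponent are assumptions."""
    rgb = rgb / 255.0 if rgb.dtype == np.uint8 else rgb
    hsv = color.rgb2hsv(rgb)                      # hue and saturation are kept intact
    v = hsv[..., 2]
    illumination = filters.gaussian(v, sigma=sigma) + 1e-6
    reflectance = v / illumination
    illumination = illumination ** (1.0 / dqp)    # gamma correction; DQP = 2 in the paper
    hsv[..., 2] = np.clip(reflectance * illumination, 0.0, 1.0)
    return color.hsv2rgb(hsv)
```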


3.3 Contrast enhancement

To increase the gray level difference between the lesion and the background of an illumination-corrected image Yi, the RICE algorithm is implemented on the dermoscopy images. Initially, the histogram hi and the equalized histogram heq are obtained for the input image. Later, by applying the sigmoid transfer mapping function Tsig(·), the corresponding histogram hsig is obtained, which improves the visual quality of the image. Now, the target histogram h̃ is estimated as:

$\tilde{h} = \dfrac{h_i + \Phi\, h_{eq} + \psi\, h_{sig}}{1 + \Phi + \psi}$  (6)

where Φ and ψ are the control parameters, selected based on the saliency preservation. It is measured by a Quality assessment Metric of Contrast (QMC) [14] in an image. Finally, the contrast-enhanced image Yc can be reconstructed using the histogram matching function Thm(·) [15]:

$Y_c = T_{hm}\big(Y_i,\, \tilde{h}(\Phi, \psi)\big)$  (7)
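The sketch below illustrates Eqs. (6)–(7) on a single-channel 8-bit image: the original, equalized, and sigmoid-mapped histograms are blended into a target histogram and the image is remapped to it by CDF matching. The sigmoid mid-point and slope and the Φ, ψ values are placeholders; the paper selects Φ and ψ with the QMC saliency-preservation measure, which is not reproduced here.

```python
import numpy as np

def rice_enhance(gray_u8, phi=0.5, psi=0.5):
    """Sketch of Eqs. (6)-(7): blend the image histogram, its equalized
    histogram, and a sigmoid-mapped histogram into a target histogram, then
    apply histogram matching.  phi/psi and the sigmoid shape are assumptions."""
    h_i, _ = np.histogram(gray_u8, bins=256, range=(0, 256))
    h_i = h_i / h_i.sum()

    # Histogram of the equalized image.
    eq_map = np.round(255 * np.cumsum(h_i)).astype(np.uint8)
    h_eq, _ = np.histogram(eq_map[gray_u8], bins=256, range=(0, 256))
    h_eq = h_eq / h_eq.sum()

    # Histogram after a sigmoid transfer mapping T_sig(.) (assumed mid-point/slope).
    x = np.arange(256)
    sig_map = (255 / (1 + np.exp(-(x - 128) / 32))).astype(np.uint8)
    h_sig, _ = np.histogram(sig_map[gray_u8], bins=256, range=(0, 256))
    h_sig = h_sig / h_sig.sum()

    # Target histogram, Eq. (6).
    h_t = (h_i + phi * h_eq + psi * h_sig) / (1 + phi + psi)

    # Histogram matching T_hm(.), Eq. (7): map the source CDF onto the target CDF.
    src_cdf = np.cumsum(h_i)
    tgt_cdf = np.cumsum(h_t)
    lut = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[gray_u8]
```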


3.4 Sharpening

The principle of unsharp masking is exclusively based on the concept of estimating the difference between the input image and its Gaussian-filtered version [16]. A fraction of the high-frequency content is computed by subtracting the Gaussian-filtered image from the input image and is then added back to the input image to obtain the unsharp masking result. To perform the unsharp masking, the Gaussian filter kernel is used to compute the Gaussian filter mask HG as given by:

$H_G(x,y) = \dfrac{1}{2\pi\sigma^2}\, e^{-\frac{x^2+y^2}{2\sigma^2}}, \quad -w \le x \le +w \ \text{and}\ -w \le y \le +w$  (8)

Selecting the dimension of the Gaussian mask and its SD is important to make the strength of smoothing more sensitive. Therefore, the SD is computed from the value of the radius of the mask using the relation σ = (w − 1)/4. According to this relation, when the radius of the Gaussian mask increases, the SD also increases proportionally. Therefore, when both the SD and the dimension of the mask increase together, the degree of smoothing also increases significantly. The identity convolution mask H0 can be calculated as:

$H_0(x,y) = \begin{cases} 1, & x = 0 \ \&\ y = 0 \\ 0, & \text{otherwise} \end{cases}, \quad -w \le x \le +w \ \text{and}\ -w \le y \le +w$  (9)

Finally, the sharpened image Ys is obtained by computing the difference between the input image Yc and its Gaussian-filtered output:

$Y_s = Y_c ** H_0 + \lambda\,\big([H_0 - H_G] ** Y_c\big), \quad 0 \le \lambda \le 1$  (10)

The fraction of the difference between the input and the Gaussian-filtered image merged into the input image is the manually selected parameter λ. This parameter is usually called the scale, and the larger the value of λ, the sharper the output image.
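A small sketch of Eqs. (8)–(10) follows, assuming SciPy for the 2D convolutions. The radius w = 5 is illustrative; σ is derived from the radius as σ = (w − 1)/4 exactly as in the text, and λ = 2.0 follows the range found suitable in Section 4.5 rather than the 0 ≤ λ ≤ 1 bound stated with Eq. (10).

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_mask(w):
    """Gaussian kernel H_G of radius w with sigma = (w - 1) / 4, Eq. (8)."""
    sigma = (w - 1) / 4.0
    y, x = np.mgrid[-w:w + 1, -w:w + 1]
    hg = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return hg / hg.sum()        # normalise so the discrete kernel sums to 1

def unsharp_mask(img, w=5, lam=2.0):
    """Sketch of Eqs. (9)-(10): identity kernel H_0 plus a scaled high-pass
    component (H_0 - H_G); lam plays the role of the scale parameter lambda."""
    h0 = np.zeros((2 * w + 1, 2 * w + 1))
    h0[w, w] = 1.0                                   # identity mask, Eq. (9)
    hg = gaussian_mask(w)
    img = img.astype(np.float64)
    sharpened = convolve(img, h0) + lam * convolve(img, h0 - hg)   # Eq. (10)
    return np.clip(sharpened, 0, 255)
```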

3.5 Reflection removal

Since it is important to remove the undesired reflections, reflection removal is implemented on the sharpened image. The process of reflection suppression is based on enhancing the image quality by separating the reflectance layer from the transmittance layer [17]. Based on this observation, an RGB image can be represented as the weighted sum of its transmittance layer and reflectance layer as explained in (11):

$Y_s = \Gamma(W, T) + \Gamma(\mathbf{1} - W,\ k ** R)$  (11)

where Ys is the input RGB image. The variable T indicates the transmittance layer and the variable R indicates the reflectance layer of the input image. The notation Γ indicates element-wise multiplication and the notation "∗∗" denotes the 2D convolution operation. W indicates the matrix that weighs the contribution of the transmittance layer at each pixel, and k is the blurring kernel. The weighing matrix W is expressed as:

$W_{m,n} = w, \quad \forall\, m, n,\ 1 \le m \le M,\ 1 \le n \le N$  (12)

To avoid losing the high-frequency component during reflectance removal, the Laplacian-based data fidelity is taken in the sharpened image. The optimization problem developed for the reflection-removed image Yr is described as:

$Y_r = \arg\min_{T} \big\| L(T) - L(Y_s) \big\|_2^2 + \lambda\, C(T)$  (13)

where λ is the regularization parameter; if the λ value increases, more gradients will be removed. The term C(T) invigorates the smoothing of the image without disturbing the continuity of large structures.
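For intuition, the short sketch below composes an observation from the two layers exactly as the model of Eqs. (11)–(12) describes, with a Gaussian blur standing in for the kernel k and a constant weight w. It is the forward model only, not the suppression algorithm of [17], which recovers T by minimizing Eq. (13).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def compose_with_reflection(T, R, w=0.7, blur_sigma=3.0):
    """Forward layer model of Eqs. (11)-(12) for single-channel layers T and R.
    The constant weight w and the Gaussian stand-in for the blur kernel k are
    assumptions for this sketch."""
    W = np.full(T.shape, w)                           # Eq. (12): constant weighting matrix
    blurred_R = gaussian_filter(R, sigma=blur_sigma)  # k ** R (2D convolution with a blur kernel)
    return W * T + (1.0 - W) * blurred_R              # Eq. (11)
```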

3.6 Virtual shaving

The process of removing hairs from dermoscopy images is usually termed virtual shaving. The hairs, being dark, may get falsely segmented along with the lesion. A phase congruency–based virtual shaving method is adopted for the removal of hairs. In the first step of hair removal, the color image is converted to grayscale. Figure 2 depicts the output of each pre-processing stage in the segmentation of the skin lesion.

Hairs are detected from the grayscale image based on its phase congruency. A 2D Log Gabor Filter (LGF) is used for computing the phase congruency of the image [18]. The final phase congruency model of the image is given by:

$\phi(m,n) = \dfrac{\sum_s \sum_o w_o(m,n)\, A_{so}(m,n)\, \Delta\phi_{so}(m,n) - T}{\sum_s \sum_o A_{so}(m,n) + \xi_s}, \quad 1 \le m \le M,\ 1 \le n \le N$  (14)

where T is the noise-compensation term, wo represents a weighting function, the Δφso term represents a phase deviation function, and the variable ξs is a minute value used to avoid computational indeterminacy.

By applying a threshold on the phase congruency model of an image, the phase angle ΦP is estimated as:

$\Phi_P(m,n) = \begin{cases} 1, & \text{if } \phi(m,n) < 0 \\ 0, & \text{otherwise} \end{cases}, \quad 1 \le m \le M,\ 1 \le n \le N$  (15)

The modified phase angle ΦN1 is the result of the negative phase angles being modified into the range 0 to π, which is expressed as:

$\Phi_{N1}(m,n) = \Phi_P(m,n)\,(-\phi(m,n)) + \Phi_P'(m,n)\,\phi(m,n), \quad 1 \le m \le M,\ 1 \le n \le N$  (16)


Fig. 2  Pre-processing stages. a Input image. b Denoised image. c Illumination correction. d Contrast enhancement. e Sharpening. f Reflection removal. g Virtual shaving. h Pre-processed gray scale image

where the variable ΦP′ is the complement of ΦP. Again, the phase angles in ΦN1 are modified such that the angles greater than π/2 are brought into the range 0 to π/2, as given by:

$\Phi_{N2}(m,n) = \Phi_{P2}(m,n)\,\big(\pi - \Phi_{N1}(m,n)\big) + \Phi_{P2}'(m,n)\,\Phi_{N1}(m,n), \quad 1 \le m \le M,\ 1 \le n \le N$  (17)

The term ΦP2 indicates the locations where ΦN1 is greater than π/2 and the variable ΦP2′ is the complement of ΦP2. The modified phase angles are then normalized as:

$\Phi_R(m,n) = \dfrac{\frac{\pi}{2} - \Phi_{N2}(m,n)}{\frac{\pi}{2}}, \quad 1 \le m \le M,\ 1 \le n \le N$  (18)

Later, the phase values ΦR are converted to binary with a threshold t:

$\Phi_b(m,n) = \begin{cases} 1, & \text{if } \Phi_R(m,n) < t \\ 0, & \text{otherwise} \end{cases}, \quad 1 \le m \le M,\ 1 \le n \le N$  (19)

Now the binary phase image Φb is dilated with a disk-shaped structural element SE. The dilation in the binary image makes the objects visible by filling the small holes in them. Hence, the dilated phase image ΦD is given by:

$\Phi_D = \Phi_b \oplus SE$  (20)

where SE is the structural element described as:

$SE = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix}$  (21)

Then, the connected components P are found on the dilated binary phase image ΦD, and the eccentricity is calculated for each of the connected regions. Hair-like structures are elliptical structures with eccentricity close to 1:

$H_i = \begin{cases} 1, & \text{if } E_i < t_b \\ 0, & \text{otherwise} \end{cases}, \quad 1 \le i \le P$  (22)

The region without hairs is indicated as Hi and the threshold tb is arbitrarily selected as 0.6. The resulting virtually shaved image Yv for the RGB channels without hairs, after region filling, is given by:

$Y_v^R = \Psi(Y_r^R, H_i), \quad Y_v^G = \Psi(Y_r^G, H_i) \quad \text{and} \quad Y_v^B = \Psi(Y_r^B, H_i), \quad 1 \le i \le P$  (23)

where Ψ indicates the region filling operator and Yr is the reflection-removed image.
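A hedged scikit-image sketch of the mask post-processing and region filling in Eqs. (19)–(23) is given below; it assumes the binary phase-congruency map Φb has already been computed (e.g., with a 2D Log Gabor filter bank, which is not reproduced here). Biharmonic inpainting stands in for the paper's region-filling operator Ψ, and components are flagged as hair when their eccentricity is high, which mirrors rather than reproduces the Ei < tb test of Eq. (22).

```python
import numpy as np
from skimage import measure, morphology
from skimage.restoration import inpaint_biharmonic

def remove_hairs(rgb, phase_binary, ecc_threshold=0.95):
    """Sketch of Eqs. (19)-(23): dilate the thresholded phase-congruency map
    with a cross-shaped structuring element, keep only elongated (hair-like)
    components by their eccentricity, and fill the masked pixels."""
    rgb = rgb / 255.0 if rgb.dtype == np.uint8 else rgb
    se = np.array([[0, 1, 0],
                   [1, 1, 1],
                   [0, 1, 0]], dtype=bool)                 # SE of Eq. (21)
    dilated = morphology.binary_dilation(phase_binary, se)  # Eq. (20)

    labels = measure.label(dilated)
    hair_mask = np.zeros_like(dilated, dtype=bool)
    for region in measure.regionprops(labels):
        if region.eccentricity >= ecc_threshold:            # elongated => hair-like
            hair_mask[labels == region.label] = True

    # Region filling applied to the masked pixels, cf. Eq. (23)
    # (channel_axis needs scikit-image >= 0.19).
    return inpaint_biharmonic(rgb, hair_mask, channel_axis=-1)
```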
3.7 Segmentation

Lesion segmentation means separating the lesion region from the normal skin region.

Fig. 3  Segmented output. a Pre-processed gray scale image. b Segmented skin lesion


It is a crucial step in the analysis of dermoscopy images to identify various global morphological features of the lesion. RELM with ridge regression is employed for the segmentation of the skin lesion in the proposed system. Based on the ridge regression model, stable and better regularization can be achieved by adding 1/C to the diagonal elements of PᵀP while estimating the output weight β. Thus, the RELM regression becomes:

$P^{+} = \big(P^{T}P + I/C\big)^{-1} P^{T}$  (24)

where I is an identity matrix. Based on the matrix inversion property, (24) can be written as:

$P^{+} = P^{T}\big(P P^{T} + I/C\big)^{-1}$  (25)

In order to reduce the computation power, (24) or (25) can be selected depending on whether PᵀP or PPᵀ has the smaller dimensions. Therefore, the output weight of the RELM can be estimated as:

$\beta = \big(P^{T}P + I/C\big)^{-1} P^{T} T$  (26)

where T stands for the target estimation and P is the hidden neuron matrix.

Also, (24) and (25), aimed at optimizing ||Pβ − T||² + (1/C)||β||², show that a smaller output weight β plays a vital role in better generalization of the RELM. The procedure of the RELM is given in three steps.

Step 1: Randomly estimate the hidden neuron parameters, weight w and bias b.

Step 2: Estimate the hidden layer matrix P using:

$P = \begin{bmatrix} P_1 \\ \vdots \\ P_N \end{bmatrix} = \begin{bmatrix} P(x_1) \\ \vdots \\ P(x_N) \end{bmatrix} = \begin{bmatrix} G(w_1, b_1, x_1) & \cdots & G(w_L, b_L, x_1) \\ \vdots & \ddots & \vdots \\ G(w_1, b_1, x_N) & \cdots & G(w_L, b_L, x_N) \end{bmatrix}$  (27)

Step 3: Calculate the output weight β using:

$\beta = H^{+} T$  (28)

where H⁺ is derived from (24) and (25).

Since the hidden neuron parameters are randomly chosen, a fast learning speed is achieved in the RELM. Due to their random nature, the Extreme Learning Machine (ELM) and other Artificial Neural Network (ANN) algorithms have high variance and prediction error. In this case, ridge regression is quite beneficial in the reduction of variance and prediction error due to the smaller value of the output weight β. Also, the overfitting problem is addressed with the regularization parameter C in the RELM, which produces better and more consistent performance than other segmentation algorithms. Figure 3 shows the segmented skin lesion from the gray scale pre-processed image.
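A minimal NumPy sketch of the ridge-regularized ELM training in Eqs. (24)–(28) follows. The sigmoid activation, the number of hidden neurons L, and the value of C are illustrative; the per-pixel features used for lesion segmentation are not specified in this excerpt, so the feature construction is left to the caller.

```python
import numpy as np

def relm_train(X, T, L=500, C=1e3, seed=0):
    """Ridge-regularized ELM sketch following Eqs. (24)-(28).
    X: (N, d) feature vectors, T: (N, k) targets; L and C are illustrative."""
    rng = np.random.default_rng(seed)
    Wh = rng.normal(size=(X.shape[1], L))          # Step 1: random input weights
    b = rng.normal(size=L)                         # and biases
    P = 1.0 / (1.0 + np.exp(-(X @ Wh + b)))        # Step 2: hidden layer matrix, Eq. (27)
    # Step 3: output weights, beta = (P^T P + I/C)^(-1) P^T T, Eq. (26)
    beta = np.linalg.solve(P.T @ P + np.eye(L) / C, P.T @ T)
    return Wh, b, beta

def relm_predict(X, Wh, b, beta):
    P = 1.0 / (1.0 + np.exp(-(X @ Wh + b)))
    return P @ beta
```

For segmentation, X would hold per-pixel features of the pre-processed image and T the corresponding lesion/background labels; thresholding the output of relm_predict then yields a binary lesion mask.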

4 Results and discussion

The quality of the proposed system is analyzed subjectively and objectively in this section. Twelve objective quality metrics are used: (1) Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), (2) Average Gradient of the Illumination Component (AGIC), (3) Lightness Order Error (LOE), (4) Sparse Feature Fidelity (SFF), (5) Visual Saliency-based Index (VSI), (6) Patch-based Contrast Quality Index (PCQI), (7) Over-Contrast Measure (OCM), (8) Cumulative Probability of Blur Detection (CPBD), (9) SP, (10) RCP, (11) Peak Signal to Noise Ratio (PSNR), and (12) SSIM.

4.1 Image dataset

The dermoscopy images are collected from the data archive of the International Skin Imaging Collaboration (ISIC) [18]. The archive comprises a total of 900 dermoscopy images. The test data of the ISIC Melanoma Challenge 2016 is used in our experiment. The data comprises 379 images, of which 273 contain melanoma and 106 are of normal lesions. Images with malignant lesions are labeled after performing a biopsy. All images comprising benign lesions are labeled after a histopathological examination and prolonged longitudinal follow-up. Associated ground truth segmentations contoured by expert dermatologists are also provided in the archive.

4.2 Validation of NLM filter

The influence of DoS on the de-noising quality of the NLM filter is analyzed subjectively and objectively in this section. Under objective evaluation, the BRISQUE score is evaluated. The test images are filtered by the NLM filter by varying the DoS values from 1 to 15, and the results for some DoS values are shown in Fig. 4.

Fig. 4  Output images produced by the NLM filter for different values of DoS. a Test image. b DoS = 3. c DoS = 5. d DoS = 7. e DoS = 9. f DoS = 11


As the value of the DoS varies from 1 to 15, the smoothing effect on the images also increases. It is evident from Fig. 4b–f that, when the value of DoS increases beyond ten, the images become excessively smoothed. This weakens the lesions present in the images. On the other hand, when DoS ranges from 1 to 5, the noise is not removed sufficiently and the required smoothing is not achieved in the output image. Hence, based on the perceived quality of the processed images, the range of DoS between 6 and 9 is observed to be suitable for dermoscopy images.

Among the 100 test images, three images are selected randomly to plot the variation of the BRISQUE score for different values of DoS, which is depicted in Fig. 5. The graph shows a low BRISQUE score when DoS is varied between 8 and 10. As the DoS increases beyond 10, the BRISQUE score also increases for all three images.

The NLM filter is compared qualitatively and quantitatively against two different de-noising alternatives, namely the Anisotropic Diffusion Filter (ADF) [19] and the Bilateral Filter (BF) [20]. In Fig. 6c, the BF excessively smoothens the image, which greatly reduces the sharpness of the denoised image and thereby fades the boundary of the lesions. Likewise, in Fig. 6b, the image denoised by the ADF shows that the boundary of the lesions is not preserved properly, with textural artifacts. But in Fig. 6d, the image is properly denoised by the NLM filter, maintaining the boundary of the lesions better than the ADF. The information loss is also minimal when compared to the bilateral filter.

The summary of BRISQUE scores obtained for ADF, BF, and NLM for 100 images is tabulated in Table 1. It is evident that the NLM filter obtains the lowest BRISQUE score compared to the other schemes.

Fig. 5  Variation of BRISQUE score against DoS

Fig. 6  Output images produced by different de-noising algorithms. a Test image. b ADF. c BF. d NLM

Table 1  BRISQUE scores shown by various de-noising schemes

Method       Image 1   Image 2   Image 3   Summary on 100 images
ADF          48.4356   53.7685   52.5846   51.0745 ± 3.4252
BF           40.6547   42.3425   43.5926   42.2393 ± 4.3343
NLM filter   31.2365   34.5476   33.4826   33.1646 ± 2.3256

4.3 Validation of illumination correction

The influence of DQP on the quality of devignetted images is analyzed subjectively and objectively. For the objective analysis, the quality metrics AGIC, LOE, SFF, and VSI are used. Based on this analysis, identifying the suitable range of DQP for illumination correction in dermatological photographs is important. The output images corresponding to the proposed devignetting scheme for different values of DQP are depicted in Fig. 7. It is observed in Fig. 7b that, when the value of the DQP is less than one, the gray levels at the enhanced regions in the input images get compressed or scaled down. In effect, the dynamic range of the processed images gets compressed and they appear relatively darker than the input images. If the value of the DQP is equal to one, the processed image becomes exactly similar to the corresponding input image, as shown in Fig. 7c.

When the value of DQP is above one (DQP = 1.5), the darker regions of the input images become enhanced slowly and the background illumination becomes uniform; however, the vignetting error is not fully corrected, as can be seen in Fig. 7d. For the value DQP = 2, the dark corners of the dermatological photographs caused by the vignetting error become equally enhanced as the bright regions in the photographs, as depicted in Fig. 7e. If DQP is greater than 2 (DQP = 2.5), an over-enhancement can be noticed in Fig. 7f. Therefore, DQP = 2 is chosen as the optimized value for illumination correction due to the uniform brightness throughout the image.

The variations of AGIC, LOE, SFF, and VSI with respect to DQP are shown in Fig. 8a–d. In Fig. 8a, the AGIC monotonically decreases as the DQP increases. AGIC becomes almost consistent for values of DQP greater than 2. In Fig. 8b, the LOE continuously decreases when the value of DQP < 1 and reaches a minimum at the point where DQP = 1.


Fig. 7  Outputs of the proposed devignetting scheme. a Test image. b DQP = 0.5. c DQP = 1. d DQP = 1.5. e DQP = 2. f DQP = 2.5

Fig. 8  Variation of the objective quality metrics with respect to DQP. a AGIC vs DQP. b LOE vs DQP. c SFF vs DQP. d VSI vs DQP

Afterwards, the LOE increases linearly when the value of DQP is greater than 1. In Fig. 8 c and d, when DQP changes from 0 to 1, both SFF and VSI increase and reach the maximum point at DQP = 1. When DQP increases above 1, the SFF and VSI start decreasing, and above 2.2 the slope of SFF and VSI increases. This analysis of AGIC, LOE, SFF, and VSI with respect to DQP indicates the optimum value of DQP suitable for the dermatological images.

The proposed devignetting algorithm is compared both qualitatively and objectively against three different algorithms, namely Gamma correction (GC) [21], variation-based fusion (VF) [22], and sigmoid transform (ST) [23]. The images obtained by applying the different devignetting algorithms are depicted in Fig. 9. An ideal devignetting technique should make the background illumination uniform throughout the image surface without intolerably scaling down or boosting the mean brightness. Figure 9a depicts the input image for the GC algorithm. In the output images of the GC algorithm (Fig. 9b), the background illumination appears to be almost uniform; however, it blurs the structures present in the dermoscopy images. The VF algorithm introduces processing-induced color artifacts, as seen in Fig. 9c; it produces output images that are unnatural in appearance. Output images of the ST in Fig. 9d look significantly darker than the corresponding input images.


Fig. 9  Output images of different devignetting schemes for the input image. a Input image. b GC. c VF. d ST. e Proposed

The background illumination remains uneven in the dermatological photographs. But in Fig. 9e, a uniform background illumination is noticed throughout the image surface. Moreover, the mean brightness is not down-scaled or boosted. The structures present in the output images remain sharper, appear natural, and do not show any processing-introduced color artifacts. With respect to the subjective quality of the devignetted images, the proposed devignetting algorithm is superior to the ST, VF, and GC methods. The qualitative evaluation is repeated for the hundred test images and it is found that the proposed algorithm is consistently better than its alternatives on all test images.

The numerical values of AGIC, LOE, SFF, and VSI and the computational time obtained for the schemes ST, VF, GC, and the proposed algorithm are presented in Tables 2, 3, 4, 5, and 6, respectively. As given in Table 2, the minimum value of AGIC indicates that the background illumination in the output images of the proposed method is uniform. Furthermore, in Table 3 the low values of LOE justify that the output image of the proposed algorithm is natural in appearance. In addition, the highest value of SFF for the proposed algorithm in Table 4 indicates that the color as well as structural distortions are negligible in the output image. Moreover, the higher value of VSI shown in Table 5 justifies that the visual saliency maps of the output images are identical to the visual saliency maps of the corresponding input images; therefore, a negligible loss of salient information in the proposed algorithm is guaranteed. Finally, in Table 6 it is evident that the proposed algorithm is computationally faster than the other methods. All these results emphasize the dominance of the proposed scheme in terms of uniformity in background illumination, information preservation, and computational speed.

Table 2  AGIC score for different schemes in illumination correction

Method     Image 1   Image 2   Image 3   Summary of 100 images
ST         0.5202    0.5012    0.5988    0.5401 ± 0.0517
VF         0.3880    0.3628    0.4278    0.3929 ± 0.0328
GC         0.1939    0.1560    0.2964    0.2154 ± 0.0726
Proposed   0.1828    0.1547    0.2310    0.1895 ± 0.0386

Table 3  LOE score for different schemes in illumination correction

Method     Image 1    Image 2    Image 3    Summary of 100 images
ST         1383       1587       1472       1480 ± 102.2758
VF         637.4233   964.4730   753.8274   785.2412 ± 165.7724
GC         2498       2478       2476       2484 ± 12.1655
Proposed   96.1730    377.4392   225.2537   232.9553 ± 140.7912

Table 4  SFF score for different schemes in illumination correction

Method     Image 1   Image 2   Image 3   Summary of 100 images
ST         0.8900    0.8836    0.8874    0.8870 ± 0.0032
VF         0.9369    0.9022    0.9249    0.9213 ± 0.0176
GC         0.8849    0.8693    0.8721    0.8754 ± 0.0083
Proposed   0.9834    0.9855    0.9661    0.9783 ± 0.0106

Table 5  VSI score for different schemes in illumination correction

Method     Image 1   Image 2   Image 3   Summary of 100 images
ST         0.8282    0.7805    0.8457    0.8181 ± 0.0337
VF         0.8938    0.8886    0.8815    0.8880 ± 0.0062
GC         0.8302    0.7782    0.8428    0.8171 ± 0.0342
Proposed   0.9927    0.9896    0.9886    0.9903 ± 0.0021

4.4 Validation of the RICE algorithm

Contrast enhancement is done to increase the gray level difference between the lesion and the background. The objective evaluation is done with the help of quality metrics like SFF, VSI, PCQI, and OCM. The different techniques considered for comparing the contrast enhancement performance are CLAHE [21], CVC [24], and LDR [25].

While evaluating the performance of the RICE algorithm, a set of low-contrast dermoscopy images is used. Output images produced by the different contrast enhancement techniques are depicted in Fig. 10. An ideal enhancement algorithm increases the gray-scale difference without changing the mean brightness of the image. In Fig. 10 b and d, both the CLAHE and LDR algorithms produce an over-enhancement of the image. Similarly, in Fig. 10c, multiple illumination artifacts are visible in the background region after the enhancement by the CVC algorithm. In contrast, the proposed RICE algorithm effectively enhances the images without affecting the mean brightness of the dermoscopy images, as shown in Fig. 10e.


Table 6  Computational time for different schemes in illumination correction

Method     Image 1 (s)   Image 2 (s)   Image 3 (s)   Summary of 100 images (s)
ST         0.108442      0.057055      0.060224      0.0752 ± 0.0288
VF         108.666069    94.76729      105.5663      102.9999 ± 7.2961
GC         0.124003      0.096456      0.079563      0.1000 ± 0.0224
Proposed   2.070523      2.435887      1.575264      2.0272 ± 0.4319

Fig. 10  Output images produced by different contrast enhancement algorithms. a Test image. b CLAHE. c CVC. d LDR. e RICE

Hence, based on the subjective analysis, it is concluded that the RICE algorithm can efficiently enhance the dermoscopy image.

The SFF, VSI, PCQI, and OCM values for the output images produced by the schemes CLAHE, CVC, LDR, and RICE are presented in Tables 7, 8, 9, and 10. The higher value of SFF for the RICE algorithm reflects the lesser structural distortions present in the output. Likewise, the higher value of the VSI score for the proposed algorithm indicates that the visual saliency map of the output image is identical to that of the input image. Similarly, the high values of the PCQI score for the RICE algorithm indicate the proper enhancement of the dermoscopy images. The low value of the OCM score in the proposed result indicates negligible noise amplification during enhancement. Considering the factors of contrast enhancement, visual saliency, feature preservation, and information fidelity together, the RICE algorithm offers better performance compared to the other algorithms.

Table 7  SFF scores for different schemes in contrast enhancement

Method   Image 1   Image 2   Image 3   Summary of 100 images
CLAHE    0.5943    0.5473    0.7069    0.6162 ± 0.0820
CVC      0.9362    0.9438    0.9608    0.9469 ± 0.0126
LDR      0.9786    0.9479    0.9780    0.9682 ± 0.0176
RICE     0.9965    0.9964    0.9945    0.9958 ± 0.0011

Table 8  VSI scores for different schemes in contrast enhancement

Method   Image 1   Image 2   Image 3   Summary on 100 images
CLAHE    0.9191    0.8768    0.9040    0.9000 ± 0.0214
CVC      0.9566    0.9162    0.9517    0.9415 ± 0.0220
LDR      0.9790    0.9251    0.9663    0.9568 ± 0.0282
RICE     0.9958    0.9972    0.9954    0.9961 ± 0.0009

Table 9  PCQI scores for different schemes in contrast enhancement

Method   Image 1   Image 2   Image 3   Summary on 100 images
CLAHE    0.2680    0.1622    0.2020    0.2107 ± 0.0534
CVC      1.1603    1.3073    1.1561    1.2079 ± 0.0861
LDR      1.1445    1.3481    1.1770    1.2232 ± 0.1094
RICE     2.7586    2.8221    2.8510    2.8106 ± 1.6736

Table 10  OCM scores for different schemes in contrast enhancement

Method   Image 1   Image 2   Image 3   Summary on 100 images
CLAHE    0.5017    0.1164    0.6459    0.4213 ± 0.2737
CVC      0.4507    0.0142    0.0301    0.1650 ± 0.2476
LDR      0.2295    0.0451    0.0481    0.1076 ± 0.1056
RICE     0.0515    0.0102    0.0513    0.0377 ± 0.0238

4.5 Validation of unsharp masking

The quality of the sharpened image is influenced by the parameter λ in unsharp masking. This process is carried out by varying the value of λ from 0 to 5 with an interval of 0.5, and it is analyzed objectively using BRISQUE and CPBD.

The sharpening effect increases when the value of λ increases, as can be clearly observed from the images depicted in Fig. 11b–f. When the value of λ is less than 1, the sharpening effect is small, as illustrated in Fig. 11 e and f. In Fig. 11 b and c, it is observed that when the value of λ increases beyond 2.5, the non-edge fine texture gets amplified, which may adversely affect the segmentation process. Hence, based on the perceived quality of the processed images, the range of λ between 1.5 and 2.5 is observed to be ideal for unsharp masking in dermoscopy images.

The variations of the BRISQUE and CPBD metrics for different values of λ are shown in Fig. 12. BRISQUE exhibits an inverted bell-shaped curve for the three test images. BRISQUE shows low values when λ is between 1.5 and 2.


Fig. 11  Output images produced by unsharp masking for different values of λ. a Input image. b λ = 5. c λ = 2.5. d λ = 1. e λ = 0.5. f λ = 0

Fig. 12  Variation of BRISQUE and CPBD score for different values of the λ. a BRISQUE vs λ. b CPBD vs λ

The value of the CPBD metric increases as λ increases from 0.5 to 5. The slope of the CPBD starts decreasing when the value of λ is greater than 2. The variations of BRISQUE and CPBD with λ indicate that the optimum range of λ is between 1.5 and 2.5.

The unsharp masking algorithm is compared both qualitatively and quantitatively against the local Laplacian filter [15]. Output images for the different sharpening algorithms are shown in Fig. 13. From the output of the local Laplacian filter in Fig. 13b, it is evident that this filter excessively sharpens the images, which results in amplification of the non-edge fine texture. On the other hand, in Fig. 13c, an ideal sharpening algorithm is able to strengthen the lesion without amplifying the non-edge fine texture in the image.

The BRISQUE scores for the output produced by unsharp masking and the local Laplacian filter for 100 images are presented in Table 11. It can be observed that unsharp masking exhibits the lowest values of BRISQUE compared to the local Laplacian filter. Low values of the BRISQUE score indicate images with fewer distortions and artifacts that are closer to the original quality of the image. Usually, the BRISQUE score ranges between 0 and 100, and a score near 0 indicates good image quality.

Fig. 13  Input image and results produced by sharpening filters. a Input image. b Output of local Laplacian filter. c Output of unsharp masking

Table 11  BRISQUE score obtained for unsharp masking and local Laplacian filter

Method                   Image 1   Image 2   Image 3   Summary of 100 images
Local Laplacian filter   22.7326   26.9124   31.5786   27.0745 ± 4.4252
Unsharp masking          5.8528    1.2185    13.7466   6.9393 ± 6.3343

4.6 Validation of reflection removal

The selection of the SP and RCP for the reflection-removed images is analyzed subjectively in this section. SP controls the degree of smoothing and RCP determines the number of iterations; a small value of RCP needs more iterations and results in a sharper output image. For this analysis, a range of suitable dermatological photographs that possess specular reflection is identified.


Fig. 14  Output images of reflection removal for various values of RCP (SP = 0.01). a Input image. b RCP = 1.9. c RCP = 1.7. d RCP = 1.5. e RCP = 1.3

Fig. 15  Output images of reflection removal for various values of RCP (SP = 0.02). a Input image. b RCP = 1.9. c RCP = 1.7. d RCP = 1.5. e RCP = 1.3

Fig. 16  Output images of reflection removal for various values of RCP (SP = 0.03). a Input image. b RCP = 1.9. c RCP = 1.7. d RCP = 1.5. e RCP = 1.3

Fig. 17  Representative test image containing hairs. a Image. b Ground truth

Fig. 18  Output images of the phase congruency–based virtual shaving. a Threshold = 0.7. b Threshold = 0.85. c Threshold = 0.95

The outputs of the reflection removal algorithm, corresponding to the test image, are depicted in Figs. 14, 15, and 16. The value of SP is varied between 0.01 and 0.04, and for each value of SP, RCP is varied between 1.1 and 2 with an interval of 0.1. It is apparent from the output images that, as the value of SP increases beyond 0.02, the image gets smoothed heavily with a cartoon artifact. Moreover, as the value of SP increases, data loss occurs, which can be inferred from Fig. 16. When the value of SP is less than 0.02, the reflected part of the image is also removed without the smoothing effect, as presented in Fig. 15. When SP is 0.01, it is observed that the reflection is not properly removed from the image, as shown in Fig. 14. Based on the perceived quality of the processed images, the ideal value of SP is 0.02 for dermoscopy images. From Fig. 14e, Fig. 15e, and Fig. 16e, it can be observed that when the value of RCP is less than 1.5, the information contained in the image is lost with a visible cartoon artifact. When the value of RCP increases above 1.8, the reflection from the dermoscopy images is not efficiently removed, as shown in Fig. 14b, Fig. 15b, and Fig. 16b. Thus, based on the perceived quality of the resulting images, the range of RCP between 1.5 and 1.7 is observed to be ideal for dermoscopy images.

4.7 Validation of phase congruency–based virtual shaving

The influence of the threshold on the quality of the virtually shaved images is analyzed subjectively as well as objectively. The quality assessment is done objectively using PSNR and SSIM. The test image and its ground truth image used for virtual shaving are shown in Fig. 17. The value of the threshold is varied from 0.55 to 1. When the value of the threshold is between 0.55 and 0.7, almost no hairs are removed from the dermoscopy image, as depicted in Fig. 18a. But hairs are completely removed in Fig. 18b when the threshold value is increased beyond 0.85. However, if the value of the threshold is increased above 0.95, the image information content is also lost along with the removed hair, as shown in Fig. 18c.

Fig. 19  PSNR and SSIM plotted for different values of threshold. a PSNR versus threshold. b SSIM versus threshold

Hence, based on the perceived quality of the processed images, the range of threshold between 0.85 and 0.9 is observed to be ideal for dermatological photographs.

The variations of PSNR and SSIM for various values of the threshold are shown in Fig. 19. The PSNR and SSIM metrics are computed between the virtually shaved image and the ground truth image. Both PSNR and SSIM remain consistent for threshold values less than 0.6. But, when the threshold increases beyond 0.6, both parameters exhibit a bell-shaped curve. PSNR has its maximum value when the threshold is between 0.75 and 0.85, and SSIM reaches its maximum values when the threshold is between 0.75 and 0.9. A higher value of PSNR and SSIM justifies that the virtually shaved image and the ground truth image are identical. Hence, from the variations of PSNR and SSIM, it is concluded that the optimum range of threshold for virtual shaving of dermoscopy images is between 0.75 and 0.9.

4.8 Validation of RELM-based segmentation

In this section, different segmentation algorithms are applied to the pre-processed and the non-pre-processed dermoscopy images. The performance of the different algorithms is compared subjectively as well as objectively. The quality metrics DSI, JI, TSC, and IoU [26] are used for the objective comparison. The different segmentation algorithms used are FCM [27], the isolate thresholding method (IT) [28], k-means [29], and RELM.

The output of the different segmentation algorithms without pre-processing is shown in Fig. 20a–f and Fig. 22a–f. Here, the skin lesion is not segmented accurately because of the existence of noise, non-uniform illumination, and hairs. The virtually shaved images with a threshold value of 0.85, along with the manually segmented ground truth and the output of the different segmentation algorithms, are depicted in Fig. 21 and Fig. 23.

Fig. 20  Output images of different segmentation algorithms without pre-processing. a Raw image. b Ground truth image. c FCM. d IT. e k-means. f RELM


Fig. 21  Output images of different segmentation algorithms for the pre-processed image. a Virtually shaved image. b Ground truth image. c FCM. d IT. e k-means. f RELM

Fig. 22  Output images of different segmentation algorithms without pre-processing. a Virtually shaved image. b Ground truth image. c FCM. d IT. e k-means. f RELM

Fig. 23  Output images of different segmentation algorithms for the pre-processed image. a Virtually shaved image. b Ground truth image. c FCM. d IT. e k-means. f RELM

From the output results of FCM, IT, and k-means (Fig. 21c–e and Fig. 23c–e), the algorithms failed to segment the skin lesions properly (Fig. 22). The output of RELM agrees with the manual segmentation and effectively segments the skin lesions in Fig. 21f and Fig. 23f. Thus, based on subjective quality, it can be concluded that the RELM algorithm is able to segment skin lesions efficiently from the dermoscopy images.

The values of JI, DSI, TSC, and IoU calculated for 100 dermoscopy images for the different segmentation algorithms, with and without pre-processing, are tabulated in Tables 12, 13, 14, and 15, respectively. The skin lesions segmented manually by experts are used as the ground truth for the calculation of the different quality metrics.


Table 12  JI score shown by different segmentation schemes


Method   Image 1   Image 2   Image 3   Summary of 100 images (without pre-processing)   Summary of 100 images (with pre-processing)   Difference

FCM 0.8313 0.7666 0.5907 0.7062 ± 0.1024 0.7295 ± 0.1245 0.0233 ± 0.0221
IT 0.8584 0.7726 0.5964 0.7112 ± 0.1126 0.7425 ± 0.1336 0.0313 ± 0.0210
k-means 0.7787 0.6502 0.0085 0.4372 ± 0.3878 0.4791 ± 0.4126 0.0419 ± 0.0248
RELM 0.8729 0.9504 0.6972 0.8971 ± 0.0134 0.9402 ± 0.1297 0.0431 ± 0.1163

Table 13  DSI score shown by different segmentation schemes


Method   Image 1   Image 2   Image 3   Summary of 100 images (without pre-processing)   Summary of 100 images (with pre-processing)   Difference

FCM 0.9079 0.8679 0.7427 0.7865 ± 0.0456 0.8395 ± 0.0862 0.0530 ± 0.0406
IT 0.9238 0.8717 0.7472 0.7989 ± 0.0812 0.8476 ± 0.0907 0.0487 ± 0.0095
k-means 0.8756 0.7880 0.0169 0.4961 ± 0.3214 0.5602 ± 0.4725 0.0641 ± 0.1511
RELM 0.9322 0.9746 0.8216 0.8545 ± 0.0462 0.9895 ± 0.0790 0.1350 ± 0.0328

Table 14  TSC score shown by different segmentation schemes


Method   Image 1   Image 2   Image 3   Summary of 100 images (without pre-processing)   Summary of 100 images (with pre-processing)   Difference

FCM 0.8310 0.7664 0.5909 0.7012 ± 0.1156 0.7294 ± 0.1242 0.0282 ± 0.0086
IT 0.8581 0.7725 0.5965 0.7202 ± 0.1004 0.7424 ± 0.1334 0.0222 ± 0.0330
k-means 0.7784 0.6500 0.0085 0.4241 ± 0.3941 0.4790 ± 0.4125 0.0549 ± 0.0184
RELM 0.9984 0.9536 0.7026 0.9328 ± 0.1361 0.9549 ± 0.1594 0.0221 ± 0.0233

Table 15  IoU score shown by different segmentation schemes


Method   Image 1   Image 2   Image 3   Summary of 100 images (without pre-processing)   Summary of 100 images (with pre-processing)   Difference

FCM 0.8216 0.7543 0.5129 0.6894 ± 0.1210 0.6915 ± 0.1296 0.0021 ± 0.0086
IT 0.7773 0.7667 0.6328 0.7548 ± 0.1107 0.7886 ± 0.1274 0.0338 ± 0.0167
k-means 0.8114 0.7900 0.0122 0.5211 ± 0.2831 0.5390 ± 0.3025 0.0179 ± 0.0194
RELM 0.9115 0.9322 0.9001 0.9127 ± 0.1151 0.9338 ± 0.1372 0.0211 ± 0.0221

A high value of JI, DSI, and TSC indicates that the segmented lesions agree with the ground truth. Also, the IoU score ranges from 0 to 1, where a higher score indicates better segmentation accuracy. The RELM algorithm exhibits the highest value for all four metrics, which indicates that the RELM algorithm has produced more accurate segmentation results compared to the other schemes. The objective evaluation results agree with the inferences drawn by the subjective evaluation regarding the similarity of the skin lesions segmented using the different algorithms with the ground truth.

The RELM algorithm is used to segment the lesions from the pre-processed images. It exhibits JI, DSI, TSC, and IoU scores higher than FCM, IT, and k-means, which shows that the automated segmentation of the RELM algorithm agrees more closely with the manual segmentation of skin lesions in dermoscopy images.
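For reference, the sketch below computes the overlap scores behind Tables 12–15 from a predicted and a ground-truth binary mask. With binary masks the Jaccard Index and IoU coincide, and DSI is the Dice coefficient; TSC is omitted because its formula is not given in this excerpt.

```python
import numpy as np

def overlap_scores(pred, gt):
    """Jaccard/IoU and Dice (DSI) for two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    ji = iou = inter / union if union else 1.0
    total = pred.sum() + gt.sum()
    dsi = 2.0 * inter / total if total else 1.0
    return {"JI": ji, "DSI": dsi, "IoU": iou}
```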
5 Conclusion

In this paper, different enhancement techniques are introduced for the pre-processing of dermoscopy images. Here, the optimization-based framework is tested with the data archive of ISIC (2016).


The NLM filter has been found to preserve very fine details while removing the noise in skin lesion images. Also, the NLM filter exhibits the lowest BRISQUE score compared to the anisotropic diffusion filter and the bilateral filter.
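As a hedged illustration of this de-noising step (parameter values are assumed, not the authors' exact configuration), scikit-image's non-local means implementation can be applied to an RGB dermoscopy image as follows; computing the BRISQUE comparison would require a separate no-reference quality package and is not shown.

    import numpy as np
    from skimage import io, img_as_float
    from skimage.restoration import denoise_nl_means, estimate_sigma

    # Load a dermoscopy image (the path is a placeholder)
    img = img_as_float(io.imread("lesion.jpg"))

    # Estimate the noise level, then apply non-local means de-noising.
    sigma = np.mean(estimate_sigma(img, channel_axis=-1))
    denoised = denoise_nl_means(
        img,
        h=1.15 * sigma,        # filtering strength tied to the estimated noise
        sigma=sigma,
        patch_size=5,          # size of the patches compared
        patch_distance=6,      # radius of the search window
        fast_mode=True,
        channel_axis=-1,       # requires scikit-image >= 0.19
    )
    io.imsave("lesion_nlm.png", (np.clip(denoised, 0, 1) * 255).astype(np.uint8))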
The proposed RICE algorithm for contrast enhancement is found to be superior to the existing methods, including CLAHE, LDR, and CVC, with better SFF, VSI, PCQI, and OCM scores.
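The RICE algorithm itself is described earlier in the paper; for context, the CLAHE baseline against which it is compared can be approximated with OpenCV as in the sketch below. The clip limit and tile size are assumed values, and CLAHE is applied only to the lightness channel so that colour is preserved.

    import cv2

    # Read the image in BGR (OpenCV default); the path is a placeholder
    bgr = cv2.imread("lesion.jpg")

    # Apply CLAHE on the L channel of the Lab colour space only
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)
    enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

    cv2.imwrite("lesion_clahe.png", enhanced)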
The enhancement of dermoscopy images is further improved by eliminating the undesired information due to reflection using the reflection removal method. Also, virtual shaving is included in our framework to remove the hairs without any loss of image content. The values of quality evaluation metrics like PSNR and SSIM are appreciably high for the output images produced by phase congruency–based virtual shaving when the threshold is in the range of 0.85–0.9.
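The PSNR and SSIM values quoted for virtual shaving can be computed between the hair-removed output and a reference image using scikit-image; the sketch below covers only the metric computation, not the phase congruency–based shaving itself, and the file names are placeholders.

    from skimage import io, img_as_float
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    # Reference image and the virtually shaved output (placeholder file names)
    reference = img_as_float(io.imread("lesion_reference.png"))
    shaved = img_as_float(io.imread("lesion_shaved.png"))

    psnr = peak_signal_noise_ratio(reference, shaved, data_range=1.0)
    ssim = structural_similarity(reference, shaved, channel_axis=-1, data_range=1.0)

    print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")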
Overall, the proposed system generates better results than all comparable methods in terms of both qualitative and quantitative aspects. Therefore, the introduced pre-processing framework is more appropriate for low-quality melanoma images. From the scores of quality metrics like the Dice Similarity Index, Jaccard Index, and Total Segmentation Coefficient, it has been concluded that when the pre-processing steps are used in the proper sequence, segmentation using the Regularized Extreme Learning Machine is more efficient than the other algorithms.
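To make the "proper sequence" explicit, the sketch below chains the pre-processing stages in the order used in this work before segmentation. Every stage function is an identity stub standing in for the corresponding method; only the ordering is taken from the paper.

    # Each stage is a placeholder (identity stub) for the method described in the paper;
    # the point is only the order in which the stages are applied before segmentation.
    def denoise_nlm(img):            return img   # NLM de-noising
    def correct_illumination(img):   return img   # illumination correction
    def enhance_rice(img):           return img   # RICE contrast enhancement
    def sharpen(img):                return img   # sharpening
    def remove_reflection(img):      return img   # reflection removal
    def virtual_shave(img):          return img   # phase congruency-based virtual shaving

    def preprocess(image):
        """Apply the pre-processing stages in the sequence used in this framework."""
        for stage in (denoise_nlm, correct_illumination, enhance_rice,
                      sharpen, remove_reflection, virtual_shave):
            image = stage(image)
        return image   # ready for RELM-based lesion segmentation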
Declarations

Conflict of interest The authors declare no competing interests.


Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

D. Jeba Derwin received her BE degree in Electronics and Communication Engineering from Anna University in 2005 and ME degree in Communication Systems from Anna University in 2007. She received her PhD degree from Anna University, Chennai, in 2020. Currently, she is working as Associate Professor in Alliance University, Bangalore. Her current research interest includes biomedical image processing, pattern recognition, deep learning, and remote sensing.

O. Jeba Singh received his BE degree in Electrical and Electronics from Manonmaniam Sundaranar University in 2001 and his ME degree in power systems from Annamalai University in 2004. He received his PhD degree from Anna University, Chennai, in 2019. Currently, he is working as Associate Professor in Alliance University, Bangalore. His current research interest includes power quality, PV systems, image processing, and remote sensing.

B. Priestly Shan received his BE degree in Electronics and Communication Engineering from MS University in 2003 and Masters in Communication Systems from Anna University in 2006. He received his PhD degree from Anna University of Technology, Coimbatore, in 2011. Currently, he is working as Pro-Vice Chancellor at Alliance University, Bangalore. His current research interest includes biomedical image processing, pattern recognition, and deep learning.

K. Uma Maheswari has completed BE (Distinction) in ECE in 1999 in Bharathidasan University, Trichy, Tamil Nadu, India. She did her M.Tech. in Communication Systems in 2006 in NIT, Trichy, Tamil Nadu, India. She received her PhD from Anna University, Chennai, in 2018. She has published 8 papers in international journals, 1 book chapter published in IntechOpen, and a patent published on title "Remotely controlled target sheet holding apparatus." She has teaching experience of 22 years in various engineering colleges, and professional membership as life member in ISTE and BES. She is currently working in SRM TRP Engineering College, Trichy, Tamil Nadu, India.

D. Lavanya has completed BE in ECE in 2013 in Anna University, Tamil Nadu, India. She did her ME in Communication Systems in 2015 in Anna University, Tamil Nadu, India. She has teaching experience of 4 years in engineering colleges. She is currently working as Assistant Professor in SRM TRP Engineering College, Trichy, Tamil Nadu, India. Her current research interest includes biomedical image processing and deep learning.
