Research Paper
www.irjmets.com @International Research Journal of Modernization in Engineering, Technology and Science
e-ISSN: 2582-5208
International Research Journal of Modernization in Engineering Technology and Science
( Peer-Reviewed, Open Access, Fully Refereed International Journal )
Volume:06/Issue:05/May-2024 Impact Factor- 7.868 www.irjmets.com
Nevertheless, existing deep learning techniques [3–8] have not considered this clinical practice. By taking "contextual" images from the same patient into account, a deep learning algorithm can better identify which images are indicative of melanoma and thereby increase diagnostic accuracy. A more precise classifier can, in turn, better support the work of dermatology clinics. Drawing on these considerations, I propose a novel melanoma detection framework that leverages EfficientNet [9]. In contrast to the well-known VGG [10] and ResNet [11], EfficientNet employs a base network found by neural architecture search that is simple to use yet scales depth, width, and image resolution jointly through a single compound coefficient. This significantly enhances its capacity to extract richer, more focused features for melanoma identification. To evaluate the proposed scheme, I carried out multiple experiments comparing its performance against existing networks on the ISIC 2020 Challenge Dataset [13], drawn from the enormous ISIC repository, the largest publicly accessible collection of its kind. It contains quality-controlled dermoscopic images of skin lesions contributed by several medical research institutes through the International Skin Imaging Collaboration (ISIC). The experimental results show that my proposed model achieves an AUC-ROC score of 0.917, an improvement of nearly 0.1 over the 0.819 of the VGG-based model.
These outcomes show that my network can significantly improve the diagnosis of melanoma skin cancer, and such networks can support computer-aided diagnosis for cancer detection and advance clinical dermatology. My contributions are twofold. First, rather than painstakingly constructing new models from scratch, I re-examined and fine-tuned existing networks to improve efficiency and detection. Second, instead of training from random initialisation, I initialise the model with weights pretrained on the much larger ImageNet [14] dataset and then fine-tune it for melanoma classification; this transfer learning helps the model learn better representations and improves training. Section 2 gives an overview of the most popular classification schemes and recent studies on skin cancer. Section 3 summarises the proposed deep learning model and explains why it performs more accurately. Section 4 presents the experiments comparing the proposed model against the original model. The final section summarises my work and outlines potential directions for further research.
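The transfer-learning step described above, initialising from ImageNet-pretrained weights and fine-tuning for melanoma, can be sketched in Keras. The frozen backbone, head layers, and learning rate below are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained backbone for binary
# melanoma classification. Head design and hyperparameters are assumptions.
import tensorflow as tf

def build_finetune_model(input_shape=(224, 224, 3), weights="imagenet"):
    # Load the pretrained convolutional base without its classifier head.
    base = tf.keras.applications.VGG16(
        include_top=False, weights=weights, input_shape=input_shape)
    base.trainable = False  # freeze pretrained features for the first stage

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # melanoma probability
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```

After the new head converges, the base can be unfrozen and the whole network fine-tuned at a lower learning rate.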
II. OBJECTIVE
The main goal of this study is to create and assess the Melanoma Diagnosis Tool, a web-based diagnostic tool that uses sophisticated machine learning algorithms, specifically convolutional neural networks (CNNs), to accurately detect and identify melanoma. The study aims to address the difficulties of conventional diagnostic techniques, which depend largely on subjective visual assessment and therefore produce inconsistent diagnoses and potential treatment delays. In particular, the goals are:
1. Accuracy and Reliability: to achieve high accuracy in differentiating melanoma from benign lesions such as seborrhoeic keratoses and nevi, by training and optimising CNN models on the comprehensive dataset from the 2017 ISIC Challenge.
2. User-Friendly Interface: to design and implement an intuitive web interface that enables patients and medical professionals to quickly and easily upload dermatoscopic images and obtain diagnostic results.
3. Early Detection and Timely Intervention: to improve patient outcomes by enabling prompt intervention and early identification of melanoma, especially in areas with restricted access to specialised dermatological expertise.
4. Security and Compliance: to guarantee that the Melanoma Diagnosis Tool complies with strict security and privacy guidelines, safeguarding patient information and meeting legal requirements such as HIPAA and GDPR.
5. Scalability and Accessibility: to create a broadly applicable, scalable solution that offers sophisticated diagnostic capabilities to a global user base, including underserved and remote regions.
III. IMPLEMENTATION
Dataset
In the first module, we build a system to accept input data for training and testing. The datasets are placed in the default folder, which contains 3297 melanoma images.
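A minimal sketch of this ingestion step, collecting the images from a folder and splitting them for training and testing; the folder layout, file extension, and 80/20 ratio are assumptions for illustration, not details from the paper.

```python
import random
from pathlib import Path

def load_and_split(data_dir, train_fraction=0.8, seed=42):
    """Collect image paths from data_dir and return (train, test) lists."""
    paths = sorted(Path(data_dir).glob("*.jpg"))
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    rng.shuffle(paths)
    cut = int(len(paths) * train_fraction)
    return paths[:cut], paths[cut:]
```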
Figure 2
Configuration:
This table enumerates several VGG architectures. It is evident that VGG-16 exists in two versions (C and D). There is little difference between them, except that version D uses a (3, 3) filter-size convolution instead of a (1, 1) one. The two versions have 134 million and 138 million parameters, respectively.
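Those parameter totals can be sanity-checked layer by layer: a convolution with k x k filters, c_in input channels, and c_out filters has (k*k*c_in + 1)*c_out weights and biases, and a fully connected layer with n_in inputs and n_out outputs has (n_in + 1)*n_out. A small sketch:

```python
def conv_params(k, c_in, c_out):
    """k x k convolution: kernel weights plus one bias per output channel."""
    return (k * k * c_in + 1) * c_out

def dense_params(n_in, n_out):
    """Fully connected layer: one weight per input-output pair plus biases."""
    return (n_in + 1) * n_out

# First VGG-16 convolution: 3x3 kernels over 3 input channels, 64 filters.
print(conv_params(3, 3, 64))             # 1792
# The first fully connected layer dominates the ~138M total:
print(dense_params(7 * 7 * 512, 4096))   # 102764544
```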
Building the Model:
Convolutional neural networks, or CNNs, differ significantly from regular neural networks in that they are very good at recognising images. After an image is supplied, a CNN scans it several times to find distinct characteristics. The convolution operation takes two parameters: stride and padding. As seen in the figure, the first convolution yields a new output, displayed in the second column. Each frame of the scanned image carries information about particular features: where those features are dominant, the resulting frames hold higher values; where features are absent or weak, the values are lower. For my project I chose the classic VGG-16 model, which comprises thirteen convolutional layers and three fully connected layers. Its operation is loosely comparable to that of the human brain. Below is an example that shows how features are discovered in different CNN layers, particularly for applications related to facial recognition.
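The stride and padding parameters mentioned above fix the spatial size of each output frame; the standard formula is floor((n + 2p - k) / s) + 1 for an n x n input, k x k filter, stride s, and padding p:

```python
def conv_output_size(n, k, stride=1, padding=0):
    """Spatial size after convolving an n x n input with a k x k filter."""
    return (n + 2 * padding - k) // stride + 1

# A 224x224 image with VGG's 3x3 filters and padding 1 keeps its size:
print(conv_output_size(224, 3, stride=1, padding=1))  # 224
# A 2x2 pooling step with stride 2 halves it:
print(conv_output_size(224, 2, stride=2, padding=0))  # 112
```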
One may wonder which characteristics are identified. In a CNN trained from scratch, the early features are random. Through the constant adjustment of neuron weights during training, the CNN progressively learns features that serve the initial objective, such as correctly classifying the training images. The output size is reduced using a method called pooling, also known as subsampling. To help the network capture non-linear patterns, a non-linear activation function, ReLU, is applied to the output of each convolution. After the final convolution stage, the resulting feature maps are flattened into a one-dimensional vector of neurons, which is then fed into a fully connected neural network. Lastly, for classification tasks, a softmax layer transforms the model's outputs into a probability for each category, indicating the likelihood of each prediction.
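The final softmax step can be written in a few lines; subtracting the maximum logit before exponentiating is the usual numerical-stability trick:

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)  # shift by the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])  # the largest logit gets the highest probability
```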
Apply the model and plot the graphs for accuracy and loss:
We compile the model and train it with the fit function over three batches, then plot the accuracy and loss graphs. The average training accuracy was 89.4%, and the average validation accuracy was 89.8%.
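A sketch of how those averages could be read off Keras' training history; the per-round values below are illustrative stand-ins chosen to reproduce the reported 89.4% / 89.8% averages, not the actual training log:

```python
# Stand-in for the dict returned in history.history by model.fit().
history = {
    "accuracy":     [0.861, 0.902, 0.919],
    "val_accuracy": [0.885, 0.899, 0.910],
}

def average(values):
    return sum(values) / len(values)

print(f"mean training accuracy:   {average(history['accuracy']):.1%}")    # 89.4%
print(f"mean validation accuracy: {average(history['val_accuracy']):.1%}") # 89.8%
```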
Accuracy on test set:
On the test set, we obtained an accuracy score of 89.4%.
Saving the Trained Model:
Once you are confident enough to move the model into a production-ready environment, the first step is to save the trained and tested model to a .h5 file (via the model's save method) or to a .pkl file using a library such as pickle. Verify that pickle is available in your environment. The module is then imported and the model dumped to file.
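A minimal sketch of the pickle route with a stand-in object; for a Keras model, `model.save("model.h5")` writes the .h5 file directly:

```python
import os
import pickle
import tempfile

# Stand-in for a trained model; pickle works for any picklable Python object.
model = {"weights": [0.1, 0.2, 0.3], "threshold": 0.5}

path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)          # serialise to disk

with open(path, "rb") as f:
    restored = pickle.load(f)      # load it back in production

print(restored == model)  # True: the round trip preserves the model
```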
V. INPUT DESIGN AND OUTPUT DESIGN
INPUT DESIGN:
As the vital link between the data and the user, input design encompasses the specifications and procedures needed for data preparation, together with the steps required to convert transaction data into a form that can be processed. This can be done either through human entry or by having a computer read data from printed or written documents. Strategic planning in input design concentrates on reducing input costs, controlling errors, avoiding delays, eliminating unnecessary steps, and keeping the workflow simple.
The goal of the design is to protect privacy while providing security and usability. Among the design factors are:
➢ Determining which data sources to use for input.
➢ Selecting the appropriate encoding or organisation for the data.
➢ Engaging with operational staff to gather their suggestions.
➢ Formulating procedures for input validation and delineating actions to rectify problems as they arise.
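One of the validation procedures above could look like the following sketch, checking an uploaded image before processing; the allowed formats and the 10 MB cap are illustrative assumptions, not requirements from the paper:

```python
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png"}
MAX_BYTES = 10 * 1024 * 1024  # 10 MB upload cap (illustrative)

def validate_upload(filename, size_bytes):
    """Return (ok, message) so callers can surface errors to the user."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext not in ALLOWED_EXTENSIONS:
        return False, f"unsupported file type: {ext or 'none'}"
    if size_bytes > MAX_BYTES:
        return False, "file exceeds the 10 MB limit"
    return True, "ok"

print(validate_upload("lesion.jpg", 2_000_000))  # accepted
print(validate_upload("notes.txt", 1_000))       # rejected: wrong type
```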
Figure 3
X. PERFORMANCE ANALYSIS
Figure 4
XI. RESULTS
The study's findings demonstrate how well the VGG16 architecture can extract intricate details from dermoscopic images of skin lesions. According to our experimental results, the proposed network tends to prioritise the relevant regions that carry melanoma information, which improves classification accuracy over other approaches. By using a deeper, wider, and higher-resolution network, we obtain notable gains in classification accuracy. These findings highlight the potential of deep learning methods to improve early melanoma detection and support better dermatological diagnostic procedures, and they open the door to the future development of more reliable and accurate computer-aided diagnosis methods for skin cancer identification.
In terms of accuracy, our model achieves 89.4%.
XII. CONCLUSION
I systematically review the history and current status of melanoma detection in this work. Based on my observations, I analyse how well the VGG16 architecture can extract detailed information from clinical dermoscopic images of skin lesions. Owing to its deeper, wider, and higher-resolution design, the experimental results show that the proposed network tends to prioritise the relevant regions holding melanoma information. As a result, the network outperforms other widely used techniques in terms of classification accuracy.
Future work:
Going forward, I plan to concentrate on two main research directions. First, I want to better understand the connections between melanoma and other skin cancers so that the proposed network can be extended to a wider range of skin cancer types. Second, I intend to investigate the underlying causes of melanoma as well as its many presentations. By incorporating additional medical knowledge derived from "contextual" images, my goal is to build a stronger, more robust network.
XIII. REFERENCES
[1] PDQ Adult Treatment Editorial Board. Intraocular (Uveal) Melanoma Treatment (PDQ): Health Professional Version[M]. PDQ Cancer Information Summaries, 2002.
[2] Wang H, Naghavi M, Allen C, et al. (GBD 2015 Mortality and Causes of Death Collaborators). Global, regional, and national life expectancy, all-cause mortality, and cause-specific mortality for 249 causes of death, 1980–2015: a systematic analysis for the Global Burden of Disease Study 2015[J]. The Lancet, 2016, 388(10053): 1459–1544.
[3] Hu Z, Tang J, Wang Z, et al. Deep learning for image-based cancer detection and diagnosis: a survey[J]. Pattern Recognition, 2018, 83: 134–149.
[4] Pomponiu V, Nejati H, Cheung N M. Deepmole: Deep neural networks for skin mole lesion classification[C]//2016 IEEE International Conference on Image Processing (ICIP). IEEE, 2016: 2623–2627.
[5] Esteva A, Kuprel B, Novoa R A, et al. Dermatologist-level classification of skin cancer with deep neural networks[J]. Nature, 2017, 542(7639): 115–118.
[6] Mahbod A, Schaefer G, Wang C, et al. Skin lesion classification using hybrid deep neural networks[C]//ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019: 1229–1233.
[7] Masood A, Al-Jumaily A, Anam K. Self-supervised learning model for skin cancer diagnosis[C]//2015 7th International IEEE/EMBS Conference on Neural Engineering (NER). IEEE, 2015: 1012–1015.
[8] Majtner T, Yildirim-Yayilgan S, Hardeberg J Y. Combining deep learning and hand-crafted features for skin lesion classification[C]//2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA). IEEE, 2016: 1–6.
[9] Tan M, Le Q V. EfficientNet: Rethinking model scaling for convolutional neural networks[J]. arXiv preprint arXiv:1905.11946, 2019.
[10] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.
[11] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770–778.
[12] Pan S J, Yang Q. A survey on transfer learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2010, 22(10): 1345–1359.
[13] Rotemberg V, Kurtansky N, Betz-Stablein B, et al. A patient-centric dataset of images and metadata for identifying melanomas using clinical context[J]. arXiv preprint arXiv:2008.07360, 2020.
[14] Deng J, Dong W, Socher R, et al. ImageNet: A large-scale hierarchical image database[C]//2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009: 248–255.
[15] Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks[C]//Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. 2010: 249–256.
[16] Tan M, Chen B, Pang R, et al. MnasNet: Platform-aware neural architecture search for mobile[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 2820–2828.