
e-ISSN: 2582-5208

International Research Journal of Modernization in Engineering Technology and Science


( Peer-Reviewed, Open Access, Fully Refereed International Journal )
Volume:06/Issue:05/May-2024 Impact Factor- 7.868 www.irjmets.com
WEB BASED MELANOMA DIAGNOSIS TOOL
Asst. Prof. Pitamber Adhikari*1, Rahul Agarwal*2, Raj Sharma*3, Rohit Kumar Verma*4
*1Asst. Professor, Department of Information Technology, N.I.E.T, Greater Noida, Uttar Pradesh, India
*2,3,4IT, Department of Information Technology, N.I.E.T, Greater Noida, Uttar Pradesh, India
ABSTRACT
This paper presents a web-based diagnostic tool created to tackle the vital problem of correctly identifying melanoma, the deadliest type of skin cancer. The Melanoma Diagnosis Tool is an advanced machine learning system that uses convolutional neural networks (CNNs) to distinguish melanoma from benign lesions such as seborrhoeic keratoses and nevi. It was trained on a large dataset from the 2017 ISIC Challenge. Traditional diagnostic techniques rely heavily on subjective visual evaluation and frequently result in delayed detection and incorrect diagnoses; this tool seeks to address these shortcomings. Its user-friendly interface provides efficient and reliable skin lesion analysis, making it easy for healthcare professionals and patients to assess skin lesions and take timely action. By increasing diagnostic accuracy and accessibility, the Melanoma Diagnosis Tool can greatly improve patient outcomes, notably in areas where access to specialised dermatological expertise is restricted.
Keywords: Melanoma Diagnosis, Skin Cancer Disease, Machine Learning, Nevi, Benign Lesions, Dermatology.
I. INTRODUCTION
Melanoma is a highly aggressive form of skin cancer that is responsible for most skin cancer-related
deaths globally. Effective treatment and better patient outcomes depend on early detection and precise
diagnosis. However, the subjective visual evaluation of dermatologists is a major component of traditional
diagnostic procedures for melanoma, which can result in inconsistent diagnosis and possible treatment delays.
The difficulty of correctly differentiating benign skin lesions like seborrhoeic keratoses and nevi highlights the
requirement for sophisticated diagnostic instruments that can offer fast, dependable, and impartial analysis.
The use of artificial intelligence (AI) and machine learning (ML) in medical diagnostics has shown encouraging potential in recent years to improve the efficiency and accuracy of disease detection. Convolutional neural networks (CNNs), a class of deep learning models that excel at image recognition tasks, have shown remarkable performance in analysing medical images. An extensive collection of dermatoscopic images from the International Skin Imaging Collaboration (ISIC) Challenge now serves as a standard benchmark for developing and assessing automated skin lesion analysis systems. The Melanoma Diagnosis Tool is a web-based diagnostic tool designed to take advantage of these machine learning developments for early melanoma diagnosis. Using CNNs trained on the large ISIC 2017 Challenge dataset, it seeks to reliably differentiate benign lesions from melanoma, offering a useful tool for patients and medical professionals alike. Because of the tool's intuitive design, advanced diagnostic capabilities can be used even in areas without specialised dermatological knowledge. It is intuitive to frame melanoma diagnosis as a classification problem: determining whether the tumour shown in a dermoscopic image is benign or malignant. Figure 1 shows some example skin lesion images.

Figure 1
www.irjmets.com @International Research Journal of Modernization in Engineering, Technology and Science
[6608]
Nevertheless, current deep learning techniques [3-8] have not considered this clinical context. By taking "contextual" images from the same patient into account, deep learning algorithms may better identify which images are indicative of melanoma and so increase diagnostic accuracy, making the classifier more precise and more useful to dermatology clinics. Drawing on these considerations, we put forth a melanoma detection framework that leverages EfficientNet [9]. In contrast to the well-known VGG [10] and ResNet [11], EfficientNet employs a neural-architecture-search base network that is simple to use and scales depth, width, and image resolution together through a single compound coefficient. This significantly enhances its capacity to identify richer, more focused features for melanoma identification. We carried out multiple experiments comparing its performance against existing networks on the ISIC 2020 Challenge dataset [13], drawn from the large ISIC repository: the largest publicly accessible collection of quality-controlled dermoscopic images of skin lesions, contributed by various medical research institutes and the International Skin Imaging Collaboration (ISIC). According to the experimental results, the proposed model achieves an AUC-ROC score of 0.917, exceeding the 0.819 of the VGG-based model.
These outcomes show that the network can significantly improve the diagnosis of melanoma skin cancer, assist computer-aided diagnostics for cancer, and advance clinical dermatology. Our contributions are twofold. First, to improve efficiency, we carefully reconstruct the model rather than reusing prior networks ad hoc. Second, we initialise training with weights obtained from the larger ImageNet [14] dataset and then fine-tune the model for melanoma detection; this transfer learning helps the model learn better representations and improves training. The remainder of the paper first reviews the most popular classification schemes and recent studies on skin cancer, then summarises the proposed deep learning model and explains why it performs more accurately, then demonstrates how the proposed model was tested against the original model, and finally concludes with a synopsis of this work along with some potential directions for further research.
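The AUC-ROC metric cited above can be computed with scikit-learn's `roc_auc_score`. The labels and scores below are made-up toy values for illustration, not results from this paper:

```python
# Hypothetical illustration of computing AUC-ROC with scikit-learn.
# The labels and scores are toy examples, not the paper's data.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]            # 0 = benign, 1 = melanoma (toy labels)
y_score = [0.1, 0.4, 0.35, 0.8]  # hypothetical model probabilities

auc = roc_auc_score(y_true, y_score)
print(f"AUC-ROC: {auc:.3f}")  # -> AUC-ROC: 0.750
```

A score of 1.0 means perfect ranking of melanoma above benign cases; 0.5 means chance-level ranking.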
II. OBJECTIVE
This study paper's main goal is to create and assess Melanoma Diagnosis Tool, a web-based diagnostic tool that
uses sophisticated machine learning algorithms—specifically, convolutional neural networks (CNNs)—to
accurately diagnose and identify melanoma. The objective of this study is to tackle the difficulties presented by
conventional diagnostic techniques, which mostly depend on subjective visual evaluations, resulting in
inconsistent diagnoses and possible therapeutic setbacks. In particular, the goals are:
1. Accuracy and Reliability: To achieve high accuracy in differentiating melanoma from benign lesions such as seborrhoeic keratoses and nevi, by training and optimising CNN models on the comprehensive dataset from the 2017 ISIC Challenge.
2. User-Friendly Interface: To create and put into use an intuitive web interface that enables patients and medical professionals to quickly and simply upload dermatoscopic images and obtain diagnostic results.
3. Early Detection and Timely Intervention: To improve patient outcomes by enabling prompt intervention and early identification of melanoma, especially in areas with restricted access to specialised dermatological knowledge.
4. Security and Compliance: To guarantee that the Melanoma Diagnosis Tool complies with strict security and privacy guidelines, safeguarding patient information and meeting legal requirements such as HIPAA and GDPR.
5. Scalability and Accessibility: To create a broadly applicable, scalable solution that offers sophisticated diagnostic capabilities to a worldwide user base, including underserved and isolated regions.
III. IMPLEMENTATION
Dataset
In the first module, we build a system to accept input data for training and testing. The datasets are placed in the default folder. This database contains 3297 skin lesion images.

Import the Required libraries:
We use Python for this project. The first step is to import the required libraries: pandas, NumPy, matplotlib, and TensorFlow with Keras to build the models, scikit-learn to split the training and testing data, and PIL to convert images to numerical arrays.
Retrieving the pictures:
We obtain the images along with their labels. Each image is then resized to (224, 224), since the analysis requires all images to have the same dimensions, and converted into a NumPy array.
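The resize-and-convert step can be sketched with PIL and NumPy. Here a stand-in image is generated in memory; in the real pipeline it would be read from the dataset folder:

```python
# Minimal sketch of the preprocessing step: resize with PIL, convert to
# a NumPy array. The input image is synthetic, standing in for a
# dermoscopic photo loaded from disk.
import numpy as np
from PIL import Image

img = Image.new("RGB", (600, 450), color=(180, 120, 90))  # stand-in lesion photo
img = img.resize((224, 224))   # all images must share one size for the CNN
arr = np.asarray(img)          # convert to a NumPy array of pixel values

print(arr.shape)  # (224, 224, 3): height, width, RGB channels
```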
Splitting the dataset:
Divide the dataset into training and test sets: 80% of the data is used for training and 20% for testing.
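The 80/20 split described above is typically done with scikit-learn's `train_test_split`. The arrays below are placeholders standing in for the image data and labels:

```python
# Sketch of the 80/20 train/test split with scikit-learn.
# X and y are dummy stand-ins for the image arrays and labels.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.zeros((100, 224, 224, 3), dtype=np.uint8)  # 100 dummy images
y = np.array([0, 1] * 50)                         # dummy benign/malignant labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

print(len(X_train), len(X_test))  # 80 20
```

`stratify=y` keeps the benign/malignant ratio the same in both splits, which matters for imbalanced medical datasets.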
IV. MODELING AND ANALYSIS
VGG-16 | CNN model:
The Visual Geometry Group (VGG) at the University of Oxford developed the VGG-16 Convolutional Neural
Network (CNN) model, a very prominent deep learning architecture renowned for its simplicity and efficacy in
picture categorization tasks. The 16 weight layers that make up VGG-16's architecture are composed of 3 fully
connected layers, 13 convolutional layers, and 5 max-pooling layers that are positioned in between the
convolutional layers. To capture fine-grained characteristics, the model uses padding and small 3x3
convolutional filters with a stride of 1. This preserves the spatial dimensions of the input images. A ReLU
activation function follows each convolutional layer, adding non-linearity to the model. The spatial dimensions
are gradually reduced via max-pooling layers with a 2x2 filter and a stride of 2, which aggregates features and
offers translation invariance. By transferring the high-level information that the convolutional layers extracted
to the output classes, the final fully connected layers function as a classifier. This architecture is ideally suited
for tasks like diagnosing melanoma in dermatoscopic images because of its depth and design, which allow it to
achieve excellent performance in image recognition tasks while preserving computing efficiency.
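The layer pattern described above (stacked 3x3 convolutions with stride 1 and padding, ReLU activations, 2x2 max-pooling, and a fully connected softmax head) can be sketched in Keras. This is a shortened, hypothetical illustration with only a few of VGG-16's 13 convolutional layers; the layer widths are assumptions, not the paper's exact configuration:

```python
# Shortened VGG-style sketch in Keras: 3x3 convs (stride 1, "same"
# padding preserves spatial size), ReLU, 2x2 max-pooling, softmax head.
# Not the full 16-layer VGG-16 -- an illustrative fragment only.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2), strides=2),   # halves spatial dimensions
    layers.Conv2D(128, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2), strides=2),
    layers.Flatten(),                         # feature maps -> 1-D vector
    layers.Dense(256, activation="relu"),     # fully connected classifier head
    layers.Dense(2, activation="softmax"),    # benign vs. malignant probabilities
])

model.summary()
```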

Figure 2
Configuration:
This table enumerates several VGG architectures. VGG-16 exists in two versions (C and D). They differ in only one respect: configuration D uses a (3, 3) convolution where configuration C uses a (1, 1) one. The two versions have 134 million and 138 million parameters, respectively.
Building the Model:
Convolutional neural networks (CNNs) differ significantly from regular neural networks in that they are very good at recognising images. Once an image is supplied, a CNN scans it several times in order to find distinct characteristics. The convolution procedure has two parameters: stride and padding. The first convolution yields a new output, displayed in the second column; each frame of the scanned image contains information about certain features. Where characteristics are dominant, the resulting frames have higher values; where features are absent or minimal, the values are lower. We decided to use the classic VGG-16 model for this project. Its operation is loosely comparable to that of the human visual system; a common illustration shows how different CNN layers discover increasingly abstract features, particularly in applications related to facial recognition.
One may wonder what typical characteristics are identified. The early filters of a CNN built from scratch are random. Through the constant adjustments made to the neurons' weights during training, the CNN progressively detects features that fulfil the objective, such as correctly classifying the training images. The output size is reduced using a method called pooling, also known as subsampling. To facilitate the identification of non-linear patterns, a non-linear function, ReLU, is applied to the output after each convolution. After the convolution stages, the final set of feature maps is flattened into a one-dimensional vector of neurons, which is then fed into a fully connected neural network. Lastly, for classification tasks, a softmax layer is employed, transforming the model's outputs into probabilities for every category.
Apply the model and plot the graphs for accuracy and loss:
We compile the model and train it with the fit function, using three batches. The accuracy and loss graphs are then plotted. The average training accuracy was 89.4%, and the average validation accuracy was 89.8%.
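The accuracy and loss curves can be plotted with matplotlib. The `history` dict below is hypothetical (in practice it comes from `model.fit(...).history`); the values are made up for illustration:

```python
# Sketch of plotting training curves. `history` is a hypothetical
# stand-in for the dict returned by Keras model.fit(...).history.
import matplotlib
matplotlib.use("Agg")  # render off-screen (no display needed)
import matplotlib.pyplot as plt

history = {                      # made-up values for illustration only
    "accuracy":     [0.71, 0.84, 0.894],
    "val_accuracy": [0.74, 0.86, 0.898],
    "loss":         [0.61, 0.38, 0.27],
    "val_loss":     [0.55, 0.34, 0.26],
}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(history["accuracy"], label="train")
ax1.plot(history["val_accuracy"], label="validation")
ax1.set_title("Accuracy")
ax1.legend()
ax2.plot(history["loss"], label="train")
ax2.plot(history["val_loss"], label="validation")
ax2.set_title("Loss")
ax2.legend()
fig.savefig("training_curves.png")  # write the figure to disk
```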
Accuracy on test set:
On the test set, we obtained an accuracy score of 89.4%.
Saving the Trained Model:
Once you are confident enough to take the trained and tested model into a production-ready environment, save it to disk: either as a .h5 file with Keras's model-saving facilities, or as a .pkl file using the pickle library. Verify that pickle is available in your environment, then import the module and dump the model to a file.
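The pickle round trip can be sketched as follows. A plain dict stands in for the trained model so the sketch runs without TensorFlow; for a real Keras model one would call `model.save("model.h5")` instead:

```python
# Sketch of saving and reloading a model with pickle. A dict stands in
# for the trained model object; Keras models would use model.save(...).
import pickle

trained_model = {"weights": [0.1, 0.2, 0.3], "accuracy": 0.894}  # stand-in

with open("model.pkl", "wb") as f:   # dump the model to disk
    pickle.dump(trained_model, f)

with open("model.pkl", "rb") as f:   # reload it later, e.g. in the web app
    restored = pickle.load(f)

print(restored == trained_model)  # True: the round trip preserves the object
```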
V. INPUT DESIGN AND OUTPUT DESIGN
INPUT DESIGN:
As the vital link between data and users, input design covers the creation and processes needed for data preparation, as well as the steps required to convert data into a format that can be processed. This can be done either by human input or by computer analysis of data from printed or written documents. In input design, strategic planning concentrates on reducing input costs, managing errors, averting delays, eliminating pointless steps, and optimising the workflow.
The goal of the design is to protect privacy while providing security and usability. Among the design factors are:
➢ Determining which data sources to use for input.
➢ Selecting the appropriate encoding or organisation for the data.
➢ Starting conversations to direct operational staff in offering suggestions.
➢ Formulating procedures for input validation and delineating actions to rectify problems as they arise.

OUTPUT DESIGN:
Good outputs satisfy the needs of the end user and display information clearly. Any system's outputs are how processing results are communicated to users and other systems. Decisions about how data is manipulated during production are necessary to meet complex data requirements and urgent needs. Users can immediately benefit from this information and find it important. Effective and efficient outputs improve the relationship between the system and its users and support sound decision-making.
1. Computer-generated outputs should have a unified design that satisfies all objectives and ensures that every component is clearly and easily understood by users. Specialised tools may be needed to study and design these outputs to fulfil specific needs.
2. Select the informational presentation techniques.
3. Generate reports, papers, or other formats containing data generated by the system.
An information system's output form should accomplish one or more of the following goals:
❖ Share details regarding current affairs, historical accomplishments, and anticipated future events.
❖ Draw attention to significant occasions, chances, issues, or cautions.
❖ Set off a reaction, Verify an activity.
VI. LITERATURE SURVEY
1) Deep learning for image-based cancer detection and diagnosis − A survey
AUTHORS: Hu Z, Tang J, Wang Z, et al.
In order to assess the use of deep learning in cancer diagnosis and detection, we have included a summary of
the field's advancements in this article. We start our survey with a summary of deep learning and common
architectures for cancer diagnosis and detection. In particular, we study four popular deep learning techniques:
deep belief networks, autoencoders, fully connected networks, and convolutional neural networks. Next, we
examine studies that use deep learning to diagnose and predict cancer, some of which are categorised by kind
of cancer. Lastly, we provide an overview of current developments in the use of deep learning to cancer
diagnosis and detection, as well as possible future paths for the subject.
2) Deepmole: Deep neural networks for skin mole lesion classification
AUTHORS: Pomponiu V, Nejati H, Cheung N M.
These days, a growing number of people worldwide are afflicted with skin cancer as a result of excessive sun
exposure. The most popular technique for determining whether a skin mole is cancerous is for a dermatologist
to diagnose it using specialised medical techniques. Thanks to advancements in image sensors and processing
power, computer-assisted diagnosis based on skin nevus imaging is another approach that is becoming more
popular. But these methods frequently depend on manually designed elements that are hard to generalise and
perform poorly in novel scenarios. In this work, we suggest a technique for diagnosing skin cancer by
employing pre-trained deep neural networks (DNNs) to extract a set of features. Clinical data experiments
show that DNN-based features lead state-of-the-art approaches in classification performance.
3) Dermatologist-level classification of skin cancer with deep neural networks
AUTHORS: Esteva A, Kuprel B, Novoa R A, et al.
The most prevalent type of cancer in humans is skin cancer, which is typically identified by visual inspection.
Dermoscopic evaluation, biopsy, and histological analysis are frequently performed after the initial visual
inspection. Because skin lesions vary so much in appearance, automatically classifying them from photographs
presents a major challenge. Deep convolutional neural networks (CNNs) have shown impressive performance on both general and highly variable tasks across many fine-grained object categories. In this work, we demonstrate end-to-end training of a CNN from images for classifying skin diseases, using only pixels and associated disease labels as inputs. Our CNN is trained on a dataset of 129,450 clinical images covering 2,032 distinct diseases, far larger than previous datasets.
We assessed the CNN's performance on two critical binary classification tasks (differentiating keratinocyte carcinomas from benign seborrhoeic keratoses, and malignant melanomas from benign nevi) in trials involving 21 board-certified dermatologists. The former represents the most common skin cancer, whereas the latter represents the deadliest. A comparative study showed that in both tasks the CNN matched the dermatologists' performance. Integrating deep neural networks with mobile devices could make dermatological expertise more widely available outside of clinical settings. With a projected 6.3 billion smartphone subscriptions worldwide by 2021, smartphones may provide widespread, affordable access to vital diagnostic capabilities.
4) Skin lesion classification using hybrid deep neural networks
AUTHORS: Mahbod A, Schaefer G, Wang C, et al
An important type of cancer that has become more common in recent decades is skin cancer. Giving the right
treatment requires a precise diagnosis of skin lesions to distinguish between benign and malignant disorders.
While there are several computer-based techniques for categorising skin lesions, convolutional neural
networks (CNNs) have outperformed more conventional techniques. In this work, we aim to improve classification by using deep features extracted from well-known CNN architectures at various levels of abstraction. Specifically, we use three well-known deep neural networks: AlexNet, VGG16, and ResNet-18. A support vector machine (SVM) classifier is trained on the extracted features, and the classifiers' results are then combined to produce the final classification.
Our suggested approach delivers exceptional classification performance, as demonstrated by evaluation on 150
validation photos from the ISIC 2017 Classification Challenge. Melanoma classification yields an area under the
receiver operating characteristic curve (AUC) of 83.83%, whereas seborrhoeic keratosis classification yields an
AUC of 97.55%.
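The hybrid idea surveyed above (deep features feeding an SVM) can be sketched with scikit-learn. The feature vectors below are synthetic stand-ins for CNN activations, so the sketch runs without any deep learning framework:

```python
# Hedged sketch of the deep-features + SVM hybrid: synthetic vectors
# stand in for features extracted from a pre-trained CNN, and an SVM
# is trained on them with scikit-learn.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy "deep features": benign clustered near 0, malignant near 3.
benign = rng.normal(0.0, 0.5, size=(40, 128))
malignant = rng.normal(3.0, 0.5, size=(40, 128))
X = np.vstack([benign, malignant])
y = np.array([0] * 40 + [1] * 40)   # 0 = benign, 1 = malignant

clf = SVC(kernel="rbf", probability=True).fit(X, y)
print(clf.score(X, y))  # training accuracy on this easily separable toy data
```

In the surveyed method, features from several networks would each train their own classifier, whose outputs are then fused.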
5) Self-supervised learning model for skin cancer diagnosis
AUTHORS: Masood A, Al-Jumaily A, Anam K.
Many classification schemes have been developed in the actively investigated field of automated skin cancer diagnosis. However, classification models trained on insufficiently labelled data can substantially degrade the diagnostic procedure if they lack self-advising and semi-supervised capabilities. This research presents a semi-supervised, self-advised learning model for melanoma recognition from dermoscopic images. A deep belief architecture is built using both labelled and unlabelled data, and the separation of the labelled data is maximised through an exponential loss function during fine-tuning. Concurrently, a self-advised support vector machine (SA-SVM) technique is used to enhance classification outcomes by reducing the impact of incorrectly categorised data. Deep networks and SA-SVMs based on polynomial and radial basis function kernels are trained on samples chosen with a bootstrap technique, to improve the redundancy and generalisation of the model. The outcomes are then combined using weights estimated by least squares. The proposed model is assessed on a dataset of 100 dermoscopic images, and the variation in classification error is examined with respect to the proportion of labelled and unlabelled data used during training. A comparative study shows that the proposed model, which makes use of deep belief processing, outperforms well-known methods such as KNN, ANN, and SVM, as well as semi-supervised algorithms like transductive SVM and expectation maximisation.
VII. SYSTEM STUDY
At this point, a business proposal that includes the overall project plan and expected costs is developed, and the
project's viability is evaluated.
To make sure that the planning process does not put too much strain on the business, an assessment of the
proposal's efficacy will be carried out as part of the system evaluation. It is essential to comprehend the
fundamental needs of the system in order to assess feasibility.
ECONOMIC FEASIBILITY:
This study attempts to evaluate how implementing the system would affect the organisation's business. The amount of money the organisation can set aside for development is limited, so costs have to be kept in check and the design process must stay within the allocated funds. Effective development can be facilitated by using primarily free tools and investing in customised products only when needed.

TECHNICAL FEASIBILITY:
The purpose of the technical feasibility study is to assess the resource requirements and efficacy of the system.
There shouldn't be undue strain placed on the resources or infrastructure already in place by the system. A
high demand for technical resources could make customers unhappy. The system architecture should be as
simple as possible, needing little to no changes in order to be used.
SOCIAL FEASIBILITY:
The social feasibility study's main objectives are to evaluate users' acceptance of the system and make sure
they have received the necessary training to use it efficiently. It is important that users view the system as
helpful rather than scary. The success of the introduction and familiarisation process determines user
acceptability. Boosting user confidence is necessary to promote constructive feedback from customers.
VIII. SYSTEM ANALYSIS
EXISTING SYSTEM:
• Pomponiu et al. offer a method for classifying skin cancer that uses a pre-trained AlexNet to extract high-level feature representations from dermoscopic skin images. A k-nearest neighbour classifier is then trained with these features to diagnose skin cancer.
• Esteva et al. suggest using a big dataset used for training along with a pre-trained Convolutional Neural
Network (CNN) to identify skin cancer.
• Mahbod et al. study the use of optimised deep features from several well-known CNNs in a fully automated computerised approach for the classification of skin lesions. The method also makes use of pre-trained CNNs.
• Masood et al. use dermoscopic pictures to create a unique semi-supervised, self-advised learning model for
automated skin cancer detection.
DISADVANTAGES OF EXISTING SYSTEM:
• This clinical context has not been appropriately considered by the current models in the current system.
• Typically, current system techniques use two networks to perform lesion segmentation and classification
tasks separately.
• Even with so many different approaches put forward, there is still room to improve skin lesion segmentation and classification performance.
PROPOSED SYSTEM:
The dataset, which includes 3297 images classified as benign or malignant, was obtained from Kaggle and other public repositories. These lesion images are scaled, usually to 224x224 pixels, using their RGB values and loaded into a NumPy array. After each image is labelled, the data is shuffled for randomisation and added to the training set. Using the VGG-16 architecture, this method shows improved accuracy in determining whether a mole is benign or malignant. Given that skin cancer has the potential to be fatal, early detection is essential: it lowers the risk of death and greatly improves the likelihood of a favourable outcome. First, the dataset is collected and the images are divided into training and testing sets. The dataset is then subjected to a deep learning technique, and the results allow melanomas to be detected. The model becomes increasingly dependable for identifying skin cancer as its accuracy rises, and our system accurately differentiates between benign and malignant tumours.
ADVANTAGES OF PROPOSED SYSTEM:
1. The results show how effective the proposed technique is at greatly improving melanoma identification in cases of skin cancer.
2. The proposed network demonstrates improved support for dermatological clinic operations, contributing to the advancement of computer-aided diagnosis systems for cancer detection. With skin cancer becoming more common, early detection is critical, since it has a significant impact on survival rates.

3. Although a variety of techniques have been used to identify cancer early on, their accuracy varies. Improving
the detection accuracy is essential to allow for the early detection of malignant skin lesions and to enable timely
treatment. A variety of models from Deep Learning can overcome the drawbacks of less precise techniques.
IX. SYSTEM DESIGN
SYSTEM ARCHITECTURE:

Figure 3
X. PERFORMANCE ANALYSIS

Figure 4
XI. RESULTS
The study's findings demonstrate the effectiveness of the VGG16 architecture at extracting fine-grained detail from dermoscopic images of skin lesions. Our experimental results show that the proposed network tends to prioritise the regions that carry melanoma-relevant information, which improves classification accuracy over other approaches. By using a deeper, wider, and higher-resolution network, we obtain notable gains in classification accuracy. These findings highlight the potential of deep learning methods to improve early melanoma detection and to support better dermatological diagnostic procedures, opening the door to more reliable and accurate computer-aided diagnosis systems for skin cancer.
Our model achieves a classification accuracy of 89.4%.
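The classification step described above can be sketched as a sigmoid head applied to the feature vector produced by the VGG16 backbone. The code below is a plain-Python illustration: the features, weights, and threshold are hypothetical, and in practice the feature vector would come from a pretrained network (e.g. `keras.applications.VGG16` with frozen convolutional layers).

```python
import math


def sigmoid(z):
    """Logistic function mapping a real-valued logit to a probability."""
    return 1.0 / (1.0 + math.exp(-z))


def classify_lesion(features, weights, bias, threshold=0.5):
    """Binary classification head on top of backbone features.

    features: pooled CNN feature vector (list of floats)
    weights, bias: learned head parameters (hypothetical here)
    Returns (melanoma_probability, label), with label 1 = melanoma.
    """
    logit = sum(f * w for f, w in zip(features, weights)) + bias
    prob = sigmoid(logit)
    return prob, int(prob >= threshold)


# Toy example with made-up parameters
prob, label = classify_lesion([0.8, 0.1, 0.6], [1.5, -2.0, 0.7], -0.3)
```

During training only the head's weights and bias (and optionally the last backbone blocks) would be updated, which is the usual transfer-learning setup for small medical datasets.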
XII. CONCLUSION
In this work, we systematically review the history and current status of melanoma detection. Based on our observations, we analyse how well the VGG16 architecture can extract detailed information from clinical dermoscopic images of skin lesions. Owing to its deeper, wider, and higher-resolution design, the experimental results show that the proposed network tends to prioritise the relevant regions holding melanoma information. As a result, the network outperforms other widely used techniques in terms of classification accuracy. In the future, we intend to concentrate our research efforts on two main areas. First, we aim to better understand the relationship between melanoma and other skin cancers, so that the proposed network can be extended to a wider range of skin cancer types. Second, we plan to investigate the underlying causes of melanoma as well as its varied presentations. Our goal is to build a more robust and capable network by incorporating additional medical knowledge obtained from "contextual" images.