
Kumar et al. BMC Medical Imaging (2024) 24:63
https://doi.org/10.1186/s12880-024-01241-4

RESEARCH  Open Access

Unified deep learning models for enhanced lung cancer prediction with ResNet-50–101 and EfficientNet-B3 using DICOM images

Vinod Kumar1, Chander Prabha2, Preeti Sharma2, Nitin Mittal3*, S. S. Askar4 and Mohamed Abouhawwash5

Abstract
Significant advancements in machine learning algorithms have the potential to aid in the early detection and prevention of cancer, a devastating disease. However, traditional research methods face obstacles, and the amount of cancer-related information is rapidly expanding. The authors have developed a helpful support system using three distinct deep-learning models, ResNet-50, EfficientNet-B3, and ResNet-101, along with transfer learning, to predict lung cancer, thereby contributing to health and reducing the mortality rate associated with this condition. This proposal aims to address the issue effectively. Using a dataset of 1,000 DICOM lung cancer images from the LIDC-IDRI repository, each image is classified into one of four categories. Although deep learning is still making progress in its ability to analyze and understand cancer data, this research marks a significant step forward in the fight against cancer, promoting better health outcomes and potentially lowering the mortality rate. The Fusion Model, like all other models, achieved 100% precision in classifying squamous cells. The Fusion Model and ResNet-50 achieved a precision of 90%, closely followed by EfficientNet-B3 and ResNet-101 with slightly lower precision. To prevent overfitting and improve data collection and preparation, the authors implemented a data augmentation strategy. The relationship between acquiring knowledge and reaching specific scores was also connected to addressing the issue of imprecise accuracy, ultimately contributing to advancements in health and a reduction in the mortality rate associated with lung cancer.
Keywords Lung Cancer, Deep Learning, Cancer Detection, EfficientNet-B3, ResNet-50, ResNet-101, Fusion

Introduction
Human bodies are composed of different cells. Cancer arises when one of these cells undergoes uncontrolled and abnormal growth due to cellular changes [1]. The World Health Organization reports that cancer is a leading cause of death around the world. The number of newly diagnosed cancer cases continues to rise each year [2] and [3]. The mortality rate for cancer is 6.28% for females and 7.34% for males. Lung and oral cancer account for 25% of cancer-related deaths in men, whereas breast and oral cancer contribute 25% of female cancer-related deaths. The cancer estimates are routinely updated and draw on data from [4–7].

*Correspondence: Nitin Mittal, [email protected]
1 Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, India. 2 Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India. 3 Skill Faculty of Engineering and Technology, Shri Vishwakarma Skill University, Palwal, Haryana, India. 4 Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, 11451 Riyadh, Saudi Arabia. 5 Department of Mathematics, Faculty of Science, Mansoura University, Mansoura 35516, Egypt.


Table 1 shows the rates and the fundamental factors behind cancer.
In 2020, lung cancer emerged as the deadliest form of cancer [8] and [9]. The next three highly lethal malignancies were breast, liver, and stomach cancer, accounting for 6.9%, 8.3%, and 7.7% of cancer-related deaths, respectively. Figure 1 outlines the worldwide statistics of cancer mortality up to 2020. The Cancer Facts and Figures study estimates that the year 2022 will witness 1,918,030 new cases of cancer. Over about nine decades, from 1930 to 2019, lung cancer has consistently been the primary cause of death among males, as revealed by the study's findings. Among the diverse range of cancer types, stomach, colon, and prostate cancers are the most prevalent among males. Strikingly, despite lower incidence rates, lung cancer continues to claim the lives of more females than any other form of cancer. Breast, stomach, and colon cancers, on the other hand, dominate the landscape of cancer among women [10]. In recent times, the field of medical research has seen remarkable growth in the use of artificial intelligence and machine learning techniques [11, 12]. These cutting-edge methodologies have been instrumental in the development of predictive models for various diseases, including cancer. Notably, the use of deep-learning models [13] to predict lung cancer stands as a groundbreaking medical tool at this dynamic time.
The proposed system uses highly efficient deep-learning models to classify major lung cancers. To improve the precision of the present lung cancer prediction framework, ensemble and fusion strategies are proposed.

Table 1 Cancer statistics: A global comparison (India 2018 vs. World 2020)

Fig. 1 Trends in Cancer Survivorship in India and Globally



A novel part of the current study is the development of a support system using three different deep-learning models (ResNet-50, EfficientNet-B3, and ResNet-101) combined with transfer learning, thereby reducing the related mortality rate and improving health.
The purpose of the study is to provide further evidence that the usage of deep learning techniques improves cancer research. Specifically, the authors discuss how deep learning models can be applied to medical research and diagnostics to improve health outcomes and reduce mortality rates, and how ensemble learning enhances lung cancer prediction.

Contributions

• To detect lung cancer subtypes, a support system was developed by combining transfer learning with three different deep learning models (ResNet-50, EfficientNet-B3, and ResNet-101).
• A considerable level of accuracy was attained in the classification of squamous cells with the use of the Fusion Model and ResNet-50.
• A data augmentation strategy was implemented to prevent overfitting and enhance data collection and preparation.
• Using ensemble and fusion techniques, lung cancer precision has improved, which might lead to better health outcomes and potentially a decrease in the mortality rate from the disease.

Motivation

• To prevent lung cancer, research is being conducted to create deep learning models.
• To significantly improve health by improving the accuracy of diagnosis and lowering the disease's death rate.

Structure of paper
The overall organization starts with a thorough introduction and discusses the current state of cancer survivorship in India and throughout the world. A summary of deep learning systems in medical applications and the parallels between deep learning models and cancer diagnosis are also provided. The related work in Sect. 2 expands the scope of deep learning in cancer research, supported by observable trends in mortality from 2012 to 2023. The materials and methods in Sect. 3 outline the techniques used, which include convolutional neural networks (CNNs) and transfer learning models consisting of ResNet50, ResNet101, and EfficientNet-B3, complemented by figures that illustrate the power of transfer learning in deep networks. Section 4, the data section, provides a detailed evaluation of the combined LIDC-IDRI dataset, displaying several types of lung cancer images. To enhance lung cancer detection, the results and discussion section includes data from the fusion of three deep learning models. In Sect. 5, an in-depth review of the training and verification procedures is provided by the experimental analysis section. This understanding is provided via visual representations of the training and validation accuracy and loss curves, confusion matrices, and comparisons of deep learning models for cancer detection performance. Hence, this well-planned organization of the manuscript ensures clarity, coherence, and thoroughness in the methodology, results, and research discussion presentations. Finally, Sect. 6 concludes the findings and scope for future work.

Related work
Over the past decade, the collection of multimodality information has driven a noteworthy increase in the use of data analytics in health information systems. The medical field has experienced rapid development with the advancement of machine learning models to manage and analyze this vast amount of medical data, as referenced in [14]. Deep Learning, which is based on artificial neural networks, has emerged as an advanced machine learning strategy with the potential to transform the artificial intelligence industry, as noted in [15].
DL has demonstrated its value in the medical sector by effectively managing previously challenging tasks. It offers a range of network types with different capabilities, enabling practitioners to handle large volumes of medical information, including textual data, audio signals, medical images, and videos. These DL systems, also known as models, have been demonstrated to be highly effective tools in various medical frameworks [16–19]. Both ML and DL models have achieved success in various medical domains, including cancer prevention, detection, and COVID-19 diagnosis [20–22], and medical data analysis. DL models play a prominent role in medicine, with the selection and configuration of networks depending on the specific field, data volume, and research objectives. For a comprehensive list of commonly utilized DL networks and their distinctive features in the medical industry [23, 24], refer to Table 2.
Machine learning and deep learning are progressively being used in medical research, and cancer prevention and detection are key areas of focus [31]. This article surveys the latest developments in this field.

Table 2 Deep learning networks in medical applications

Network Types | Key Characteristics | Detailed Description | Notable Remarks
Deep auto-encoder [25] | The input and output layers of the framework have the same number of neurons; the network must have at least 2 layers | Used for unsupervised learning; applied for dimensionality reduction or transformation | Used for feature extraction and selection
Deep Boltzmann Machine [26] | The layers are undirected and number 2 or more; they can be categorized as visible or hidden, with no input or output layers | The undirected connections support both supervised and unsupervised learning while minimizing the time required for learning | Not appropriate for large datasets
Convolutional Neural Networks [27] | Incorporates classification, convolution, pooling, and fully connected layers; uses a non-linear activation function; accepts input directly as an image | Used to solve medical image categorization problems for chronic disease and cancer detection | Applies the network's feature-extraction process; not every neuron is wired together; takes a large amount of data to learn
ResNet-50 [28] | A more advanced 50-layer deep CNN comprising residual units with skip connections | Used to classify medical images with improved performance | Also needs much data to learn and requires more training time than a plain CNN, but performs considerably better
GoogLeNet [29] | Inception CNN: concurrent convolutions with diverse kernel sizes; high-performance medical image classification | With Inception, CNN training sped up more than with ResNet-50, with slightly better performance | Demands a huge dataset to learn effectively
EfficientNet [30] | CNNs improve by expanding their depth, width, and resolution | Used to solve a range of image categorization problems; compared to ResNet-50 and 101 | Smaller and faster

A look at Google Scholar gives important insight into cancer research from 2014 to 2022. The information shown in Fig. 2 highlights the growing interest in applying deep learning to cancer research. Additionally, it illustrates that lung cancer receives more focus compared to breast cancer. The study indicates that the breast and lung cancer ratios are the highest. These data were gathered from Google Scholar on October 24 at noon.
Table 3 illustrates that previous research had deficiencies, with some studies exhibiting poor accuracy due to the use of incorrect methodologies or parameters. Certain investigations employed sophisticated models, but most of the research utilized only one or two indicators, which is inadequate for evaluating accuracy and effectiveness. To attain high performance with a low-computational model, the present study considers the advantages of ensemble learning, transfer learning, and specific deep models with low computational cost.

Materials and methods
Convolutional Neural Network (CNN)
The CNN, a deep neural network, takes a 2D image as input and produces classes or class probabilities as the output. It is used in areas such as medical diagnosis, person identification, and image classification. The CNN structure incorporates convolution layers, pooling layers, and a fully connected layer [57].
The convolutional layer applies the convolution operation, in which a kernel of size K*K is convolved with an image of dimensions M*N. The kernel moves over the image, multiplying each pixel by its surrounding pixels and adding the products together to give the convolution's output. This output is called the activation map, and its size changes based on the number of filters used.
The final size of the convolution output is determined by factors such as the stride (S) and padding (P). The stride sets how far the kernel moves at each step, whereas padding adds rows and columns of border pixels; for example, a 5*5 kernel is paired with a padding of 2. The output size is given by the formula (W − F + 2P) / S + 1, where W is the image size, F is the kernel size, S is the stride, and P is the padding. The output is then passed to a pooling layer, which downsamples the image; CNNs can use max pooling (selecting the maximum value) or average pooling (computing the average). The components of a Convolutional Neural Network (CNN) are depicted in Fig. 3, which clarifies the architecture that is essential to deep learning for image classification and recognition tasks.
All neurons of the preceding layer are connected to the Fully Connected (FC) layer. A neuron's value in this layer is determined by the sum of the weighted products of all previous-layer neurons. Non-linear activation functions such as Sigmoid, Tanh, and ReLU are used in conventional CNNs to remove noisy pixels after the convolution and pooling layers; these activation functions are applied just before the pooling layer and after each convolutional layer. To make the final convolution output compatible with the FC layer, a flattening layer is commonly used.
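To make the output-size formula concrete, the short Python sketch below evaluates (W − F + 2P) / S + 1 for a couple of illustrative settings; the specific stride and padding values are examples chosen here, not parameters reported by the paper.

```python
def conv_output_size(w: int, f: int, p: int, s: int) -> int:
    """Spatial output size of a convolution: (W - F + 2P) / S + 1."""
    return (w - f + 2 * p) // s + 1

# A 224x224 input with a 7x7 kernel, stride 2, and padding 3
# (the usual ResNet stem configuration) yields a 112x112 map:
print(conv_output_size(224, 7, 3, 2))   # 112

# A 5x5 kernel with padding 2 and stride 1 preserves the input size:
print(conv_output_size(224, 5, 2, 1))   # 224
```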

Fig. 2 Trends in deep learning cancer research mortality, 2012–2023


Table 3 Comparison of deep learning models for cancer detection

Researcher | Area characterization | DL model | Dataset | Outcome
[26] | Digital breast tomosynthesis vs. digital mammography | Pretrained VGG16 | BreastScreen Norway screening program | The rate of screen-detected breast cancer is comparable between digital breast tomosynthesis and digital mammography in a population-based screening program
[27] | Precise pulmonary nodule detection | Convolutional Neural Networks (CNNs) | LIDC-IDRI dataset | Sensitivity of 92.7% with 1 false positive per scan and sensitivity of 94.2% with 2 false positives per scan for lung nodule detection on 888 scans. Use of maximum intensity projection (MIP) images helps detect small pulmonary nodules (3 mm–10 mm) and reduces false positives
[32] | Pathogenesis of oral cancer | Not applicable (no deep learning model mentioned) | Not applicable (no dataset mentioned) | Review and discussion of key molecular concepts and selected biomarkers implicated in oral carcinogenesis, particularly oral squamous cell carcinoma, with a focus on deregulation during different stages of oral cancer development and progression
[33] | Liquid biopsies for BC | Not applicable | Meta-analysis of 69 studies | ctDNA mutation rates for TP53, PIK3CA, and ESR1: 38%, 27%, and 32%, respectively
[34] | Assessment of smartphone-based visual inspection of the cervix with acetic acid | Not applicable | Data collected from 4,247 patients who underwent cervical cancer screening in clinical settings in rural Eswatini from September 1, 2016, to December 31, 2018 | Initial positivity rate increased from 16% to 25.1% after standard training, dropped to an average of 9.7% during refresher training, rose again to an average of 9.6% before the start of mentorship, and dropped to an average of 8.3% in 2018
[35] | Healthcare and deep learning | Deep learning (artificial neural network) | Electronic health data — 8,000 records | Improved predictive performance and applications in various healthcare areas; accuracy 97.5%
[36] | Computer-aided diagnosis (CAD) in gastric cancer | Not specified in the provided text | Histopathological images of gastric cancer (GHIA) | Summarizes image preprocessing, feature extraction, segmentation, and classification techniques for future researchers
[37] | Tumor staging of non-small cell lung cancer (NSCLC) with detailed insights | Two-step deep learning model (autoencoder and CNN) for NSCLC staging | Training (n = 90), validation (n = 8), and test cohorts (n = 37, n = 26) from open-source data (CPTAC and TCGA) | CPTAC test cohort: accuracy 0.8649, sensitivity 0.8000, specificity 0.9412, AUC 0.8206. TCGA test cohort: accuracy 0.8077, sensitivity 0.7692, specificity 0.8462, AUC 0.8343
[38] | Precise localization and classification of breast cancer | Pa-DBN-BC (Deep Belief Network) | Whole-slide histopathology image dataset from four cohorts | 86% accuracy
[39] | Skin cancer diagnosis | U-Net and VGG19 | ISIC 2016, ISIC 2017, ISIC 2018 | Satisfactory results compared to the state of the art
[40] | Rectal adenocarcinoma survival prediction | DeepSurv model (seven-layer neural network) | Patients with rectal adenocarcinoma from the SEER database | C-index: 0.824 (training cohort) and 0.821 (test cohort). Factors influencing survival: age, gender, marital status, tumor grade, surgical status, and chemotherapy status. High consistency between test and cohort predictions
[41] | Prostate cancer diagnosis and Gleason grading | Deep residual convolutional neural network | 85 prostate core biopsy specimens, digitized and annotated | Coarse-level accuracy: 91.5%; fine-level accuracy: 85.4%
[42] | Tree-based BrT multiclass classification model for breast cancer | Ensemble tree-based deep learning models for the four subtypes of BrT | BreakHis dataset (pretraining), BCBH dataset | Classification accuracy of 87.50% to 100%; the proposed model exceeds the state of the art
[43] | Breast cancer (BC) | Transfer learning (TL) | MIAS dataset | 80–20 strategy: precision 98.96%, sensitivity 97.83%, specificity 99.13%, accuracy 97.35%, F-score 97.66%, AUC 0.995. Tenfold cross-validation strategy: accuracy 98.87%, sensitivity 97.27%, specificity 98.2%, precision 98.84%, F-score 98.04%, AUC 0.993
[44] | Screening for breast cancer with mammography | Deep learning and convolutional neural networks | Various datasets in digital mammography and digital breast tomosynthesis | AI algorithms show promise on retrospective datasets (AUC 0.91); further studies required for real-world screening impact
[45] | Breast cancer diagnosis | Statistical ML and deep learning | Various breast imaging datasets | Recommendations for future work; accuracy 97%
[46] | Dermoscopic skin lesion diagnosis (DermoExpert) | Hybrid convolutional neural network (hybrid-CNN) | ISIC-2016, ISIC-2017, ISIC-2018 | AUC of 0.96, 0.95, 0.97; AUC improved by 10.0% and 2.0% for the ISIC-2016 and ISIC-2017 datasets; 3.0% higher balanced accuracy for the ISIC-2018 dataset
[47] | Breast cancer classification | ResNet-50 pre-trained model | Histopathological images from Jimma University Medical Center, 'BreakHis', and 'zendo' online datasets | 96.75% accuracy for binary classification, 96.7% accuracy for benign subtype classification, 95.78% accuracy for malignant subtype classification, and 93.86% accuracy for grade identification
[48] | Cancer-Net SCa | Custom deep neural network designs | International Skin Imaging Collaboration (ISIC) | Improved precision compared to ResNet-50, reduced complexity, strong skin cancer detection performance, enabled open-source use and development
[49] | Automating medical diagnosis | Transfer learning, image classification, object detection, segmentation, multi-task learning | Medical image data, skin lesion data, pressure ulcer data, segmentation data | Cervical cancer: sensitivity +5.4%; skin lesion: accuracy +8.7%, precision +28.3%, sensitivity +39.7%; pressure ulcer: accuracy +1.2%, IoU +16.9%, Dice similarity +3.5%
[50] | Diagnostic accuracy of CNN for gastric cancer; predicting invasion depth of gastric cancer | Convolutional Neural Network (CNN) | 17 studies, 51,446 images, 174 videos, 5,539 patients | Diagnosis: sensitivity 89%, specificity 93%, LR+ 13.4, LR− 11, AUC 0.94. Invasion depth: sensitivity 82%, specificity 90%, LR+ 8.4, LR− 20, AUC 0.90
[51] | Image quality control for cervical precancer screening | Deep learning ensemble system | 87,420 images from 14,183 patients across multiple cervical cancer studies | Achieved higher performance than standard approaches
[52] | Breast cancer diagnosis using deep neural networks | Convolutional Neural Networks (CNN) | Mammography and histopathologic images | Improved BC diagnosis with DL; used public and private datasets and preprocessing techniques, compared neural network models, and identified research challenges for future developments
[53] | HPV status prediction in OPC; survival prediction in OPC | Ensemble model | 492-patient OPC database | AUC 0.83, accuracy 78.7% (HPV status); AUC 0.91, accuracy 87.7% (survival)
[54] | Pathology detection algorithm | YOLOv5 with an improved attention mechanism | Gastric cancer slice dataset | F1-score: 0.616, mAP: 0.611; decision support for clinical judgment
[55] | Cervical cancer (CC) | HSIC, RNN, LSTM, AFSA | Not mentioned | Risk scores for recurrent CC patients using the AFSA algorithm
[56] | Hepatocellular carcinoma (HCC) | Inception V3 | Genomic Data Commons database H&E images | Matthews correlation coefficient reported; 96.0% accuracy for benign/malignant classification and 89.6% accuracy for tumor differentiation. Predicted the ten most commonly mutated genes (CTNNB1, FMN2, TP53, ZFX4, among others) with AUCs from 0.71 to 0.89

Fig. 3 Building blocks of a convolutional neural network

Proposed transfer learning models
Three established models, namely ResNet-50, ResNet-101, and EfficientNet-B3, are used in the current study to examine the effectiveness and performance of different CNN architecture types. The concept of transfer learning, as illustrated in Fig. 4, involves using pre-trained models for a new problem that differs from the original one. The ResNet-50 and 101 [58] and EfficientNet-B3 [59] models have already been trained on the ImageNet dataset. In this study, these models are used to form predictions concerning lung cancer. The input image has three color channels and a standard pixel size of 224 by 224. The first convolutional layer of the ResNet architecture extracts information from the input picture with a stride value of 2, using a kernel size of 7 × 7 with 64 different kernels.
ResNet-50, a distinctive version of CNN, incorporates the residual units introduced by [60]. ResNet-50 is a 50-layer deep network comprising one max pooling layer, one FC layer, and 48 convolutional layers. The essential advantage of ResNet-50 lies in its use of residual units, which effectively address the vanishing-gradient problem encountered in earlier deep networks. Within the ResNet-50 design, residual units are present in each stage and serve as skip connections, as depicted in Fig. 4.
As a network gets deeper, the gradient either vanishes or becomes exceedingly small. To counter this, the ResNet design incorporates skip connections and residual units that bypass several convolutional layers (three in ResNet-50), effectively preventing the gradient from shrinking away.
The architecture of ResNet-50 comprises 50 convolutional layers. The first layer has 64 filters of size 7*7 with a stride of 2, and the subsequent max pooling layer (stride = 2) reduces the feature-map size. This is followed by a block of three convolution layers, 64 filters of size 1*1, 64 filters of size 3*3, and 256 filters of size 1*1, repeated three times. The next blocks are composed of 128 filters of size 1*1, 128 filters of size 3*3, and 512 filters of size 1*1, repeated four times. The following stage comprises 256 filters of size 1*1, 256 filters of size 3*3, and 1024 filters of size 1*1, repeated six times. The network's last convolution layers include 512 filters of size 1*1, 512 filters of size 3*3, and 2048 filters of size 1*1. The top of the network, an average pooling layer followed by the FC layer, comprises 1000 units representing the final feature vector and uses a "Softmax" activation function to classify images into different classes. By comparison, ResNet-101 trains its 101 layers on the ImageNet dataset, totalling 44.5 million training parameters [61].
The authors of [62] presented EfficientNet, a CNN architecture that scales all dimensions (depth, width, and resolution) through compound coefficients.
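As a rough illustration of the transfer-learning setup described here, the sketch below loads an ImageNet-pre-trained ResNet-50 without its 1000-class head and attaches a 4-class softmax classifier. It assumes a TensorFlow/Keras environment and a frozen backbone; the paper's actual classification head is detailed later (see the proposed-model paragraph and Table 5).

```python
import tensorflow as tf
from tensorflow.keras.applications import ResNet50

# ResNet-50 pre-trained on ImageNet, with the 1000-class top removed
# so the 224x224x3 inputs can be repurposed for 4 lung cancer classes.
base = ResNet50(weights="imagenet", include_top=False,
                input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the pre-trained feature extractor (assumption)

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(4, activation="softmax"),  # 4 target classes
])
```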

Fig. 4 Visualizing the Power of Transfer Learning in Deep Networks



They created a family of EfficientNet topologies that are both accurate and compact, demonstrating that they outperform earlier models such as ResNet, Xception, NASNet, and Inception in terms of computation. The network's three dimensions are scaled jointly using compound scaling, allowing the model to adapt flexibly to the input size.

Dataset
Three different folders were created from the dataset of chest CT-scan images: 70% were set aside for training, 20% for validation, and 10% for testing. There are 613 images in the training dataset, 315 in the validation dataset, and 72 in the testing dataset. Adenocarcinoma, large cell carcinoma, squamous cell carcinoma, and normal CT image were the four distinct categories into which the authors meticulously categorized a dataset of 1,000 DICOM lung cancer images from the LIDC-IDRI repository [63]. To accurately classify these images, the researchers used robust deep learning models, ResNet-50, ResNet-101, and EfficientNet-B3, with an emphasis on improving the prediction accuracy of lung cancer subtypes. The Fusion Model categorized squamous cells with 100% precision, whereas ResNet-50 achieved 90% precision, closely followed by EfficientNet-B3 and ResNet-101 with slightly lower precision. A data augmentation approach was also used to improve the data's robustness and reduce overfitting after closely examining the models' performance across 35 epochs. According to this analysis, ResNet-101 and EfficientNet-B3 outperform ResNet-50. The findings highlight the ability of deep learning algorithms to make more accurate lung cancer diagnoses, which might lead to improvements in medical care and perhaps lower death rates.
To distinguish between these kinds, the use of deep learning requires a powerful classifier, as shown in Fig. 5 (a, b, c, d). This figure shows cases from different categories in the prepared datasets, highlighting the similarities between them, such as adenocarcinoma and large cell carcinoma. The main challenge encountered in this dataset lies in the similarities observed between the classes.
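A minimal loading sketch for the three-folder split described above might look as follows. It assumes the DICOM files have already been converted to ordinary image files (e.g., with pydicom) and arranged under hypothetical data/train, data/val, and data/test directories with one subfolder per class:

```python
import tensorflow as tf

IMG_SIZE = (224, 224)   # input size used throughout the paper
BATCH = 50              # batch size stated in the training setup

def load_split(path: str) -> tf.data.Dataset:
    """Load one split (train/val/test) with one-hot (categorical) labels."""
    return tf.keras.utils.image_dataset_from_directory(
        path, image_size=IMG_SIZE, batch_size=BATCH, label_mode="categorical")

train_ds = load_split("data/train")  # 613 images
val_ds = load_split("data/val")      # 315 images
test_ds = load_split("data/test")    # 72 images
```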

Fig. 5 (a) Normal CT image, (b) Large cell carcinoma, (c) Adenocarcinoma, (d) Squamous cell carcinoma

In any case, data augmentation is applied to address this concern, given the limited size of the dataset. Figure 5 (a, b, c, d) shows illustrations of the three forms of lung cancer as well as a healthy case.

Results and discussion
Figure 6 presents a comprehensive diagram of the lung cancer diagnosis strategy, highlighting the key methods involved. First, the lung CT imaging dataset is obtained. The training, validation, and test sets then undergo a sequence of image-processing procedures to guarantee compatibility with the deep learning network's input layer. These steps include RGB conversion and scaling to a 224*224 format. To improve the training process and enable the model to learn different levels of image corruption, thereby preventing overfitting and improving training, the training set is further modified through data augmentation. This step involves rotating, flipping, and zooming the lung CT image to create different versions of the same CT image; vertical flipping, zooming, and rotation are used as image-manipulation operators. The three models, specifically ResNet-50, ResNet-101, and EfficientNet-B3, are then trained and validated using these procedures. These models were chosen based on their effectiveness in image classification tasks, EfficientNet-B3, ResNet-50, and ResNet-101 being prevalent deep model types, as shown in Table 2; EfficientNet-B3 is considered a low-computation deep model. To address the lung cancer diagnosis problem, the transfer learning method is used to retrain the same pre-trained deep learning models. This involves incorporating extra layers into the overall design.
Each proposed deep learning model incorporates a base model, namely ResNet-50, ResNet-101, or EfficientNet-B3, followed by a batch normalization layer, a dense layer with 256 neurons and a 'ReLU' activation function, a dropout layer with a 35% dropout rate, and a classification layer with a 'Softmax' activation function and four neurons representing the targets. All models are built using the Adam optimizer with a learning rate of 0.001, as per the chosen training criteria. The loss function applied to this problem is categorical cross-entropy, because it is a multi-class classification problem, and the chosen performance metric is accuracy. The batch size used is 50. To decide when to end the training process, a tolerance level of 5 is set, meaning that if the monitored metric does not improve after 5 training epochs, the process halts; the metric tracked for this purpose is the validation accuracy. Furthermore, the learning rate reduction factor is 0.5. Input images of size 224 × 224 pixels were used by the authors to train the first convolutional layers of the ResNet model with a stride of two. Using ReLU activation functions, non-linearity is integrated into the network design. With these designs, the images provided need to be appropriately downscaled to enable feature extraction. Using categorical cross-entropy as the loss function and accuracy as the selected performance indicator, the study uses multi-class classification. Batches of 50 images are trained, and training is terminated when the validation accuracy does not increase within a tolerance level of five epochs. Convergence is improved during training by reducing the learning rate by a factor of 0.5.
Each of the three transfer learning models (ResNet-50, ResNet-101, and EfficientNet-B3) uses a learning rate of 0.001 with the Adam optimizer while training the model to classify lung cancer by analysing CT scan images. The study of learning behaviour found that ResNet-50 saturates at epoch 32, whereas ResNet-101 and EfficientNet-B3 may also saturate near epoch 32, depending on their convergence speed and complexity. Observing the learning rate saturation is vital for interpreting the training dynamics of the model and refining the training strategy.
The ResNet-50-Dense-Dropout model was trained with the training set and assessed using the validation set; the trained model was then evaluated using the test set and evaluation metrics. Likewise, the ResNet-101-Dense-Dropout model was trained on the training set, tested using the validation set, and then evaluated using the test set and evaluation metrics. The EfficientNet-B3-Dense-Dropout model was also trained on the training set, tested on the validation set, and then put to the test using the test set and assessment criteria. The three trained models were combined at the score level, and the combined model was evaluated. In addition, an ensemble was built using the stacking strategy, comprising the ResNet-50-Dense-Dropout, ResNet-101-Dense-Dropout, and EfficientNet-B3-Dense-Dropout models. The learned ensemble model was tested using the test set and evaluation metrics.
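The head and training configuration described above translate into a short Keras sketch. The batch-normalization layer, Dense(256, ReLU), 35% dropout, 4-way softmax, Adam with learning rate 0.001, categorical cross-entropy, early stopping with a patience of 5 on validation accuracy, and the 0.5 learning-rate reduction factor all follow the text; the remaining callback settings and the augmentation magnitudes are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Augmentation mirrors the text: rotation, vertical flipping, and zooming.
augment = tf.keras.Sequential([
    layers.RandomFlip("vertical"),
    layers.RandomRotation(0.1),   # magnitude assumed
    layers.RandomZoom(0.1),       # magnitude assumed
])

def build_classifier(base: tf.keras.Model) -> tf.keras.Model:
    """Base model + batch norm + Dense(256, ReLU) + 35% dropout + softmax(4)."""
    return tf.keras.Sequential([
        augment,
        base,
        layers.BatchNormalization(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.35),
        layers.Dense(4, activation="softmax"),
    ])

model = build_classifier(tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3)))
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])

callbacks = [
    # Stop when validation accuracy fails to improve for 5 epochs.
    tf.keras.callbacks.EarlyStopping(monitor="val_accuracy", patience=5),
    # Halve the learning rate when validation accuracy plateaus.
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_accuracy", factor=0.5),
]
# model.fit(train_ds, validation_data=val_ds, epochs=35, callbacks=callbacks)
```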

Fig. 6 Fusion of three deep learning models for improved lung cancer diagnosis
Experimental analysis
All models undergo training using the settings above. For each training epoch, the accuracy and loss are computed for both the training and validation sets, and the best validation value is determined for each configuration. The accuracy and loss curves are plotted in Fig. 7.
EfficientNet-B3 is a CNN architecture that belongs to the EfficientNet family, designed to achieve a balance between computational efficiency and model performance. It is characterized by a compound scaling method that uniformly scales the network width, depth, and resolution. Specifically, the "B3" variant represents a particular set of scaling coefficients applied to the baseline architecture, resulting in a model that is computationally efficient while maintaining competitive accuracy across various computer vision tasks.
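For reference, the compound-scaling rule introduced with EfficientNet can be written as follows, where a single coefficient φ jointly scales depth d, width w, and input resolution r; the constraint on α, β, γ comes from the original EfficientNet paper, not from the present study:

```latex
d = \alpha^{\phi}, \qquad w = \beta^{\phi}, \qquad r = \gamma^{\phi},
\qquad \text{subject to} \quad \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2,
\quad \alpha \ge 1,\ \beta \ge 1,\ \gamma \ge 1.
```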

Fig. 7 (a) EfficientNetB3-Dense-Dropout: training vs. validation loss and accuracy curves. (b) ResNet-50-Dense-Dropout: training and validation loss and accuracy curves. (c) ResNet-101-Dense-Dropout: training and validation loss and accuracy curves

EfficientNet-B3 has been widely used in image classification, object detection, and other visual recognition tasks due to its effectiveness in achieving a favorable trade-off between model size and performance. The EfficientNet-B3 model achieves its best performance in terms of loss and accuracy at epochs 40 and 32, respectively. ResNet-50, on the other hand, shows its best performance at epoch 15, considering both accuracy and loss. As for ResNet-101, epochs 14 and 15 are recognized as the best points for accuracy and loss, respectively.
The EfficientNetB3-Dense model is improved by dropout layers and exhibits notable differences in its training and validation loss and accuracy curves (Fig. 7 (a)). During training, the model gradually reduces loss and increases accuracy, indicating effective learning. On the validation set, however, performance plateaus or fluctuates slightly, indicating potential overfitting concerns. Fine-tuning of hyperparameters or adjustment of dropout rates could be explored to improve generalization performance. The hyperparameters taken into consideration are shown in Table 4. The layered architecture for the transfer learning model incorporating ResNet-50, ResNet-101, and EfficientNet-B3 with the specified configurations is shown in Table 5.
The ResNet-50-Dense-Dropout model shows impressive performance in terms of training and validation losses as well as accuracy curves (Fig. 7 (b)). During the training phase, the model effectively minimizes loss, showing a constant decline over the epochs. At the same time, training accuracy improves consistently, indicating the model's ability to learn from and generalize the training data. In the validation phase, the model shows its robustness by achieving low validation losses, indicating good generalization to unseen data. The validation accuracy curve mirrors the training accuracy and confirms the model's ability to perform well on new and varied samples. The integration of dense dropout into the ResNet-50 architecture seems to contribute positively to model training dynamics, improving generalization and overall performance.
The ResNet-101 training and validation loss curves show that the model minimizes error during training and can generalize to unseen data (Fig. 7 (c)). The decreasing trend of the two curves indicates effective learning, but the gap between them may be widening, indicating overfitting. Table 6 provides a detailed description of the results for each model, including training accuracy, validation accuracy, training loss, and validation loss.
The accuracy curve shows the correctness of the model's predictions. As training accuracy increases, the model learns from the training data; at the same time, validation accuracy indicates the extent to which the model generalizes to new, unseen data. Balanced growth in both is ideal, showing robust learning without over- or under-fitting. It is essential to monitor convergence, divergence, or plateau signs in these curves to assess training progress and identify potential problems such as overfitting. Figure 7 depicts the training and validation accuracy and loss curves for the EfficientNetB3-Dense-Dropout, ResNet-50-Dense-Dropout, and ResNet-101-Dense-Dropout models.
Class 0 refers to normal CT image, Class 1 to large cell carcinoma, Class 2 to adenocarcinoma, and Class 3 to squamous cell carcinoma in this study. EfficientNet-B3, depicted in Fig. 7, exhibits the most favorable convergence among the models: it achieved a test accuracy of about 93.05%, a validation accuracy of 94.99%, and a training accuracy of 97.5%. In contrast, the ResNet-50 model exhibited training, validation, and test accuracy scores of 97.5%, 75%, and 80.55%, respectively.

Table 4 Hyperparameters considered


Hyperparameter ResNet-50 ResNet-101 EfficientNet-B3

Input Image Size 224 × 224 pixels 224 × 224 pixels 224 × 224 pixels
Kernel Sizes 7 × 7, 1 × 1, 3 × 3, 5 × 5 7 × 7, 1 × 1, 3 × 3, 5 × 5 NA
Stride (Initial Convolution) 2 2 NA
Stride (Subsequent Convolution) 1 1 NA
Activation Function ReLU ReLU ReLU
Number of Layers 50 101 NA
Residual Blocks Yes Yes NA
Global Avg Pooling Yes Yes Yes
Compound Scaling No No Yes
Squeeze-and-Excitation Blocks No No Yes

Table 5 Transfer learning model incorporating ResNet-50, ResNet-101, and EfficientNet-B3 with the specified configurations
Layer (type) Output Shape Param # Connected to

input_image (InputLayer) (224, 224, 3) 0 –


resnet50_base (Functional) (7, 7, 2048) 23,587,712 input_image[0][0]
resnet101_base (Functional) (7, 7, 2048) 42,658,176 input_image[0][0]
efficientnetb3_base (Functional) (7, 7, 1536) 10,783,535 input_image[0][0]
global_average_pooling2d (GlobalAveragePooling2D) (2048) 0 resnet50_base[0][0]
global_average_pooling2d_1 (GlobalAveragePooling2D) (2048) 0 resnet101_base[0][0]
global_average_pooling2d_2 (GlobalAveragePooling2D) (1536) 0 efficientnetb3_base[0][0]
dense_layer_1 (Dense) (128) 262,272 global_average_pooling2d[0][0]
dense_layer_3 (Dense) (128) 262,272 global_average_pooling2d_1[0][0]
dense_layer_5 (Dense) (128) 196,736 global_average_pooling2d_2[0][0]
dropout_1 (Dropout) (128) 0 dense_layer_1[0][0]
dropout_3 (Dropout) (128) 0 dense_layer_3[0][0]
dropout_5 (Dropout) (128) 0 dense_layer_5[0][0]
dense_layer_2 (Dense) (64) 8256 dropout_1[0][0]
dense_layer_4 (Dense) (64) 8256 dropout_3[0][0]
dense_layer_6 (Dense) (64) 8256 dropout_5[0][0]
dropout_2 (Dropout) (64) 0 dense_layer_2[0][0]
dropout_4 (Dropout) (64) 0 dense_layer_4[0][0]
dropout_6 (Dropout) (64) 0 dense_layer_6[0][0]
output_layer (Dense) (4) 260 dropout_2[0][0]
dropout_4[0][0]
dropout_6[0][0]
output_activation (Activation) (4) 0 output_layer[0][0]
output_layer[1][0]
output_layer[2][0]
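Table 5's connectivity, three backbones sharing one input and a single 4-way output layer applied to each branch, can be reproduced with the Keras functional API roughly as below. The dropout rates are assumed to be the 35% quoted earlier (Table 5 does not list them), and the branch structure (global average pooling → Dense 128 → Dropout → Dense 64 → Dropout) follows the rows above.

```python
import tensorflow as tf
from tensorflow.keras import layers, applications

inp = layers.Input(shape=(224, 224, 3), name="input_image")
features = [
    applications.ResNet50(weights="imagenet", include_top=False)(inp),        # (7, 7, 2048)
    applications.ResNet101(weights="imagenet", include_top=False)(inp),       # (7, 7, 2048)
    applications.EfficientNetB3(weights="imagenet", include_top=False)(inp),  # (7, 7, 1536)
]

shared_head = layers.Dense(4, name="output_layer")  # one head shared by all branches

outputs = []
for feat in features:
    x = layers.GlobalAveragePooling2D()(feat)
    x = layers.Dropout(0.35)(layers.Dense(128)(x))  # rate assumed
    x = layers.Dropout(0.35)(layers.Dense(64)(x))   # rate assumed
    outputs.append(layers.Activation("softmax")(shared_head(x)))

fusion_model = tf.keras.Model(inp, outputs)  # yields three softmax score vectors
```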

Table 6 Training and testing loss vs accuracy for EfficientNet-B3, ResNet50, and ResNet101

Model | Loss | Accuracy | Validation Loss | Validation Accuracy | F1-Score | Best Epoch | Last Epoch
ResNet50 | 0.01 | 1.00 | 0.09 | 0.95 | 0.85 | 23 | 32
ResNet101 | 0.02 | 0.99 | 0.12 | 0.95 | 0.84 | 32 | 35
EfficientNet-B3 | 0.02 | 0.99 | 0.27 | 0.89 | 0.77 | 31 | 38

Also, the ResNet-101 model showed training, validation, and test accuracy scores of 100%, 94.99%, and 93.50%, respectively. Strikingly, the ResNet-101 model showed the lowest training, validation, and test loss, with values of 0.0003, 0.11, and 0.47, respectively. The confusion-matrix computations for the three trained models and the score-level combination are displayed in Figs. 8 (a) to 8 (c).
Deep learning models are used to categorize lung cancer into four classes: class 0 for normal CT scans, class 1 for large cell carcinoma, class 2 for adenocarcinoma, and class 3 for squamous cell carcinoma. All of these cases are considered in the confusion matrix. A True Positive (TP) is when the model accurately predicts a positive outcome, indicating the presence of lung cancer, and the prediction aligns with the ground truth. If the model accurately predicts a negative outcome, indicating that there is no lung cancer, in line with the ground truth, this is known as a True Negative (TN). A False Positive (FP) describes a case in which the model erroneously predicts a positive outcome, indicating the existence of lung cancer, when the prediction contradicts the ground truth.
Cases where the model incorrectly predicts a negative outcome, indicating the absence of cancer, are known as False Negatives (FN).
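From these four counts, the per-class precision, recall, and F1 reported later in Table 7 follow directly. The sketch below derives them from a confusion matrix whose rows are true classes and whose columns are predicted classes; the example matrix is purely hypothetical, not the paper's.

```python
import numpy as np

def per_class_metrics(cm: np.ndarray):
    """Precision, recall, and F1 per class from a KxK confusion matrix."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp   # predicted as class k but actually another class
    fn = cm.sum(axis=1) - tp   # actually class k but predicted as another class
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical 4-class example (normal, large-cell, adeno, squamous):
cm = np.array([[20, 0, 0, 0],
               [0, 9, 1, 0],
               [0, 0, 22, 0],
               [1, 1, 2, 16]])
precision, recall, f1 = per_class_metrics(cm)
```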
EfficientNet-B3 outperforms all other individual models in terms of results, as illustrated in Fig. 8, where the main diagonal of its confusion matrix contains most of the hits. Moreover, its numbers of false positives and false negatives are lower compared to the other models. The score-level combination yields closely comparable results: Fig. 8 shows that both the combined model and EfficientNet-B3 exhibit nearly identical superior performance, outperforming the individual ResNet models. For a comprehensive performance comparison across all models and the four categories, refer to Table 7. In this study, class 0 is normal CT image, class 1 large cell carcinoma, class 2 adenocarcinoma, and class 3 squamous cell carcinoma.
Figure 8 depicts the confusion matrices of the trained models used in this study, the EfficientNet-B3-Dense-Dropout, ResNet-50-Dense-Dropout, and ResNet-101-Dense-Dropout models, alongside the fusion of ResNet and EfficientNet-B3 at the score level.
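The paper does not spell out the score-level combination rule; a common choice, assumed here, is to average the softmax score vectors of the three trained models and take the argmax of the averaged scores as the fused prediction.

```python
import numpy as np

def score_level_fusion(score_sets):
    """Average per-model softmax scores, then pick the top class."""
    fused = np.mean(score_sets, axis=0)   # (n_samples, n_classes)
    return fused, fused.argmax(axis=1)

# probs_* are (n_samples, 4) softmax outputs of the three trained models:
# fused_scores, fused_labels = score_level_fusion(
#     [probs_resnet50, probs_resnet101, probs_effnetb3])
```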
As Table 7 shows, an average precision of 94% is achieved with the ResNet-101-Dense-Dropout model, indicating better performance, and even with accuracy and F1-score held constant, integrating all models at the score level improves precision by 1% over ResNet-50. Table 7 further highlights the "Normal" category's importance in obtaining the best precision across all classes, the "Squamous" category's highest recall, and the "Normal" category's highest F1-score. ResNet-101 also performs better than ResNet-50 in a variety of real-world circumstances.
Table 7 provides a detailed comparison of the precision of the main models and illustrates how efficient each model is in terms of time. ResNet-50 could analyse data in 12.49 s per iteration, whereas ResNet-101 took a little longer, 15.41 s, to do the same; EfficientNet-B3 showed a similar processing time, averaging 15.32 s per iteration. Thorough time computations served as the foundation for these measurements. The results also show that the ensemble model outperformed all the individual models, achieving an exceptional accuracy rate of 99.44%.
The results of benchmarking for cancer diagnosis using different deep learning models are displayed in Table 8. [46] reported increased AUC values, particularly for ISIC-2016, using a hybrid CNN on ISIC datasets. To improve breast cancer classification models, [52] use CNNs on both public and private data. To precisely locate and classify breast cancer, [38] apply Pa-DBN-BC to histopathological images. On the LIDC-IDRI dataset, [27] demonstrates the precise identification of lung nodules using CNNs. Using Inception V3 on genomic datasets, [56]

Fig. 8 (a) Confusion Matrix EfficientNet-B3 with Dense Dropout (b) Confusion Matrix ResNet-50 with Dense Dropout (c) Confusion Matrix ResNet-101 with Dense Dropout

Table 7 Comparison of dense dropout deep learning models for cancer detection performance

Precision
Model | Adenocarcinoma | Large-Cell | Normal | Squamous cell | Average
EfficientNetB3-Dense-Dropout | 0.87 | 0.76 | 0.85 | 1.00 | 0.87
ResNet-50-Dense-Dropout | 0.91 | 0.90 | 0.92 | 1.00 | 0.93
ResNet-101-Dense-Dropout | 0.96 | 0.80 | 1.00 | 1.00 | 0.94
Score-level fusion model | 0.92 | 0.90 | 0.92 | 1.00 | 0.94

Recall
Model | Adenocarcinoma | Large-Cell | Normal | Squamous cell | Average
EfficientNetB3-Dense-Dropout | 0.91 | 1.00 | 1.00 | 0.65 | 0.89
ResNet-50-Dense-Dropout | 1.00 | 0.90 | 1.00 | 0.83 | 0.89
ResNet-101-Dense-Dropout | 0.95 | 1.00 | 1.00 | 0.79 | 0.93
Score-level fusion model | 1.00 | 1.00 | 1.00 | 0.75 | 0.94

F1-Score
Model | Adenocarcinoma | Large-Cell | Normal | Squamous cell | Average
EfficientNetB3-Dense-Dropout | 0.89 | 0.86 | 0.92 | 0.79 | 0.87
ResNet-50-Dense-Dropout | 0.95 | 0.90 | 0.96 | 0.91 | 0.87
ResNet-101-Dense-Dropout | 0.96 | 0.89 | 1.00 | 0.88 | 0.93
Score-level fusion model | 0.96 | 0.95 | 0.96 | 0.86 | 0.93

classify hepatocellular carcinoma with high accuracy and AUC. Utilizing a fractional backpropagation MLP, [64] was able to surpass BP-MLP in the categorization of leukaemia malignancy. An extraordinary rate of breast cancer detection was achieved by [65] by applying the Modified Entropy Whale Optimization Algorithm to several datasets. Finally, better accuracy in the prediction of various forms of lung cancer is achieved in the current work by utilizing EfficientNet.
In contrast with earlier state-of-the-art methods, the comparative analysis of lung cancer prediction using deep learning in Table 9 demonstrates the performance and efficiency of the present work. The superiority of the ensemble model over the EfficientNet-B3 and ResNet-101 models is evident, with improvements of 6.44% and 18.44%, respectively. While the validation accuracy of the ensemble model is comparable to that of the EfficientNet-B3 and ResNet-101 models, its significant enhancement lies in achieving a precision of 99.44%.
Although there have been advancements in predicting lung cancer, it is important to acknowledge the existing limitations of the current work [66]. These restrictions involve the use of small datasets and specific scientific models [67, 68]. To isolate the region of interest (ROI), the lung tissues, from lung images, it is essential to use preprocessing techniques such as image segmentation [12].
The research emphasizes how important it is to compare analyses with contemporary models. It covers the implementation platform, dataset details, model architecture, preprocessing techniques, learning rate, model training, classification performance, and results. Utilizing deep learning models such as ResNet-50, ResNet-101, and EfficientNet-B3, the study makes use of the LIDC-IDRI repository, which has 1,000 DICOM images of lung cancers. Seventy percent of the data is used for training, twenty percent for validation, and ten percent for testing. ResNet-50, ResNet-101, EfficientNet-B3, and data-augmentation preprocessing are used in the model architecture. In the classification of squamous cells, the fusion model achieves 100% precision, whereas ResNet-50, EfficientNet-B3, and ResNet-101 show 90% precision. The training procedure runs across 35 epochs with a batch size of 32, using the Adam optimizer with a learning rate of 0.001. The study makes use of 10,988,787 parameters and highlights the potential for advancements in medical care as well as a reduction in mortality rates related to lung cancer through improved lung cancer subtype prediction accuracy.
The authors advocate the use of the EfficientNet-B3 and ResNet-50–101 deep neural network algorithms for the early detection of lung cancer. The study leverages pre-trained Convolutional Neural Networks (CNNs) and applies these strategies to the LIDC DICOM datasets. All shape and texture images within the dataset are utilized for feature extraction.
Table 8 Benchmarking of deep learning models for cancer detection

Study | Field description | DL model | Dataset | Results
[27] | Exact pulmonary nodule detection | Convolutional Neural Networks (CNNs) | LIDC-IDRI dataset | 92.7% sensitivity with 1 false positive per scan and 94.2% sensitivity with 2 false positives per scan for lung nodules over 888 examinations. The use of MIP imaging increases detection and reduces the number of false-positive results when locating pulmonary nodules in CT scans
[39] | Pa-DBN-BC | Deep Belief Network (DBN) | Whole-slide histopathology image dataset from four distinct cohorts | 86% accuracy in breast cancer localization and classification, surpassing previous deep learning strategies
[56] | Hepatocellular carcinoma (HCC) | Inception V3 | Genomic Data Commons databases | 96.0% accuracy for benign/malignant classification; 89.6% accuracy for tumor differentiation (well, moderate, and poor); prediction of the 10 most commonly mutated genes in HCC; external AUCs for 4 genes (CTNNB1, FMN2, TP53, ZFX4) ranging from 0.71 to 0.89; use of convolutional neural networks to help pathologists in classification and gene-mutation detection in liver cancer
[46] | DermoExpert | Hybrid-CNN | ISIC-2016, ISIC-2017, ISIC-2018 | AUC: 0.96, 0.95, 0.97; improved AUC by 10.0% (ISIC-2016) and 2.0% (ISIC-2017); outperformed by 3.0% in balanced accuracy (ISIC-2018)
[64] | Learning algorithm for adaptive signal processing | Fractional backpropagation MLP | Leukemia cancer classification | Outperformed BP-MLP in convergence rate and test accuracy
[65] | Breast cancer detection and classification | Modified Entropy Whale Optimization Algorithm (MEWOA) | INbreast, MIAS, CBIS-DDSM | INbreast: 99.7%, MIAS: 99.8%, CBIS-DDSM: 93.8%
Current study | Adenocarcinoma, large cell carcinoma, squamous cell carcinoma, normal | ResNet-50, ResNet-101, EfficientNet-B3 with score-level fusion | 1,000 images from the Kaggle lung cancer dataset | Best individual accuracy (EfficientNet-B3, 93%); 99.44% fusion accuracy

Table 9 Comparative analysis of lung cancer prediction through deep learning

Platform used: deep learning models ResNet-50, ResNet-101, and EfficientNet-B3; LIDC-IDRI repository.
Input data: 1,000 DICOM lung cancer images.
Data partitioning: training 70% (613 images), validation 20% (315 images), testing 10% (72 images).
Model architecture: ResNet-50, ResNet-101, EfficientNet-B3.
Preprocessing techniques: data augmentation strategy.
Classification performance: the Fusion Model achieved 100% precision in classifying Squamous Cells; the Fusion Model and ResNet-50 achieved 90% precision overall, followed by EfficientNet-B3 and ResNet-101 with slightly lower precision.
Model training: 35 epochs, batch size 32.
Learning rate: Adam optimizer with a learning rate of 0.001.
Total parameters: 10,988,787.
Trainable parameters: 10,099,090.
Non-trainable parameters: 889,697.
Achievements: improved accuracy in predicting lung cancer subtypes, with potential for advancements in healthcare and a reduction in mortality rates associated with lung cancer.
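Read together, the rows of Table 9 describe a conventional transfer-learning recipe, and a hypothetical Keras sketch of it is given below. The optimizer, learning rate of 0.001, batch size of 32, 35 epochs, and four output classes come from the table; the directory names, image size, and augmentation settings are assumptions for illustration, not the authors' code.

```python
# Hypothetical Keras reconstruction of the Table 9 training setup; paths,
# image size, and augmentation parameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE, BATCH, EPOCHS, NUM_CLASSES = (300, 300), 32, 35, 4

train_ds = tf.keras.utils.image_dataset_from_directory(
    "lidc_png/train", image_size=IMG_SIZE, batch_size=BATCH)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "lidc_png/val", image_size=IMG_SIZE, batch_size=BATCH)

# Data augmentation, matching the paper's stated data-extension strategy
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])

# ImageNet-pretrained backbone with the classification head removed
base = tf.keras.applications.EfficientNetB3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze pretrained features for transfer learning

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=EPOCHS)
```

With such a setup, model.summary() would report total, trainable, and non-trainable parameter counts in the same form as the Table 9 entries, although the exact numbers depend on the chosen backbone and head.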

Notably, the automatic extraction of shape features is facilitated by the capabilities of EfficientNet-B3 and ResNet, while AlexNet is employed to process the highest-resolution inputs.
The research emphasizes the significance of evaluating the network input layer and the number of initial layers to enhance the efficiency and accuracy of the proposed system. Furthermore, the article highlights the successful completion of all training procedures, including lung separation and elimination processes. The system's performance is rigorously assessed, achieving 100% in sensitivity, precision, and accuracy, with low false rates. The study underscores the importance of further analysis, particularly in methods like segmentation, which may necessitate a comprehensive evaluation of the entire image dataset.
The proposed diagnostic approach holds promise in providing medical professionals with precise and timely diagnostic impressions. The robust performance metrics and successful completion of various procedures underscore the potential for the proposed system to contribute significantly to early lung cancer detection, paving the way for enhanced medical diagnoses in the future.
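The paper does not state how the Fusion Model combines the three networks. A common mechanism, sketched below purely as an assumption, is soft voting: the softmax outputs of the trained models are averaged, and per-class precision is then computed from the fused predictions. The model filenames and class names are hypothetical.

```python
# Hypothetical soft-voting fusion of the three trained models; the model
# files and class names are illustrative, not taken from the paper.
import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report

CLASSES = ["adenocarcinoma", "large_cell", "normal", "squamous_cell"]

fusion_members = [tf.keras.models.load_model(p) for p in
                  ("resnet50.keras", "resnet101.keras", "efficientnetb3.keras")]

test_ds = tf.keras.utils.image_dataset_from_directory(
    "lidc_png/test", image_size=(300, 300), batch_size=32, shuffle=False)

# Average the softmax outputs across models (soft voting)
probs = np.mean([m.predict(test_ds) for m in fusion_members], axis=0)
y_pred = probs.argmax(axis=1)
y_true = np.concatenate([y for _, y in test_ds])

# Per-class precision and recall, comparable to the per-class results above
print(classification_report(y_true, y_pred, target_names=CLASSES))
```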
Conclusion and future scope
In conclusion, this study examined the use of deep learning models, including ResNet-50, ResNet-101, and EfficientNet-B3, for precise lung cancer diagnosis and classification. Extensive analysis of experimental data and cross-validation with prior research demonstrated the efficacy of the proposed Fusion Model, particularly in accurately diagnosing Squamous Cell Carcinoma. The remarkable 92% increase in prediction accuracy of the combined model demonstrates how revolutionary it may be for the identification and management of lung cancer. These findings highlight the potential of deep learning algorithms to offer tailored treatment regimens and ultimately reduce the mortality rate from lung cancer. To enhance patient outcomes and advance medical imaging capabilities, forthcoming endeavours ought to concentrate on refining model architectures, broadening datasets, and encouraging multidisciplinary partnerships. In the future, deep learning models can be applied to a wide range of research projects using larger datasets. It was also noted that acquiring knowledge and achieving certain scores was connected to improving health and lowering lung cancer death rates by addressing the problem of imprecise accuracy.

Acknowledgements
Researchers Supporting Project number (RSP2024R167), King Saud University, Riyadh, Saudi Arabia.

Authors' contributions
Vinod Kumar: Conceptualization, Methodology, Writing—original draft. Chander Prabha: Investigation, Writing—review & editing, Supervision. Preeti Sharma: Validation, Writing—review & editing, Software, Supervision. Nitin Mittal: Methodology, Writing—review & editing. S.S. Askar: Writing—review & editing, Funding. Mohamed Abouhawwash: Conceptualization, Writing—review & editing.

Funding
This project is funded by King Saud University, Riyadh, Saudi Arabia.

Availability of data and materials
Data may be available upon reasonable request from the corresponding author.

Declarations

Ethics approval and consent to participate
The authors declare that they do not intentionally engage or participate in any form of malicious harm to another person or animal.
Consent for publication
Not applicable.

Competing interests
The authors declare no competing interests.

Received: 29 December 2023 Accepted: 7 March 2024

References
1. Mollahosseini A, Chan D, Mahoor MH. Going deeper in facial expression recognition using deep neural networks. In: 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE; 2016. p. 1–10.
2. Sardanelli F, Aase HS, Álvarez M, Azavedo E, Baarslag HJ, Balleyguier C, Baltzer PA, et al. Position paper on screening for breast cancer by the European Society of Breast Imaging (EUSOBI) and 30 national breast radiology bodies from Austria, Belgium, Bosnia and Herzegovina, Bulgaria, Croatia, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Israel, Lithuania, Moldova, The Netherlands, Norway, Poland, Portugal, Romania, Serbia, Slovakia, Spain, Sweden, Switzerland and Turkey. Eur Radiol. 2017;27:2737–43.
3. Karthik S, Srinivasa Perumal R, Chandra Mouli PVSSR. Breast cancer classification using deep neural networks. Knowledge Computing and Its Applications: Knowledge Manipulation and Processing Techniques. 2018;1:227–41.
4. Aziz R, Verma CK, Srivastava N. Artificial neural network classification of high dimensional data with novel optimization approach of dimension reduction. Ann Data Sci. 2018;5:615–35.
5. Ferlay J, Colombet M, Soerjomataram I, Mathers C, Parkin DM, Piñeros M, Znaor A, Bray F. Estimating the global cancer incidence and mortality in 2018: GLOBOCAN sources and methods. Int J Cancer. 2019;144(8):1941–53.
6. Kalafi EY, Nor NAM, Taib NA, Ganggayah MD, Town C, Dhillon SK. Machine learning and deep learning approaches in breast cancer survival prediction using clinical data. Folia Biol. 2019;65(5/6):212–20.
7. Safiri S, Kolahi AA, Naghavi M. Global, regional, and national burden of bladder cancer and its attributable risk factors in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease study 2019. BMJ Glob Health. 2021;6(11):e004128.
8. Ha R, Mutasa S, Karcich J, Gupta N, Sant EPV, Nemer J, Sun M, Chang P, Liu MZ, Jambawalikar S. Predicting breast cancer molecular subtype with MRI dataset utilizing convolutional neural network algorithm. J Digit Imaging. 2019;32:276–82.
9. Lilhore UK, Poongodi M, Kaur A, Simaiya S, Algarni AD, Elmannai H, … Hamdi M. Hybrid model for detection of cervical cancer using causal analysis and machine learning techniques. Comput Math Methods Med. 2022;2022:1–17. https://doi.org/10.1155/2022/4688327.
10. Gao Q, Wu Y, Li Q. Fast training model for image classification based on spatial information of deep belief network. J Syst Simul. 2020;27(3):549–58.
11. Kumar V, Bakariya B. Machine learning algorithms for detecting lung nodules: an empirical investigation. J Chengdu Univ Technol. 2021;26(8), Sr. No. 91. ISSN 1671-9727.
12. Sharma G, Prabha C. A systematic review for detecting cancer using machine learning techniques. In: Innovations in Computational and Computer Techniques: ICACCT-2021. AIP Publishing; 2022.
13. Kumar V, Bakariya B. An empirical identification of pulmonary nodules using deep learning. Design Engineering. 2021:13468–86. https://thedesignengineering.com/index.php/DE/article/view/4610.
14. Islami F, Guerra CE, Minihan A, Yabroff KR, Fedewa SA, Sloan K, Wiedt K, Thomson B, Siegel RL, Nargis N, Winn RA. American Cancer Society's report on the status of cancer disparities in the United States, 2021. CA Cancer J Clin. 2022;72(2):112–43.
15. Ayana G, Park J, Jeong J-W, Choe S-W. A novel multistage transfer learning for ultrasound breast cancer image classification. Diagnostics. 2022;12(1):135.
16. Zhou W-L, Yue Y-Y. Development and validation of models for predicting the overall survival and cancer-specific survival of patients with primary vaginal cancer: a population-based retrospective cohort study. Front Med. 2022;9:919150.
17. Abumalloh RA, Nilashi M, Yousoof Ismail M, Alhargan A, Alzahrani AO, Saraireh L, Osman R, Asadi S. Medical image processing and COVID-19: a literature review and bibliometric analysis. J Infect Public Health. 2022;15(1):75–93.
18. Kumar VS, Alemran A, Karras DA, Gupta SK, Dixit CK, Haralayya B. Natural language processing using graph neural network for text classification. In: 2022 International Conference on Knowledge Engineering and Communication Systems (ICKES). IEEE; 2022. p. 1–5.
19. Langenkamp M, Yue DN. How open-source machine learning software shapes AI. In: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society. 2022. p. 385–95.
20. Balasekaran G, Jayakumar S, Pérez de Prado R. An intelligent task scheduling mechanism for autonomous vehicles via deep learning. Energies. 2021;14(6):1788.
21. Maftouni M, Law ACC, Shen B, Grado ZJK, Zhou Y, Yazdi NA. A robust ensemble-deep learning model for COVID-19 diagnosis based on an integrated CT scan images database. In: IIE Annual Conference Proceedings. Institute of Industrial and Systems Engineers (IISE); 2021. p. 632–7.
22. Shankar K, Perumal E. A novel hand-crafted with deep learning features based fusion model for COVID-19 diagnosis and classification using chest X-ray images. Complex Intell Syst. 2021;7(3):1277–93.
23. Feldner-Busztin D, Firbas Nisantzis P, Edmunds SJ, Boza G, Racimo F, Gopalakrishnan S, Limborg MT, Lahti L, de Polavieja GG. Dealing with dimensionality: the application of machine learning to multi-omics data. Bioinformatics. 2023;39(2):btad021.
24. Seewaldt VL, Winn RA. Residential racial and economic segregation and cancer mortality in the US: speaking out on inequality and injustice. JAMA Oncol. 2023;9(1):126–7.
25. Alom MZ, Taha TM, Yakopcic C, Westberg S, Sidike P, Nasrin MS, Hasan M, Van Essen BC, Awwal AAS, Asari VK. A state-of-the-art survey on deep learning theory and architectures. Electronics. 2019;8(3):292.
26. Hofvind S, Holen ÅS, Aase HS, Houssami N, Sebuødegård S, Moger TA, Haldorsen IS, Akslen LA. Two-view digital breast tomosynthesis versus digital mammography in a population-based breast cancer screening program (To-Be): a randomized, controlled trial. Lancet Oncol. 2019;20(6):795–805.
27. Zheng S, Guo J, Veldhuis RNJ, Oudkerk M, van Ooijen PMA. Automatic pulmonary nodule detection in CT scans using convolutional neural networks based on maximum intensity projection. IEEE Trans Med Imaging. 2019;39(3):797–805.
28. Morgan E, Arnold M, Gini A, Lorenzoni V, Cabasag CJ, Laversanne M, Vignat J, Ferlay J, Murphy N, Bray F. Global burden of colorectal cancer in 2020 and 2040: incidence and mortality estimates from GLOBOCAN. Gut. 2023;72(2):338–44.
29. Toumazis I, Bastani M, Han SS, Plevritis SK. Risk-based lung cancer screening: a systematic review. Lung Cancer. 2020;147:154–86.
30. Viale PH. The American Cancer Society's Facts & Figures: 2020 edition. J Adv Pract Oncol. 2020;11(2):135.
31. Tharsanee RM, Soundariya RS, Saran Kumar A, Karthiga M, Sountharrajan S. Deep convolutional neural network–based image classification for COVID-19 diagnosis. In: Data Science for COVID-19. Academic Press; 2021. p. 117–45.
32. Ha NH, Woo BH, Kim DJ, Ha ES, Choi JI, Kim SJ, Park BS, Lee JH, Park HR. Prolonged and repetitive exposure to Porphyromonas gingivalis increases aggressiveness of oral cancer cells by promoting acquisition of cancer stem cell properties. Tumor Biol. 2015;36:9947–60.
33. Lee J-H, Jeong H, Choi J-W, Oh HE, Kim Y-S. Liquid biopsy prediction of axillary lymph node metastasis, cancer recurrence, and patient survival in breast cancer: a meta-analysis. Medicine. 2018;97(42).
34. Asgary R, Staderini N, Mthethwa-Hleta S, Lopez Saavedra PA, Garcia Abrego L, Rusch B, Marie Luce T, et al. Evaluating smartphone strategies for reliability, reproducibility, and quality of VIA for cervical cancer screening in the Shiselweni region of Eswatini: a cohort study. PLoS Med. 2020;17(11):e1003378.
35. Mittal S, Hasija Y. Applications of deep learning in healthcare and biomedicine. In: Deep Learning Techniques for Biomedical and Health Informatics. 2020. p. 57–77.
36. Ai S, Li C, Li X, Jiang T, Grzegorzek M, Sun C, Rahaman MM, Zhang J, Yao Y, Li H. A state-of-the-art review for gastric histopathology image analysis approaches and future development. Biomed Res Int. 2021;2021.
37. Choi J, Cho H-h, Kwon J, Lee HY, Park H. A cascaded neural network for staging in non-small cell lung cancer using pre-treatment CT. Diagnostics. 2021;11(6):1047.
38. Hirra I, Ahmad M, Hussain A, Ashraf MU, Saeed IA, Qadri SF, Alghamdi AM, Alfakeeh AS. Breast cancer classification from histopathological images using patch-based deep learning modeling. IEEE Access. 2021;9:24273–87.
39. Jimi A, Abouche H, Zrira N, Benmiloud I. Automated skin lesion segmentation using VGG-UNet. In: 2022 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEE; 2022. p. 370–7.
40. Yu H, Huang T, Feng B, Lyu J. Deep-learning model for predicting the survival of rectal adenocarcinoma patients based on the SEER database. 2021.
41. Kott O, Linsley D, Amin A, Karagounis A, Jeffers C, Golijanin D, Serre T, Gershman B. Development of a deep learning algorithm for the histopathologic diagnosis and Gleason grading of prostate cancer biopsies: a pilot study. Eur Urol Focus. 2021;7(2):347–51.
42. Murtaza G, Wahab AWA, Raza G, Shuib L. A tree-based multiclassification of breast tumor histopathology images through deep learning. Comput Med Imaging Graph. 2021;89:101870.
43. Saber A, Sakr M, Abo-Seida OM, Keshk A, Chen H. A novel deep-learning model for automatic detection and classification of breast cancer using the transfer-learning technique. IEEE Access. 2021;9:71194–209.
44. Sechopoulos I, Teuwen J, Mann R. Artificial intelligence for breast cancer detection in mammography and digital breast tomosynthesis: state of the art. Semin Cancer Biol. 2021;72:214–25.
45. Tariq M, Iqbal S, Ayesha H, Abbas I, Ahmad KT, Niazi MFK. Medical image-based breast cancer diagnosis: state of the art and future directions. Expert Syst Appl. 2021;167:114095.
46. Hasan MK, Elahi MTE, Alam MA, Jawad MT, Martí R. DermoExpert: skin lesion classification using a hybrid convolutional neural network through segmentation, transfer learning, and augmentation. Inform Med Unlocked. 2022;28:100819.
47. Taye ZE, Tessema AW, Simegn GL. Classification of breast cancer types, sub-types and grade from histopathological images using deep learning technique. Health Technol. 2021;11:1277–90.
48. Lee JR, Hou MP, Famouri M, Wong A. Cancer-Net SCa: tailored deep neural network designs for detection of skin cancer from dermoscopy images. BMC Med Imaging. 2022;22(1):1–12.
49. Chae J, Kim J. An investigation of transfer learning approaches to overcome limited labeled data in medical image analysis. Appl Sci. 2023;13(15):8671.
50. Xie F, Zhang K, Li F, Ma G, Ni Y, Zhang W, Wang J, Li Y. Diagnostic accuracy of convolutional neural network–based endoscopic image analysis in diagnosing gastric cancer and predicting its invasion depth: a systematic review and meta-analysis. Gastrointest Endosc. 2022;95(4):599–609.
51. Xue Z, Angara S, Guo P, Rajaraman S, Jeronimo J, Rodriguez AC, Alfaro K, et al. Image quality classification for automated visual evaluation of cervical precancer. In: Workshop on Medical Image Learning with Limited and Noisy Data. Cham: Springer Nature Switzerland; 2022. p. 206–17.
52. Abhisheka B, Biswas SK, Purkayastha B. A comprehensive review on breast cancer detection, classification and segmentation using deep learning. Arch Comput Methods Eng. 2023:1–30.
53. Fazelpour S, Vejdani-Jahromi M, Kaliaev A, Qiu E, Goodman D, Andreu-Arasa VC, Fujima N, Sakai O. Multiparametric machine learning algorithm for human papillomavirus status and survival prediction in oropharyngeal cancer patients. Head Neck. 2023.
54. Guo Q, Yu W, Song S, Wang W, Xie Y, Huang L, Wang J, Jia Y, Wang S. Pathological detection of micro and fuzzy gastric cancer cells based on deep learning. Comput Math Methods Med. 2023.
55. Srividhya E, Niveditha VR, Nalini C, Sinduja K, Geetha S, Kirubanantham P, Bharati S. Integrating lncRNA gene signature and risk score to predict recurrence cervical cancer using recurrent neural network. Measurement. 2023;27:100782.
56. Chen M, Zhang B, Topatana W, Cao J, Zhu H, Juengpanich S, Mao Q, Yu H, Cai X. Classification and mutation prediction based on histopathology H&E images in liver cancer using deep learning. NPJ Precis Oncol. 2020;4(1):14.
57. Upreti M, Pandey C, Bist AS, Rawat B, Hardini M. Convolutional neural networks in medical image understanding. Aptisi Trans Technopreneurship (ATT). 2021;3(2):120–6.
58. Kumar V, Bakariya B. Classification of lung cancer using Alex-ResNet based on thoracic CT images. Turk Online J Qual Inq. 2021;12(4). https://www.tojqi.net/index.php/journal/article/view/8258.
59. Navamani TM. Efficient deep learning approaches for health informatics. In: Deep Learning and Parallel Computing Environment for Bioengineering Systems. Academic Press; 2019. p. 123–37.
60. Shafiq M, Gu Z. Deep residual learning for image recognition: a survey. Appl Sci. 2022;12(18):8972.
61. Swathy M, Saruladha K. A comparative study of classification and prediction of cardio-vascular diseases (CVD) using machine learning and deep learning techniques. ICT Express. 2022;8(1):109–16.
62. Duta IC, Liu L, Zhu F, Shao L. Pyramidal convolution: rethinking convolutional neural networks for visual recognition. arXiv preprint arXiv:2006.11538. 2020.
63. LIDC-IDRI (Lung Image Database Consortium and Image Database Resource Initiative), The Cancer Imaging Archive (TCIA), Public Access - Cancer Imaging Archive Wiki. Accessed 12 Apr 2023. https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=1966254.
64. Sadiq A, Yahya N. Fractional stochastic gradient descent based learning algorithm for multi-layer perceptron neural networks. In: 2020 8th International Conference on Intelligent and Advanced Systems (ICIAS). IEEE; 2021. p. 1–4.
65. Zahoor S, Shoaib U, Lali IU. Breast cancer mammograms classification using deep neural network and entropy-controlled whale optimization algorithm. Diagnostics. 2022;12(2):557.
66. Agarwal S, Prabha C. Analysis of lung cancer prediction at an early stage: a systematic review. In: Lecture Notes on Data Engineering and Communications Technologies. Singapore: Springer Nature Singapore; 2022. p. 701–11.
67. Kaur G, Prabha C, Chhabra D, Kaur N, Veeramanickam MRM, Gill SK. A systematic approach to machine learning for cancer classification. In: 2022 5th International Conference on Contemporary Computing and Informatics (IC3I). IEEE; 2022.
68. Sharma D, Prabha C. Security and privacy aspects of electronic health records: a review. In: 2023 International Conference on Advancement in Computation & Computer Technologies (InCACCT). IEEE; 2023.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
