Lung Cancer (CT) 2024
Available online at
www.heca-analitika.com/ijcr
* Correspondence: [email protected]
Received 25 February 2024; Revised 18 April 2024; Accepted 26 April 2024; Available Online 4 May 2024

Keywords: Deep learning; ResNet50; Grad-CAM; Computer-aided detection; XAI

Abstract: This study tackles the pressing challenge of lung cancer detection, the foremost cause of cancer-related mortality worldwide, hindered by late detection and diagnostic limitations. Aiming to improve early detection rates and diagnostic reliability, we propose an approach integrating Deep Convolutional Neural Networks (DCNN) with Explainable Artificial Intelligence (XAI) techniques, specifically focusing on the Residual Network (ResNet) architecture and Gradient-weighted Class Activation Mapping (Grad-CAM). Utilizing a dataset of 1,000 CT scans, categorized into normal, non-cancerous, and three types of lung cancer images, we adapted the ResNet50 model through transfer learning and fine-tuning for enhanced specificity in lung cancer subtype detection. Our methodology demonstrated the modified ResNet50 model's effectiveness, significantly outperforming the original architecture in accuracy (91.11%), precision (91.66%), sensitivity (91.11%), specificity (96.63%), and F1-score (91.10%). The inclusion of Grad-CAM provided insightful visual explanations for the model's predictions, fostering transparency and trust in computer-assisted diagnostics. The study highlights the potential of combining DCNN with XAI to advance lung cancer detection, suggesting future research should expand dataset diversity and explore multimodal data integration for broader applicability and improved diagnostic capabilities.

Copyright: © 2024 by the authors. This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/).
also significantly contribute to the development of lung cancer [7–9].

Lung cancer is broadly categorized into two main types based on the appearance of lung cancer cells under a microscope: Non-Small Cell Lung Cancer (NSCLC) and Small Cell Lung Cancer (SCLC) [10]. NSCLC is the most common type of lung cancer, accounting for about 80% to 85% of all cases [11]. NSCLC is further divided into three main subtypes based on the type of cells found in the tumor: Adenocarcinoma, Squamous Cell Carcinoma, and Large Cell Carcinoma [12]. Adenocarcinoma originates in the cells that secrete substances such as mucus and is the most common form of lung cancer among non-smokers [13]. Squamous Cell Carcinoma arises from the flat cells lining the inside of the lungs and is closely associated with a smoking history [14]. Large Cell Carcinoma, known for its large and abnormal-looking cells, can appear in any part of the lung and tends to grow and spread rapidly, making it more difficult to treat [15].

Conventionally, lung cancer detection has relied on methods such as chest X-rays and computed tomography (CT) scans, followed by biopsy for confirmation [16, 17]. While these methods are essential in identifying abnormalities in the lung, they have notable limitations. For instance, chest X-rays can miss small tumors or fail to distinguish between tumors and other abnormalities, leading to false negatives or positives [18, 19]. CT scans provide more detailed images but also present challenges in accurately differentiating between benign and malignant nodules, requiring invasive procedures like biopsies for definitive diagnoses [20].

In recent years, the incorporation of artificial intelligence (AI) into diagnostic assistance has represented a major advancement [21–23], mainly through the use of a method known as deep learning [24–26]. Deep learning is a method where computers are trained to recognize patterns by analyzing large amounts of data. Specifically, one type of deep learning, called Deep Convolutional Neural Networks (DCNN), has proven very effective [27, 28]. These networks can analyze complex images, such as CT scans, to find signs of lung cancer [29]. Unlike traditional methods, which might require more manual effort and be less accurate, DCNN can more precisely identify different types of lung cancer and distinguish between harmful and harmless nodules with little human oversight. This technology promises to make the detection of lung cancer earlier, less invasive, and more reliable, potentially leading to better patient outcomes.

However, a key limitation remains: the need for explainable AI [30]. Deep learning systems, including DCNN, operate in a way that is often described as a 'black box,' meaning their decision-making processes are not easily understood by humans [31]. This lack of transparency can be a barrier, particularly in sensitive areas such as healthcare [32, 33]. Clinicians and patients must be able to comprehend how AI models, like those used in lung cancer detection, arrive at their conclusions. This understanding is important not just for acceptance but also for integrating AI tools into clinical decision-making processes. The necessity for this understanding brings XAI into focus.

XAI seeks to bridge the gap between AI decision-making and human interpretability by developing methods and models that clarify, in human-understandable terms, how AI systems make decisions. This includes techniques that can provide visual explanations, decompose the decision-making process into understandable components, and offer insights into the data features most important for predictions. For example, in lung cancer detection, an explainable AI system could highlight which aspects of a lung image were most indicative of malignancy, thereby providing clinicians with understandable evidence to support the AI's diagnosis.

In this study, we aim to enhance traditional lung cancer detection methods by integrating the Residual Network (ResNet) model with Gradient-weighted Class Activation Mapping (Grad-CAM) to improve the precision and efficiency of screenings. Our objectives are to advance detection accuracy, enable the interpretability of AI decisions through visual heat maps, and build trust among healthcare professionals. This approach is designed to make AI's decision-making processes transparent, supporting better clinical decision-making and facilitating wider clinical adoption. The innovative combination of ResNet's robust image processing with Grad-CAM's explanatory power addresses the technical and ethical challenges in early lung cancer detection. By improving both the technology's capability and accountability, our methodology sets a new standard for the responsible and effective use of AI in medical diagnostics.

2. Materials and Methods

2.1. Dataset

The dataset used in this study was obtained from Kaggle, a well-known platform recognized for its diverse collection of datasets spanning various domains, such as healthcare and medical imaging [34]. This specific dataset consists of 1,000 CT scan images, classified into four distinct categories: normal, non-cancerous scans and three primary types of chest cancer: squamous
Indonesian Journal of Case Reports, Vol. 2, No. 1, 2024
Figure 1. CT Scan images in the dataset: (A) normal; (B) squamous cell carcinoma; (C) adenocarcinoma; (D) large cell carcinoma.
Table 1. Distribution of CT scan images across each subset.

Subset       Normal   Squamous Cell Carcinoma   Adenocarcinoma   Large Cell Carcinoma   Total
Training     148      155                       195              115                    613
Validation   13       15                        23               21                     72
Testing      54       90                        120              51                     315
Total        215      260                       338              187                    1000
cell carcinoma, adenocarcinoma, and large cell carcinoma. An overview of the CT scan images in this dataset is illustrated in Figure 1.

This dataset is divided into three subsets: the training set, validation set, and testing set, containing 613, 72, and 315 CT scan images, respectively. The training set, consisting of the majority of the data, is utilized to train the DCNN model, allowing it to learn patterns and features present in the images. The validation set, with its smaller size, serves as a means to fine-tune the model's hyperparameters and assess its performance during training, aiding in preventing overfitting, which occurs when a model learns the training data too well, including its noise and outliers, leading to poor performance on new, unseen data. Finally, the testing set, comprising 315 images, provides an independent evaluation of the model's performance after training, helping to gauge its ability to generalize to unseen data and make accurate predictions in real-world scenarios. The distribution of CT scan images across each subset is presented in Table 1.

2.2. ResNet50

Residual Network (ResNet) is a deep learning model designed to solve the vanishing gradient problem encountered in training very deep neural networks [35]. This issue impedes learning by significantly diminishing the gradients. ResNet introduces innovative "skip connections" that permit inputs to skip certain layers, thereby maintaining the flow of gradients and enabling the effective training of much deeper networks. ResNet50, a variant of ResNet, embodies this principle, offering enhanced capabilities for image recognition. Its ability to maintain training efficiency and effectiveness in deep network configurations makes it particularly well-suited for medical imaging, where precision and reliability are critical.
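The skip connection behind ResNet can be illustrated with a minimal, framework-free sketch. This is purely didactic: the function names and toy weights below are ours, not part of ResNet50 itself, and real residual blocks use convolutions rather than plain linear maps.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, w):
    # w is a list of rows; returns the matrix-vector product w @ v
    return [sum(wi * xi for wi, xi in zip(row, v)) for row in w]

def residual_block(x, w1, w2):
    """y = ReLU(x + F(x)): the input 'skips' past the two-layer map F,
    so gradients can always flow through the identity path."""
    f = linear(relu(linear(x, w1)), w2)   # F(x): two stacked linear+ReLU layers
    return relu([xi + fi for xi, fi in zip(x, f)])

# With all-zero weights, F(x) == 0 and the block reduces to the identity,
# which is why very deep stacks of such blocks remain trainable.
zero = [[0.0, 0.0], [0.0, 0.0]]
print(residual_block([1.0, 2.0], zero, zero))  # → [1.0, 2.0]
```

Because the identity path is always present, the block can at worst learn "do nothing," which is the intuition behind ResNet's resistance to vanishing gradients.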
Table 2. Parameters used to train the model.

Parameter                  Value
Batch Size                 32
Optimizer                  Adam
Learning Rate              1e-5
Decay Rate                 1e-6
Loss Function              Categorical Cross Entropy
Epochs                     100
Early Stopping Patience    20

To adapt ResNet50 for lung cancer detection, we employed a technique known as transfer learning [36]. This involved starting with a ResNet50 model pre-trained on a comprehensive dataset of images called ImageNet, which was then fine-tuned using our specific collection of CT scans. The advantage of this method is that the model has already learned to identify a wide array of general image features, which can be leveraged to detect anomalies within lung images. We further tailored the ResNet50 model to our needs by incorporating custom layers atop the pre-trained model. These additional layers were designed to home in on the characteristics most pertinent for differentiating various types of lung cancer and healthy lung tissue. Through this fine-tuning, the model is able to utilize its inherent image recognition prowess for the specialized task of detecting lung cancer subtypes in CT scans.

In our methodology, we specifically modified the ResNet50 architecture to suit our lung cancer detection objectives by setting the layers of the ResNet50 model as non-trainable if they are not part of the 'conv5' block. This approach ensures that only the most advanced features are fine-tuned for our task, preserving the pre-learned weights in the initial layers for general image recognition while focusing the training effort on the deeper layers more relevant to identifying lung cancer. Additionally, we augmented the model with a custom top layer sequence: a Dropout layer set at 0.6 to prevent overfitting, followed by a Flatten layer to transform the output into a 1D array. This is then passed through a Batch Normalization layer to normalize the activations, another Dropout layer at 0.6 for further overfitting mitigation, and finally, a Dense layer with a softmax activation function designed to output predictions across four classes. This tailored architecture aims to enhance the model's ability to discern subtle features indicative of different lung cancer subtypes from CT scans.

The parameters used to train the model are shown in Table 2. For the training process, a batch size of 32 was chosen to balance computational efficiency and model accuracy, allowing the model to learn from a sufficiently large subset of data while managing memory resources effectively. The Adam optimizer was selected for its adaptive learning rate capabilities. Unlike traditional fixed learning rate optimizers, Adam adjusts the learning rate for each parameter dynamically based on estimates of the first and second moments of the gradients. This feature enhances the convergence speed towards the optimal solution, leading to faster training times and improved performance on complex datasets [37]. A learning rate of 1e-5 was set to ensure gradual adjustments to the model's weights, preventing overshooting of the global minimum, with a decay rate of 1e-6 applied to decrease the learning rate over time, further stabilizing the training process. The loss function used was Categorical Cross Entropy, suitable for multi-class classification problems, measuring the performance of the model's output relative to the true labels. The model was trained for 100 epochs to allow ample opportunity for learning, with an early stopping mechanism set with a patience of 20 epochs to prevent overfitting by halting training if the validation loss does not improve, ensuring the model's generalizability and efficiency.

2.3. Data Preprocessing

We employed a meticulous approach to prepare the CT scan images for analysis with our deep learning model. Initially, each image underwent resizing to a uniform dimension of 450x450 pixels. This standardization is important for ensuring that the input data is consistent in size, which aids the model in efficiently learning and recognizing patterns across all images [38].

Following the resizing, we converted each image's color scheme from RGB (Red, Green, Blue) to BGR (Blue, Green, Red). This conversion aligns with the color channel ordering convention used in many deep learning frameworks and models pre-trained on the ImageNet dataset, facilitating compatibility and leveraging pre-existing model architectures and weights more effectively [39].

Finally, we conducted a zero-centering process on each color channel of the images, relative to the mean values of the ImageNet dataset. Zero-centering involves adjusting the values of each pixel in such a way that the mean of the pixel intensities across each color channel approximates zero. It's important to note that this adjustment was made without scaling the pixel values. This step is integral to normalizing the input data, reducing the variance among the images, and helping the model to focus on the essential features for classification.
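The preprocessing and architecture modifications described in Sections 2.2 and 2.3 can be sketched in tf.keras as follows. This is an illustrative reconstruction, not the authors' code: the name-based test for the 'conv5' block assumes stock Keras ResNet50 layer naming, the `preprocess` helper relies on `resnet50.preprocess_input` (whose 'caffe' mode performs exactly the RGB-to-BGR conversion and unscaled ImageNet-mean zero-centering described above), and `train_ds`/`val_ds` are hypothetical dataset objects.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 4  # normal + three lung cancer subtypes

def preprocess(images):
    """Resize to 450x450, then convert RGB->BGR and zero-centre each channel
    with the ImageNet channel means, without scaling (Section 2.3)."""
    images = tf.image.resize(images, (450, 450))
    return tf.keras.applications.resnet50.preprocess_input(images)

def build_model(weights=None):
    """Modified ResNet50: freeze everything outside the 'conv5' block and
    attach the custom head described in Section 2.2."""
    base = tf.keras.applications.ResNet50(
        include_top=False, weights=weights, input_shape=(450, 450, 3))
    for layer in base.layers:
        # Only the deepest ('conv5') block remains trainable.
        layer.trainable = layer.name.startswith("conv5")
    x = layers.Dropout(0.6)(base.output)
    x = layers.Flatten()(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.6)(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return tf.keras.Model(base.input, outputs)

# The paper starts from weights="imagenet" (transfer learning); None is used
# here only so the sketch runs without downloading pretrained weights.
model = build_model(weights=None)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),  # the paper also
    # applies a decay of 1e-6; recent Keras versions express this via schedules
    loss="categorical_crossentropy",
    metrics=["accuracy"])
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=20)
# model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=[early_stop])
```

Selecting layers by the "conv5" name prefix mirrors the paper's rule of fine-tuning only the final convolutional stage while keeping the general-purpose early filters intact.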
Figure 2. Visualization of the modified ResNet50V2 training and validation: (a) accuracy; (b) loss.
2.4. Evaluation Metrics

In assessing the performance of our lung cancer detection model, we employed a range of evaluation metrics including accuracy, precision, sensitivity, specificity, and F1-score [40]. Accuracy represents the overall correctness of predictions, while precision measures the proportion of true positive predictions among all positive predictions made by the model. Sensitivity, also known as recall, quantifies the model's ability to correctly identify positive instances among all actual positive instances. On the other hand, specificity evaluates the model's ability to correctly identify negative instances among all actual negative instances. The F1-score combines precision and recall into a single metric, providing a balanced measure of a model's performance.

We adopted a weighted average approach for each metric due to the presence of four distinct classes within our dataset. This approach is crucial for providing a more accurate representation of the model's performance across these classes, especially given their varying prevalence. The weighted average method calculates the metrics of accuracy, precision, sensitivity, specificity, and F1-score for each class by considering the proportion of each class's total sample. This ensures that the model's performance in detecting each type of lung cancer and normal tissue is weighted according to the class's representation in the dataset. Consequently, this method offers a nuanced evaluation, reflecting the model's effectiveness in accurately identifying each class, compensating for any imbalance among the classes.

2.5. Explainable Artificial Intelligence (XAI)

To improve the transparency and interpretability of our model's decision-making, we implemented XAI techniques using Grad-CAM [41, 42]. Grad-CAM is a powerful tool within XAI, as it visually indicates the regions within input images that significantly impact the model's classification decisions. This method illuminates the specific features and patterns the model leverages for making its predictions. Through the visualization of these heatmaps, we are provided with a deeper understanding of the underlying rationale for the model's decisions [43]. This insight is crucial for validating the model's predictions and enhancing trust in its capabilities by offering a clear view into how and why certain decisions are made.

3. Results and Discussion

The results obtained from the training of the modified ResNet model are illustrated in Figure 2. As shown in Figure 2a, the training accuracy (blue line) starts at a lower value and exhibits a steady increase as the number of epochs progresses. It reaches a plateau around epoch 50, indicating that the model becomes more consistent in correctly identifying the different categories of lung cancer as it learns from the training data. The validation accuracy (red line) also increases and follows a similar trend as the training accuracy, which suggests that the model generalizes well to new, unseen data. However, there is a noticeable gap between the training and validation accuracy, which could imply a slight overfitting of the model to the training data. Nonetheless, the validation accuracy does not decrease, and its plateau indicates the model's robustness.

Turning to Figure 2b, the training loss (blue line) and validation loss (red line) both decrease over time, with a sharp drop in the initial epochs followed by a more gradual decline. This decline reflects the model's improving performance in correctly classifying the CT scan images as it learns. The training loss continues to decrease slightly throughout the epochs, but the validation loss reaches its lowest point at epoch 75. This
Table 3. Performance comparison of ResNet50 and modified ResNet50 models in predicting lung cancer.

[Grad-CAM prediction panels: Actual: Normal, Predicted: Normal, Confidence Score: 100.00%; Actual: Squamous cell carcinoma, Predicted: Squamous cell carcinoma, Confidence Score: 91.67%; Actual: Adenocarcinoma, Predicted: Adenocarcinoma, Confidence Score: 91.37%; Actual: Large cell carcinoma, Predicted: Large cell carcinoma, Confidence Score: 98.02%]
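Weighted-average metrics of the kind compared in Table 3 can be reproduced with scikit-learn, which provides weighted precision, recall, and F1 directly; specificity needs a small helper. This is a sketch under the assumption of scikit-learn tooling (the paper does not name its evaluation library), and the toy labels below are illustrative, not the study's predictions.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

def weighted_specificity(y_true, y_pred, labels):
    """Per-class specificity TN / (TN + FP), weighted by class support."""
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    total = cm.sum()
    specs, weights = [], []
    for i, _ in enumerate(labels):
        tp = cm[i, i]
        fp = cm[:, i].sum() - tp          # predicted as class i but not class i
        fn = cm[i, :].sum() - tp          # class i missed
        tn = total - tp - fp - fn
        specs.append(tn / (tn + fp))
        weights.append(cm[i, :].sum() / total)  # share of samples in class i
    return float(np.dot(specs, weights))

# Toy 4-class example (0=normal, 1-3 = cancer subtypes), illustrative only.
y_true = [0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 0, 1, 2, 2, 2, 3, 1]
labels = [0, 1, 2, 3]
print(accuracy_score(y_true, y_pred))
print(precision_score(y_true, y_pred, average="weighted", zero_division=0))
print(recall_score(y_true, y_pred, average="weighted"))
print(f1_score(y_true, y_pred, average="weighted"))
print(weighted_specificity(y_true, y_pred, labels))
```

The `average="weighted"` option implements exactly the support-weighted averaging described in Section 2.4, compensating for the unequal class sizes in Table 1.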
Figure 4. Confusion matrix of the testing set prediction from modified ResNet50V2 model.
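Heatmaps like those discussed below can be produced with a short Grad-CAM routine following Selvaraju et al. [41]. The sketch below is generic, not the authors' implementation; the default `last_conv_layer` name assumes the stock Keras ResNet50 (whose final convolutional output is "conv5_block3_out"), and any Keras classifier with a named convolutional layer can be passed in.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer="conv5_block3_out"):
    """Return a heatmap in [0, 1] of the regions that drove the top prediction."""
    grad_model = tf.keras.Model(
        model.input,
        [model.get_layer(last_conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        class_idx = int(tf.argmax(preds[0]))         # explain the predicted class
        class_score = preds[0, class_idx]
    grads = tape.gradient(class_score, conv_out)     # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))  # global-average-pool the grads
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)  # weighted sum of maps
    cam = tf.nn.relu(cam)                            # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```

The returned map has the spatial resolution of the chosen convolutional layer and is typically upsampled to the input size and overlaid on the CT slice for visualization.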
minimizing the chances of false negatives and false positives.

The prediction results visualized with Grad-CAM heatmaps are shown in Figure 4. The provided image offers a clear insight into the decision-making process of the modified ResNet50 model. Each heatmap provides a visual explanation for why the model made a particular prediction, highlighting the regions in the CT images that most influenced the model's decision. The heatmaps show that the model is focusing on appropriate areas of the CT scans to distinguish between normal tissue and various types of cancer. For example, in the adenocarcinoma prediction, the heatmap is concentrated around the abnormal growth, aligning with the expected patterns of this cancer type.

The accompanying confidence scores reflect the model's certainty in its predictions, with high confidence in its correct predictions for normal tissue (100%) and large cell carcinoma (98.02%). The confidence scores for squamous cell carcinoma and adenocarcinoma are slightly lower at 91.67% and 91.37%, respectively, which might correlate with the more challenging nature of distinguishing these conditions or the variability in their appearance on CT scans.

The outcomes of this research possess substantial implications for the field of medical imaging and oncology. The high accuracy, precision, sensitivity, specificity, and F1-scores achieved by the modified ResNet50 model suggest that deep learning can significantly enhance lung cancer detection, thereby potentially improving patient outcomes. Early and accurate detection is paramount in cancer treatment, and AI-assisted diagnostics could lead to earlier interventions, more targeted therapies, and, consequently, better survival rates. Furthermore, the explainability aspect introduced by Grad-CAM provides clinicians with valuable insights into the AI decision-making process, fostering a greater level of trust in AI tools.

For the integration of AI models like the modified ResNet50 into clinical settings, several practical considerations must be addressed. Clinicians require tools that not only deliver high performance but also fit seamlessly into the existing workflow, ensuring that they complement, rather than complicate, the diagnostic process. The visual explanations provided by Grad-CAM can serve as a communication medium, helping clinicians to understand and verify the AI's recommendations, which is crucial for acceptance and ethical accountability in medical practice.

Despite the promising results, this study has limitations. The dataset, while diverse, is limited in size and sourced from a single platform, which could affect the model's generalizability to broader populations [44]. Moreover, the complexity of AI models can lead to challenges in clinical interpretation, and the reliance on visual explanations does not fully elucidate the intricate patterns learned by the deep learning model. There is also the risk of the model encountering novel presentations of lung cancer not represented in the training dataset, which could lead to misclassifications.

Future research should prioritize expanding the dataset to encompass a more diverse demographic, including multi-institutional data that captures a broader spectrum of lung cancer presentations. This expansion is crucial for enhancing the model's robustness and extending its applicability across different populations. Additionally, there is potential in exploring the integration of multimodal data, such as combining CT images with patient medical histories and genetic information, to enhance the model's diagnostic capabilities. However, integrating multimodal data presents challenges, such as data heterogeneity, alignment issues, and fusion complexities, which may complicate the training process
and impact the model's performance. Future research efforts could concentrate on developing advanced algorithms for effective data integration and exploring techniques like deep learning architectures tailored to handle multimodal information. These approaches hold promise in overcoming current limitations and further enhancing the model's diagnostic accuracy.

4. Conclusions

This study has successfully demonstrated the potential of a modified ResNet50 model with explainable AI techniques for the detection of lung cancer in CT images. By achieving high accuracy, precision, sensitivity, specificity, and F1-scores, the modified ResNet50 model shows promise in significantly enhancing the early detection of lung cancer, which is crucial for improving patient prognosis and survival rates. The implementation of Grad-CAM provides valuable visual explanations of the model's decision-making process, addressing the critical need for transparency in AI applications in healthcare.

Author Contributions: Conceptualization, T.R.N., A.M., and R.I.; methodology, T.R.N., A.M., and A.R.; software, T.R.N. and A.M.; validation, T.Z., A.R., S.S.E., and R.I.; formal analysis, T.R.N.; investigation, T.R.N. and A.M.; resources, T.Z. and R.I.; data curation, T.Z., S.S.E., and R.I.; writing - original draft preparation, T.R.N., A.M., and A.R.; writing - review and editing, T.Z., S.S.E., and R.I.; visualization, T.R.N.; supervision, T.Z. and R.I.; project administration, R.I.; funding acquisition, Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding: This study did not receive external funding.

Ethical Clearance: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: The dataset used in this study was obtained from Kaggle (https://www.kaggle.com/datasets/mohamedhanyyy/chest-ctscan-images, accessed 27 November 2023) and was made available by Mohamed Hany. We acknowledge the contributions of the data providers and the Kaggle platform for making this dataset publicly accessible.

Conflicts of Interest: All the authors declare no conflicts of interest.

References

1. Barta, J. A., Powell, C. A., and Wisnivesky, J. P. (2019). Global Epidemiology of Lung Cancer, Annals of Global Health, Vol. 85, No. 1. doi:10.5334/aogh.2419.
2. Schabath, M. B., and Cote, M. L. (2019). Cancer Progress and Priorities: Lung Cancer, Cancer Epidemiology, Biomarkers & Prevention, Vol. 28, No. 10, 1563–1579. doi:10.1158/1055-9965.EPI-19-0221.
3. Leiter, A., Veluswamy, R. R., and Wisnivesky, J. P. (2023). The Global Burden of Lung Cancer: Current Status and Future Trends, Nature Reviews Clinical Oncology, Vol. 20, No. 9, 624–639. doi:10.1038/s41571-023-00798-3.
4. Lundin, A., and Driscoll, B. (2013). Lung Cancer Stem Cells: Progress and Prospects, Cancer Letters, Vol. 338, No. 1, 89–93. doi:10.1016/j.canlet.2012.08.014.
5. Heuvers, M. E., Hegmans, J. P., Stricker, B. H., and Aerts, J. G. (2012). Improving Lung Cancer Survival; Time to Move On, BMC Pulmonary Medicine, Vol. 12, No. 1, 77. doi:10.1186/1471-2466-12-77.
6. Chaitanya Thandra, K., Barsouk, A., Saginala, K., Sukumar Aluru, J., and Barsouk, A. (2021). Epidemiology of Lung Cancer, Współczesna Onkologia, Vol. 25, No. 1, 45–52. doi:10.5114/wo.2021.103829.
7. Cani, M., Turco, F., Butticè, S., Vogl, U. M., Buttigliero, C., Novello, S., and Capelletto, E. (2023). How Does Environmental and Occupational Exposure Contribute to Carcinogenesis in Genitourinary and Lung Cancers?, Cancers, Vol. 15, No. 10, 2836. doi:10.3390/cancers15102836.
8. Xue, Y., Wang, L., Zhang, Y., Zhao, Y., and Liu, Y. (2022). Air Pollution: A Culprit of Lung Cancer, Journal of Hazardous Materials, Vol. 434, 128937. doi:10.1016/j.jhazmat.2022.128937.
9. S Cheng, E., Weber, M., Steinberg, J., and Qin Yu, X. (2021). Lung Cancer Risk in Never-Smokers: An Overview of Environmental and Genetic Factors, Chinese Journal of Cancer Research, Vol. 33, No. 5, 548–562. doi:10.21147/j.issn.1000-9604.2021.05.02.
10. Araujo, L. H., Horn, L., Merritt, R. E., Shilo, K., Xu-Welliver, M., and Carbone, D. P. (2020). Cancer of the Lung, Abeloff's Clinical Oncology, Elsevier, 1108-1158.e16. doi:10.1016/B978-0-323-47674-4.00069-4.
11. Padinharayil, H., Varghese, J., John, M. C., Rajanikant, G. K., Wilson, C. M., Al-Yozbaki, M., Renu, K., Dewanjee, S., Sanyal, R., Dey, A., Mukherjee, A. G., Wanjari, U. R., Gopalakrishnan, A. V., and George, A. (2023). Non-Small Cell Lung Carcinoma (NSCLC): Implications on Molecular Pathology and Advances in Early Diagnostics and Therapeutics, Genes & Diseases, Vol. 10, No. 3, 960–989. doi:10.1016/j.gendis.2022.07.023.
12. Qu, Y., Cheng, B., Shao, N., Jia, Y., Song, Q., Tan, B., and Wang, J. (2020). Prognostic Value of Immune-Related Genes in the Tumor Microenvironment of Lung Adenocarcinoma and Lung Squamous Cell Carcinoma, Aging, Vol. 12, No. 6, 4757–4777. doi:10.18632/aging.102871.
13. Corrales, L., Rosell, R., Cardona, A. F., Martín, C., Zatarain-Barrón, Z. L., and Arrieta, O. (2020). Lung Cancer in Never Smokers: The Role of Different Risk Factors Other Than Tobacco Smoking, Critical Reviews in Oncology/Hematology, Vol. 148, 102895. doi:10.1016/j.critrevonc.2020.102895.
14. Wang, B.-Y., Huang, J.-Y., Chen, H.-C., Lin, C.-H., Lin, S.-H., Hung, W.-H., and Cheng, Y.-F. (2020). The Comparison between Adenocarcinoma and Squamous Cell Carcinoma in Lung Cancer Patients, Journal of Cancer Research and Clinical Oncology, Vol. 146, No. 1, 43–52. doi:10.1007/s00432-019-03079-8.
15. Travis, W. D. (2020). Lung Cancer Pathology, Clinics in Chest Medicine, Vol. 41, No. 1, 67–85. doi:10.1016/j.ccm.2019.11.001.
16. Demirci, N. Y. (2023). Diagnostic Workup for Lung Cancer, C. Cingi; A. Yorgancıoğlu; N. Bayar Muluk; A. A. Cruz (Eds.), Springer International Publishing, Cham, 1–16. doi:10.1007/978-3-031-22483-6_62-1.
17. Hyldgaard, C., Trolle, C., Harders, S. M. W., Engberg, H., Rasmussen, T. R., and Møller, H. (2022). Increased Use of Diagnostic CT Imaging Increases the Detection of Stage IA Lung Cancer: Pathways and Patient Characteristics, BMC Cancer, Vol. 22, No. 1, 464. doi:10.1186/s12885-022-09585-2.
18. Ciello, A. del, Franchi, P., Contegiacomo, A., Cicchetti, G., Bonomo, L., and Larici, A. R. (2017). Missed Lung Cancer: When, Where, and Why?, Diagnostic and Interventional Radiology, Vol. 23, No. 2, 118–126. doi:10.5152/dir.2016.16187.
19. Bradley, S. H., Abraham, S., Callister, M. E., Grice, A., Hamilton, W. T., Lopez, R. R., Shinkins, B., and Neal, R. D. (2019). Sensitivity of Chest X-Ray for Detecting Lung Cancer in People Presenting with Symptoms: A Systematic Review, British Journal of General Practice, Vol. 69, No. 689, e827–e835. doi:10.3399/bjgp19X706853.
20. Loverdos, K., Fotiadis, A., Kontogianni, C., Iliopoulou, M., and Gaga, M. (2019). Lung Nodules: A Comprehensive Review on Current Approach and Management, Annals of Thoracic Medicine, Vol. 14, No. 4, 226. doi:10.4103/atm.ATM_110_19.
21. Noviandy, T. R., Nainggolan, S. I., Raihan, R., Firmansyah, I., and Idroes, R. (2023). Maternal Health Risk Detection Using Light Gradient Boosting Machine Approach, Infolitika Journal of Data Science, Vol. 1, No. 2, 48–55. doi:10.60084/ijds.v1i2.123.
22. Maulana, A., Faisal, F. R., Noviandy, T. R., Rizkia, T., Idroes, G. M., Tallei, T. E., El-Shazly, M., and Idroes, R. (2023). Machine Learning Approach for Diabetes Detection Using Fine-Tuned XGBoost Algorithm, Infolitika Journal of Data Science, Vol. 1, No. 1, 1–7. doi:10.60084/ijds.v1i1.72.
23. Suhendra, R., Suryadi, S., Husdayanti, N., Maulana, A., and Rizky, T. (2023). Evaluation of Gradient Boosted Classifier in Atopic Dermatitis Severity Score Classification, Heca Journal of Applied Sciences, Vol. 1, No. 2, 54–61. doi:10.60084/hjas.v1i2.85.
24. Tran, K. A., Kondrashova, O., Bradley, A., Williams, E. D., Pearson, J. V., and Waddell, N. (2021). Deep Learning in Cancer Diagnosis, Prognosis and Treatment Selection, Genome Medicine, Vol. 13, No. 1, 152. doi:10.1186/s13073-021-00968-x.
25. Bakator, M., and Radosav, D. (2018). Deep Learning and Medical Diagnosis: A Review of Literature, Multimodal Technologies and Interaction, Vol. 2, No. 3, 47. doi:10.3390/mti2030047.
26. Liu, X., Wang, H., Li, Z., and Qin, L. (2021). Deep Learning in ECG Diagnosis: A Review, Knowledge-Based Systems, Vol. 227, 107187. doi:10.1016/j.knosys.2021.107187.
27. Maulana, A., Noviandy, T. R., Suhendra, R., Earlia, N., Bulqiah, M., Idroes, G. M., Niode, N. J., Sofyan, H., Subianto, M., and Idroes, R. (2023). Evaluation of Atopic Dermatitis Severity Using Artificial Intelligence, Narra J, Vol. 3, No. 3, e511. doi:10.52225/narra.v3i3.511.
28. Talukder, M. A., Islam, M. M., Uddin, M. A., Akhter, A., Pramanik, M. A. J., Aryal, S., Almoyad, M. A. A., Hasan, K. F., and Moni, M. A. (2023). An Efficient Deep Learning Model to Categorize Brain Tumor Using Reconstruction and Fine-Tuning. doi:10.48550/arXiv.2305.12844.
29. Cellina, M., Cacioppa, L. M., Cè, M., Chiarpenello, V., Costa, M., Vincenzo, Z., Pais, D., Bausano, M. V., Rossini, N., Bruno, A., and Floridi, C. (2023). Artificial Intelligence in Lung Cancer Screening: The Future Is Now, Cancers, Vol. 15, No. 17, 4344. doi:10.3390/cancers15174344.
30. Amann, J., Blasimme, A., Vayena, E., Frey, D., and Madai, V. I. (2020). Explainability for Artificial Intelligence in Healthcare: A Multidisciplinary Perspective, BMC Medical Informatics and Decision Making, Vol. 20, No. 1, 310. doi:10.1186/s12911-020-01332-6.
31. Noviandy, T. R., Maulana, A., Idroes, G. M., Suhendra, R., Adam, M., Rusyana, A., and Sofyan, H. (2023). Deep Learning-Based Bitcoin Price Forecasting Using Neural Prophet, Ekonomikalia Journal of Economics, Vol. 1, No. 1, 19–25. doi:10.60084/eje.v1i1.51.
32. Holzinger, A., Biemann, C., Pattichis, C. S., and Kell, D. B. (2017). What Do We Need to Build Explainable AI Systems for the Medical Domain?, ArXiv Preprint ArXiv:1712.09923.
33. Ali, S., Akhlaq, F., Imran, A. S., Kastrati, Z., Daudpota, S. M., and Moosa, M. (2023). The Enlightening Role of Explainable Artificial Intelligence in Medical & Healthcare Domains: A Systematic Literature Review, Computers in Biology and Medicine, Vol. 166, 107555. doi:10.1016/j.compbiomed.2023.107555.
34. Hany, M. (2020). Chest CT-Scan Images Dataset, from https://www.kaggle.com/datasets/mohamedhanyyy/chest-ctscan-images/data, accessed 27-11-2023.
35. He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep Residual Learning for Image Recognition, Computer Vision and Pattern Recognition.
36. Idroes, G. M., Maulana, A., Suhendra, R., Lala, A., Karma, T., Kusumo, F., Hewindati, Y. T., and Noviandy, T. R. (2023). TeutongNet: A Fine-Tuned Deep Learning Model for Improved Forest Fire Detection, Leuser Journal of Environmental Studies, Vol. 1, No. 1, 1–8. doi:10.60084/ljes.v1i1.42.
37. Kingma, D. P., and Ba, J. (2014). Adam: A Method for Stochastic Optimization, ArXiv Preprint ArXiv:1412.6980.
38. Vasuki, P., Kanimozhi, J., and Devi, M. B. (2017). A Survey on Image Preprocessing Techniques for Diverse Fields of Medical Imagery, 2017 IEEE International Conference on Electrical, Instrumentation and Communication Engineering (ICEICE), IEEE, 1–6. doi:10.1109/ICEICE.2017.8192443.
39. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). ImageNet: A Large-Scale Hierarchical Image Database, 2009 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 248–255.
40. Idroes, G. M., Noviandy, T. R., Maulana, A., Zahriah, Z., Suhendrayatna, S., Suhartono, E., Khairan, K., Kusumo, F., Helwani, Z., and Abd Rahman, S. (2023). Urban Air Quality Classification Using Machine Learning Approach to Enhance Environmental Monitoring, Leuser Journal of Environmental Studies, Vol. 1, No. 2, 62–68. doi:10.60084/ljes.v1i2.99.
41. Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017). Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2017 IEEE International Conference on Computer Vision (ICCV), IEEE, 618–626. doi:10.1109/ICCV.2017.74.
42. Noviandy, T. R., Maulana, A., Khowarizmi, F., and Muchtar, K. (2023). Effect of CLAHE-based Enhancement on Bean Leaf Disease Classification through Explainable AI, 2023 IEEE 12th Global Conference on Consumer Electronics (GCCE), IEEE, 515–516. doi:10.1109/GCCE59613.2023.10315394.
43. Samek, W., Wiegand, T., and Müller, K.-R. (2017). Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models, ArXiv Preprint ArXiv:1708.08296.
44. Willemink, M. J., Koszek, W. A., Hardell, C., Wu, J., Fleischmann, D., Harvey, H., Folio, L. R., Summers, R. M., Rubin, D. L., and Lungren, M. P. (2020). Preparing Medical Imaging Data for Machine Learning, Radiology, Vol. 295, No. 1, 4–15. doi:10.1148/radiol.2020192224.