
Article

A Novel Hybrid Quantum Architecture-based Lung Cancer Detection using Chest Radiograph and Computerized Tomography Images

Jason Elroy Martis 1, Sannidhan M S 2, Balasubramani R 1, A. M. Mutawa 3,*, and M. Murugappan 4,5,6,*

1 Department of ISE, NMAM Institute of Technology, Mangalore, India
2 Department of CSE, NMAM Institute of Technology, Mangalore, India
3 Computer Engineering Department, College of Engineering and Petroleum, Kuwait University, Kuwait
4 Intelligent Signal Processing (ISP) Research Lab, Department of Electronics and Communication Engineering, Kuwait College of Science and Technology, Block 4, Doha, 13133, Kuwait
5 Department of Electronics and Communication Engineering, School of Engineering, Vels Institute of Sciences, Technology, and Advanced Studies, Chennai, Tamil Nadu, India
6 Center of Excellence for Unmanned Aerial Systems (CoEUAS), Universiti Malaysia Perlis, 02600, Perlis, Malaysia
* Correspondence: [email protected], [email protected]

Abstract: Lung cancer is the second most common type of cancer and poses significant health challenges worldwide. Early detection and diagnosis of lung tumors can benefit patients, healthcare systems, and society in general, and chest radiographs (CXR) and computerized tomography (CT) images are valuable tools for detecting the disease. Tumors can be detected more accurately and quickly using automated methods, including deep learning (DL), and in recent years quantum layers built from parameterized quantum circuits have been shown to enhance the performance of DL models. This work proposes a hybrid framework that combines transfer learning for feature extraction with quantum circuits for classification. A set of pre-trained models, namely Visual Geometry Group 16 and 19 (VGG16 and VGG19), Inception-v3, Xception, the fifty-layer Residual Neural Network (ResNet-50), and RepVGG, is used for feature extraction, and Singular Value Decomposition (SVD) is used to reduce the number of features. The optimized features are then classified using quantum circuits. The hybrid quantum system improved the overall accuracy to 92.12%, compared with 89.21% for a traditional quantum system, and also excels in other critical performance measures, achieving a sensitivity of 94%, a specificity of 90%, an F1-score of 93%, and a precision of 92%. These results demonstrate the effectiveness of the hybrid approach in identifying lung cancer signatures more accurately than traditional methods. The integration of quantum computing enhances processing speed and scalability, making the system a promising tool for early lung cancer screening and diagnosis. This study underscores the potential of hybrid computational technologies to revolutionize early cancer detection, paving the way for broader clinical applications and ultimately enhancing patient care outcomes.

Copyright: © 2024 by the authors. Submitted for possible open access publication under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Bioengineering. 2024, 14, x. https://doi.org/10.3390/xxxxx www.mdpi.com/journal/bioengineering

Keywords: lung tumor classification; deep learning models; quantum layers; transfer learning models; hybrid quantum layer

1. Introduction

The lung is a vital organ for human health, and lung tumors, whether benign or malignant, pose a significant threat by affecting its function and structure. Various causes and symptoms of lung tumors have been identified and reported in the literature. Research on lung tumors is crucial to understanding their mechanisms, diagnosis, treatment, and prevention. Early detection and diagnosis are essential, as they benefit patients, healthcare systems, and society by minimizing the healthcare costs and complications associated with advanced lung cancer and palliative care. Early intervention can enhance patients' quality of life, reduce morbidity and mortality [1], and improve patients' chances of survival and cure before the tumor spreads to other organs or becomes resistant to treatment.

Computerized tomography (CT) scans are valuable tools for detecting lung cancer, especially in high-risk populations such as smokers [2]. CT scans provide detailed cross-sectional images of the lungs, allowing better visualization and assessment of abnormalities than chest X-rays (CXR). However, lung tumors can sometimes be visible on CXR but not clearly detectable on CT scans [2]. This discrepancy may be due to several factors: 1) smaller tumors may be more visible on CXR than on CT scans; 2) the tumor's location in the lung may affect its visibility, with CXR potentially showing more clearly tumors that are obscured by, or overlap with, normal lung tissue on CT scans; and 3) each imaging technique has its own strengths and weaknesses. Therefore, using CXR and CT scans together is complementary and influential in clinical diagnosis, particularly in lung cancer detection [1,2].
Manually identifying tumors is challenging, error-prone, and inconsistent [3]: the outcome varies with the expertise of the radiologist and the quality of the imaging technique. Automated methods, especially DL models, can identify tumors from various images more quickly, objectively, and precisely [3-5]. DL is an advanced tool in artificial intelligence (AI) that uses neural networks to learn from input data and perform tasks such as detection, classification, and prediction. In medical imaging, DL techniques have been used to classify lung tumors from CT and CXR images [6]. Lung tumor classification is a challenging task that requires accurate and reliable diagnosis of different types and subtypes of lung cancer, such as non-small cell lung cancer and small cell lung cancer, as well as a distinction between benign nodules and other lung diseases. DL techniques can improve lung tumor classification by mining significant features from the input images (CT/CXR), building robust and efficient DL models, improving performance and interpretability, and providing clinicians with decision support and guidance. By providing complementary information and perspectives on lung anatomy and pathology, CT and CXR images can enhance the accuracy of lung tumor classification: CT images can reveal small lumps not visible on CXRs and provide detailed cross-sectional views of the lungs, while CXRs offer a broader overview of the lungs' overall geometry, and their resolution and projection can sometimes make abnormalities distinctly visible. By combining these imaging modalities, DL techniques can leverage the strengths of both to enhance the reliability of lung tumor classification [6,7].
In general, DL networks require substantial computing power and extended computation times to process data, with performance closely tied to the size of the data and the choice of network hyperparameters. Misconfigured hyperparameters can significantly diminish a model's accuracy, reliability, robustness, and efficiency. Recent advances in quantum computing offer solutions to these challenges, enhancing the speed, accuracy, and scalability of DL models. By efficiently allocating computation resources, these methods not only accelerate processing but also bolster the robustness and diagnostic accuracy of DL systems. Quantum computing leverages principles such as superposition, entanglement, and interference to refine classification accuracy. The integration of quantum layers, built from parameterized quantum circuits (PQCs) that can be trained via classical or quantum optimization algorithms, introduces a novel component into traditional networks. Several types of quantum layers have been proposed and demonstrated for DL classification, such as quantum convolutional neural networks (QCNN), quantum deep neural networks (QDNN), and deep quantum neural networks (DQNN) [8]. These layers have been shown to outperform classical counterparts on a variety of datasets and tasks, including digit recognition on the Modified National Institute of Standards and Technology (MNIST) database, breast cancer diagnosis, and phase transition detection [9]. With ongoing advancements, quantum layers are poised to play a crucial role in the evolution of quantum machine learning and artificial intelligence [10].
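To make the parameterized-quantum-circuit idea above concrete, the toy example below simulates a two-qubit circuit (a trainable RY rotation on each qubit followed by an entangling CNOT) on a classical state vector and reads out a Pauli-Z expectation value, which is the kind of quantity a quantum layer feeds back into a classical optimizer. This is a minimal sketch for intuition only, not the circuit used in this work; the gate choice and readout are illustrative assumptions.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate (a common trainable PQC gate)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

I2 = np.eye(2)

# CNOT with qubit 0 as control and qubit 1 as target, basis order |q0 q1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def pqc_expectation(theta0, theta1):
    """Run RY(theta0) (x) RY(theta1) followed by CNOT on |00> and return
    the expectation value of Pauli-Z on qubit 1 (the circuit's 'output')."""
    state = np.zeros(4)
    state[0] = 1.0                               # start in |00>
    state = np.kron(ry(theta0), I2) @ state      # rotate qubit 0
    state = np.kron(I2, ry(theta1)) @ state      # rotate qubit 1
    state = CNOT @ state                         # entangle the qubits
    z_on_q1 = np.kron(I2, np.diag([1.0, -1.0]))  # observable I (x) Z
    return float(state @ z_on_q1 @ state)
```

With both angles at zero the circuit leaves |00> untouched and the readout is +1; rotating qubit 0 by pi flips both qubits through the CNOT and the readout becomes -1, showing how the trainable angles steer the classification output.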
In this study, we aim to overcome the shortcomings of existing methods for differentiating benign from malignant lung tumors. CT scans and CXR radiographs are currently used to diagnose lung tumors, but neither alone provides a comprehensive understanding of the complexity and diversity of these tumors. Additionally, existing methods use DL models that require extensive feature engineering and parameter tuning. Our framework leverages pre-trained transfer learning (TL) models that are fine-tuned for lung tumor classification using both CXR and CT images as inputs. In addition, we incorporate a hybrid quantum layer that enhances classification performance by combining CT and CXR features. We evaluate our framework on two standard open-source datasets that are extensively used in research: ChestX-ray8 and the Lung Image Database Consortium image collection (LIDC-IDRI). The proposed RepVGG model with the hybrid quantum layer achieves a classification accuracy of over 92%, more than 3% higher than other standard methods.
This research work makes the following contributions to the design of the proposed system:
• A new framework is proposed for lung tumor classification that leverages pre-trained TL models, fine-tuned for lung tumor classification, and uses both CXR and CT images as inputs.
• Hybrid quantum layers that combine CT and CXR features and enhance the TL model to improve classification are introduced.
• The proposed system is evaluated on two standard datasets and achieves state-of-the-art performance for lung tumor classification.
• The framework performs better than methods that rely on either CXR or CT images alone, or on conventional machine learning.

This article is organized as follows: Section 1 introduces the research topic, reviews existing methods for lung cancer detection and classification, and states the research questions. Section 2 presents a literature review related to the aims and objectives of the proposed system. Section 3 describes the methodology of the proposed system, including pre-processing steps, model architecture, the training process, evaluation metrics, and experimental setups. Section 4 presents and analyzes the experimental results, along with comparisons with other state-of-the-art systems and a discussion of the capabilities of the proposed system. Lastly, Section 5 summarizes the major points, presents the novelty and significance of the research, and makes recommendations for future research.

2. State-of-the-art research

Many studies have applied TL to classify lung nodules or cancers from CT images [11-20]. TL is a technique that transfers knowledge acquired in a source domain to a target domain, which can mitigate the problem of limited data in medical image analysis. Different studies have used different TL-based CNN architectures and classifiers, such as VGG16, ResNet50-V2, DenseNet201, SVM, and RF [15-20]. Experimental results have demonstrated that TL can enhance the accuracy and performance of lung cancer detection compared to conventional methods [16-18]. Wang et al. [16] reported an accuracy improvement of up to 83% for classifying lung cancer, highlighting the effectiveness of TL. Nishio et al. [17] achieved a sensitivity of 82% and a specificity of 79%, demonstrating the impact of image size on TL performance. Da Nóbrega et al. [18] likewise showed that TL could raise the classification accuracy of lung nodules to 85%. Some studies have also investigated the impact of data augmentation, image size, and ensemble learning on TL [15,17-20]. The literature shows that TL is a relevant and effective strategy for lung cancer detection. While most studies focus on applying TL to CT images, CXRs are equally important: they are more widely used and accessible, but their lower quality poses challenges for TL, and CT images have their own drawbacks [6,7]. Exploring TL for CXR images may require different techniques and would broaden its impact.
Several studies have used DL techniques for lung disease classification using both CXR and CT images, which can improve the detection of lung abnormalities such as pneumonia, cancer, and CoVID-19. The works in [21], [22], and [23] utilized different pre-trained CNN models to classify both types of images (CXR and CT scans), achieving high accuracy and reporting better results than other related works in their literature. In addition, researchers have used a tuned VGG-19 model to detect CoVID-19 using features extracted from both types of images, achieving an accuracy of 81%, a sensitivity of 83%, and a specificity of 82% [24]. The research in [25] reviewed recent DL techniques for CoVID-19 diagnosis using medical images and found that the CNN was the most popular DL algorithm. The review also suggested that combining CT and CXR images can provide faster and more accurate results, and that pre-processing, transfer learning, and data augmentation techniques can help overcome data scarcity problems. The review by Shyni et al. [25] further supports combining CT and CXR images for faster and more accurate results. Their study reported a notable increase in diagnostic accuracy: the combined approach achieved an accuracy of approximately 84%, a significant improvement over models trained solely on CXR or CT images, which generally achieved accuracies of around 74% and 70%, respectively. Moreover, the sensitivity and specificity of the combined models reached as high as 83% and 85%, respectively, compared to 75% sensitivity and 77% specificity for models using only CXR images, and 69% sensitivity and 70% specificity for those using only CT images.
Quantum computing has been shown to enhance the performance of DL network systems in various applications. The QCNN is a novel DL technique that combines quantum and classical computing to process image data. The works in [26] and [27] demonstrated the advantages of QCNNs over classic CNNs in terms of accuracy and speed on different image classification tasks: [26] reported a 7% improvement in accuracy, and [27] a 10% improvement over traditional CNNs. Both articles also explored the correlation between the chaotic nature of the image and QCNN performance, and found that quantum entanglement plays a key role in improving classification scores. Recently, researchers proposed a variational quantum deep neural network (VQDNN) model that uses parameterized quantum circuits to achieve an accuracy improvement of approximately 8% over classical neural networks on two image recognition datasets with limited qubits [28]. In addition, the authors of [29] and [30] explored hybrid TL techniques that combine a classical pre-trained network with a variational quantum circuit as the final layer (classifier) on small datasets. They evaluated different classical feature extractors with a quantum-circuit classifier on three image datasets: trash (recycling material), tuberculosis (TB) from CXR images, and cracks in concrete images. They showed that the hybrid models outperform the classical models, with an improvement in accuracy of over 12% on all datasets, even under qubit constraints. In [31], the researchers introduced a new kind of transformational layer for image recognition, called a quantum convolution or "quanvolution" layer. Quanvolution layers use random quantum circuits to locally transform the input data, similar to classical convolution layers. Comparing classical CNNs, quantum convolutional neural networks (QCNNs), and CNNs with extra non-linearities on the MNIST dataset, they showed that QCNNs train faster and achieve 9% higher accuracy than traditional CNNs, suggesting the potential of quanvolution layers for near-term quantum computing.
Our review of the existing literature shows that DL techniques can help with the challenging and important task of classifying lung diseases using medical images. Many studies have used TL with different CNN architectures and classifiers to achieve better results than conventional methods for classifying lung nodules or cancers from CT/CXR images. Many studies have also shown that QCNNs can outperform classic CNNs in accuracy on different image classification tasks by increasing computation speed and scalability while reducing the required computation power. Quantum computing can thus boost the performance of DL network systems in various applications, and some studies have used variational quantum circuits to enhance the performance of QCNNs. Based on these findings, we propose a new system that combines TL and QCNNs for classifying lung diseases using both CXR and CT images, using quantum computing to improve the performance of TL models for medical image analysis. Table 1 provides a summary of the literature reviewed.
Table 1. Literature review summary.

Reference | Approach | Key Findings | Identified Gaps
[11-20] | TL | TL enhances accuracy and performance for lung cancer detection; different CNN architectures and classifiers used, such as VGG16, ResNet50-V2, DenseNet201, SVM, and RF. | Limited data availability in medical image analysis; need for techniques suited to CXR.
[16] | TL | Accuracy improvement for lung cancer classification; reported accuracy up to 83%. | Impact of image size on TL performance not fully explored.
[17] | TL | Demonstrated the impact of image size on TL performance; sensitivity 82%, specificity 79%. | Need to optimize TL models for different image sizes.
[18] | TL | Enhanced classification accuracy of lung nodules; accuracy up to 85%. | Requires further validation on larger datasets.
[21-23] | DL | High accuracy for lung disease classification from CXR and CT images; achieved higher accuracy than other related works. | Integration of pre-processing and augmentation techniques needs further exploration.
[24] | DL | CoVID-19 detection from CXR and CT images; accuracy 81%, sensitivity 83%, specificity 82%. | Limited by data scarcity; needs larger, more diverse datasets.
[25] | DL | Combined CT and CXR approach for CoVID-19 diagnosis; accuracy 84%, sensitivity 83%, specificity 85%. | Challenges in combining different image modalities for consistent performance.
[27] | QCNN | Correlation between image chaos and QCNN performance; reported 10% accuracy improvement. | Understanding the role of quantum entanglement in performance improvement.
[28] | VQDNN | Better accuracy on datasets with limited qubits; reported 8% accuracy improvement. | Qubit limitations and practical implementation challenges.
[29-30] | Hybrid TL | Improved accuracy with small datasets; over 12% accuracy improvement. | Need for more extensive testing across different types of datasets.
[31] | Quanvolution layer | Faster training and higher accuracy on MNIST; reported 9% accuracy improvement. | Integration with classical CNNs and practical deployment issues.
3. Methodology

This section describes how the proposed system is designed and how it works. The system integrates TL and QCNNs to enhance lung disease classification using chest X-ray (CXR) and computed tomography (CT) images. The process begins with acquiring and pre-processing extensive medical image datasets to ensure high quality and uniformity. Pre-trained CNN models, such as VGG16, ResNet50-V2, and DenseNet201, are fine-tuned for specific lung disease classification tasks. QCNNs are developed and integrated with these TL models to create a hybrid system that leverages the advantages of both classical and quantum computing. The hybrid models are trained, optimized, and evaluated to maximize performance metrics such as accuracy, sensitivity, and specificity. Finally, the optimized model is prepared for deployment in clinical settings, ensuring scalability and seamless integration with existing medical systems. This approach aims to overcome data limitations and improve the accuracy and efficiency of lung disease detection. Figure 1 illustrates the overall working steps of the proposed system and the connections between its modules.

Figure 1. Proposed system's architecture.

As depicted in Figure 1, the proposed system has three main modules that work together: 1) image acquisition, 2) tuning of the TL model, and 3) quantum learning and classification. The following subsections describe each module in detail.

3.1. Input Image Description

Images are collected from both CXR and CT scans during the image acquisition process. Since CT scans and CXR are two different types of images, the classification task is challenging. We therefore train the network separately for CXR and CT scans, which improves the accuracy and efficiency of feature extraction. Images are converted to grayscale with intensities ranging between 1 and 255. The image retrieval process is expressed mathematically in equations (1) and (2).

I_x(x, y) ← dataset(CXR)    (1)

I_ct(x, y) ← dataset(CT)    (2)

Here, I_x(x, y) is an image taken from the CXR dataset, represented by its pixels, and I_ct(x, y) stands for an image from the CT dataset. The values (x, y) generically represent the width and height coordinates of a single image. All images must be resized, since neural networks require inputs of a fixed size. Resizing has its trade-offs: shrinking an image reduces its quality, whereas enlarging it increases training time and complexity. To balance computational cost and accuracy, based on experimental investigation, we use 1024 × 1024 pixels as the resized image size [32]. The relevant evidence is presented in the experimental trials reported in Table .
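The acquisition steps above (grayscale conversion followed by resizing to 1024 × 1024) can be sketched as follows. This is an illustrative NumPy-only implementation under assumed choices: standard luminosity weights for the grayscale conversion and nearest-neighbour interpolation for the resize, neither of which is specified in the text.

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to a 2-D grayscale image using the
    standard luminosity weights (an assumed choice for illustration)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a 2-D grayscale image to (out_h, out_w),
    an assumed interpolation method for this sketch."""
    in_h, in_w = img.shape
    rows = np.arange(out_h) * in_h // out_h   # source row for each output row
    cols = np.arange(out_w) * in_w // out_w   # source column for each output col
    return img[rows[:, None], cols]

# Example: a dummy 300 x 400 RGB "scan" prepared at the paper's 1024 x 1024 size.
dummy_scan = np.random.rand(300, 400, 3)
prepared = resize_nearest(to_grayscale(dummy_scan), 1024, 1024)
```

In practice a library resampler (e.g. with anti-aliasing) would be preferable; the point here is only the order of operations: grayscale first, then resize to the fixed network input size.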

3.2. Tuning of the Transfer Learning Model

The purpose of this process is to categorize CXR and CT images into benign, normal, and malignant groups. Malignant tumors can spread beyond their original site and pose a threat to other organs. Benign tumors are harmless growths that do not invade nearby tissues. An organ classified as normal functions well and has no tumors. As explained in more detail in the following sections, we use a hybrid quantum model in this paper to classify the images.

3.2.1. Feature extraction

Feature extraction is a crucial step in DL: it captures notable structures that allow the system to assign inputs to their corresponding classes. TL is a quick training approach that hastens feature extraction and avoids the overfitting that can arise when training a system from scratch. TL reuses pre-trained models that were built for other classification jobs; the knowledge they have gained can be adapted to our needs with a minimum of training time. Figure 2 shows the architecture describing the internal structure of the TL model adopted for training.

Figure 2. Leveraging transfer learning for feature extraction from CT and CXR images.

As shown in Figure 2, we first use pre-trained TL models, namely VGG16, VGG19, Inception-v3, Xception, ResNet50, and RepVGG, to extract features [33-35]. These models were chosen for their variety of convolutional filter configurations and because they were developed for different classification problems. Furthermore, we replaced the top classification layer with our own classification rule. Table 2 presents an overview of the pre-trained CNN models used for feature extraction in our study. Each model is listed with its size, number of hyperparameters, the specific layer used for feature extraction, the initial feature dimension, and the dimension after fusion.
Table 2. Summary of pre-trained models used for feature extraction in our research.

Model name | Size (MB) | Hyperparameters (Million) | Feature Extraction Layer | Feature Dimension | Dimension After Fusion
VGG16 | 528 | 138.35 | block5_conv3 | 512 | 1024
VGG19 | 549 | 143.66 | block5_conv4 | 512 | 1024
InceptionV3 | 92 | 23.85 | mixed10 | 2048 | 4096
Xception | 88 | 22.91 | block14_sepconv2_act | 2048 | 4096
ResNet50 | 99 | 25.636 | conv5_block3_out | 2048 | 4096
RepVGG | 558 | 11.68 | repvgg_block5 | 2048 | 4096
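As a concrete illustration, the per-model extraction summarized in Table 2 can be sketched with Keras (the framework used in Section 4.3). This is a minimal sketch rather than our exact training code: it instantiates VGG16 without its top classifier and truncates it at block5_conv3, the layer named in Table 2. Using weights=None keeps the sketch self-contained and offline; in practice the model would be initialized with pre-trained weights, and RepVGG, which is not shipped with keras.applications, would come from a third-party implementation.

```python
import numpy as np
import tensorflow as tf

# Build VGG16 without its top classification layer. weights=None is an
# illustrative simplification; real runs would load pre-trained weights.
base = tf.keras.applications.VGG16(weights=None, include_top=False,
                                   input_shape=(224, 224, 3))

# Truncate the network at the feature-extraction layer listed in Table 2.
extractor = tf.keras.Model(inputs=base.input,
                           outputs=base.get_layer("block5_conv3").output)

# Pool the spatial maps into one 512-dimensional feature vector per image.
pooled = tf.keras.Sequential([extractor,
                              tf.keras.layers.GlobalAveragePooling2D()])

dummy_batch = np.random.rand(2, 224, 224, 3).astype("float32")  # stand-in images
features = pooled.predict(dummy_batch, verbose=0)
print(features.shape)  # matches the 512-entry "Feature Dimension" column
```

The same pattern applies to the other backbones by swapping the model constructor and the layer name from Table 2.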
These pre-trained classifiers are fine-tuned on the CXR and CT datasets separately
to obtain optimal models for extracting features from CXR and CT scans. Equations
(3)-(5) describe how features are extracted and fine-tuned for our classification purpose.
z^(l) = W^(l) · a^(l-1) + b^(l) (3)

a^(l) = σ(z^(l)), where σ(z) = max(0, z) for ReLU (4)

z = W^(f) · a^(L) + b^(f) (5)
Here a^(l-1) is the input to layer l (for the first layer, a^(0) is the input image). W^(l) and
b^(l) are the weights and biases of layer l, respectively. σ denotes the activation
function, which is either ReLU or sigmoid. a^(l) is the output of layer l after applying the
activation function. L is the last pre-trained layer, W^(f) and b^(f) are the weights and
biases of the final fully connected layer, and z is the logits vector representing the raw
model predictions. The last layer of each model is discarded in order to extract relevant
features rather than classify inputs into the models' original classes. Finally, the CXR
and CT features are stored separately because they form distinct feature sets. The
following sections elaborate on a quantum hybrid model for image classification that
incorporates these features. Figure 3 illustrates how features are accessed from selected
layers of the proposed TL framework.
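Equations (3)-(5) amount to a standard affine-plus-activation forward pass. The NumPy sketch below makes this concrete; the layer sizes (an 8-dimensional stand-in "image", one 16-unit hidden layer, 3-class logits) are hypothetical and chosen only for illustration, not the actual VGG dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(a_prev, W, b, activation="relu"):
    """One layer: z = W·a + b (Eq. 3), then a = sigma(z) (Eq. 4)."""
    z = W @ a_prev + b
    if activation == "relu":
        return np.maximum(0.0, z)          # ReLU branch of Eq. (4)
    return 1.0 / (1.0 + np.exp(-z))        # sigmoid branch of Eq. (4)

# Hypothetical sizes: 8-dim input, 16-unit layer, 3-class logits (Eq. 5).
x = rng.random(8)                          # a^(0), the "image"
W1, b1 = rng.standard_normal((16, 8)), np.zeros(16)
Wf, bf = rng.standard_normal((3, 16)), np.zeros(3)

a1 = layer(x, W1, b1)                      # a^(l): pre-trained layer output
logits = Wf @ a1 + bf                      # z: raw predictions, Eq. (5)
print(logits.shape)                        # one logit per class
```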
Figure 3. Visual analysis of different layers of TL framework.

As shown in Figure 3, different layers extract different types of information from an
image. The top three images show features extracted from X-rays, while the bottom two
show how ReLU activation helps extract features from CT scans. The visualization in
Figure 3 shows how various neural network layers process X-ray and CT scan images,
highlighting the distinct feature extraction behavior for each type of imaging data.

For X-rays, the sequence begins with the top convolutional layer of VGG16, which
identifies low-level features such as edges and textures, essential for delineating
anatomical structures. This is followed by the ReLU layer of VGG19, which enhances
these features by removing negative values, thus improving the visibility of critical
details such as lesions or masses. The normalization layer of ResNet50 then adjusts the
feature maps to a consistent scale, aiding uniform feature interpretation across different
X-ray images.

In CT scans, the max pooling layer of InceptionV3 reduces spatial resolution but
retains the significant features within each region, focusing the analysis on relevant
aspects such as tumors. The activation map from RepVGG synthesizes higher-level
features, revealing complex tissue textures and enhancing the model's ability to detect
abnormalities.

3.2.2. Merging of Features

In this study, we utilize both Computed Tomography (CT) and Chest X-Ray (CXR)
imaging modalities for each scan to maximize the diagnostic potential of the imaging
data. Features are independently extracted from both the CT and CXR images to
harness the unique diagnostic information each modality provides. The detailed
procedure is as follows:

• Feature Extraction Process:

In this process, a set of features is extracted from the CT images using a dedicated TL
model optimized for CT data. These features typically capture detailed anatomical
structures and potential abnormalities specific to CT imaging. Equation (6) depicts the
mathematical formulation of this process:

F_ct = {f_1^ct, f_2^ct, ..., f_n^ct} (6)

Similarly, a different set of features is extracted from the corresponding CXR images
using another TL model that is specifically tuned to exploit the diagnostic strengths of
CXR, such as overall lung geometry and certain types of lesions that are more visible in
CXR. The extraction process is expressed in equation (7):

F_x = {f_1^x, f_2^x, ..., f_n^x} (7)
• Feature Merging Strategy:

The features extracted from both CT and CXR images are then merged to form a
combined feature vector. This merging process concatenates the feature vectors from
each modality, as expressed mathematically in equation (8):

F_total = F_x ⊕ F_ct = {f_1^x, ..., f_n^x, f_1^ct, ..., f_n^ct} (8)
The features extracted from CXR and CT images are thus merged into one feature
vector. The output of every TL model is a fixed set of features [35]. Through this step,
the features obtained from both domains are combined to produce a feature vector twice
the size of the original, as expressed in equations (6)-(8).
In equations (6)-(8), f_1^x represents a single feature obtained from a CXR image.
Similarly, f_1^ct represents a single feature obtained from a CT image. F_x and F_ct
represent the feature vectors of the CXR and CT scans, respectively. F_total is a simple
concatenation of the features of both F_x and F_ct.
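The merging step of equation (8) is a plain concatenation; with the 512-dimensional VGG16 features of Table 2, it doubles the vector to 1024 entries. A sketch with random stand-in feature vectors (the values are placeholders, only the dimensions matter):

```python
import numpy as np

rng = np.random.default_rng(1)
F_x = rng.random(512)    # CXR feature vector (e.g., VGG16, Table 2)
F_ct = rng.random(512)   # CT feature vector from the CT-tuned model

# Eq. (8): F_total is the concatenation of both modality-specific vectors.
F_total = np.concatenate([F_x, F_ct])
print(F_total.shape)     # the "Dimension After Fusion" column of Table 2
```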

3.2.3. Dimensionality reduction

This step reduces the dimensionality of the data by applying a layer that transforms
many input features into fewer output features. As part of our process, we use a
singular value decomposition (SVD) layer to compress the merged input features
extracted from the TL models into five quantum features. The main reason for selecting
SVD is its ability to optimally represent and denoise high-dimensional medical imaging
data [36]. The number of retained features is fixed at five because it suits our needs.
Equations (9)-(12) give the transformation for SVD:

U, Σ, V = SVD(original dimensions) (9)

U = M[entries × 5] (10)

Σ = M[5 × 5] (11)

V^T = M[5 × dimensions] (12)

Here U represents the complex unitary matrix whose column count equals the reduced
number of dimensions. Σ represents a nonnegative diagonal matrix of size 5 × 5. V
stands for a complex unitary matrix of five rows with as many columns as original
dimensions. Note that for SVD we do not use V directly but its transpose V^T.
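Equations (9)-(12) correspond to a truncated SVD that keeps the top five components. A NumPy sketch on a stand-in feature matrix (1024-dimensional merged features for a hypothetical batch of 20 samples; both numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((20, 1024))               # 20 merged feature vectors, Eq. (8)

# Eq. (9): SVD. NumPy returns V already transposed (Vt), matching Eq. (12).
U, S, Vt = np.linalg.svd(X, full_matrices=False)

k = 5                                    # fixed number of quantum features
X_reduced = U[:, :k] * S[:k]             # project onto the top-5 components
# Equivalently: X_reduced = X @ Vt[:k].T

print(X_reduced.shape)                   # five features per sample
```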

3.2.4. Quantum Layer



Circuits with variable parameters, known as variational circuits, play an important
role in quantum computing. They are analogous to neural networks in classical
computing, which are powerful machine learning models [37-39]. In this study, we
implemented a quantum variational circuit with five qubits, each representing a classical
binary bit (0 or 1). Quantum states of electron spin can be determined by qubits in a
magnetic field, leading to spin-up (1) or spin-down (0) states. This spin state represents
the fundamental binary information in quantum computing, similar to classical bits but
with the added advantage of quantum superposition and entanglement.

Our quantum variational circuit is composed of three key states: Initial,
Parameterized, and Measurement. In the Initial state, all qubits are initialized to 0. This
initialization ensures a known starting point for subsequent quantum operations.

In the Parameterized state, the quantum circuit receives two types of input
parameters: input data and variational parameters. The input data represents the
classical information to be processed, while the variational parameters are tunable
parameters optimized during training to minimize the cost function. The classical data
is inserted into these quantum circuits using quantum embeddings, which map classical
data into a high-dimensional Hilbert space, enabling the quantum circuit to process it.
The final state is the Measurement state, where the quantum system is measured and
the resulting quantum states are collapsed into classical binary outcomes (0 or 1). The
measurement results are used to evaluate the performance of the quantum circuit and
adjust the variational parameters accordingly.

Our quantum variational circuit architecture, as illustrated in Figure 4, integrates
these three states into a cohesive framework. The figure provides a visual representation
of the quantum circuit, detailing the flow of information from initialization through
parameterization to measurement. This architecture leverages the principles of quantum
mechanics to perform complex computations, offering the potential for significant
advancements in computational power and efficiency compared to classical methods.
Classical data integration into quantum circuits is facilitated by quantum embeddings,
which utilize Hilbert spaces for feature mapping. This approach allows the quantum
variational circuit to process classical data within the quantum domain, harnessing the
unique computational capabilities of quantum mechanics.
Figure 4 illustrates the architecture of our proposed quantum circuit, detailing the
initialization of qubits, the parameterization process, and the measurement outcomes.
This illustration underscores the design and operational flow of the quantum variational
circuit implemented in this study.

Figure 4. The architecture of the quantum variational circuit with five qubits.

In Figure 4, H represents a Hadamard gate. P, also known as the phase gate, phase
shift gate, or S gate, is likewise a single-qubit operation; it changes the phase of a spin
along a specific axis. The Hadamard gate is a single-qubit operation that maps the basis
state |0⟩ to (|0⟩ + |1⟩)/√2 and |1⟩ to (|0⟩ - |1⟩)/√2. The Hadamard gate and the P gate
are given in equations (13) and (14), respectively [40]:

H = (1/√2) [ [1, 1], [1, -1] ] (13)

S = [ [1, 0], [0, i] ] (14)
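The gate matrices in equations (13) and (14) can be checked numerically. The sketch below applies H to the basis states and verifies both the resulting superpositions and the unitarity of each gate:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate, Eq. (13)
S = np.array([[1, 0], [0, 1j]])                # phase (S) gate, Eq. (14)

ket0 = np.array([1, 0], dtype=complex)          # qubit initialized to |0>
ket1 = np.array([0, 1], dtype=complex)

plus = H @ ket0                                 # (|0> + |1>)/sqrt(2)
minus = H @ ket1                                # (|0> - |1>)/sqrt(2)

# Both gates are unitary: U^dagger U = I.
assert np.allclose(H.conj().T @ H, np.eye(2))
assert np.allclose(S.conj().T @ S, np.eye(2))

# S multiplies the |1> amplitude by i, leaving the |0> amplitude untouched.
print(np.round(plus, 3), np.round(S @ plus, 3))
```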

3.2.5. Fully Connected Layer

A fully connected layer is one in which each neuron in one layer connects to every
neuron in the next. Most often, it is the last layer in a network and produces the output.
In hybrid quantum networks, a fully connected layer can be realized using quantum
operations such as controlled-NOT gates, Hadamard gates, and measurements [41,42].
Quantum operations are unitary matrices that transform the quantum state of the
neurons, and measuring a quantum state on a specific basis provides the output of a
quantum operation. Such a fully connected quantum network architecture allows any
two users to share entanglement resources and perform quantum distribution without
trusting any nodes [43], enabling multiple users to communicate in a highly secure and
efficient manner. With a QCNN, we leverage quantum advantages such as superposition
and entanglement to extend the capabilities of classical CNNs. QCNNs contain three
kinds of layers: quantum convolutional layers, pooling layers, and fully connected
layers [44-46]. In the quantum convolutional layer, data is filtered using a quantum
filter mask and a new quantum state is generated. A coarse-graining operation is
performed in the pooling layer to reduce the dimensionality of the data. In the fully
connected layer, quantum operations and measurements are used to calculate the final
output. Figure 5 graphically illustrates our proposed architecture as it relates to the
measured qubits. Four layers, consisting of one hundred, fifty, twenty, and three
neurons, respectively, make up the fully connected head that performs the image
classification.
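The classical read-out after measurement passes through the fully connected head described above (100, 50, 20, and 3 neurons). A NumPy sketch with randomly initialized weights, a hypothetical 5-dimensional measured input (one value per qubit), and a softmax over the three tumor classes; the initialization and activations are illustrative assumptions, and no training is shown:

```python
import numpy as np

rng = np.random.default_rng(3)
sizes = [5, 100, 50, 20, 3]        # measured qubits -> 100 -> 50 -> 20 -> 3 classes

# Randomly initialized dense layers (illustrative only).
params = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

def forward(x):
    for i, (W, b) in enumerate(params):
        x = W @ x + b
        if i < len(params) - 1:
            x = np.maximum(0.0, x)     # ReLU on the hidden layers
    e = np.exp(x - x.max())            # softmax over benign/normal/malignant
    return e / e.sum()

probs = forward(rng.random(5))
print(probs.shape, probs.sum())        # one probability per class, summing to 1
```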

Figure 5. The QCNN architecture with quantum operations and measurements.

4. Experimental Results and Discussion

In this section, we conduct various analyses to evaluate the performance of our
hybrid quantum model. In each subsection, we present the results of a different analysis.

4.1. Dataset description

Two primary datasets are used in this study: ChestX-ray8 and LIDC-IDRI [47].
ChestX-ray8 contains fifteen classes of chest CXR, some of which are benign, others
malignant, and some normal. The images are 1024 × 1024 pixels, and there are 112,120
images in total. The LIDC-IDRI dataset, acquired from clinically obtained CT images of
the lungs, contains nodules of a variety of sizes. A total of 1,018 cases were obtained
from 1,010 lung CT scans. This study used a subset of 5,000 lung scans covering both
nodules and regions without nodules to ensure comprehensive coverage and
representativeness. From LIDC-IDRI, this subset includes Malignant (1,000 images),
Benign (500 images), and Normal (500 images) classes. Preprocessing steps included
normalization, resizing all images to a consistent resolution, and data augmentation
techniques such as rotation, flipping, and scaling to increase diversity and prevent
overfitting. Poor-quality images and those with artifacts were removed. Inclusion
criteria were clear labeling for ChestX-ray8 images and clear annotations for LIDC-IDRI
scans. Exclusion criteria included ambiguous labels and low-quality scans. Table 3
presents a brief overview of the datasets after filtering out the elements suited to our
study.
Table 3. A summary of the ChestX-ray8 and LIDC-IDRI datasets used in this study.

Dataset Name | Class | Number of images | Total
ChestX-ray8 | Normal | 1000 |
 | Pneumonia (Benign) | 1000 | 3000
 | Nodule (Malignant) | 1000 |
LIDC-IDRI | Malignant | 1000 |
 | Benign | 500 | 2000
 | Normal | 500 |
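The augmentation steps listed above (rotation, flipping, and scaling) can be sketched with NumPy alone. Real pipelines would typically use a library helper such as Keras' image augmentation layers, so the function below is illustrative only, operating on a random stand-in for a 1024 × 1024 grayscale scan:

```python
import numpy as np

rng = np.random.default_rng(4)

def augment(image):
    """Return simple augmented variants of a 2-D grayscale scan."""
    return {
        "rot90": np.rot90(image),      # 90-degree rotation
        "hflip": np.fliplr(image),     # horizontal flip
        "scaled": image[::2, ::2],     # crude 2x down-scaling by striding
    }

scan = rng.random((1024, 1024))        # stand-in for a 1024 x 1024 image
variants = augment(scan)
for name, img in variants.items():
    print(name, img.shape)
```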
4.1.1. Visual presentation of the dataset images

In this section, we show examples from each of the three classes used in our study
to illustrate the variety of images in the dataset. Figure 6 shows a selection of images
from both datasets, representing the different classes. The first column shows images
from the Normal class, the second column images from the Benign class, and the third
column images from the Malignant class. Similarly, the first row shows the CXR images
corresponding to each class, while the second row shows the corresponding CT images.

Figure 6. Sample images from the adopted datasets. (a) Normal, (b) Benign, (c) Malignant.

From Figure 6, we can visually observe a slight similarity between the images,
indicating a shared pattern across modalities. Hence, merging features can improve the
machine's classification accuracy.

4.2. Analysis concerning Image size vs computational cost

Table 4 describes the resource requirements for classifying lung samples at
different image sizes. Three sizes are considered: 1024 × 1024, 448 × 448, and 224 × 224.
The smaller the image size, the fewer resources are needed. The third row (224 × 224)
shows a large drop, resulting in less training time but a lower accuracy rate. The first
two variants, however, achieve comparable accuracy, with a difference of less than 2%,
which is acceptable given the difference in training time.

Table 4. Resource requirements for different image sizes.

Image Size | Resources consumed (GB) | Duration of training (hours) | Accuracy (%) ↑
1024 × 1024 | 4.23 | 3.24 | 92.80
448 × 448 | 3.16 | 2.32 | 92.00
224 × 224 | 2.45 | 1.45 | 85.00

4.3. Per epoch accuracy analysis

We ran our proposed architecture on a DL server equipped with dual Intel Xeon
E5-2609 v5 processors and an NVIDIA Tesla P100 GPU with a total of 3,585 cores,
delivering a maximum of 18.9 teraflops. The system has 128 GB of RAM and runs
Ubuntu 18.04 LTS. We used Keras as our framework, running on TensorFlow 2.10.
Since the system was trained for 500 epochs, Table 5 shows the accuracy and loss values
at specific epoch intervals for the hybrid quantum model containing RepVGG. The
parameters reported are training accuracy and training loss.

Table 5. Accuracy and loss values for different epochs of a hybrid quantum model.

Epochs | Accuracy (%) ↑ | Loss (%) ↓
50 | 10.52 | 89.48
100 | 25.32 | 74.68
150 | 50.78 | 49.22
200 | 65.41 | 34.59
250 | 81.45 | 18.55
300 | 85.32 | 14.68
350 | 86.87 | 13.13
400 | 87.74 | 12.26
450 | 89.25 | 10.75
500 | 92.12 | 7.88
550 | 90.15 | 7.89
As Table 5 shows, training accuracy improves steadily across the epoch intervals
while the training loss decreases. The plot in Figure 7 indicates that the model is neither
overfitted nor underfitted, as the training accuracy curve follows a typical learning
pattern. Likewise, the loss curve in Figure 7 shows a normal decrease as the epochs
increase.

Figure 7. Training accuracy and loss for different epochs of the system.

4.4. Analysis concerning accuracy with and without quantum models

To demonstrate the effectiveness of the proposed architecture, we compared the
performance of the system with and without the quantum classifier. A comparative
analysis of the system without the quantum classifier (traditional) versus with the
quantum classifier (hybrid) is presented in Table 6 [48-50].

Table 6. Comparison of accuracy performance metrics between the system with and without the
quantum classifier.

System Type | Model Name | Overall Accuracy (%) | Sensitivity (%) | Specificity (%) | F1-Score (%) | Precision (%) | MCC
Traditional | VGG16 | 85.21 | 84 | 86 | 85 | 84 | 0.70
 | VGG19 | 87.54 | 86 | 88 | 87 | 87 | 0.74
 | InceptionV3 | 76.52 | 77 | 76 | 76 | 75 | 0.53
 | Xception | 74.25 | 75 | 74 | 74 | 73 | 0.48
 | ResNet50 | 65.25 | 66 | 65 | 65 | 64 | 0.30
 | RepVGG | 89.21 | 89 | 90 | 89 | 89 | 0.78
Hybrid | VGG16 | 89.16 | 89 | 90 | 89 | 88 | 0.78
 | VGG19 | 89.78 | 90 | 89 | 90 | 90 | 0.79
 | InceptionV3 | 85.23 | 85 | 86 | 85 | 84 | 0.70
 | Xception | 83.12 | 83 | 84 | 83 | 82 | 0.66
 | ResNet50 | 79.45 | 80 | 79 | 79 | 78 | 0.58
 | RepVGG | 92.12 | 93 | 93 | 96 | 94 | 0.84

Based on the data in Table 6, our hybrid quantum system improves the overall
accuracy of the system, with RepVGG leading the way at 92.12%. These results indicate
that quantum layers provide an added benefit over traditional DL systems. In addition,
the breakdown of each model's classifications with and without the quantum layer is
shown in Table 7 [21].
Table 7. Comparative analysis of misclassified cases.

System Type | Model name | TP | TN | FP | FN
Traditional | VGG16 | 4050 | 200 | 450 | 300
 | VGG19 | 4100 | 150 | 350 | 300
 | InceptionV3 | 3500 | 200 | 1000 | 300
 | Xception | 3000 | 700 | 1000 | 300
 | ResNet50 | 3000 | 200 | 1500 | 300
 | RepVGG | 4000 | 500 | 200 | 300
Hybrid | VGG16 | 4300 | 150 | 200 | 350
 | VGG19 | 4200 | 250 | 200 | 300
 | InceptionV3 | 4050 | 200 | 425 | 325
 | Xception | 4000 | 175 | 500 | 325
 | ResNet50 | 3500 | 500 | 650 | 350
 | RepVGG | 4400 | 200 | 300 | 100
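The metrics reported in Table 6 follow directly from counts like those in Table 7. The sketch below computes them for the hybrid RepVGG counts (TP = 4400, TN = 200, FP = 300, FN = 100); note these are pooled binary counts, so small differences from the per-class averages reported in Table 6 are expected:

```python
import math

def metrics(tp, tn, fp, fn):
    """Standard classification metrics from binary confusion counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)      # overall accuracy
    sens = tp / (tp + fn)                      # sensitivity / recall
    spec = tn / (tn + fp)                      # specificity
    prec = tp / (tp + fp)                      # precision
    f1 = 2 * prec * sens / (prec + sens)       # F1-score
    mcc = ((tp * tn - fp * fn) /
           math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return acc, sens, spec, prec, f1, mcc

# Hybrid RepVGG row of Table 7.
acc, sens, spec, prec, f1, mcc = metrics(4400, 200, 300, 100)
print(f"accuracy={acc:.4f} sensitivity={sens:.4f} precision={prec:.4f}")
```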

We have also plotted the performance of each hybrid model used in our study
through Receiver Operating Characteristic (ROC) curves and confusion matrices. These
visualizations provide deeper insight into the effectiveness of each model. The ROC
plots are presented in Figure 8 and the confusion matrices in Figure 9.

Figure 8. Performance Evaluation of Hybrid Models Using ROC Curves.

The ROC curves illustrate the true positive rate (sensitivity) against the false
positive rate (1-specificity) for various threshold settings. A higher area under the
curve (AUC) indicates better performance in distinguishing between classes. The ROC
curves for our hybrid models demonstrate their superior ability to accurately classify
lung tumor images, showcasing the benefits of integrating quantum computing with
traditional deep learning methods.
Figure 9. Performance Evaluation of Hybrid Models Using Confusion Matrices.

The confusion matrices in Figure 9 highlight the superior performance of our
hybrid models, showing high true positives (TP) and true negatives (TN) while
minimizing false positives (FP) and false negatives (FN). This indicates improved
accuracy, precision, and recall compared to traditional models. The hybrid models,
especially RepVGG with quantum layers, demonstrate significant diagnostic
improvements.

4.5. Comparison between merging and not merging of features


Table 8 summarizes the classification accuracy achieved by using features from
individual models without merging and the improved accuracy obtained by merging
features from different models. It also includes the feature dimensions before and after
fusion.

Table 8. Performance Analysis of Models with Feature Merging.

Model Name | Feature Dimension (Without Merging) | Accuracy without Merging (%) | Dimension After Fusion | Accuracy with Merging (%)
VGG16 | 512 | 84.50 | 1024 | 89.16
VGG19 | 512 | 85.00 | 1024 | 89.78
InceptionV3 | 2048 | 80.75 | 4096 | 85.23
Xception | 2048 | 78.50 | 4096 | 83.12
ResNet50 | 2048 | 75.00 | 4096 | 79.45
RepVGG | 2048 | 87.50 | 4096 | 92.12

Table 8 demonstrates that merging features from different TL models significantly
improves classification accuracy. This improvement across all models confirms that
merging features captures more detailed patterns, enhancing data representation and
classification performance.

4.6. State-of-the-art comparison

We have also evaluated the classification performance and strength of our hybrid
quantum system against other existing state-of-the-art systems. A comprehensive
comparison of our system with both classical classification systems and traditional
quantum systems is presented here; Table 9 shows the overall performance of each
system.

Table 9. Comparison of our hybrid quantum system with other state-of-the-art systems.

Technique | Accuracy (%) | Training time (Hours)
QCNN [27] | 89.50 | 2.80
VQDNN [28] | 90.00 | 2.52
Hybrid TL [29] | 91.32 | 3.23
Quanvolution [31] | 88.24 | 2.45
Proposed System | 92.12 | 2.32

Based on the data presented in Table 9, our hybrid quantum system performs
better in terms of both accuracy and training time. Our system thus performed better
across the board, proving the strength of the proposed architecture in all areas.

5. Conclusions

In this paper, we propose a new framework for lung tumor classification that uses
both CT and CXR images as inputs, together with pre-trained TL models tailored to this
task. The TL models are improved by combining the features learned from CT and CXR
images with a hybrid quantum layer. We have successfully classified lung tumors with
our framework on two standard datasets, ChestX-ray8 and LIDC-IDRI. Techniques that
rely on CXR or CT images alone, or on conventional machine learning models, do not
achieve the same results. We demonstrate that lung tumor classification can be
improved by using both imaging modalities together with quantum computing. As a
result, early detection, treatment, and outcomes for lung cancer patients can be greatly
improved.

It is important to note the following possible limitations of this work in relation to
its conclusions:

• There may be some types of lung cancer that are not suitable for the framework
because of their distinct morphological or molecular characteristics.
• The framework may not capture the diversity and intricacy of lung tumor staging,
which can have a substantial impact on patient outcome and management.
• The framework may be inaccessible or expensive in settings with limited resources.
• We tested the proposed model with a small number of images taken from two
different datasets. The proposed framework therefore needs to be standardized by
testing it against a larger number of unknown or new datasets.
• This study focuses solely on non-invasive imaging techniques and excludes biopsy,
the definitive method for lung cancer diagnosis. While this approach reduces
patient risk, it may not capture the comprehensive accuracy provided by biopsy.
Future research could integrate these methods to enhance both early detection and
diagnostic confirmation.

In the future, we plan to apply our model to other types of lung diseases as well as
other imaging methods. Furthermore, we can experiment with other quantum layers
and optimization methods to further improve the performance of our framework.

Author Contributions: Conceptualization, Jason Elroy Martis, Sannidhan M S, Balasubramani R,
A.M. Mutawa and M. Murugappan; Data curation, Jason Elroy Martis and Sannidhan M S; Formal
analysis, Balasubramani R; Investigation, Balasubramani R; Methodology, Jason Elroy Martis,
Sannidhan M S and Balasubramani R; Software, Jason Elroy Martis, Sannidhan M S and
Balasubramani R; Supervision, Balasubramani R, A.M. Mutawa and M. Murugappan; Validation,
A.M. Mutawa and M. Murugappan; Visualization, Balasubramani R and M. Murugappan; Writing
– original draft, Jason Elroy Martis, Sannidhan M S and M. Murugappan; Writing – review &
editing, A.M. Mutawa and M. Murugappan.

Funding: This research received no external funding.

Conflicts of Interest: The authors declare no conflicts of interest.

684 References
685 1. Althubiti, S.A.; Paul, S.; Mohanty, R.; Mohanty, S.N.; Alenezi, F.; Polat, K. Ensemble learning framework with GLCM texture
686 extraction for early detection of lung cancer on CT images. Computational and Mathematical Methods in Medicine 2022, 2022,
687 doi:10.1155/2022/2733965.
688 2. Westeel, V.; Foucher, P.; Scherpereel, A.; Domas, J.; Girard, P.; Trédaniel, J.; Wislez, M.; Dumont, P.; Quoix, E.; Raffy, O. Chest
689 CT scan plus x-ray versus chest x-ray for the follow-up of completely resected non-small-cell lung cancer (IFCT-0302): a multi-
690 centre, open-label, randomised, phase 3 trial. The Lancet Oncology 2022, 23, 1180-1188, doi:10.1016/S1470-2045(22)00451-X.
691 3. Saber, A.; Sakr, M.; Abo-Seida, O.M.; Keshk, A.; Chen, H. A novel deep-learning model for automatic detection and classifica -
692 tion of breast cancer using the transfer-learning technique. IEEE Access 2021, 9, 71194-71209.
4. Sadad, T.; Rehman, A.; Munir, A.; Saba, T.; Tariq, U.; Ayesha, N.; Abbasi, R. Brain tumor detection and multi-classification using advanced deep learning techniques. Microscopy Research and Technique 2021, 84, 1296-1308.
5. Hu, Z.; Tang, J.; Wang, Z.; Zhang, K.; Zhang, L.; Sun, Q. Deep learning for image-based cancer detection and diagnosis - A survey. Pattern Recognition 2018, 83, 134-149.
6. Chaunzwa, T.L.; Hosny, A.; Xu, Y.; Shafer, A.; Diao, N.; Lanuti, M.; Christiani, D.C.; Mak, R.H.; Aerts, H.J. Deep learning classification of lung cancer histology using CT images. Scientific Reports 2021, 11, 5471.
7. Lakshmanaprabu, S.; Mohanty, S.N.; Shankar, K.; Arunkumar, N.; Ramirez, G. Optimal deep learning model for classification of lung cancer on CT images. Future Generation Computer Systems 2019, 92, 374-382.
8. Wei, S.; Chen, Y.; Zhou, Z.; Long, G. A quantum convolutional neural network on NISQ devices. AAPPS Bulletin 2022, 32, 1-11.
9. Zhao, C.; Gao, X.-S. QDNN: Deep neural networks with quantum layers. Quantum Machine Intelligence 2021, 3, 15.
10. Beer, K.; Bondarenko, D.; Farrelly, T.; Osborne, T.J.; Salzmann, R.; Scheiermann, D.; Wolf, R. Training deep quantum neural networks. Nature Communications 2020, 11, 808.
11. Kora, P.; Mohammed, S.; Surya Teja, M.J.; Usha Kumari, C.; Swaraja, K.; Meenakshi, K. Brain tumor detection with transfer learning. In Proceedings of the 2021 Fifth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), 2021; pp. 443-446, doi:10.1109/I-SMAC52330.2021.9640678.
12. Mohite, A. Application of transfer learning technique for detection and classification of lung cancer using CT images. Int J Sci Res Manag 2021, 9, 621-634.
13. Sundar, S.; Sumathy, S. Transfer learning approach in deep neural networks for uterine fibroid detection. International Journal of Computational Science and Engineering 2022, 25, 52-63.
14. Alkassar, S.; Abdullah, M.A.; Jebur, B.A. Automatic brain tumour segmentation using fully convolution network and transfer learning. In Proceedings of the 2019 2nd International Conference on Electrical, Communication, Computer, Power and Control Engineering (ICECCPCE), 2019; pp. 188-192.
15. Humayun, M.; Sujatha, R.; Almuayqil, S.N.; Jhanjhi, N. A transfer learning approach with a convolutional neural network for the classification of lung carcinoma. In Proceedings of the Healthcare, 2022; p. 1058.
16. Wang, S.; Dong, L.; Wang, X.; Wang, X. Classification of pathological types of lung cancer from CT images by deep residual neural networks with transfer learning strategy. Open Medicine 2020, 15, 190-197.
17. Nishio, M.; Sugiyama, O.; Yakami, M.; Ueno, S.; Kubo, T.; Kuroda, T.; Togashi, K. Computer-aided diagnosis of lung nodule classification between benign nodule, primary lung cancer, and metastatic lung cancer at different image size using deep convolutional neural network with transfer learning. PLoS ONE 2018, 13, e0200721.
18. Da Nóbrega, R.V.M.; Peixoto, S.A.; da Silva, S.P.P.; Rebouças Filho, P.P. Lung nodule classification via deep transfer learning in CT lung images. In Proceedings of the 2018 IEEE 31st International Symposium on Computer-Based Medical Systems (CBMS), 2018; pp. 244-249.
19. Phankokkruad, M. Ensemble transfer learning for lung cancer detection. In Proceedings of the 2021 4th International Conference on Data Science and Information Technology, 2021; pp. 438-442.
20. Saikia, T.; Kumar, R.; Kumar, D.; Singh, K.K. An automatic lung nodule classification system based on hybrid transfer learning approach. SN Computer Science 2022, 3, 272.
21. Bhandary, A.; Prabhu, G.A.; Rajinikanth, V.; Thanaraj, K.P.; Satapathy, S.C.; Robbins, D.E.; Shasky, C.; Zhang, Y.-D.; Tavares, J.M.R.; Raja, N.S.M. Deep-learning framework to detect lung abnormality - A study with chest X-ray and lung CT scan images. Pattern Recognition Letters 2020, 129, 271-278.
22. Ibrahim, D.M.; Elshennawy, N.M.; Sarhan, A.M. Deep-chest: Multi-classification deep learning model for diagnosing COVID-19, pneumonia, and lung cancer chest diseases. Computers in Biology and Medicine 2021, 132, 104348.
23. Yang, D.; Martinez, C.; Visuña, L.; Khandhar, H.; Bhatt, C.; Carretero, J. Detection and analysis of COVID-19 in medical images using deep learning techniques. Scientific Reports 2021, 11, 19638.
24. Kamil, M.Y. A deep learning framework to detect Covid-19 disease via chest X-ray and CT scan images. International Journal of Electrical & Computer Engineering 2021, 11.
25. Shyni, H.M.; Chitra, E. A comparative study of X-ray and CT images in COVID-19 detection using image processing and deep learning techniques. Computer Methods and Programs in Biomedicine Update 2022, 2, 100054.
26. Chen, G.; Chen, Q.; Long, S.; Zhu, W.; Yuan, Z.; Wu, Y. Quantum convolutional neural network for image classification. Pattern Analysis and Applications 2023, 26, 655-667.
27. Sebastianelli, A.; Zaidenberg, D.A.; Spiller, D.; Le Saux, B.; Ullo, S.L. On circuit-based hybrid quantum neural networks for remote sensing imagery classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2021, 15, 565-580.
28. Wang, Y.; Wang, Y.; Chen, C.; Jiang, R.; Huang, W. Development of variational quantum deep neural networks for image recognition. Neurocomputing 2022, 501, 566-582.
29. Mogalapalli, H.; Abburi, M.; Nithya, B.; Bandreddi, S.K.V. Classical-quantum transfer learning for image classification. SN Computer Science 2022, 3, 20.
30. Subbiah, G.; Krishnakumar, S.S.; Asthana, N.; Balaji, P.; Vaiyapuri, T. Quantum transfer learning for image classification. TELKOMNIKA (Telecommunication Computing Electronics and Control) 2023, 21, 113-122.
31. Henderson, M.; Shakya, S.; Pradhan, S.; Cook, T. Quanvolutional neural networks: Powering image recognition with quantum circuits. Quantum Machine Intelligence 2020, 2, 2.
32. Kayan, C.E.; Koksal, T.E.; Sevinc, A.; Gumus, A. Deep reproductive feature generation framework for the diagnosis of COVID-19 and viral pneumonia using chest X-ray images. arXiv preprint arXiv:2304.10677, 2023.
33. Sannidhan, M.; Prabhu, G.A.; Chaitra, K.; Mohanty, J.R. Performance enhancement of generative adversarial network for photograph-sketch identification. Soft Computing 2023, 27, 435-452.
34. Ding, X.; Zhang, X.; Ma, N.; Han, J.; Ding, G.; Sun, J. RepVGG: Making VGG-style ConvNets great again. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021; pp. 13733-13742.
35. Ghose, P.; Alavi, M.; Tabassum, M.; Uddin, A.; Biswas, M.; Mahbub, K.; Gaur, L.; Mallik, S.; Zhao, Z. Detecting COVID-19 infection status from chest X-ray and CT scan via single transfer learning-driven approach. Frontiers in Genetics 2022, 13, 980338.
36. Kallel, F.; Sahnoun, M.; Ben Hamida, A.; Chtourou, K. CT scan contrast enhancement using singular value decomposition and adaptive gamma correction. Signal, Image and Video Processing 2018, 12, 905-913.
37. Sannidhan, M.; Martis, J.E.; Nayak, R.S.; Aithal, S.K.; Sudeepa, K. Detection of antibiotic constituent in Aspergillus flavus using quantum convolutional neural network. International Journal of E-Health and Medical Communications (IJEHMC) 2023, 14, 1-26.
38. Abbas, A.; Sutter, D.; Zoufal, C.; Lucchi, A.; Figalli, A.; Woerner, S. The power of quantum neural networks. Nature Computational Science 2021, 1, 403-409.
39. Hou, Y.-Y.; Li, J.; Chen, X.-B.; Ye, C.-Q. A partial least squares regression model based on variational quantum algorithm. Laser Physics Letters 2022, 19, 095204.
40. Chalumuri, A.; Kune, R.; Manoj, B. A hybrid classical-quantum approach for multi-class classification. Quantum Information Processing 2021, 20, 119.
41. Coffey, M.W.; Deiotte, R.; Semi, T. Comment on "Universal quantum circuit for two-qubit transformations with three controlled-NOT gates" and "Recognizing small-circuit structure in two-qubit operators". Physical Review A 2008, 77, 066301.
42. Moore, C.; Nilsson, M. Parallel quantum computation and quantum codes. SIAM Journal on Computing 2001, 31, 799-815.
43. Song, G.; Klappenecker, A. Optimal realizations of controlled unitary gates. arXiv preprint quant-ph/0207157, 2002.
44. Nakaji, K.; Tezuka, H.; Yamamoto, N. Quantum-enhanced neural networks in the neural tangent kernel framework. arXiv preprint arXiv:2109.03786, 2021.
45. Oh, S.; Choi, J.; Kim, J. A tutorial on quantum convolutional neural networks (QCNN). In Proceedings of the 2020 International Conference on Information and Communication Technology Convergence (ICTC), 2020; pp. 236-239.
46. Rajesh, V.; Naik, U.P. Quantum convolutional neural networks (QCNN) using deep learning for computer vision applications. In Proceedings of the 2021 International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT), 2021; pp. 728-734.
47. Zhou, Z.; Sodha, V.; Rahman Siddiquee, M.M.; Feng, R.; Tajbakhsh, N.; Gotway, M.B.; Liang, J. Models Genesis: Generic autodidactic models for 3D medical image analysis. In Proceedings of the Medical Image Computing and Computer Assisted Intervention - MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13-17, 2019, Proceedings, Part IV, 2019; pp. 384-393.
48. Morid, M.A.; Borjali, A.; Del Fiol, G. A scoping review of transfer learning research on medical image analysis using ImageNet. Computers in Biology and Medicine 2021, 128, 104115.
49. Alzubaidi, L.; Fadhel, M.A.; Al-Shamma, O.; Zhang, J.; Santamaría, J.; Duan, Y.; R. Oleiwi, S. Towards a better understanding of transfer learning for medical imaging: A case study. Applied Sciences 2020, 10, 4523.
50. Veasey, B.P.; Broadhead, J.; Dahle, M.; Seow, A.; Amini, A.A. Lung nodule malignancy prediction from longitudinal CT scans with Siamese convolutional attention networks. IEEE Open Journal of Engineering in Medicine and Biology 2020, 1, 257-264.
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.