Article
1 Department of ISE, NMAM Institute of Technology, Mangalore, India
2 Department of CSE, NMAM Institute of Technology, Mangalore, India
3 Computer Engineering Department, College of Engineering and Petroleum, Kuwait University, Kuwait
4 Intelligent Signal Processing (ISP) Research Lab, Department of Electronics and Communication Engineering, Kuwait College of Science and Technology, Block 4, Doha, 13133, Kuwait
5 Department of Electronics and Communication Engineering, School of Engineering, Vels Institute of Sciences, Technology, and Advanced Studies, Chennai, Tamil Nadu, India
6 Center of Excellence for Unmanned Aerial Systems (CoEUAS), Universiti Malaysia Perlis, 02600, Perlis, Malaysia
* Correspondence: [email protected], [email protected]
Abstract: Lung cancer is the second most common type of cancer and poses significant health challenges worldwide. Early detection and diagnosis of lung tumors can benefit patients, healthcare systems, and society in general, improving patient outcomes and reducing treatment complexity. Chest radiographs (CXR) and computerized tomography (CT) are valuable tools for detecting lung cancer, and tumors can be detected more accurately and quickly using automated methods, including deep learning (DL). In recent years, quantum layers with parameterized quantum circuits have been shown to enhance the performance of DL models. This work proposes a hybrid framework that combines transfer learning for feature extraction with quantum circuits for classification. A set of pre-trained models, namely Visual Geometry Group 16 and 19 (VGG16 and VGG19), Inception version 3, Xception, the fifty-layer Residual Neural Network (ResNet-50), and Re-parameterized Visual Geometry Group (RepVGG), is used for feature extraction, and Singular Value Decomposition (SVD) is used to reduce the number of features. The optimized features are then classified using quantum circuits. The hybrid quantum system improved the overall accuracy of the system to 92.12%, compared with 89.21% for the traditional system. State-of-the-art performance was achieved in the evaluation of the framework, surpassing other approaches: in addition to the overall accuracy of 92.12%, the system excels in other critical performance measures, including a sensitivity of 94%, a specificity of 90%, an F1-score of 93%, and a precision of 92%. These results demonstrate the effectiveness of the hybrid approach in identifying lung cancer signatures more accurately than traditional methods. The integration of quantum computing also enhances processing speed and scalability, making the system a promising tool for early lung cancer screening and diagnosis. This study underscores the potential of hybrid computational technologies to revolutionize early cancer detection, paving the way for broader clinical applications and ultimately enhancing patient care outcomes.
Keywords: Lung tumor classification; Deep learning models; Quantum layers; Transfer learning models; Hybrid quantum layer.
1. Introduction
The lung is a vital organ for human health, and lung tumors, whether benign or malignant, pose a significant threat by affecting its function and structure. Various causes and symptoms of lung tumors have been identified and reported in the literature. Continued research on lung tumors is essential to elucidate their pathogenesis, diagnosis, treatment, and prevention. Detecting and diagnosing lung tumors at an early stage can benefit patients, healthcare systems, and society in several ways. Healthcare costs and societal complications can be minimized if advanced lung cancer and palliative care are avoided. Averting late-stage symptoms and treatments enhances patients' quality of life and well-being and reduces morbidity and mortality [1]. Prompt and effective treatment can increase patients' chances of survival and cure before the tumor spreads to other organs or becomes resistant to treatment.
The Computerized Tomography (CT) scan is a valuable tool for detecting lung cancer, particularly in high-risk populations such as smokers [2]. CT scans provide detailed cross-sectional images of the lungs, allowing better visualization and assessment of abnormalities compared to Chest X-rays (CXR). Nevertheless, lung tumors can sometimes be visible on CXR but not clearly detectable on CT scans [2]. Several factors may contribute to this: 1) smaller tumors may be more visible on CXR than on CT scans; 2) the tumor's location in the lung may affect its visibility, with CXR potentially showing tumors that are obscured by or overlap with normal lung tissue on CT scans more clearly; 3) each imaging technique has its own strengths and weaknesses in clinical diagnosis. Hence, the use of CXR and CT scans is highly complementary and influential in clinical diagnosis, particularly in lung cancer detection compared to other imaging modalities [1,2].
The process of manually identifying tumors is challenging, error-prone, and inconsistent [3]; the outcome varies with the expertise of the radiologist and the prominence of the finding in the imaging. Automated methods, especially DL models, can identify tumors from various images more quickly, objectively, and precisely [3-5]. DL is an advanced tool in artificial intelligence (AI) that uses neural networks to learn from input data and perform tasks such as detection, classification, and prediction. In medical imaging, such as CT and CXR, DL techniques have been used to classify lung tumors [6]. Classification of lung tumors is a challenging task that requires the accurate and reliable diagnosis of different types and subtypes of lung cancer, such as non-small cell lung cancer and small cell lung cancer, as well as a distinction between benign nodules and other lung diseases. Through DL techniques, lung tumor classification can be improved by mining significant features from the input images (CT/CXR), developing robust and efficient DL models, improving performance and interpretability, and providing clinicians with reliable decision support.
This article is organized as follows: Section 1 introduces the research topic, reviews the existing methods for lung cancer detection and classification, and states the research questions. Section 2 presents a literature review related to the aims and objectives of the proposed system. The methodology of the proposed system is described in Section 3, including pre-processing steps, model architecture, training process, evaluation metrics, and experimental setups. The results of the experiments are presented and analyzed in Section 4, along with comparisons with other state-of-the-art systems and a discussion of the capabilities of the proposed system. Lastly, Section 5 summarizes the major points, presents the novelty and significance of the research, and makes recommendations for future research.
Studies have shown that combining CT and CXR images can provide faster and more accurate results, and that pre-processing, transfer learning, and data augmentation techniques can help overcome data scarcity problems. The review by Shyni et al. [25] further supports combining CT and CXR images to provide faster and more accurate results and to address data scarcity challenges. Their study reported a notable increase in diagnostic accuracy, where the combined approach achieved an accuracy of approximately 84%. This was a significant improvement over models trained solely on CXR or CT images, which generally achieved accuracies of around 74% and 70%, respectively. Moreover, the sensitivity and specificity of the combined models reached as high as 83% and 85%, respectively, compared with 75% sensitivity and 77% specificity for models using only CXR images, and 69% sensitivity and 70% specificity for those using only CT images.
Quantum computing has been shown to enhance the performance of DL network systems in various applications. The QCNN is a novel DL technique that combines quantum and classical computing to process image data. In [26] and [27], the researchers demonstrated the advantages of QCNNs over classic CNNs in terms of accuracy and speed on different image classification tasks: [26] reported a 7% improvement in accuracy, and in [27] accuracy improved by 10% over traditional CNNs. Both articles also explored the correlation between the chaotic nature of the image and QCNN performance and found that quantum entanglement plays a key role in improving classification scores. Recently, researchers have proposed a variational quantum deep neural network (VQDNN) model that uses parametrized quantum circuits, achieving an accuracy improvement of approximately 8% over classical neural networks on two image recognition datasets with limited qubits [28]. In addition, the authors in [29] and [30] explore hybrid TL techniques that combine a classical pre-trained network with a variational quantum circuit as the final (classifier) layer on small datasets. They evaluate different classical feature extractors with a quantum circuit as the classifier on three image datasets: trash (recycling material), tuberculosis (TB) from CXR images, and cracks in concrete images. They show that the hybrid models outperform the classical models, demonstrating an improvement in accuracy of over 12% on all datasets, even with qubit constraints. In [31], the researchers introduce a new kind of transformational layer for image recognition, called a quantum convolution or quanvolution layer. Quanvolution layers use random quantum circuits to locally transform the input data, similar to classical convolution layers. They compare classical convolutional neural networks (CNNs), quantum convolutional neural networks (QCNNs), and CNNs with extra non-linearities on the MNIST dataset, showing that QCNNs achieve faster training and a 9% accuracy improvement over traditional CNNs, which suggests the potential of quanvolution layers for near-term quantum computing.
A review of the existing literature shows that DL techniques can help with the challenging and important task of classifying lung diseases using medical images. Many studies have used TL to achieve better results than conventional methods for classifying lung nodules or cancers from CT/CXR images with different CNN architectures and classifiers. Many studies have also shown that QCNNs can outperform classic CNNs in accuracy on different image classification tasks while increasing computation speed and scalability and reducing the required computational power. Quantum computing can boost the performance of DL network systems in various applications, and some studies have used variational quantum circuits to enhance the performance of QCNNs. Based on these findings, we propose a new system that combines TL and QCNNs for classifying lung diseases using both CXR and CT images. We aim to use quantum computing to improve the performance of TL models for medical image analysis. Table 1 provides a summary of the literature reviewed.
Table 1. Literature review summary.

Reference | Approach | Key Findings | Identified Gaps
[27] | QCNN | Correlation between image chaos and QCNN performance; reported 10% accuracy improvement. | Understanding the role of quantum entanglement in performance improvement.
[28] | VQDNN | Better accuracy on datasets with limited qubits; reported 8% accuracy improvement. | Qubit limitations and practical implementation challenges.
3. Methodology
This section describes the design and operation of the proposed system, which integrates TL and QCNNs to enhance lung disease classification using Chest X-Ray (CXR) and Computed Tomography (CT) images. Figure 1 illustrates the connections between the different modules in the system. The process begins with acquiring and pre-processing extensive medical image datasets to ensure high quality and uniformity. Pre-trained CNN models, such as VGG16, VGG19, Inception-v3, Xception, ResNet50, and RepVGG, are fine-tuned for the specific lung disease classification tasks. QCNNs are developed and integrated with these TL models to create a hybrid system that leverages the advantages of both classical and quantum computing. The hybrid models are trained, optimized, and evaluated to maximize performance metrics such as accuracy, sensitivity, and specificity. Finally, the optimized model is prepared for deployment in clinical settings, ensuring scalability and seamless integration with existing medical systems. Figure 1 illustrates the overall working steps of the proposed system. This approach aims to overcome data limitations and improve the accuracy and efficiency of lung disease detection.
The proposed system, as depicted in Figure 1, has three main modules that work together: 1) image acquisition, 2) tuning of the TL model, and 3) quantum learning and classification. The following subsections describe each module in detail.
3.1. Image Acquisition
Images are collected from both CXR and CT scans during the image acquisition process. The classification task is challenging since CT scans and CXR belong to two different types of images; as a result, we train the network separately for CXR and CT scans, which improves the accuracy and efficiency of feature extraction. Images are converted to grayscale with intensity values ranging between 1 and 255. The image retrieval process is formalized in equations (1) and (2). Here, $I_x(x, y)$ denotes an image taken from the CXR dataset in terms of its pixels, and $I_{ct}(x, y)$ denotes an image from the CT dataset; the coordinates $(x, y)$ generically index the width and height of a single image, respectively. It is necessary to resize all images since neural networks require a fixed input size. Nevertheless, resizing has trade-offs: reducing the size of an image reduces its quality, whereas enlarging it increases the training time and complexity. To balance computational cost and accuracy, based on experimental investigation, we use 1024 × 1024 pixels as the resized image size [32]. The relevant evidence is presented in the experimental trials reported in the corresponding table.
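For illustration, the acquisition step above can be sketched in a few lines of Python. OpenCV is an assumed implementation choice (the text does not name a library), and the file paths are hypothetical:

```python
import cv2

def load_image(path, size=(1024, 1024)):
    """Load a scan as 8-bit grayscale and resize it to the fixed network input size."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # grayscale intensities, 0-255
    if img is None:
        raise FileNotFoundError(path)
    return cv2.resize(img, size)  # 1024 x 1024, per the experimental setting

i_x = load_image("data/cxr/sample_0001.png")   # I_x(x, y): a CXR image
i_ct = load_image("data/ct/sample_0001.png")   # I_ct(x, y): a CT slice
print(i_x.shape, i_ct.shape)                   # (1024, 1024) (1024, 1024)
```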
Figure 2. Leveraging transfer learning for feature extraction from CT and CXR images.
As shown in Figure 2, we first use pre-trained TL models, namely VGG16, VGG19, Inception-v3, Xception, ResNet50, and RepVGG, to extract features [33-35]. Our choice of these models was based on their variation in convolutional filter usage and the fact that they were developed for different classification problems. Furthermore, we replaced the top classification layer with our own classification rule. Table 2 presents an overview of the pre-trained CNN models used for feature extraction in our study. Each model is characterized by its size, the number of hyperparameters, the specific layer used for feature extraction, the initial feature dimension, and the dimension after fusion.
Table 2. Summary of pre-trained models used for feature extraction in our research.

Model Name | Size (MB) | Hyperparameters (Million) | Feature Extraction Layer | Feature Dimension | Dimension After Fusion
VGG16 | 528 | 138.35 | block5_conv3 | 512 | 1024
VGG19 | 549 | 143.66 | block5_conv4 | 512 | 1024
InceptionV3 | 92 | 23.85 | mixed10 | 2048 | 4096
Xception | 88 | 22.91 | block14_sepconv2_act | 2048 | 4096
ResNet50 | 99 | 25.636 | conv5_block3_out | 2048 | 4096
RepVGG | 558 | 11.68 | repvgg_block5 | 2048 | 4096
These pre-trained classifiers, as shown in Figure 2, are fine-tuned on the CXR and CT datasets separately to obtain optimal models for extracting features from CXR and CT scans. Equations (3)-(5) describe how features are extracted and fine-tuned for our classification purpose.
367
❑() (❑()∗❑() ❑() ) (3)
()max () (4)
()
❑ ∗❑ ❑
() ()
(5)
Here, $a^{(l-1)}$ is the input to layer $l$ (for the first layer, $a^{(0)}$ is the input image). $W^{(l)}$ and $b^{(l)}$ are the weights and biases of layer $l$, respectively, and $f^{(l)}$ is the activation function, which is either ReLU or sigmoid. $a^{(l)}$ is the output of layer $l$ after applying the activation function. $L$ denotes the last pre-trained layer, $W^{(L)}$ and $b^{(L)}$ are the weights and biases of the final fully connected layer, and $z$ is the logits vector representing the raw model predictions. The last layer of each model is discarded in order to extract relevant features rather than classify into the models' original classes. Finally, the CXR and CT features are stored separately because they constitute distinct feature sets. The following subsections elaborate on a quantum hybrid model for image classification that incorporates these features. Figure 3 illustrates how features are accessed from selected layers of the proposed TL framework.
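As a concrete sketch of equations (3)-(5), the snippet below extracts features from the block5_conv3 layer of VGG16 listed in Table 2 using Keras; the other backbones follow the same pattern. Global average pooling down to a 512-dimensional vector is an assumption, since the text specifies only the feature dimension, not the pooling operator:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

base = VGG16(weights="imagenet", include_top=False)  # top classifier discarded
feature_model = tf.keras.Model(
    inputs=base.input,
    outputs=base.get_layer("block5_conv3").output,   # extraction layer from Table 2
)

def extract_features(batch):
    """batch: float array (N, H, W, 3); grayscale scans would be replicated to 3 channels."""
    fmap = feature_model(preprocess_input(batch))     # (N, h, w, 512) feature maps
    return tf.reduce_mean(fmap, axis=[1, 2]).numpy()  # (N, 512) pooled feature vectors

x = 255.0 * np.random.rand(2, 224, 224, 3).astype("float32")  # placeholder batch
print(extract_features(x).shape)  # (2, 512)
```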
As shown in Figure 3, different layers extract different types of information from different images. The top three images show features extracted from X-rays, while the bottom two show how ReLU activation helps extract features from CT scans. The visualization in Figure 3 showcases how various neural network layers process X-ray and CT scan images, highlighting distinct feature extraction methods for each type of imaging data.
For X-rays, the sequence begins with the top convolutional layer of VGG16, which identifies low-level features such as edges and textures, essential for delineating anatomical structures. This is followed by the ReLU layer of VGG19, which enhances these features by removing negative values, thus improving the visibility of critical details like lesions or masses. The normalization layer of ResNet50 then adjusts the feature maps to a consistent scale, aiding uniform feature interpretation across different X-ray images.
In CT scans, the max pooling layer of InceptionV3 reduces spatial resolution but retains the significant features within each region, focusing the analysis on relevant aspects such as tumors. The activation map from RepVGG synthesizes higher-level features, revealing complex tissue textures and enhancing the model's ability to detect abnormalities.
3.2.2. Merging of Features
In this study, we utilize both Computed Tomography (CT) and Chest X-Ray (CXR) imaging modalities for each case to maximize the diagnostic potential of the imaging data. Features are independently extracted from both the CT and CXR images to harness the unique diagnostic information each modality provides. The detailed procedure is explained as follows. A set of features is first extracted from the CT images, as expressed in equation (6):

F_{ct} = \{ f_1^{ct}, f_2^{ct}, \ldots, f_n^{ct} \}    (6)
Similarly, a different set of features is extracted from the corresponding CXR images using another TL model that is specifically tuned to exploit the diagnostic strengths of CXR, such as overall lung geometry and certain types of lesions that are more visible in CXR. The extraction process is expressed in equation (7):

F_x = \{ f_1^x, f_2^x, \ldots, f_n^x \}    (7)
Feature Merging Strategy:
The features extracted from both CT and CXR images are then merged to form a combined feature vector. This merging involves concatenating the feature vectors from each modality, as depicted mathematically in equation (8):

F_{total} = F_x \oplus F_{ct} = \{ f_1^x, f_2^x, \ldots, f_n^x, f_1^{ct}, f_2^{ct}, \ldots, f_n^{ct} \}    (8)
In this step, the features extracted from CXR and CT images are merged into one feature vector. The output of every TL model is a fixed-size set of features [35]; through this step, the features obtained from both domains are combined to produce a feature vector twice the size of the original.
In equations (6)-(8), $f_1^x$ represents a single feature obtained from a CXR image, and $f_1^{ct}$ represents a single feature obtained from a CT image. $F_x$ and $F_{ct}$ represent the feature vectors of the CXR and CT scans, respectively, and $F_{total}$ is the simple concatenation of $F_x$ and $F_{ct}$.
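A minimal sketch of the merging step in equation (8): the two modality-specific vectors are concatenated, doubling the dimension, which matches the "Dimension After Fusion" column of Table 2. The feature values here are random placeholders:

```python
import numpy as np

f_x = np.random.rand(2048)   # F_x: placeholder features from the CXR branch
f_ct = np.random.rand(2048)  # F_ct: placeholder features from the CT branch

f_total = np.concatenate([f_x, f_ct])  # F_total, equation (8)
print(f_total.shape)  # (4096,), twice the original feature dimension
```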
Variational quantum circuits take two types of parameters, the input parameters and the variational parameters, which represent the input and variational aspects, respectively [39]. The classical data is inserted into these quantum circuits using quantum embeddings, which rely on Hilbert feature spaces. Figure 4 illustrates the architecture of our proposed quantum circuit.
Circuits with variable parameters, known as variational circuits, play an important role in quantum computing. They are analogous to neural networks in classical computing, which are powerful machine learning models [37-39]. In this study, we implemented a quantum variational circuit with five qubits, each able to represent a classical binary bit (0 or 1). A qubit can be realized by the quantum spin state of an electron in a magnetic field, with spin-up (1) and spin-down (0) states. This spin state represents the fundamental binary information in quantum computing, similar to classical bits but with the added advantages of quantum superposition and entanglement.
Our quantum variational circuit is composed of three key states: Initial, Parameterized, and Measurement. In the Initial state, all qubits are initialized to 0. This initialization ensures a known starting point for subsequent quantum operations.
In the Parameterized state, the quantum circuit receives two types of input parameters: input data and variational parameters. The input data represents the classical information to be processed, while the variational parameters are tunable parameters optimized during training to minimize the cost function. The classical data is inserted into the quantum circuit using quantum embeddings, which map classical data into a high-dimensional Hilbert space, enabling the quantum circuit to process it. The final state is the Measurement state, where the quantum system is measured and the resulting quantum states collapse into classical binary outcomes (0 or 1). The measurement results are used to evaluate the performance of the quantum circuit and adjust the variational parameters accordingly.
Our quantum variational circuit architecture, as illustrated in Figure 4, integrates these three states into a cohesive framework. The figure provides a visual representation of the quantum circuit, detailing the flow of information from initialization through parameterization to measurement. This architecture leverages the principles of quantum mechanics to perform complex computations, offering the potential for significant advancements in computational power and efficiency compared to classical methods. Classical data integration into quantum circuits is facilitated by quantum embeddings, which utilize Hilbert spaces for feature mapping. This approach allows the quantum variational circuit to process classical data within the quantum domain, harnessing the unique computational capabilities of quantum mechanics.
Figure 4 illustrates the architecture of our proposed quantum circuit, detailing the initialization of qubits, the parameterization process, and the measurement outcomes. This illustration underscores the intricate design and operational flow of the proposed quantum layer.

Figure 4. The architecture of the quantum variational circuit with five qubits.
In Figure 4, H represents a Hadamard gate. P, also known as the phase gate, phase shift gate, or S gate, is likewise a single-qubit operation; it changes the phase of a spin along a specific axis. The Hadamard gate is a single-qubit operation that maps the basis state |0⟩ to (|0⟩ + |1⟩)/√2 and |1⟩ to (|0⟩ - |1⟩)/√2. The matrices of the Hadamard gate and the S gate are shown in equations (13) and (14), respectively [40]:

H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}    (13)

S = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}    (14)
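For illustration, a minimal five-qubit variational circuit of the kind described above can be sketched with PennyLane. The framework choice and the exact gate placement are assumptions; the sketch simply shows the Initial (|0⟩ plus Hadamards), Parameterized (embedding plus trainable rotations and entangling CNOTs), and Measurement stages:

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 5
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def variational_circuit(inputs, weights):
    # Initial state: qubits start in |0>; Hadamards create superposition.
    for w in range(n_qubits):
        qml.Hadamard(wires=w)
    # Quantum embedding: classical features become rotation angles.
    for w in range(n_qubits):
        qml.RY(inputs[w], wires=w)
    # Variational layer: trainable rotations plus entangling CNOTs.
    for w in range(n_qubits):
        qml.RZ(weights[w, 0], wires=w)
        qml.RY(weights[w, 1], wires=w)
    for w in range(n_qubits - 1):
        qml.CNOT(wires=[w, w + 1])
    # Measurement: collapse to classical expectation values in the Z basis.
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weights = np.random.uniform(0, np.pi, size=(n_qubits, 2), requires_grad=True)
features = np.random.uniform(0, np.pi, size=n_qubits)
print(variational_circuit(features, weights))
```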
3.2.5. Fully Connected Layer
A fully connected layer is one where each neuron in one layer connects to every neuron in the next layer. Most often, it is the last layer in a network and produces the output. In hybrid quantum networks, a fully connected layer can be realized using quantum operations such as controlled-NOT gates, Hadamard gates, and measurements [41,42]. Quantum operations are unitary matrices that transform the quantum state of the neurons, and measuring a quantum state in a specific basis provides the output of a quantum operation. Such a network architecture allows any two users to share entanglement resources and perform quantum distribution without trusting any nodes [43], so multiple users can communicate in a highly secure and efficient manner. With QCNNs, we leverage quantum advantages such as superposition and entanglement to extend the capabilities of classical CNNs. QCNNs comprise three kinds of layers: quantum convolutional layers, pooling layers, and fully connected layers [44-46]. In the quantum convolutional layer, data is filtered using a quantum filter mask, generating a new quantum state. A coarse-graining operation is performed in the pooling layer to reduce the dimensionality of the data. In the fully connected layer, quantum operations and measurements are used to calculate the final output. Figure 5 graphically illustrates our proposed architecture as it relates to the measured qubits. Four layers of one hundred, fifty, twenty, and three neurons, respectively, make up our fully connected stage and aid image classification.
Figure 5. The QCNN architecture with quantum operations and measurements.
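One plausible way to realize the hybrid head described above, assuming PennyLane's Keras integration (the paper does not specify its implementation stack), is to place the quantum layer in front of dense layers of one hundred, fifty, twenty, and three neurons; the embedding template and variational depth here are assumptions:

```python
import pennylane as qml
import tensorflow as tf

n_qubits = 5
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))         # embed classical data
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))  # variational layers
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (2, n_qubits)}  # two entangling layers (assumed depth)
quantum_layer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(n_qubits, activation="tanh", input_shape=(4096,)),  # fused features
    quantum_layer,
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(20, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # Normal / Benign / Malignant
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```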
The ChestX-ray8 dataset contains lung images in which some findings are benign, some are malignant, and some are normal. The images are 1024 × 1024 pixels, and there are 112,120 images in total. There is a variety of nodule sizes in the LIDC-IDRI dataset, which was acquired from clinically acquired CT images of the lungs. A total of 1,018 cases were obtained from 1,010 patients' lung CT scans. This study used a subset of 5,000 lung scans covering nodules and regions without nodules to ensure comprehensive coverage and representativeness. This subset includes Malignant (1,000 images), Benign (500 images), and Normal (500 images) classes. Preprocessing steps included normalization, resizing all images to a consistent resolution, and data augmentation techniques such as rotation, flipping, and scaling to increase diversity and prevent overfitting. Poor-quality images and those with artifacts were removed. Inclusion criteria were clear labeling for ChestX-ray8 images and clear annotations for LIDC-IDRI scans. Exclusion criteria included ambiguous labels and low-quality scans. Table 3 presents a brief overview of the datasets after filtering out the elements suited to our study.
Table 3. A summary of the ChestX-ray8 and LIDC-IDRI datasets used in this study.

Dataset Name | Class | Number of Images | Total
ChestX-ray8 | Normal | 1000 |
ChestX-ray8 | Pneumonia (Benign) | 1000 | 3000
ChestX-ray8 | Nodule (Malignant) | 1000 |
LIDC-IDRI | Malignant | 1000 |
LIDC-IDRI | Benign | 500 | 2000
LIDC-IDRI | Normal | 500 |
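The augmentation operations mentioned above (rotation, flipping, and scaling) can be sketched with Keras' ImageDataGenerator; the library choice, parameter values, and directory layout are assumptions for illustration:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rescale=1.0 / 255.0,   # normalization to [0, 1]
    rotation_range=15,     # random rotation (degrees)
    horizontal_flip=True,  # random flipping
    zoom_range=0.1,        # random scaling
)
# Expects one subfolder per class (Normal/Benign/Malignant); paths are hypothetical.
train_gen = augmenter.flow_from_directory(
    "data/cxr/train", target_size=(1024, 1024),
    color_mode="grayscale", class_mode="categorical", batch_size=8,
)
```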
4.1.1. Visual presentation of the dataset images
In this section, we show examples from each of the three classes used in our study to illustrate the variety of images in the datasets. Figure 6 shows a selection of images from both datasets, representing the different classes. The first column shows images from the Normal class, the second column shows images from the Benign class, and the third column shows images from the Malignant class. Similarly, the first row shows the CXR images corresponding to each class, while the second row shows the corresponding CT images.

Figure 6. Sample images from the adopted datasets. (a) Normal, (b) Benign, (c) Malignant.

Based on the analysis of Figure 6, we can visually observe a slight similarity between the images, indicating a particular pattern. Hence, merging features can improve the machine's classification accuracy.
Figure 7. Training accuracy and loss for different epochs of the system.
4.4. Analysis concerning accuracy with and without quantum models
Comparing the performance of the system with and without a quantum classifier demonstrates the effectiveness of the proposed architecture. A comparative analysis of the system without the quantum classifier (traditional) versus with the quantum classifier (hybrid) is presented in Table 6 [48-50].
Table 6. Comparison of accuracy performance metrics between the system with and without the quantum classifier.

System Type | Model Name | Overall Accuracy (%) | Sensitivity (%) | Specificity (%) | F1-Score (%) | Precision (%) | MCC
Traditional | VGG16 | 85.21 | 84 | 86 | 85 | 84 | 0.70
Traditional | Xception | 85.23 | 85 | 86 | 85 | 84 | 0.70
Traditional | ResNet50 | 83.12 | 83 | 84 | 83 | 82 | 0.66
Traditional | RepVGG | 79.45 | 80 | 79 | 79 | 78 | 0.58
Hybrid | RepVGG | 92.12 | 93 | 93 | 96 | 94 | 0.84
Based on the data in Table 6, our hybrid quantum system improves the overall accuracy of the system, with RepVGG leading the way at an overall rate of 92.12%. The results of this study indicate that quantum systems have an added benefit over traditional DL systems. In addition, the breakdown of each model's correct and misclassified cases with and without the quantum system is shown in Table 7 [21].
Table 7. Comparative analysis of misclassified cases.

System Type | Model Name | TP | TN | FP | FN
Traditional | VGG16 | 4050 | 200 | 450 | 300
Traditional | VGG19 | 4100 | 150 | 350 | 300
Traditional | InceptionV3 | 3500 | 200 | 1000 | 300
Traditional | Xception | 3000 | 700 | 1000 | 300
Traditional | ResNet50 | 3000 | 200 | 1500 | 300
Traditional | RepVGG | 4000 | 500 | 200 | 300
Hybrid | VGG16 | 4300 | 150 | 200 | 350
Hybrid | VGG19 | 4200 | 250 | 200 | 300
Hybrid | InceptionV3 | 4050 | 200 | 425 | 325
Hybrid | Xception | 4000 | 175 | 500 | 325
Hybrid | ResNet50 | 3500 | 500 | 650 | 350
Hybrid | RepVGG | 4400 | 200 | 300 | 100
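For reference, metrics of the kind reported in Table 6 are derived from counts like those in Table 7. The sketch below computes them for the hybrid RepVGG counts; it is a generic illustration of the standard formulas, not a reproduction of the exact published evaluation protocol:

```python
import math

def metrics(tp, tn, fp, fn):
    acc = (tp + tn) / (tp + tn + fp + fn)   # overall accuracy
    sens = tp / (tp + fn)                   # sensitivity (recall)
    spec = tn / (tn + fp)                   # specificity
    prec = tp / (tp + fp)                   # precision
    f1 = 2 * prec * sens / (prec + sens)    # F1-score
    mcc = (tp * tn - fp * fn) / math.sqrt(  # Matthews correlation coefficient
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return acc, sens, spec, prec, f1, mcc

print(metrics(4400, 200, 300, 100))  # hybrid RepVGG counts from Table 7
```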
We have also plotted the performance of each hybrid model used in our study through Receiver Operating Characteristic (ROC) curves and confusion matrices. These visualizations provide deeper insight into the effectiveness of each model. The ROC plots are presented in Figure 8, and the confusion matrices are presented in Figure 9.
The ROC curves illustrate the true positive rate (sensitivity) against the false
positive rate (1-specificity) for various threshold settings. A higher area under the
curve (AUC) indicates better performance in distinguishing between classes. The ROC
curves for our hybrid models demonstrate their superior ability to accurately classify
lung tumor images, showcasing the benefits of integrating quantum computing with
traditional deep learning methods.
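A per-class ROC curve of the kind shown in Figure 8 can be produced with scikit-learn as sketched below; the labels and scores are random placeholders standing in for a model's softmax outputs:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

classes = ["Normal", "Benign", "Malignant"]
y_true = np.eye(3)[np.random.randint(0, 3, size=500)]  # placeholder one-hot labels
y_score = np.random.dirichlet(np.ones(3), size=500)    # placeholder class scores

for i, name in enumerate(classes):
    fpr, tpr, _ = roc_curve(y_true[:, i], y_score[:, i])  # one-vs-rest ROC
    plt.plot(fpr, tpr, label=f"{name} (AUC = {auc(fpr, tpr):.2f})")
plt.plot([0, 1], [0, 1], linestyle="--")  # chance line
plt.xlabel("False positive rate (1 - specificity)")
plt.ylabel("True positive rate (sensitivity)")
plt.legend()
plt.show()
```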
The confusion matrices in Figure 9 highlight the superior performance of our hybrid models, showing high true positives (TP) and true negatives (TN) while minimizing false positives (FP) and false negatives (FN). This indicates improved accuracy, precision, and recall compared to the traditional models. The hybrid models, especially RepVGG with quantum layers, demonstrate significant diagnostic improvements.
Table 8 demonstrates that merging features from different TL models significantly improves classification accuracy. This improvement across all models validates that merging features captures more detailed patterns, enhancing data representation and classification performance.
Technique | Accuracy (%) | Computational Training Time (Hours)
QCNN [27] | 89.50 | 2.8
VQDNN [28] | 90.00 | 2.52
Hybrid TL [29] | 91.32 | 3.23
Quanvolution [31] | 88.24 | 2.45
Proposed System | 92.12 | 2.32
Based on the data presented in the table above, our hybrid quantum system performs better in terms of both accuracy and training time. Our system thus performed better across the board, proving the strength of the proposed architecture in all areas.
5. Conclusions
In this paper, we propose a new framework for lung tumor classification that uses both CT and CXR images as inputs and pre-trained TL models tailored to this task. The TL models are improved by combining features learned from CT and CXR images with a hybrid quantum layer. We have successfully classified lung tumors with our framework on two standard datasets, ChestX-ray8 and LIDC-IDRI. Techniques relying on CXR or CT images alone, or on conventional machine learning models, do not achieve the same results. We demonstrate that lung tumor classification can be improved by using both imaging modalities together with quantum computing. As a result, early detection, treatment, and outcomes for lung cancer patients can be greatly improved.
It is important to note the following possible limitations of the work in relation to the conclusions of the paper:
- There may be some types of lung cancer that are not suitable for the framework because of their distinct morphological or molecular characteristics.
- The framework may not capture the diversity and intricacy of lung tumor staging, which can have a substantial impact on a patient's outcome and management.
- The framework may be inaccessible or expensive in settings with limited resources.
- We tested the proposed model with a small number of images taken from two different datasets; the framework still needs to be standardized by testing it against a larger number of unknown or new datasets.
- This study focuses solely on non-invasive imaging techniques and excludes biopsy, the definitive method for lung cancer diagnosis. While this approach reduces patient risk, it may not capture the comprehensive accuracy provided by biopsy. Future research could integrate these methods to enhance both early detection and diagnostic confirmation.
We plan to apply our model to other types of lung diseases as well as other imaging methods in the future. Furthermore, we will experiment with other quantum layers and optimization methods to further improve the framework's performance.
References
1. Althubiti, S.A.; Paul, S.; Mohanty, R.; Mohanty, S.N.; Alenezi, F.; Polat, K. Ensemble learning framework with GLCM texture extraction for early detection of lung cancer on CT images. Computational and Mathematical Methods in Medicine 2022, 2022, doi:10.1155/2022/2733965.
2. Westeel, V.; Foucher, P.; Scherpereel, A.; Domas, J.; Girard, P.; Trédaniel, J.; Wislez, M.; Dumont, P.; Quoix, E.; Raffy, O. Chest CT scan plus x-ray versus chest x-ray for the follow-up of completely resected non-small-cell lung cancer (IFCT-0302): a multicentre, open-label, randomised, phase 3 trial. The Lancet Oncology 2022, 23, 1180-1188, doi:10.1016/S1470-2045(22)00451-X.
3. Saber, A.; Sakr, M.; Abo-Seida, O.M.; Keshk, A.; Chen, H. A novel deep-learning model for automatic detection and classification of breast cancer using the transfer-learning technique. IEEE Access 2021, 9, 71194-71209.
4. Sadad, T.; Rehman, A.; Munir, A.; Saba, T.; Tariq, U.; Ayesha, N.; Abbasi, R. Brain tumor detection and multi-classification using advanced deep learning techniques. Microscopy Research and Technique 2021, 84, 1296-1308.
5. Hu, Z.; Tang, J.; Wang, Z.; Zhang, K.; Zhang, L.; Sun, Q. Deep learning for image-based cancer detection and diagnosis - A survey. Pattern Recognition 2018, 83, 134-149.
6. Chaunzwa, T.L.; Hosny, A.; Xu, Y.; Shafer, A.; Diao, N.; Lanuti, M.; Christiani, D.C.; Mak, R.H.; Aerts, H.J. Deep learning classification of lung cancer histology using CT images. Scientific Reports 2021, 11, 5471.
7. Lakshmanaprabu, S.; Mohanty, S.N.; Shankar, K.; Arunkumar, N.; Ramirez, G. Optimal deep learning model for classification of lung cancer on CT images. Future Generation Computer Systems 2019, 92, 374-382.
8. Wei, S.; Chen, Y.; Zhou, Z.; Long, G. A quantum convolutional neural network on NISQ devices. AAPPS Bulletin 2022, 32, 1-11.
9. Zhao, C.; Gao, X.-S. QDNN: deep neural networks with quantum layers. Quantum Machine Intelligence 2021, 3, 15.
10. Beer, K.; Bondarenko, D.; Farrelly, T.; Osborne, T.J.; Salzmann, R.; Scheiermann, D.; Wolf, R. Training deep quantum neural networks. Nature Communications 2020, 11, 808.
11. Kora, P.; Mohammed, S.; Surya Teja, M.J.; Usha Kumari, C.; Swaraja, K.; Meenakshi, K. Brain tumor detection with transfer learning. 2021 Fifth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC) 2021, 443-446, doi:10.1109/I-SMAC52330.2021.9640678.
12. Mohite, A. Application of transfer learning technique for detection and classification of lung cancer using CT images. Int J Sci Res Manag 2021, 9, 621-634.
13. Sundar, S.; Sumathy, S. Transfer learning approach in deep neural networks for uterine fibroid detection. International Journal of Computational Science and Engineering 2022, 25, 52-63.
14. Alkassar, S.; Abdullah, M.A.; Jebur, B.A. Automatic brain tumour segmentation using fully convolution network and transfer learning. In Proceedings of the 2019 2nd International Conference on Electrical, Communication, Computer, Power and Control Engineering (ICECCPCE), 2019; pp. 188-192.
15. Humayun, M.; Sujatha, R.; Almuayqil, S.N.; Jhanjhi, N. A transfer learning approach with a convolutional neural network for the classification of lung carcinoma. In Proceedings of the Healthcare, 2022; p. 1058.
16. Wang, S.; Dong, L.; Wang, X.; Wang, X. Classification of pathological types of lung cancer from CT images by deep residual neural networks with transfer learning strategy. Open Medicine 2020, 15, 190-197.
17. Nishio, M.; Sugiyama, O.; Yakami, M.; Ueno, S.; Kubo, T.; Kuroda, T.; Togashi, K. Computer-aided diagnosis of lung nodule classification between benign nodule, primary lung cancer, and metastatic lung cancer at different image size using deep convolutional neural network with transfer learning. PLoS ONE 2018, 13, e0200721.
18. Da Nóbrega, R.V.M.; Peixoto, S.A.; da Silva, S.P.P.; Rebouças Filho, P.P. Lung nodule classification via deep transfer learning in CT lung images. In Proceedings of the 2018 IEEE 31st International Symposium on Computer-Based Medical Systems (CBMS), 2018; pp. 244-249.
19. Phankokkruad, M. Ensemble transfer learning for lung cancer detection. In Proceedings of the 2021 4th International Conference on Data Science and Information Technology, 2021; pp. 438-442.
20. Saikia, T.; Kumar, R.; Kumar, D.; Singh, K.K. An automatic lung nodule classification system based on hybrid transfer learning approach. SN Computer Science 2022, 3, 272.
21. Bhandary, A.; Prabhu, G.A.; Rajinikanth, V.; Thanaraj, K.P.; Satapathy, S.C.; Robbins, D.E.; Shasky, C.; Zhang, Y.-D.; Tavares, J.M.R.; Raja, N.S.M. Deep-learning framework to detect lung abnormality - A study with chest X-Ray and lung CT scan images. Pattern Recognition Letters 2020, 129, 271-278.
22. Ibrahim, D.M.; Elshennawy, N.M.; Sarhan, A.M. Deep-chest: Multi-classification deep learning model for diagnosing COVID-19, pneumonia, and lung cancer chest diseases. Computers in Biology and Medicine 2021, 132, 104348.
23. Yang, D.; Martinez, C.; Visuña, L.; Khandhar, H.; Bhatt, C.; Carretero, J. Detection and analysis of COVID-19 in medical images using deep learning techniques. Scientific Reports 2021, 11, 19638.
24. Kamil, M.Y. A deep learning framework to detect Covid-19 disease via chest X-ray and CT scan images. International Journal of Electrical & Computer Engineering 2021, 11.
25. Shyni, H.M.; Chitra, E. A comparative study of X-ray and CT images in COVID-19 detection using image processing and deep learning techniques. Computer Methods and Programs in Biomedicine Update 2022, 2, 100054.
26. Chen, G.; Chen, Q.; Long, S.; Zhu, W.; Yuan, Z.; Wu, Y. Quantum convolutional neural network for image classification. Pattern Analysis and Applications 2023, 26, 655-667.
27. Sebastianelli, A.; Zaidenberg, D.A.; Spiller, D.; Le Saux, B.; Ullo, S.L. On circuit-based hybrid quantum neural networks for remote sensing imagery classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2021, 15, 565-580.
28. Wang, Y.; Wang, Y.; Chen, C.; Jiang, R.; Huang, W. Development of variational quantum deep neural networks for image recognition. Neurocomputing 2022, 501, 566-582.
29. Mogalapalli, H.; Abburi, M.; Nithya, B.; Bandreddi, S.K.V. Classical-quantum transfer learning for image classification. SN Computer Science 2022, 3, 20.
30. Subbiah, G.; Krishnakumar, S.S.; Asthana, N.; Balaji, P.; Vaiyapuri, T. Quantum transfer learning for image classification. TELKOMNIKA (Telecommunication Computing Electronics and Control) 2023, 21, 113-122.
31. Henderson, M.; Shakya, S.; Pradhan, S.; Cook, T. Quanvolutional neural networks: powering image recognition with quantum circuits. Quantum Machine Intelligence 2020, 2, 2.
32. Kayan, C.E.; Koksal, T.E.; Sevinc, A.; Gumus, A. Deep reproductive feature generation framework for the diagnosis of COVID-19 and viral pneumonia using chest X-ray images. arXiv preprint arXiv:2304.10677 2023.
33. Sannidhan, M.; Prabhu, G.A.; Chaitra, K.; Mohanty, J.R. Performance enhancement of generative adversarial network for photograph-sketch identification. Soft Computing 2023, 27, 435-452.
34. Ding, X.; Zhang, X.; Ma, N.; Han, J.; Ding, G.; Sun, J. RepVGG: Making VGG-style ConvNets great again. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021; pp. 13733-13742.
35. Ghose, P.; Alavi, M.; Tabassum, M.; Uddin, A.; Biswas, M.; Mahbub, K.; Gaur, L.; Mallik, S.; Zhao, Z. Detecting COVID-19 infection status from chest X-ray and CT scan via single transfer learning-driven approach. Frontiers in Genetics 2022, 13, 980338.
36. Kallel, F.; Sahnoun, M.; Ben Hamida, A.; Chtourou, K. CT scan contrast enhancement using singular value decomposition and adaptive gamma correction. Signal, Image and Video Processing 2018, 12, 905-913.
37. Sannidhan, M.; Martis, J.E.; Nayak, R.S.; Aithal, S.K.; Sudeepa, K. Detection of antibiotic constituent in Aspergillus flavus using quantum convolutional neural network. International Journal of E-Health and Medical Communications (IJEHMC) 2023, 14, 1-26.
38. Abbas, A.; Sutter, D.; Zoufal, C.; Lucchi, A.; Figalli, A.; Woerner, S. The power of quantum neural networks. Nature Computational Science 2021, 1, 403-409.
39. Hou, Y.-Y.; Li, J.; Chen, X.-B.; Ye, C.-Q. A partial least squares regression model based on variational quantum algorithm. Laser Physics Letters 2022, 19, 095204.
40. Chalumuri, A.; Kune, R.; Manoj, B. A hybrid classical-quantum approach for multi-class classification. Quantum Information Processing 2021, 20, 119.
41. Coffey, M.W.; Deiotte, R.; Semi, T. Comment on "Universal quantum circuit for two-qubit transformations with three controlled-NOT gates" and "Recognizing small-circuit structure in two-qubit operators". Physical Review A 2008, 77, 066301.
42. Moore, C.; Nilsson, M. Parallel quantum computation and quantum codes. SIAM Journal on Computing 2001, 31, 799-815.
43. Song, G.; Klappenecker, A. Optimal realizations of controlled unitary gates. arXiv preprint quant-ph/0207157 2002.
44. Nakaji, K.; Tezuka, H.; Yamamoto, N. Quantum-enhanced neural networks in the neural tangent kernel framework. arXiv preprint arXiv:2109.03786 2021.
45. Oh, S.; Choi, J.; Kim, J. A tutorial on quantum convolutional neural networks (QCNN). In Proceedings of the 2020 International Conference on Information and Communication Technology Convergence (ICTC), 2020; pp. 236-239.
46. Rajesh, V.; Naik, U.P. Quantum convolutional neural networks (QCNN) using deep learning for computer vision applications. In Proceedings of the 2021 International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT), 2021; pp. 728-734.
47. Zhou, Z.; Sodha, V.; Rahman Siddiquee, M.M.; Feng, R.; Tajbakhsh, N.; Gotway, M.B.; Liang, J. Models Genesis: Generic autodidactic models for 3D medical image analysis. In Proceedings of the Medical Image Computing and Computer Assisted Intervention - MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13-17, 2019, Proceedings, Part IV; 2019; pp. 384-393.
48. Morid, M.A.; Borjali, A.; Del Fiol, G. A scoping review of transfer learning research on medical image analysis using ImageNet. Computers in Biology and Medicine 2021, 128, 104115.
49. Alzubaidi, L.; Fadhel, M.A.; Al-Shamma, O.; Zhang, J.; Santamaría, J.; Duan, Y.; Oleiwi, S.R. Towards a better understanding of transfer learning for medical imaging: a case study. Applied Sciences 2020, 10, 4523.
50. Veasey, B.P.; Broadhead, J.; Dahle, M.; Seow, A.; Amini, A.A. Lung nodule malignancy prediction from longitudinal CT scans with Siamese convolutional attention networks. IEEE Open Journal of Engineering in Medicine and Biology 2020, 1, 257-264.
794
795 Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
796 author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury
797 to people or property resulting from any ideas, methods, instructions or products referred to in the content.
798