Optimal Deep Learning Model for Classification of Lung Cancer
PII: S0167-739X(18)31701-1
DOI: https://doi.org/10.1016/j.future.2018.10.009
Reference: FUTURE 4514
Please cite this article as: Lakshmanaprabu S.K., et al., Optimal deep learning model for
classification of lung cancer on CT images, Future Generation Computer Systems (2018),
https://doi.org/10.1016/j.future.2018.10.009
Optimal Deep Learning Model for Classification of Lung Cancer
on CT Images
Lakshmanaprabu S.K.1,*, Sachi Nandan Mohanty2, Shankar K.3, Arunkumar N.4, Gustavo Ramirez5

1 Department of Electronics and Instrumentation Engineering, B.S. Abdur Rahman Crescent Institute of Science and Technology, Chennai, India. Email: [email protected]
2 Department of Computer Science & Engineering, Gandhi Institute for Technology, Bhubaneswar, India. Email: [email protected]
3 School of Computing, Kalasalingam Academy of Research and Education, Krishnankoil, India. Email: [email protected]
4 Department of Electronics and Instrumentation Engineering, SASTRA University, Thanjavur, India. Email: [email protected]
5 Department of Telematics, University of Cauca, Colombia. Email: [email protected]
* Corresponding author.
Abstract
Lung cancer is one of the most dangerous diseases, causing a huge number of cancer deaths worldwide. Early detection of lung cancer is the only possible way to improve a patient's chance of survival. A Computed Tomography (CT) scan is used to find the position of the tumor and to identify the level of cancer in the body. The current study presents an automated diagnosis and classification method for CT images of lungs. In this paper, CT lung images are analyzed with the assistance of an Optimal Deep Neural Network (ODNN) and Linear Discriminant Analysis (LDA). Deep features are extracted from the CT lung images, and the dimensionality of the feature set is then reduced using LDA to classify lung nodules as either malignant or benign. The ODNN is applied to the CT images and optimized using a Modified Gravitational Search Algorithm (MGSA) to identify the lung cancer class. The comparative results show that the proposed classifier gives a sensitivity of 96.2%, a specificity of 94.2%, and an accuracy of 94.56%.
Key Terms: Image Processing, Computed Tomography, Lung Cancer, LDA, Optimization,
Classification.
Nomenclature
CT Computed Tomography
EC Evolutionary Computation
NN Neural Network
1. Introduction
Medical image analysis has extraordinary importance in the health sector, particularly in noninvasive treatment and clinical examination [1]. Acquired medical images such as X-ray, CT, MRI, and ultrasound images are used for specific diagnoses [2]. In medical imaging, CT is one of the scanning mechanisms that uses X-ray beams to capture cross-sectional images [3]. Lung cancer is a cancer that leads to about 1.61 million deaths per year. In Indonesia, lung cancer ranks third among the prevalent cancers, for the most part found in MIoT centers [4]. The survival rate is higher if the cancer is diagnosed at the beginning stages, but the early discovery of lung cancer is not a simple task. Around 80% of patients are diagnosed only at the middle or advanced phase of cancer [5]. Globally, lung cancer is positioned second among males and tenth among females [2]. The information given in these studies is a general portrayal of a lung cancer detection framework that contains four basic stages. Lung cancer is the third most frequent cancer in women, after breast and colorectal cancers [6, 7]. The feature extraction process is one of the basic stages of such a framework; one of the striking features of CT imaging is its non-invasive character.
The selected or extracted feature set carries the relevant information from the input data into the reduction process [11]. The reduced features are assigned to a support vector machine for training and testing. The models used for lung cancer image classification are neural network models with binarization image pre-processing [12]. Existing research on lung cancer classification using a neural network model reported 80% accuracy [13]. Various investigations have been conducted on lung cancer classification with classifiers such as SVM, KNN, and ANN [14]. The SVM is a universal learning method based on statistical learning theory [15]. However, these techniques are expensive and detect lung cancer only at its advanced stages, when the chance of survival is very low. The early detection of cancer can be helpful in curing the disease completely, so there is a strong requirement for a technique that detects lung cancer at an early stage.
The contribution of the current work has two important phases: in the first phase, the selected features of the CT lung images are passed to the LDA reduction process, and in the second phase, an optimal deep learning classifier tuned with the MGSA optimization algorithm is used to classify the CT lung cancer images. The proposed method outperformed other methods, and it is shown that the performance improvement is statistically significant. In the rest of this paper, section 2 discusses the literature, section 3 depicts the current issues of the classifiers, and section 4 details the proposed methodology. Section 5 contains the implementation and investigation of this work, followed by the conclusion with recommendations for future work.
The general prognosis of lung cancer is poor, since specialists are often unable to discover the infected region until it reaches an advanced stage. Five-year survival is around 54% for early-stage lung cancer that is restricted to the lungs. The danger of lung cancer increases with the number of cigarettes smoked over time; specialists refer to this hazard in terms of pack-years of smoking history. A small segment of lung cancers occurs in individuals with no known risk factors for the illness; a portion of these may simply be random events. Patients diagnosed early have greater treatment alternatives and a far better chance of survival. Be that as it may, just 16% of individuals are diagnosed in the beginning stage, when the sickness is generally treatable.
2. Literature review
In 2018, Yutong Xie et al. [17] recommended an algorithm for lung nodule classification that fuses texture, shape, and deep model-learned information (Fuse-TSD) at the decision level.
Hiba Chougrad et al. [18] investigated a CAD framework based on CNNs to classify breast cancer. Deep learning generally requires large datasets to train networks, while the transfer learning method works with small datasets of medical images; the CNNs were optimally trained with the help of transfer learning. The CNN accomplished the best outcome in terms of accuracy, i.e., 98.94%. Heba Mohsen et al. [19] demonstrated a DNN classifier for brain tumor classification, where the DNN is combined with the wavelet transform.
In 2015, Alok Sharma et al. [20] proposed a method of regularized linear discriminant analysis. Investigating medical data for the prediction of disease requires a proper set of features, and many evolutionary algorithms have been applied to obtain an optimal selection of features. Recently, the gravitational search algorithm and Elephant Herding Optimization have been utilized for the selection of optimal features [21, 28].
Statistical features were used to develop a classification model for CT images. The paper claimed that a feed-forward back-propagation network provides better accuracy compared to plain feed-forward networks; the skewness feature also has particular significance in enhancing classifier accuracy [22].
In a study conducted during 2016, Hao Wang et al. proposed a generalized LDA method based on the Euclidean norm, called ELDA, to defeat the existing disadvantages in classification. The trial results exhibited that this algorithm accomplishes comparatively better results, with higher accuracy and viability than other gait recognition procedures in the model [23].
3. Problem statement

In existing techniques, the lung images were captured and subjected to segmentation directly, after which the SVM classifier was applied and the accuracies were measured [22].
The current framework had a limitation since it could not predict the sort, shape, or size of the tumor; it dealt with a number of pixels, which is not valuable for the earlier detection of cancer [17]. When an ANN produces a testing solution, it does not provide insight as to why and how, which reduces confidence in the network.
Neural networks are 'black boxes' and have been restricted in their interpretability. Deep networks with many hidden layers are capable of modeling complex structures; however, their training algorithm is again more complex and computationally demanding.
It is estimated that, by utilizing this model, various existing data mining and image processing strategies could be made to work together in multiple ways. The main disadvantage of the LDA technique is that it only distinguishes the images containing anomalies [23].
The drawback of GSA is the metropolis criterion used for comparing the positions of moving particles, along with controlling particle movement to overcome its randomness.
4. Methodology
The proposed approach to classify CT images of the human lung has a few stages: preprocessing, feature extraction, feature reduction, and finally classification. Initially, the CT images are preprocessed to improve their quality; the feature extraction procedure then extracts histogram, texture, and wavelet features from the images. After feature extraction, an LDA-based dimensionality reduction technique is applied to reduce the features for the classification process. The purpose of the reduction is to lower the computational time and cost of the classification method, since using the maximum number of features increases both the computation time and the storage memory. During the classification phase, the CT lung images are classified as normal, benign, or malignant based on the extracted features. Generally, the classification problem has two phases, training and testing: the classifier is trained with the chosen features of the training data, and during the testing phase the outcomes of the classification procedure signify whether the images contain lung cancer regions or non-cancer regions. The current study utilizes the ODNN classifier, with MGSA optimization used to optimize its structure. This approach, illustrated in figure 1, offers outstanding simplicity and minimal effort in both the training and testing processes.
If the image is noisy and the values of the pixels neighboring a target pixel lie somewhere around 0 and 255, the image quality is improved by filtering and histogram equalization. The contrast of the CT images is thereby increased and kept within a set limit.
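The equalization step mentioned above can be sketched as a standard histogram-equalization routine. This is a minimal illustration, not the authors' exact preprocessing pipeline; 8-bit grayscale input (`levels=256`) is assumed:

```python
import numpy as np

def equalize_histogram(image, levels=256):
    """Histogram equalization for an 8-bit grayscale image (2-D uint8 array)."""
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0][0]                # first non-zero CDF value
    total = image.size
    # Map each gray level through the normalized cumulative distribution.
    lut = np.round((cdf - cdf_min) / (total - cdf_min) * (levels - 1)).astype(np.uint8)
    return lut[image]
```

After equalization, the gray levels are spread across the full [0, 255] range, which raises the image contrast as described.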
Features may be single values or matrix vectors. Feature extraction performs dimensionality reduction in image processing, producing a representation of the image that can be used for classification. It involves diminishing the input data into a reduced, representative set of features, which are then given as inputs to classifiers that assign them to the class they represent. The aim of feature extraction is to reduce the data by estimating informative properties. The current study uses histogram features, texture features, and wavelet features. A histogram demonstrates the number of pixels in an image at each intensity value. From the input image, the total range of gray levels is evaluated by the histogram method: here there are 256 gray levels, ranging from 0 to 255. Common histogram features are the following.
Variance: The variance gives the degree of gray-level fluctuation from the mean gray-level value.
Mean: The mean gives the average gray level of each region and is helpful only as a rough estimate of intensity.
Standard Deviation: The standard deviation is the square root of the variance and denotes image contrast: a high-contrast image has high variance, whereas a low-contrast image has low variance.
Skewness: The skewness indicates whether the histogram is tilted toward the positive or the negative side of the mean.
Kurtosis: Together with skewness, kurtosis depicts how anomalous the image distribution is; both are used in the statistical analysis of the histogram.
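The first-order histogram features listed above can be computed directly from the pixel values; a minimal NumPy sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

def histogram_features(image):
    """First-order features: mean, variance, std, skewness, kurtosis."""
    x = np.asarray(image, dtype=np.float64).ravel()
    mean = x.mean()
    var = x.var()
    std = np.sqrt(var)
    centered = x - mean
    # Standardized third and fourth moments.
    skewness = (centered ** 3).mean() / std ** 3 if std > 0 else 0.0
    kurtosis = (centered ** 4).mean() / std ** 4 if std > 0 else 0.0
    return {"mean": mean, "variance": var, "std": std,
            "skewness": skewness, "kurtosis": kurtosis}
```

For a symmetric intensity distribution the skewness is zero, matching the interpretation given above.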
Texture features are extracted from the input image next, after the histogram features. Since an abnormality is generally spread across the image, the textural orientation of each class is distinctive, which helps attain better classification accuracy. The gray-level co-occurrence matrix (GLCM) symbolizes a statistical method of examining the surface that takes into account the spatial relationship of pixels. The GLCM functions describe the texture of an image by estimating the frequency of occurrence of pixel pairs with given values. Generally, these features are calculated utilizing GLCM probability values; of the roughly 22 available features, a few are considered in this study.
The GLCM probabilities are obtained by normalizing the co-occurrence counts:

P_{ij} = \frac{F_{ij}}{\sum_{i,j=0}^{L-1} F_{ij}} \quad (2)

In the above equation, F_{ij} denotes the frequency of occurrences between two grey levels i and j for a given displacement vector, and L is the number of quantized grey levels.
Energy: This measures the presence of constant or periodically uniform gray-level values in the image.
Entropy: This refers to the quantity of information in the image, which is required for the compression process; an image with low entropy exhibits little contrast and large runs of identical pixels.
Homogeneity: This evaluates image homogeneity by taking larger values for smaller gray-tone differences in pair elements.
Contrast: This calculates the spatial frequency of an image and the difference moments of the GLCM; it symbolizes the variation between the maximum and minimum values of adjacent pixel pairs.
Correlation: Correlation evaluates the linear dependence of the gray levels of adjoining pixels. Digital image correlation tracking stands for an optical procedure that employs tracking and image registration for measuring variations in images.
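A minimal GLCM sketch following the standard Haralick definitions of the features above. The displacement vector and the quantization level are assumptions (`dx=1, dy=0`, `levels=8`), since the paper does not specify them:

```python
import numpy as np

def glcm_features(image, levels=8, dx=1, dy=0):
    """Normalized GLCM for one displacement plus energy, entropy,
    contrast, homogeneity and correlation (standard Haralick forms)."""
    img = np.asarray(image, dtype=np.int64)
    a, b = img, img
    if dx:
        a, b = a[:, :-dx], b[:, dx:]          # horizontal neighbours
    if dy:
        a, b = a[:-dy, :], b[dy:, :]          # vertical offset
    glcm = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(glcm, (a.ravel(), b.ravel()), 1.0)
    P = glcm / glcm.sum()                     # probabilities P_ij of Eq. (2)
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    si = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sj = np.sqrt(((j - mu_j) ** 2 * P).sum())
    nz = P[P > 0]
    return {
        "energy": (P ** 2).sum(),
        "entropy": -(nz * np.log2(nz)).sum(),
        "contrast": ((i - j) ** 2 * P).sum(),
        "homogeneity": (P / (1.0 + np.abs(i - j))).sum(),
        "correlation": (((i - mu_i) * (j - mu_j) * P).sum() / (si * sj))
                       if si * sj > 0 else 1.0,
    }
```

The input is assumed to be already quantized to `levels` gray levels; a practical pipeline would rescale 0-255 intensities first.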
The wavelet transform provides useful information for image processing because of its beneficial features. The DWT represents a linear transformation acting on a data vector. In the wavelet transform, feature extraction is carried out in two stages: first, the subbands of the original image are developed, and these subbands are then evaluated at various resolutions. The wavelet is an effective mathematical tool for feature extraction and has been used to extract wavelet coefficients from images. The mean of the DWT coefficients is computed by averaging the coarse (approximation) coefficients, where δat is the mean value of the approximation coefficients. Initially, the images are passed to a low-pass filter, which retains the low-frequency image content within the cut-off frequency; thereafter, the image signals are passed to a high-pass filter, which retains the high-frequency content.
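A one-level Haar decomposition illustrates the low-pass/high-pass subband split and the mean approximation-coefficient feature described above. The wavelet family and normalization are not stated in the paper, so the averaging Haar filters here are an assumption:

```python
import numpy as np

def haar_dwt2(image):
    """One-level 2-D Haar DWT; returns (LL, LH, HL, HH) subbands.
    Image sides must be even."""
    x = np.asarray(image, dtype=np.float64)
    # Rows: low-pass = pairwise mean, high-pass = pairwise difference.
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # Columns: the same filters applied to each row-filtered half.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def wavelet_mean_feature(image):
    """Mean of the approximation (LL) coefficients, as in the delta_at feature."""
    ll, _, _, _ = haar_dwt2(image)
    return ll.mean()
```

With this averaging normalization, the mean of the LL subband coincides with the mean intensity of the image, which is why it acts as a coarse summary feature.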
LDA is used as the dimensionality reduction step for the feature vectors before the classification process.

Fig 2: LDA

The within-class scatter matrix is

M_w = \sum_{j=1}^{c} \sum_{i=1}^{N_s} (m_{ij} - m_j)(m_{ij} - m_j)^T \quad (4)

where c denotes the number of classes, m_{ij} is the i-th sample of class j, m_j is the mean of class j, and N_s is the number of samples of the class. The between-class scatter matrix is

R_s = \sum_{j=1}^{k} (m_j - m)(m_j - m)^T \quad (5)

where m is the overall mean. These matrices follow the Fisher discriminant hypothesis: the criterion attempts to maximize the ratio of the determinant of the between-class scatter to that of the within-class scatter, so that the separation between one set of classes differs from another set.
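Equations (4) and (5) can be sketched directly; the following is generic Fisher LDA (with a pseudo-inverse in case the within-class scatter is singular), not necessarily the authors' exact implementation:

```python
import numpy as np

def lda_project(X, y, n_components=1):
    """Fisher LDA: build within-class (Eq. 4) and between-class (Eq. 5)
    scatter matrices and project X onto the leading discriminant axes."""
    classes = np.unique(y)
    m = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                 # Eq. (4), class term
        Sb += len(Xc) * np.outer(mc - m, mc - m)      # Eq. (5), weighted
    # Solve Sw^{-1} Sb w = lambda w; keep the largest-eigenvalue directions.
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    W = evecs[:, order[:n_components]].real
    return X @ W
```

For a c-class problem, at most c-1 discriminant directions are meaningful, which is what makes LDA attractive as a feature reducer before classification.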
For the CT image classification model, the current study proposes a DNN based on the deep learning approach. The DL structure extends the customary NN by adding more hidden layers to the system design between the input and output layers, so as to model more complex and nonlinear connections. After feature selection, the classification step is performed with the help of the DNN on the resultant feature vector. This classifier works with the help of two components, a deep belief network (DBN) and restricted Boltzmann machines (RBM). The involved steps of the optimal deep learning model are described in the section below. During the training stage, a DBN is utilized, which is a deep, feed-forward neural network architecture with multiple hidden layers. The DBN model allows the system to generate visible states based on the states of its hidden units, which depict the system's beliefs. The parameters of a DBN are the weights among the units of the layers in addition to the biases of the layers. Setting up these parameters to train the DNN is a principal challenge.
An RBM is a two-layer recurrent neural framework in which stochastic binary input units are connected to stochastic binary hidden units through symmetric weights. A training case is presented with its class label ignored and propagated stochastically through the RBM per equation (6); the vector is also passed back the other way through the RBM, which results in a confabulation (reconstruction) of the original input data.

F(w, h) = \sum_{i=1}^{I} \sum_{j=1}^{J} I_{ij} w_i h_j + \sum_{i=1}^{I} \theta_i w_i + \sum_{j=1}^{J} \theta_j h_j \quad (6)

where I_{ij} are the connection weights between visible units w_i and hidden units h_j, and the remaining terms are the bias contributions of the two layers.
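A sketch of the RBM energy and the stochastic hidden-unit update. The standard RBM energy form is assumed here, since Eq. (6) only fixes the notation loosely; all names are illustrative:

```python
import numpy as np

def rbm_energy(v, h, W, b_v, b_h):
    """Standard RBM energy E(v, h) = -v^T W h - b_v^T v - b_h^T h;
    Eq. (6) corresponds to this form up to notation (signs assumed)."""
    return -(v @ W @ h) - (b_v @ v) - (b_h @ h)

def sample_hidden(v, W, b_h, rng):
    """Bernoulli sample of the hidden units given the visible units."""
    p = 1.0 / (1.0 + np.exp(-(v @ W + b_h)))   # sigmoid activation
    return (rng.random(p.shape) < p).astype(np.float64), p
```

Propagating a sampled hidden vector back through `W.T` yields the "confabulation" (reconstruction) mentioned in the text, which is the basis of contrastive-divergence training.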
The novel population-based heuristic algorithm is based on the law of gravity and mass interactions: agents communicate through the gravitational force. The GSA approach encodes a solution to the problem in each mass's position and gravitation, and the fitness of each agent determines its inertial mass. Subsequently, each mass represents a solution, and the search is directed [28] by properly adjusting the gravitational and inertial masses. The new position is updated through a probability function that is utilized as part of random value selection.
Initially, w sets of agents are considered and their positions are specified as

w_i = \{ w_i^1, \ldots, w_i^s \} \quad (7)

where w_i^1 is the first coordinate of agent i and s is the dimension of the search space of weights to be chosen.
In this CT lung image classification, the maximum specificity ratio based on the trained and tested structure of the DNN is considered as the fitness function, as shown in equation (8).
The force between two particles is directly proportional to their masses and inversely proportional to their distance; each particle moves towards the particles that are heavier in mass. This is derived in equations (9) and (10):

Mass_i(t) = \frac{D_i(t)}{\sum_{j=1}^{w} D_j(t)} \quad (9)

D_i(t) = \frac{Fit_i(t) - worst(t)}{Best(t) - worst(t)} \quad (10)
Here Fit_i(t) represents the fitness value of particle i at time t; for a maximization problem, Best(t) and worst(t) are the maximum and minimum fitness values in the population. For estimating the acceleration of an agent, the total force applied by the other agents is considered. To give a stochastic character to GSA, the total force that acts on particle i in the d-th dimension is set to be a randomly weighted sum of the d-th components of the forces.
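The mass normalization of equations (9) and (10) can be sketched as follows (the `maximize` flag and the equal-fitness tie-handling are assumptions not stated in the paper):

```python
import numpy as np

def gsa_masses(fitness, maximize=True):
    """GSA mass update: normalize fitness (Eq. 10), then masses (Eq. 9)."""
    fit = np.asarray(fitness, dtype=np.float64)
    best = fit.max() if maximize else fit.min()
    worst = fit.min() if maximize else fit.max()
    if best == worst:                       # all agents equally fit
        return np.full(fit.shape, 1.0 / fit.size)
    D = (fit - worst) / (best - worst)      # Eq. (10)
    return D / D.sum()                      # Eq. (9): Mass_i = D_i / sum_j D_j
```

The masses sum to one, so the worst agent exerts no pull and the best agent exerts the strongest pull.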
Force_i^d(t) = \sum_{j \in kbest,\, j \neq i} Rn_j \, gr(t) \, \frac{Mass_j(t)\, Mass_i(t)}{Ed_{ij} + \epsilon} \left( w_j^d(t) - w_i^d(t) \right) \quad (11)

Here w_j^d(t) represents the position of the j-th particle in dimension d; Mass_i and Mass_j denote the gravitational masses related to particles i and j; gr(t) is the gravitational constant; Ed_{ij} denotes the Euclidean distance between particles i and j; and Rn_j is a random value in [0, 1]. In equation (11), the random values are selected by calculating the probability function.
The algorithm is applied iteratively until it converges to a sufficiently adequate solution; the random walk of each agent is constrained accordingly. The best solutions that fulfill the objective function are discovered, and the algorithm is prepared to give exact solutions by maximizing the classification accuracy of the CT lung images. If optimal results are not obtained in iteration 1, the algorithm moves to iteration_new = iteration + 1, and the steps are repeated until the optimal weights of the DNN are obtained.
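One simplified GSA move, combining the force of Eq. (11) with a position update. This is a sketch only: the velocity term of standard GSA and the MGSA modification introduced by the paper are omitted, and all names are illustrative:

```python
import numpy as np

def gsa_step(pos, masses, g, rng, eps=1e-9):
    """One simplified GSA move: randomly weighted pairwise forces
    (Eq. 11), acceleration a_i = F_i / Mass_i, then a position update."""
    n, d = pos.shape
    force = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = pos[j] - pos[i]
            dist = np.linalg.norm(diff) + eps       # Euclidean distance Ed_ij
            # Randomly weighted contribution of agent j on agent i.
            force[i] += rng.random() * g * masses[i] * masses[j] * diff / dist
    acc = force / (masses[:, None] + eps)
    return pos + acc                                # velocity-free update
```

With two equal-mass agents, each step pulls them toward each other, which is the gravitational behaviour the text describes.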
The working principle of this stage depends on the normal backpropagation algorithm. To detect and classify abnormalities, an output layer is placed at the top of the DNN. Additionally, there are N input neurons (based on the features), and three hidden layers are used in the current study's DNN. The optimized weights are obtained through the training stage with the assistance of a training data set, where backpropagation begins from the weights achieved in the pre-training stage. From the optimal weights, the layer activations are computed as follows.
T(m_i = 1 \mid n) = \sigma\!\left( m_i + \sum_j opt\_w_{ji}\, n_j \right)

T(n_i = 1 \mid m) = \sigma\!\left( n_i + \sum_j opt\_w_{ji}\, m_j \right) \quad (13)
where m and n denote the bias vectors for the visible and hidden layers and σ is a logistic function with range (0, 1). Further, the training dataset is processed until the optimized weight is reached, or maximum accuracy is attained, with the help of equation (13). Finally, on the basis of the optimal weight w, the lung images are classified in the testing stage using the test data set.
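The conditional activations of Eq. (13) amount to sigmoid units; a minimal sketch (the biases of the equation are passed explicitly as `b_v`/`b_h`, and the function names are illustrative):

```python
import numpy as np

def sigmoid(z):
    """Logistic function with range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def visible_given_hidden(b_v, W, h):
    """T(m_i = 1 | n) = sigma(b_i + sum_j w_ji n_j), elementwise per Eq. (13)."""
    return sigmoid(b_v + W @ h)

def hidden_given_visible(b_h, W, v):
    """T(n_j = 1 | m): activation probability of each hidden unit."""
    return sigmoid(b_h + W.T @ v)
```

With zero weights and biases each unit is maximally uncertain (probability 0.5), which is the usual starting point before pre-training.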
5. Results and discussion

The proposed CT lung image classification models were implemented in the MATLAB 2016 working platform, with a system configuration of an i5 processor and 4 GB RAM. In this cancer image classification process, a standard CT database was used, and the proposed model was compared with existing classifiers such as NN, SVM, KNN, and DNN.
In the proposed work, the database comprised 50 low-dosage recorded lung cancer CT images; the ground truth was provided by the radiologist and is also included in the dataset. The test images considered for the proposed model are shown below.

Fig 4: Sample database images
The most commonly used evaluation metrics for a classification model are included in Table 2, and the classification results are tabulated in Table 3. For the lung image investigation, 70 images were considered for training and the remaining 30 images for the testing process.
Table 2: Performance Metrics (TP: True Positive, TN: True Negative, FP: False Positive, FN: False Negative)

Metric | Formula
Sensitivity | Sen = TP / (TP + FN)
Specificity | Spc = TN / (TN + FP)
Accuracy | Acc = (TP + TN) / (TP + TN + FP + FN)
PPV | PPV = TP / (TP + FP)
NPV | NPV = TN / (TN + FN)
Table 3: Classification results

Phase | Class | Counts | Total
Training | Normal | 22, 1, 4 | 27
Training | Malignant | 2, 18, 2 | 22
Training | Benign | 1, 20, 0 | 21
Training | Total images | 25, 39, 6 | 70
Testing | Normal | 6, 0, 2 | 8
Testing | Malignant | 1, 9, 1 | 11
Testing | Benign | 0, 11, 0 | 11
Testing | Total images | 7, 20, 3 | 30
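The formulas of Table 2 are straightforward to compute from raw confusion counts; a minimal sketch (the counts used in the test are illustrative, not taken from Table 3):

```python
def classification_metrics(tp, tn, fp, fn):
    """Metrics of Table 2 computed from raw confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),                   # Sen
        "specificity": tn / (tn + fp),                   # Spc
        "accuracy": (tp + tn) / (tp + tn + fp + fn),     # Acc
        "ppv": tp / (tp + fp),                           # positive predictive value
        "npv": tn / (tn + fn),                           # negative predictive value
    }
```

For a multi-class problem such as normal/benign/malignant, these are typically computed one-vs-rest per class and then averaged.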
The results attained by the ODNN model were illustrated with point operations. From the results, the performance of the proposed model is determined by its ability to detect cancerous or non-cancerous lung images. Based on the testing data, the model is able to predict the class of each image.
[Figure: feature reduction time (sec) per image for PCA, LDA, and ICA]
The LDA was utilized to lessen the complexity of the framework in feature reduction time, as illustrated in figure 5. The dimensional value of the feature vector was reduced for the images. The comparison graph clearly shows that the proposed technique achieves lower computational time coupled with high classification accuracy (because of the LDA-based feature reduction). The time for network training is not considered, since the weights/biases of the LDA should remain unchanged unless the properties of the images change a great deal. Confining the feature vectors to the part chosen by the LDA, the diminished features are considered for the training-testing process.
Table 5: Proposed CT lung image classification with pre-processed results (columns: CT image type (Contrast Enhanced), Accuracy, Sensitivity, Specificity)

Table 5 demonstrates the accuracy level of the lung cancer image classification rates for the contrast-enhanced CT images, exhibited as normal, benign, or malignant. The proposed ODNN is compared with existing classifiers, and it demonstrates that the proposed algorithm provides better performance. Secondly, the texture and color features are considered for grouping the CT lung images with the kernel function.
[Figure: comparative analysis of classifier metrics (%) for ODNN, MLP, RBF, Linear, ANN, KNN, and DNN; (a) Accuracy, Sensitivity, Specificity; (b) PPV, NPV]
Figure 5 provides the comparative analysis of the classifiers with various measurements: PPV, NPV, accuracy, sensitivity, and specificity. In this investigation, the techniques were applied to human lung classification. It was inferred that the proposed technique produces a classification accuracy of 99% in the testing phase, whereas the accuracy is 82.29% for NN, 90.54% for SVM, and 74.55% for DNN. The PPV and NPV values indicate better performance, at nearly 98%, in the proposed model. After completing the analysis, the classification specificity was 95%, which is not considered a strong performance, and similarly for the sensitivity parameter. This may be because of the noise exhibited in the phase data, due to which some images were misclassified.
Number of Folds | Accuracy | Sensitivity | Specificity | PPV | NPV
The cross-validation cost is not much more than a single feature extraction. Each time, one fold is held out for testing and the rest are utilized for training. The variance of the results is reduced with a larger k: all observations are utilized for both training and validation, and each observation is used for validation exactly once.
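The k-fold procedure described above can be sketched as an index generator (the seeded shuffling is an assumption; the paper does not state how folds were formed):

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Shuffle indices and split them into k folds; each fold serves once
    as the validation set while the other k-1 folds are used for training."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```

Because every index appears in exactly one validation fold, each observation is validated exactly once, as noted above.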
6. Conclusion
The proposed ODNN with feature reduction demonstrated the better classification in case of
lung CT Images compared with others classification techniques. An automatic lung cancer
classification approach reduces the manual labeling time and avoids a human mistake.
Through machine learning techniques, the researchers planned to achieve better precision and
accuracy in recognizing a normal and abnormal lung image. According to the experimental
outcomes, the proposed technique is effective for the classification of the human lung images
in terms of accuracy, sensitivity, and specificity with its values 94.56%, 96.2%, and 94.2%
respectively. The accuracy level has clearly evident that the proposed algorithm is deeply
operate, non-invasive and cheap. In future work, we will use high dosage CT lung images and
References
[1] Rattan, S., Kaur, S., Kansal, N. and Kaur, J., 2017, December. An optimized lung
[2] Early Detection of Lung Cancer for Preventive Health Care: A Survey. International
[3] Detterbeck, F.C., 2017. The 8th Edition Lung Cancer Stage Classification: What
[4] Li, J., Wang, Y., Song, X. and Xiao, H., 2018. Adaptive multinomial regression with
[5] Wutsqa, D.U. and Mandadara, H.L.R., 2017, October. Lung cancer classification
using radial basis function neural network model with point operation. In Image and
[6] Sharma, D. and Jindal, G., 2011. Computer Aided Diagnosis System for Detection of
[7] Bhatnagar, D., Tiwari, A.K., Vijayarajan, V. and Krishnamoorthy, A., 2017,
Conference Series: Materials Science and Engineering (Vol. 263, No. 4, p. 042100).
IOP Publishing.
[8] Sui, X., Jiang, W., Chen, H., Yang, F., Wang, J. and Wang, Q., 2017. Validation of
the stage groupings in the eighth edition of the TNM classification for lung cancer.
[9] El-Sherbiny, B., Nabil, N., El-Naby, S.H., Emad, Y., Ayman, N., Mohiy, T. and
AbdelRaouf, A., 2018, March. BLB (Brain/Lung cancer detection and segmentation
and Breast Dense calculation). In Deep and Representation Learning (IWDRL), 2018
[10] Chen, F., Zhang, D., Wu, J. and Zhang, B., 2017, December. Computerized analysis
of tongue sub-lingual veins to detect lung and breast cancers. In Computer and
2712). IEEE.
[11] Al-Tarawneh, M.S., 2012. Lung cancer detection using image processing techniques.
[12] Akay, Mehmet Fatih. "Support vector machines combined with feature selection for breast cancer diagnosis." Expert Systems with Applications 36, no. 2 (2009): 3240-3247.
[13] Xie, Y., Zhang, J., Xia, Y., Fulham, M. and Zhang, Y., 2018. Fusing texture, shape
[14] Sharma, D. and Jindal, G., 2011. Computer Aided Diagnosis System for Detection of
[16] Sarker, P., Shuvo, M.M.H., Hossain, Z. and Hasan, S., 2017, September.
[18] Chougrad, H., Zouaki, H. and Alheyane, O., 2018. Deep convolutional neural
[19] Mohsen, H., El-Dahshan, E.S.A., El-Horbaty, E.S.M. and Salem, A.B.M., 2017.
Classification using deep learning neural networks for brain tumors. Future
[20] Sharma, A. and Paliwal, K.K., 2015. A deterministic approach to regularized linear
[21] Nagpal, S., Arora, S. and Dey, S., 2017. Feature Selection using Gravitational Search
[22] Kuruvilla, J. and Gunavathi, K., 2014. Lung cancer classification using neural
pp.202-209.
[23] Wang, H., Fan, Y., Fang, B. and Dai, S., 2018. Generalized linear discriminant
[24] Wang, Z. and Tao, J., 2006, November. A fast implementation of adaptive histogram
IEEE.pp.1-4.
[25] Hiremath, P.S. and Shivashankar, S., 2006. Wavelet based features for texture
linear discriminant analysis. In Chemical and Biological Standoff Detection III (Vol.
[27] Eldos, T. and Al Qasim, R., 2013. On the performance of the Gravitational Search
[28] Lakshmanaprabu, S. K., Shankar, K., Khanna, A., Gupta, D., Rodrigues, J. J.,
Mohamed A. Elsoud, Majid Alkhambashi. Optimal feature level fusion based ANFIS
classifier for brain MRI image classification. Concurrency Computat Pract Exper.
2018;e4887.https://doi.org/10.1002/cpe.4887
[30] http://www.via.cornell.edu/lungdb.html