Diagnostics 2023, 13, 3007
Review
A Review of Recent Advances in Brain Tumor Diagnosis Based
on AI-Based Classification
Reham Kaifi 1,2,3
1 Department of Radiological Sciences, College of Applied Medical Sciences, King Saud bin Abdulaziz
University for Health Sciences, Jeddah City 22384, Saudi Arabia; [email protected]
2 King Abdullah International Medical Research Center, Jeddah City 22384, Saudi Arabia
3 Medical Imaging Department, Ministry of the National Guard—Health Affairs,
Jeddah City 11426, Saudi Arabia
Abstract: Uncontrolled and fast cell proliferation is the cause of brain tumors. Early cancer detection is
vitally important to save many lives. Brain tumors can be divided into several categories depending
on the kind, place of origin, pace of development, and stage of progression; as a result, tumor
classification is crucial for targeted therapy. Brain tumor segmentation aims to delineate accurately
the areas of brain tumors. A specialist with a thorough understanding of brain illnesses is needed
to manually identify the proper type of brain tumor. Additionally, processing many images takes
time and is tiresome. Therefore, automatic segmentation and classification techniques are required to
speed up and enhance the diagnosis of brain tumors. Tumors can be quickly and safely detected by
brain scans using imaging modalities, including computed tomography (CT), magnetic resonance
imaging (MRI), and others. Machine learning (ML) and artificial intelligence (AI) have shown
promise in developing algorithms that aid in automatic classification and segmentation utilizing
various imaging modalities. The right segmentation method must be used to precisely classify
patients with brain tumors to enhance diagnosis and treatment. This review describes multiple
types of brain tumors, publicly accessible datasets, enhancement methods, segmentation, feature
extraction, classification, machine learning techniques, deep learning, and learning through a transfer
to study brain tumors. In this study, we attempted to synthesize brain cancer imaging modalities
with automatic computer-assisted methodologies for brain cancer characterization in ML and DL
frameworks. Finding the current problems with the engineering methodologies currently in use and
predicting a future paradigm are other goals of this article.
Keywords: brain tumors; magnetic resonance imaging; computed tomography; computer-aided
diagnostic and detection; deep learning; machine learning
Citation: Kaifi, R. A Review of Recent Advances in Brain Tumor Diagnosis Based on AI-Based
Classification. Diagnostics 2023, 13, 3007. https://doi.org/10.3390/diagnostics13183007
and the odds of survival are significantly reduced if it spreads to nearby cells. Undoubtedly,
many lives could be preserved if cancer was detected at its earliest stage using quick and
affordable diagnostic methods. Both invasive and noninvasive approaches may be utilized
to diagnose brain cancer. An incision is made during a biopsy to extract a lesion sample for
analysis. It is regarded as the gold standard for the diagnosis of cancer, where pathologists
examine several cell characteristics of the tumor specimen under a microscope to verify
the malignancy.
Noninvasive techniques include physical inspections of the body and imaging modal-
ities employed for imaging the brain [5]. In comparison to brain biopsy, other imaging
modalities, such as CT scans and MRI images, are more rapid and secure. Radiologists
use these imaging techniques to identify brain problems, evaluate the development of
diseases, and plan surgeries [6]. However, interpreting brain scans to diagnose illnesses is
prone to inter-reader variability, and accuracy depends on the medical practitioner's
competency [5]. It is crucial to accurately identify the type of brain disorder
to reduce diagnostic errors. Utilizing computer-aided diagnostic (CAD) technologies can
improve accuracy. The fundamental idea behind CAD is to offer a computer result as an
additional guide to help radiologists interpret images and shorten the reading time for
images. This enhances the accuracy and stability of radiological diagnosis [7]. Several
CAD-based artificial intelligence techniques, such as machine learning (ML) and deep
learning (DL), are described in this review for diagnosing tissues and segmenting tumors.
The segmentation process is a crucial aspect of image processing. This approach includes a
procedure for extracting the area that helps determine whether a region is infected. Using
MRI images to segment brain tumors presents various challenges, including image noise,
low contrast, lost borders, shifting intensities inside tissues, and tissue-type variation.
The most complex and crucial task in many medical image applications is detecting
and segmenting brain tumors because it often requires much data and information. Tumors
come in a variety of shapes and sizes. Automatic or semiautomatic detection/segmentation,
helped by AI, is currently crucial in medical diagnostics. The medical professionals must
authenticate the boundaries and areas of the brain cancer and ascertain where precisely it
rests and the exact impacted locations before therapies such as chemotherapy, radiation, or
brain surgery. This review examines the output from various algorithms that are used in
segmenting and detecting brain tumors.
The review is structured as follows: Types of brain tumors are described in Section 2.
The imaging modalities utilized in brain imaging are discussed in Section 3. The review
algorithms used in the study are provided in Section 4. A review of the relevant state-of-
the-art is provided in Section 5. The review is discussed in Section 6. The work’s conclusion
is presented in Section 7.
forms are rated, with grade I being the least malignant (e.g., meningiomas, pituitary
tumors) and grade IV being the most malignant. Despite differences in grading systems that rely
on the kind of tumor, this denotes the pace of growth [10]. The most frequent type of brain
tumor in adults is glioma, which may be classified into HGG and LGG. The WHO further
categorized LGG into I–II grade tumors and HGG into III–IV grade. To reduce diagnosing
errors, accurate identification of the specific type of brain disorder is crucial for treatment
planning. A summary of various types of brain tumors is provided in Table 1.
3. Imaging Modalities
For many years, the detection of brain abnormalities has involved the use of several
medical imaging methods. The two brain imaging approaches are structural and functional
scanning [11]. Different measurements relating to brain anatomy, tumor location, traumas,
and other brain illnesses compose structural imaging [12]. The finer-scale metabolic alter-
ations, lesions, and visualization of brain activity are all picked up by functional imaging
methods. Techniques including CT, MRI, single-photon emission computed tomography (SPECT),
positron emission tomography (PET), functional MRI (fMRI), and ultrasound (US) are utilized to
localize brain tumors and characterize their size, location, shape, and other features [13].
3.1. MRI
MRI is a noninvasive procedure that utilizes nonionizing, safe radiation [14] to display
the 3D anatomical structure of any region of the body without the need for cutting the
tissue. To acquire images, it employs RF pulses and an intense magnetic field [15].
The body is intended to be positioned within an intense magnetic field. The water
molecules of the human body are initially in their equilibrium state when the magnets
are off. The magnetic field is then activated by moving the magnets. The body’s water
molecules align with the magnetic field’s direction under the effect of this powerful mag-
netic field [14]. Protons are stimulated to spin opposing the magnetic field and realign
by the application of a high RF energy pulse to the body in the magnetic field's direction.
When the RF energy pulse is stopped, the water molecules return to their state of equilibrium
and align with the magnetic field once more [14]. This causes the water molecules to produce
RF energy, which the scanner detects and transforms into visual images [16]. The tissue
structure determines the amount of RF energy the water molecules can use. As we can see in
Figure 1, a healthy brain has white matter (WM), gray matter (GM), and CSF, according to a
structural MRI scan [17]. The primary difference between these tissues in a structural MRI
scan is based on the amount of water they contain, with WM constituting 70% water and GM
containing 80% water. The CSF is almost entirely composed of water, as shown in Figure 1.
Figure 1. Healthy brain MRI image showing white matter (WM), gray matter (GM), and CSF [17].
Figure 2 illustrates the fundamental MRI planes used to visualize the anatomy of the brain:
axial, coronal, and sagittal. T1, T2, and FLAIR MRI sequences are most often employed for
brain analysis [14]. A T1-weighted scan can distinguish between gray and white matter.
T2-weighted imaging is water-content sensitive and is therefore ideally suited to conditions
where water accumulates within the tissues of the brain.
Figure 2. Fundamental MRI planes: (a) coronal, (b) sagittal, and (c) axial.
Most tumors show low or medium gray intensity on T1-w images. On T2-w images, most tumors
exhibit bright intensity [17]. Examples of MRI tumor intensity levels are shown in Figure 3.
Figure 3. MRI brain tumor: (a) FLAIR image, (b) T1 image, and (c) T2 image [17].

Table 2. Properties of various MRI sequences.

                T1      T2      FLAIR
White Matter    Bright  Dark    Dark
Gray Matter     Gray    Dark    Dark
CSF             Dark    Bright  Dark
Tumor           Dark    Bright  Bright

Another type of MRI, identified as functional magnetic resonance imaging (fMRI) [18],
measures changes in blood oxygenation to interpret brain activity. An area of the brain that
is more active begins to use more blood and oxygen. As a result, an fMRI correlates the
location and mental process to map the continuing activity in the brain.
3.2. CT
CT scanners provide finely detailed images of the interior of the body using a revolving
X-ray beam and a row of detectors. On a computer, specific algorithms are used to process the
images captured from various angles to create cross-sectional images of the entire body [19].
However, a CT scan can offer more precise images of the skull, spine, and other bone
structures close to a brain tumor, as shown in Figure 4. Patients typically receive contrast
injections to highlight aberrant tissues. The patient may occasionally take dye to improve
their image. When an MRI is unavailable and the patient has an implant such as a pacemaker, a
CT scan may be performed to diagnose a brain tumor. The benefits of using CT scanning are low
cost, improved tissue classification detection, quick imaging, and more widespread
availability. The radiation risk in a CT scan is 100 times greater than in a standard X-ray
diagnosis [19].
Figure 4. CT brain tumor.
3.3. PET
An example of a nuclear medicine technique that analyzes the metabolic activity of biological
tissues is positron emission tomography (PET) [20]. To help evaluate the tissue being studied,
a small amount of a radioactive tracer is utilized throughout the procedure.
Fluorodeoxyglucose (FDG) is a popular PET agent for imaging the brain. To provide more
conclusive information on malignant (cancerous) tumors and other lesions, PET may also be
utilized in conjunction with other diagnostic procedures like CT or MRI. PET scans an organ
or tissue by utilizing a scanning device to find photons released by a radionuclide at that
site [20]. The chemical compounds that are normally utilized by the specific organ or tissue
throughout its metabolic process are combined with a radioactive atom to create the tracer
used in PET scans, as shown in Figure 5.
Figure 5. PET brain tumor.
3.4. SPECT
A nuclear imaging examination called single-photon emission computed tomography (SPECT)
combines CT with a radioactive tracer. The tracer is what enables medical professionals to
observe the blood flow to tissues and organs [21]. A tracer is injected into the patient's
bloodstream prior to the SPECT scan. The radiolabeled tracer generates gamma rays that the
CT scanner can detect. Gamma-ray information is gathered by the computer and shown on the CT
cross-sections. A 3D representation of the brain can be created by adding these
cross-sections back together [21].
3.5. Ultrasound
An ultrasound is a specialized imaging technique that provides details that can be useful in
cancer diagnosis, especially for soft tissues. It is frequently employed as the initial step
in the typical cancer diagnostic procedure [22]. One advantage of ultrasound is that a test
can be completed swiftly and affordably without subjecting the patient to radiation. However,
ultrasound cannot independently confirm a cancer diagnosis and is unable to generate images
with the level of resolution or detail of a CT or MRI scan. A medical expert gently moves a
transducer across the patient's skin over the region of the body being examined during a
conventional ultrasound examination. A succession of high-frequency sounds is generated by
the transducer, which "bounce off" the patient's interior organs. The ensuing echoes return
to the ultrasound device, which then transforms the sound waves into a 2D image that may be
observed in real time on a monitor. According to [22], US probes have been applied in brain
tumor resection. Depending on the degree of density inside the tissue being assessed, the
shape and strength of ultrasonic echoes can change. An ultrasound can detect tumors that may
be malignant because solid masses and fluid-filled cysts bounce sound waves differently.
4.1.1. Machine Learning
ML is a branch of AI that allows computers to learn without being explicitly programmed.
Classifying medical images, including lesions, into various groups using input features has
become one of the latest applications of ML. There are two types of ML algorithms:
supervised learning and unsupervised learning [23]. ML algorithms learn from labeled data in
supervised learning. Unsupervised learning is the process by which ML systems attempt to
comprehend the interdata relationship using unlabeled data. ML has been employed to analyze
brain cancers in the context of brain imaging [24]. The main stages of ML classification are
image preprocessing, feature extraction, feature selection, and classification. Figure 6
illustrates the process architecture.
Figure 6. ML block diagram.
images, the preprocessing stage must be effective enough to eliminate as much noise as
possible without affecting essential image components [25]. This procedure is carried out
using a variety of approaches, including cropping, image scaling, histogram equalization,
filtering using a median filter, and image adjusting [26].
3. Feature extraction
The process of converting images into features based on several image characteristics in
the medical field is known as feature extraction. These features carry the same information
as the original images but are entirely different. This technique has the advantages of
enhancing classifier accuracy, decreasing overfitting risk, allowing users to analyze data,
and speeding up training [27]. Texture, contrast, brightness, shape, gray level co-occurrence
matrix (GLCM) [28], Gabor transforms [29], wavelet-based features [30], 3D Haralick
features [31], and histogram of local binary patterns (LBP) [32] are some of the examples of
the various types of features.
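As a concrete illustration of one such texture feature, the following is a minimal pure-Python sketch of the basic 8-neighbor LBP operator and its histogram; this is a generic illustration under simplified assumptions (integer grayscale image as nested lists, interior pixels only), not any cited paper's implementation:

```python
def lbp_code(img, r, c):
    """8-neighbor local binary pattern code for interior pixel (r, c)."""
    center = img[r][c]
    # Clockwise neighbor offsets starting at the top-left corner
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offs):
        if img[r + dr][c + dc] >= center:  # neighbor at least as bright -> bit set
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """Histogram of LBP codes over all interior pixels: a simple texture feature."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

# Toy image: a bright center pixel surrounded by darker neighbors
img = [[10, 10, 10],
       [10, 50, 10],
       [10, 10, 10]]
# Every neighbor is darker than the center, so no bits are set
assert lbp_code(img, 1, 1) == 0
```

The resulting 256-bin histogram (often reduced to "uniform" patterns in practice) is what would be fed to a classifier as a texture descriptor.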
4. Feature selection
The technique attempts to arrange the features in ascending order of importance
or relevance, with the top features being mostly employed in classification. As a result,
multiple feature selection techniques are needed to reduce redundant information to
discriminate between relevant and nonrelated features [33], such as PCA [34], genetic
algorithm (GA) [35], and ICA [36].
5. ML algorithm
Machine learning aims to divide the input information into separate groups based
on common features or patterns of behavior. KNN [35], ANN [37], RF [38], and SVM [39]
are examples of supervised methods. These techniques include two stages: training and
testing. During training, the data are manually labeled using human involvement. The
model is first constructed in this step, after which it is utilized to determine the classes that
are unlabeled in the testing stage. The KNN algorithm works by finding the points that are
closest to each other, computing the distance between them using one of several approaches,
including the Hamming, Manhattan, Euclidean, and Minkowski distances [35].
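The distance computation and majority vote described above can be sketched in a few lines of pure Python; this is a toy illustration with made-up feature vectors and labels, not a specific paper's implementation:

```python
from collections import Counter

def minkowski(a, b, p):
    """Minkowski distance; p=1 gives Manhattan, p=2 gives Euclidean."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def knn_predict(train, labels, query, k=3, p=2):
    """Label a query point by majority vote among its k nearest neighbors."""
    order = sorted(range(len(train)), key=lambda i: minkowski(train[i], query, p))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Toy 2D feature vectors (e.g., two texture features per image region)
train = [(1.0, 1.0), (1.2, 0.9), (8.0, 8.0), (7.8, 8.2)]
labels = ["benign", "benign", "tumor", "tumor"]
print(knn_predict(train, labels, (7.5, 7.9), k=3))  # -> tumor
```

With k=3, the two nearby "tumor" points outvote the nearest "benign" point, illustrating why k is usually chosen odd for two-class problems.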
The support vector machine (SVM) technique is frequently employed for classification
tasks. Every feature forming a data point in this approach, which represents a coordinate,
is formed in a distinct n-space. As a result, the objective of the SVM method is to identify
a boundary or line across a space with n dimensions, referred to as a hyperplane that
separates classes [39]. There are numerous ways to create different hyperplanes, but the
one with the maximum margin is the best. The maximum margin is the separation between
the most extreme data points inside a class, often known as the support vectors.
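A minimal sketch of the decision rule and margin geometry described above, assuming an already-trained hyperplane (w, b); the numbers are hypothetical, and training (finding w and b) is omitted:

```python
import math

def svm_decision(w, b, x):
    """Sign of w.x + b: which side of the hyperplane x falls on."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

def geometric_margin(w, b, x):
    """Distance from point x to the hyperplane w.x + b = 0."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    return abs(sum(wi * xi for wi, xi in zip(w, x)) + b) / norm

# Hypothetical trained hyperplane separating two classes in a 2D feature space
w, b = [1.0, 1.0], -3.0
print(svm_decision(w, b, [4.0, 2.0]))   # -> 1
print(svm_decision(w, b, [0.5, 0.5]))   # -> -1

# For a canonical hyperplane (|w.x + b| = 1 at the support vectors),
# the maximum margin between the two classes is 2/||w||.
margin = 2.0 / math.sqrt(sum(wi * wi for wi in w))
```

Maximizing `margin` subject to correct classification is exactly the optimization the SVM solves during training.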
Figure 8. DL block diagram.
obtain the ideal threshold value [59]. Otsu thresholding [38] is the popular method among
these techniques.
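A minimal pure-Python sketch of Otsu's method, which exhaustively searches for the gray level maximizing the between-class variance; the 8-bit assumption and the toy bimodal image are illustrative, not taken from any cited work:

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level that maximizes between-class variance (Otsu)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]                  # pixels at or below candidate threshold
        if w_bg == 0:
            continue
        w_fg = total - w_bg              # pixels above candidate threshold
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal image: dark background (~20) and a bright lesion (~200)
pixels = [20, 22, 19, 21, 20] * 10 + [200, 198, 201, 199] * 5
t = otsu_threshold(pixels)
```

Because the two intensity populations are well separated, the selected threshold lands between them, splitting background from lesion cleanly.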
false negative (FN) results when the model wrongly predicts a pixel belonging to a certain
class [71].
TP in classification tasks refers to an image that is accurately categorized into a positive
category based on the ground truth. Similar to this, the TN result occurs when the model
properly classifies an image in the negative category. As opposed to that, FP results occur
when the model wrongly assigns an image in the positive class while the actual datum is
in the negative category. FN results occur when the model misclassifies an image while it
belongs in the positive category. Through the four elements mentioned above, different
performance measures enable us to expand the analysis.
Accuracy (ACC) measures a model’s ability to correctly categorize all pixels/classes,
whether they are positive or negative. Sensitivity (SEN) shows the percentage of accurately
predicted positive images/pixels among all actual positive samples. It evaluates a model’s
ability to recognize relevant samples or pixels. The percentage of actual negatives that were
predicted is known as specificity (SPE). It indicates a percentage of classes or pixels that
could not be accurately recognized [71].
The precision (PR) or positive predictive value (PPV) measures how frequently the
model correctly predicts the class or pixel. It provides the precise percentage of positively
expected results from models. The most often used statistic that combines SEN and
precision is the F1 score [72]. It is the harmonic mean of the two.
The Jaccard index (JI), also known as intersection over union (IoU), calculates the
percentage of overlap between the model’s prediction output and the annotation ground-
truth mask.
The spatial overlap between the segmented region of the model and the ground-
truth tumor region is measured by the Dice similarity coefficient (DSC). A DSC value
of zero means there is no spatial overlap between the annotated model result and the
actual tumor location, whereas a value of one means there is complete spatial overlap. The
receiver characteristics curve is summarized by the area under the curve (AUC), which
compares SEN to the false positive rate as a measure of a classifier’s ability to discriminate
between classes.
The similarity between the segmentation produced by the model and the expert-
annotated ground truth is known as the similarity index (SI). It describes how the identifica-
tion of the tumor region is comparable to that of the input image [71]. Table 3 summarizes
different performance equations.
Table 3. Performance measure equations.

Parameter    Equation
ACC          (TP + TN)/(TP + FN + FP + TN)
SEN          TP/(TP + FN)
SPE          TN/(TN + FP)
PR           TP/(TP + FP)
F1_SCORE     2 ∗ PR ∗ SEN/(PR + SEN)
DSC          2 ∗ TP/(2 ∗ TP + FP + FN)
Jaccard      TP/(TP + FP + FN)
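The equations above can be computed directly from the four confusion-matrix counts; the following small sketch (with made-up counts) also makes visible that the F1 score and the DSC coincide algebraically, both reducing to 2TP/(2TP + FP + FN):

```python
def metrics(tp, tn, fp, fn):
    """Common evaluation measures from the four confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sen = tp / (tp + fn)                    # sensitivity / recall
    spe = tn / (tn + fp)                    # specificity
    pr = tp / (tp + fp)                     # precision / PPV
    f1 = 2 * pr * sen / (pr + sen)          # harmonic mean of PR and SEN
    dsc = 2 * tp / (2 * tp + fp + fn)       # Dice similarity coefficient
    ji = tp / (tp + fp + fn)                # Jaccard index / IoU
    return {"ACC": acc, "SEN": sen, "SPE": spe, "PR": pr,
            "F1": f1, "DSC": dsc, "JI": ji}

# Hypothetical counts for a binary tumor/no-tumor classifier
m = metrics(tp=80, tn=90, fp=10, fn=20)
```

For these counts, ACC is 0.85, SEN is 0.80, and F1 equals DSC, as the algebra predicts.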
5. Literature Review
5.1. Article Selection
The major goal of this study is to review and understand brain tumor classification
and detection strategies developed worldwide between 2010 and 2023. This present study
aims to review the most popular techniques for detecting brain cancer that have been made
available globally, in addition to looking at how successful CAD systems are in this process.
We did not target any one publisher specifically, but we utilized articles from a variety
of sources to account for the diversity of knowledge in a particular field. We collected
appropriate articles from several internet scientific research article libraries. We searched
the pertinent publications using IEEE Explore, Medline, ScienceDirect, Google Scholar,
and ResearchGate.
Each time, the filter choice for the year (2010 to 2023) was chosen so that only papers
from the chosen period were presented. Most frequently, we used terms like “detection
of MRI images using deep learning,” “classification of brain tumor from CT/MRI images
using deep learning,” “detection and classification of brain tumor using deep learning,”
“CT brain tumor,” “PET brain tumor,” etc. This study offers an analysis of 53 chosen
publications.
phase. LBP in three orthogonal planes and an enhanced histogram of images are employed
in the third stage, the feature extraction step. Lastly, the random forest is employed as a
classifier for distinguishing tumorous areas since it can work flawlessly with large inputs
and has a high level of segmentation accuracy. The overall outcome was acceptable, with a
mean Jaccard value of 87% and a DSC of 93%.
By combining two K-means and FCM-clustering approaches, Almahfud et al. [83]
suggest a technique for segmenting human brain MRI images to identify brain cancers.
Because K-means is more susceptible to color variations, it can rapidly and effectively
discover local optima and outliers. The K-means results are then clustered once more with
FCM to categorize the convex contour based on the border, so that the cluster results are
better and the calculation procedure is simpler. To increase accuracy, morphology and noise
reduction procedures are also suggested. Sixty-two brain MRI scans were used in the study,
and the accuracy rate was 91.94%.
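As an illustration of the intensity-clustering step in such pipelines, the following is a generic 1D Lloyd's-algorithm (K-means) sketch over pixel intensities; it is not the implementation of [83], and the intensity values are made up:

```python
def kmeans_1d(values, k=2, iters=100):
    """Cluster scalar intensities into k groups with Lloyd's algorithm."""
    srt = sorted(values)
    # Initialize centers at evenly spaced quantiles of the sorted data
    centers = [srt[(2 * i + 1) * len(srt) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        new = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
        if new == centers:   # assignments stable -> converged
            break
        centers = new
    return centers

# Toy intensities: dark background vs. a bright tumor region
vals = [12, 15, 14, 13, 200, 205, 198, 202]
centers = sorted(kmeans_1d(vals, k=2))
```

In a segmentation pipeline, each pixel would then be assigned the label of its nearest center, and (as in [83]) the result could be refined with FCM and morphological cleanup.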
According to Pereira et al. [69], an automated segmentation technique based on CNN
architecture was proposed, which explores small three-by-three kernels. Given the smaller
number of weights in the network, using small kernels enables the creation of more intricate
architectures and helps prevent overfitting. Additionally, they looked at the use of intensity
normalizing as an initial processing step, which, when combined with data augmentation,
was highly successful in segmenting brain tumors in MRI images. Their suggestion was
verified using the BRATS database, yielding Dice similarity coefficient values of 0.88, 0.83,
and 0.77 for the Challenge dataset for the whole, core, and enhancing areas.
Based on the properties of a separated local square, a unique approach for segmenting brain
tumors was suggested in [84]. The suggested procedure essentially consists
of three parts. An image was divided into homogenous sections with roughly comparable
properties and sizes using the super-pixel segmentation technique in the first stage. The
second phase was the extraction of gray statistical features and textural information. In
the last phase of building the segmentation model, super-pixels were identified as either
tumor areas or nontumor regions using SVM. They used 20 images from the BRATS dataset,
where a DSC of 86.12% was attained, to test the suggested technique.
The CAD system suggested by Gupta et al. [85] offers a noninvasive method for
the accurate tumor segmentation and detection of gliomas. The system takes advantage
of the super pixels’ combined properties and the FCM-clustering technique. The sug-
gested CAD method recorded 98% accuracy for glioma detection in both low-grade and
high-grade tumors.
Brain tumor segmentation using a CNN-based data transfer to SVM classifier approach
was proposed by Cui et al. [68]. Their algorithm comprises two cascaded phases.
They trained CNN in the initial step to understand the mapping of the image region to
the tumor label region. In the testing phase, they passed the testing image and CNN’s
anticipated label output to an SVM classifier for precise segmentation. Tests and evalua-
tions show that the suggested structure outperforms separate SVM-based or CNN-based
segmentation, while DSC achieved 86.12%.
The two-pathway-group CNN architecture described by Razzak et al. is a novel
approach for brain tumor segmentation that simultaneously takes advantage of local and
global contextual traits. This approach imposes equivariance in the 2PG-CNN model
through parameter sharing to prevent instability and overfitting. The output of a basic CNN is
handled as an extra source and combined at the last layer of the 2PG CNN, where the
cascade architecture was included. When a group CNN was embedded into a two-route
architecture for model validation using BRATS datasets, the results were DSC 89.2%, PR
88.22%, and SEN 88.32% [86].
A semantic segmentation model for the segmentation of brain tumors from multi-
modal 3D MRIs for the BRATS dataset was published in [87]. After experimenting with
several normalizing techniques, they discovered that group-norm and instance-norm per-
formed equally well. Additionally, they have tested with more advanced methods of data
augmentation, such as random histogram pairing, linear image transformations, rotations,
and random image filtering, but these have yet to show any significant benefit. Further,
raising the network depth had no positive effect on performance. However, increasing the
number of filters consistently produced better results. Their Dice coefficients on the BRATS
final testing dataset were 0.826, 0.882, and 0.837 for the enhancing tumor core, whole
tumor, and tumor core, respectively.
CNN was used by Karayegen and Aksahin [88] to offer a semantic segmentation
approach for autonomously segmenting brain tumors on BRATS image datasets that
include images from four distinct imaging modalities (T1, T1C, T2, and FLAIR). This
technique was effectively used, and images were shown in a variety of planes, including
sagittal, coronal, and axial, to determine the precise tumor location and parameters such as
height, breadth, and depth. In terms of tumor prediction, the evaluation findings of the
semantic segmentation networks are highly encouraging: the mean IoU and mean
prediction ratio were calculated to be 86.946 and 91.718, respectively.
A novel, completely automatic method for segmenting brain tumor regions was
proposed by Ullah et al. [89] using multiscale residual attention CNN (MRA-UNet). To
maintain the sequential information, MRA-UNet uses three sequential slices as its input.
By employing multiscale learning in a cascade path, it can make use of the adaptable
region of interest strategy and precisely segment improved and core tumor regions. In the
BRATS-2020 dataset, their method produced novel outcomes with an overall Dice score of
90.18%.
A new technique for segmenting brain tumors using the fuzzy Otsu thresholding
morphology (FOTM) approach was presented by Wisaeng and Sa-Ngiamvibool [90]. The
values from each single histogram in the original MRI image were modified by using a
color normalizing preprocessing method in conjunction with histogram specification. The
findings unambiguously demonstrate that image gliomas, image meningiomas, and image
pituitary have average accuracy indices of 93.77%, 94.32%, and 94.37%, respectively. A
pituitary have average accuracy indices of 93.77%, 94.32%, and 94.37%, respectively. A
summary of MRI brain tumor segmentation is provided in Table 5.
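Otsu's threshold, the crisp core of the FOTM method above, picks the gray level that maximizes the between-class variance of the intensity histogram; a plain NumPy sketch on a synthetic bimodal image (the fuzzy extension and morphology steps of [90] are omitted):

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Return the gray level that maximizes between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # normalized histogram
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(60, 10, 4000),      # dark background mode
                      rng.normal(190, 10, 1000)]     # bright "lesion" mode
                     ).clip(0, 255).astype(np.uint8)
t = otsu_threshold(img)
print(t)  # falls between the two intensity modes
```

The fuzzy variant replaces the hard class assignment with membership functions around the threshold, which is what gives FOTM its tolerance to partial-volume voxels.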
In [92], the authors suggested a novel, wavelet-energy-based method for automatically
classifying MR images of the human brain into normal or abnormal. The classifier was
SVM, and biogeography-based optimization (BBO) was utilized to enhance the SVM's
weights. They succeeded in achieving 99% precision and 97% accuracy.
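The "wavelet energy" features behind [92] can be illustrated with a single-level 2-D Haar transform written directly in NumPy (to stay dependency-light; in practice a package such as PyWavelets would be used), taking the energy of each sub-band as one feature:

```python
import numpy as np

def haar_level(img: np.ndarray):
    """One level of the 2-D Haar DWT: returns LL, LH, HL, HH sub-bands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2   # row-wise average
    d = (img[:, 0::2] - img[:, 1::2]) / 2   # row-wise detail
    LL = (a[0::2] + a[1::2]) / 2
    LH = (a[0::2] - a[1::2]) / 2
    HL = (d[0::2] + d[1::2]) / 2
    HH = (d[0::2] - d[1::2]) / 2
    return LL, LH, HL, HH

def wavelet_energy(img: np.ndarray) -> np.ndarray:
    """Energy (mean squared coefficient) of each sub-band, as a feature vector."""
    return np.array([(b ** 2).mean() for b in haar_level(img)])

img = np.indices((64, 64)).sum(axis=0).astype(float)  # smooth diagonal ramp
print(wavelet_energy(img).round(3))
```

For a smooth image almost all energy sits in the LL band and the diagonal detail HH is near zero; tumors and other texture shift energy into the detail bands, which is what makes the four-number summary discriminative.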
Amin et al. [28] suggest an automated technique to distinguish between malignant
and benign brain MRI images. The segmentation of potential lesions has used a variety of
methodologies. Then, considering shape, texture, and intensity, a feature set was selected
for every candidate lesion. The SVM classifier is then used on the collection of features
to compare the proposed framework's precision using various cross-validations. Three
benchmark datasets, including Harvard, Rider, and Local, are used to verify the suggested
technique. For the procedure, the average accuracy was 97.1%, the area under the curve
was 0.98, the sensitivity was 91.9%, and the specificity was 98.0%.
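The cross-validated SVM evaluation used by Amin et al. maps directly onto scikit-learn; in this sketch, synthetic two-class feature vectors stand in for the per-lesion shape, texture, and intensity features:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for per-lesion shape/texture/intensity feature vectors
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
for k in (5, 10):  # compare different cross-validations, as in [28]
    scores = cross_val_score(clf, X, y, cv=k)
    print(f"{k}-fold accuracy: {scores.mean():.3f}")
```

Scaling inside the pipeline (rather than before the split) keeps each fold's test data out of the normalization statistics, which is the detail that makes reported CV accuracies honest.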
A suitable CAD approach toward classifying brain tumors is proposed in [93]. The
database includes meningioma, astrocytoma, normal brain areas, and primary brain tumors.
The radiologists selected 20 × 20 regions of interest (ROIs) for every image in the dataset.
Altogether, these ROIs were used to extract 371 intensity and texture features. These three
classes were divided using the ANN classifier. Overall classification accuracy was 92.43%.
Four hundred twenty-eight T1 MR images from 55 individuals were used in a varied
dataset for multiclass brain tumor classification [94]. A content-based active contour
model extracted 856 ROIs. These ROIs were used to extract 218 intensity and texture
features. PCA was employed in this study to reduce the size of the feature space. The ANN
was then used to classify these six categories. The classification accuracy reached 85.5%.
A unique strategy for classifying brain tumors in MRI images was proposed in [95]
by employing improved structural descriptors and hybrid kernel-SVM. To better classify
the image and improve the texture feature extraction process using statistical parameters,
they used GLCM and histograms to derive the texture feature from every region. Different
kernels were combined to create a hybrid kernel SVM classifier to enhance the classification
process. They applied this technique to only axial T1 brain MRI images, achieving 93%
accuracy with their suggested strategy.
A hybrid system composed of two ML techniques was suggested in [96] for classifying
brain tumors. For this, 70 brain MR images overall (60 abnormal, 10 normal) were taken
into consideration. DWT was used to extract features from the images. Using PCA, the
total number of features was decreased. Following feature extraction, feed-forward back-
propagation ANN and KNN were applied individually on the reduced features. FP-ANN
uses the back-propagation learning method to update its weights, while KNN was described
earlier. Using KNN and FP-ANN, this technique achieves 97% and 98% accuracy,
respectively [96].
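The [96] pipeline, features reduced by PCA and then classified by KNN, can be sketched with scikit-learn; the synthetic feature matrix below stands in for DWT coefficients extracted from real MR images:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# 64-dim vectors standing in for DWT features of normal/abnormal MR images
X, y = make_classification(n_samples=200, n_features=64, n_informative=16,
                           class_sep=2.0, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y,
                                      random_state=0)

# PCA shrinks the feature space before the distance-based KNN classifier
model = make_pipeline(PCA(n_components=7), KNeighborsClassifier(n_neighbors=3))
model.fit(Xtr, ytr)
acc = model.score(Xte, yte)
print(f"test accuracy: {acc:.2f}")
```

Reducing dimensionality first matters for KNN in particular, because Euclidean distances become less informative as the raw feature count grows.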
A strategy for classifying brain MRI images was presented in [97]. Initially, they used
an enhanced image improvement method that comprises two distinct steps: noise removal
and contrast enhancement using histogram equalization. Then, using a DWT to extract
features from an improved MR brain image, they further decreased these features by mean
and standard deviation. Finally, they developed a sophisticated deep neural network
(DNN) to classify the brain MRI images as abnormal or normal, and their strategy achieved
95.8% accuracy.
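Histogram equalization, the contrast-enhancement step in [97], remaps intensities through the normalized cumulative histogram; a compact NumPy version on a synthetic low-contrast patch:

```python
import numpy as np

def equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size             # normalized cumulative histogram
    lut = np.round(255 * cdf).astype(np.uint8)  # lookup table: old -> new level
    return lut[img]

rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
eq = equalize(low_contrast)
print(int(low_contrast.max()) - int(low_contrast.min()),
      int(eq.max()) - int(eq.min()))  # dynamic range before vs after
```

The mapping stretches whatever narrow band the input occupies across the full 0–255 range, which is why it is a standard preprocessing step before feature extraction.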
Figure 11. Proposed method. Reprinted (adapted) with permission from [102]. Copyright 2020
Mathematical Biosciences and Engineering.
For accurate glioma grade prediction, researchers developed a customized CNN-based
deep learning model [103] and evaluated the performance using AlexNet, GoogleNet, and
SqueezeNet by transfer learning. Based on 104 clinical glioma patients (50 LGGs and
54 HGGs), they trained and evaluated the models. The training set was expanded using a
variety of data augmentation methods, and a five-fold cross-validation was applied.
Figure 12. Workflow of the suggested active learning framework based on transfer learning.
Reprinted (adapted) with permission from [104]. Copyright 2021 Frontiers in Artificial Intelligence.
A total of 131 patients with glioma were enrolled [105]. A rectangular ROI was used to
segment tumor images, and this ROI contained around 80% of the tumor. The test dataset
was then created by randomly selecting 20% of the patient-level data. Models previously
trained on the expansive natural image database ImageNet were applied to MRI images,
and then AlexNet and GoogleNet were developed from scratch and fine-tuned. Five-fold
cross-validation (CV) was used on the patient-level split to evaluate the classification task.
The averaged performance metrics for validation accuracy, test accuracy, and test AUC
from the five-fold CV of GoogleNet were, respectively, 0.867, 0.909, and 0.939.
Hamdaoui et al. [106] proposed an intelligent medical decision-support system for
identifying and categorizing brain tumors using images from the risk of malignancy index.
They employed deep transfer learning principles to avoid the scarcity of training data
required to construct the CNN model. For this, they selected seven CNN architectures that
had already been trained on the ImageNet dataset and carefully fitted them on MRI data of
brain tumors gathered from the BRATS database, as shown in Figure 13. Just the prediction
that received the highest score among the predictions made by the seven pretrained CNNs
is produced, to increase their model's accuracy. They evaluated the effectiveness of the
primary two-class model, which includes LGG and HGG brain cancers, using a ten-way
cross-validation method. The test precision, F1 score, test precision, and test sensitivity for
their suggested model were 98.67%, 98.06%, 98.33%, and 98.06%, respectively.
Figure 13. Proposed process for deep transfer learning. Reprinted (adapted) with permission
from [106]. Copyright 2021 Indonesian Journal of Electrical Engineering and Computer Science.
A new AI diagnosis model called EfficientNetB0 was created by Khazaee et al. [107]
to assess and categorize human brain gliomas utilizing sequences from MR images. They
used a common dataset (BRATS-2019) to validate the new AI model, and they showed
that the AI components, CNN and transfer learning, provided outstanding performance for
categorizing and grading glioma images, with 98.8% accuracy.
In [70], the researchers developed a model using transfer learning and pretrained
ResNet18 to identify basal ganglia germinomas more accurately. In this retrospective
analysis, 73 patients with basal ganglioma were enrolled. Based on both T1 and T2 data,
brain tumors were manually segmented. To create the tumor classification model, the
T1 sequence was utilized. Transfer learning and a 2D convolutional network were used.
Five-fold cross-validation was used to train the model, and it resulted in a mean AUC
of 88%.
Researchers suggested an effective hyperparameter optimization method for CNN
based on Bayesian optimization [108]. This method was assessed by categorizing 3064
T1 images into three types of brain cancers (glioma, pituitary, and meningioma). Five
popular deep pretrained models are compared to the improved CNN's performance
using transfer learning. Their CNN achieved 98.70% validation accuracy after applying
Bayesian optimization.
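Bayesian optimization proper needs a surrogate-model library (e.g., scikit-optimize); as a simpler, plainly substituted stand-in, the same hyperparameter-search loop can be shown with exhaustive grid search in scikit-learn. Synthetic three-class data replaces the MR images, and an SVM replaces the CNN of [108] for brevity:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic three-class problem standing in for glioma/pituitary/meningioma
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

# Cross-validated search over the hyperparameter grid
search = GridSearchCV(SVC(),
                      {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]},
                      cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Bayesian optimization replaces this exhaustive sweep with a surrogate model that proposes the next configuration to try, which is what makes it tractable when each trial is a full CNN training run.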
A novel generated transfer DL model was developed by Alanazi et al. [109] for the
early diagnosis of brain cancers into their different categories, such as meningioma,
pituitary, and glioma. Several layers of the models were first constructed from scratch
to test the performance of standalone CNN models on brain MRI images. The weights
of the neurons were then revised using the transfer learning approach to categorize brain
MRI images into tumor subclasses using the 22-layer, isolated CNN model. Consequently,
the transfer-learned model had an accuracy rate of 95.75%.
Rizwan et al. [110] suggested a method to identify various BT classes using Gaussian-
CNN on two datasets. One of the datasets is employed to categorize lesions into pituitary,
glioma, and meningioma; the other distinguishes between the three glioma grades (II, III,
and IV). The first and second datasets comprise 233 and 73 patients, with a total of
3064 and 516 contrast-enhanced T1 images, respectively. For the two datasets, the suggested
method has an accuracy of 99.8% and 97.14%.
A seven-layer CNN was suggested in [111] to assist with the three-class categorization
of brain MR images. To decrease computing time, separable convolution was used. The
suggested separable CNN model achieved 97.52% accuracy on a publicly available dataset
of 3064 images.
Several pretrained CNNs were utilized in [112], including GoogleNet, Alexnet, Resnet50,
Resnet101, VGG-16, VGG-19, InceptionResNetV2, and Inceptionv3. To accommodate
additional image categories, the final few layers of these networks were modified. Data
from the clinical, Harvard, and Figshare repositories were widely used to assess these
models. The dataset was divided into training and testing halves in a 60:40 ratio. The
validation on the test set demonstrates that, compared to other proposed models, the
Alexnet with transfer learning demonstrated the best performance in the shortest time. The
suggested method obtained accuracies of 100%, 94%, and 95.92% using three datasets and
is more generic because it does not require any manually created features.
The suggested framework [113] describes three experiments that classified brain
malignancies such as meningiomas, gliomas, and pituitary tumors using three designs of
CNN (AlexNet, VGGNet, and GoogleNet). Using the MRI slices of the brain tumor dataset
from Figshare, each study then investigates transfer learning approaches like fine-tuning
and freezing. The data augmentation approaches are applied to the MRI slices for results
generalization, increasing dataset samples, and minimizing the risk of overfitting. The fine-
tuned VGG16 architecture attained the best accuracy at 98.69% in terms of categorization
in the proposed studies.
An effective hybrid optimization approach was used in [114] for the segmentation and
classification of brain tumors. To improve categorization, the CNN features were extracted.
The suggested chronological Jaya honey badger algorithm (CJHBA) was used to train the
deep residual network (DRN), which was used to conduct the classification by using the
retrieved features as input. The Jaya algorithm, the honey badger algorithm (HBA), and
the chronological notion are all combined in the proposed CJHBA. Using BRATS-2018, the
performance is assessed. The highest accuracy is 92.10%. A summary of MRI brain tumor
classification using DL is provided in Table 7.
Table 7. Summary of MRI brain tumor classification using DL.

Ref.   Scan  Year  Technique  Method                            Result (Performance Metric)
[101]  MRI   2015  DL         Custom-CNN                        96.00% Acc
[7]    MRI   2019  DL         Custom-CNN                        98.70% Acc
[102]  MRI   2020  DL         VGG-16, Inception-v3, ResNet-50   96%, 75%, 89% Acc
[103]  MRI   2021  DL         AlexNet, GoogleNet, SqueezeNet    97.10% Acc
[104]  MRI   2021  DL         Custom-CNN                        82.89% ROC
[105]  MRI   2018  DL         AlexNet                           90.90% Test acc
[106]  MRI   2021  DL         Multi-CNN structure               98.67% precision, 98.06% F1 score, 98.33% precision, 98.06% sensitivity
[107]  MRI   2022  DL         EfficientNetB0                    98.80% Acc
[70]   MRI   2022  DL         ResNet18                          88.00% AUC
[108]  MRI   2022  DL         Custom-CNN                        98.70% Acc
[109]  MRI   2022  DL         Custom-CNN                        95.75% Acc
[110]  MRI   2022  DL         Gaussian-CNN                      99.80% Acc
[111]  MRI   2020  DL         Seven-layer CNN                   97.52% Acc
[112]  MRI   2021  DL         AlexNet                           100.00% Acc
[113]  MRI   2019  DL         VGG16                             98.69% Acc
[114]  MRI   2023  DL         CNN                               92.10% Acc
In [120], the authors proposed a pipeline comprising preprocessing, segmentation of
images, extracting features, and image categorization. They segmented tumors using a
fuzzy clustering approach and extracted key features using GLCM. In the classification
stage, an improved SVM was finally used. The suggested approach has an 88%
accuracy rate.
A fully automated system for segmenting and diagnosing brain tumors was proposed
by Farajzadeh et al. [121]. This is accomplished by first applying five distinct preprocessing
techniques to an MR image, passing the images through a DWT, and then extracting six
local attributes from the image. The processed images are then delivered to an NN, which
subsequently extracts higher-order attributes from them. Another NN then weighs the
features and concatenates them with the initial MR image. The hybrid U-Net is then fed with
the concatenated data to segment the tumor and classify the image. For segmenting and
categorizing brain tumors, they attained accuracy rates of 98.93% and 98.81%, respectively.
Ref.   Year  Segmentation Method         Feature Extraction     Classifier    Accuracy
[115]  2017  FCM                         shape and statistical  SVM and ANN   97.44% and 97.37%
[118]  2017  FCM                         DWT and PCA            CNN           98.00%
[52]   2019  watershed                   shape                  KNN           89.50%
[30]   2019  Otsu's thresholding         DWT                    SVM           99.00%
[117]  2020  thresholding and watershed  CNN                    SVM           87.40%
[116]  2020  Canny                       GLCM and Gabor         ANN           98.90%
[119]  2023  thresholding                wavelet                CNN           99.00%
[120]  2023  fuzzy clustering            GLCM                   improved SVM  88.00%
[121]  2023  U-Net                       DWT                    CNN           98.93%
Figure 14. Architecture of NN.
A unique correlation learning mechanism (CLM) utilizing CNN and ANN was proposed by
Woźniak et al. [125]. The support neural network determines the best filters for the CNN's
convolution and pooling layers. Consequently, the main neural classifier improved in
efficiency and learned more quickly. Results indicated that the CLM model can achieve 96%
accuracy, 95% precision, and 95% recall.
The contribution of image fusion to an enhanced brain tumor classification framework
was examined by Nanmaran et al. [126], and this new fusion-based tumor categoriza-
tion model can be more successfully applied to personalized therapy. A distinct cosine
transform-based (DCT) fusion technique is utilized to combine MRI and SPECT images of
benign and malignant class brain tumors. With the help of the features extracted from fused
images, SVM, KNN, and decision trees were set to test. When using features extracted from
fused images, the SVM classifier outperformed KNN and decision tree classifiers with an
overall accuracy of 96.8%, specificity of 93%, recall of 94%, precision of 95%, and F1 score
of 91%. Table 9 provides different segmentation and classification methods employing
CT images.
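The DCT-based fusion of [126] can be sketched with an orthonormal DCT-II built from a cosine matrix in NumPy, keeping, per coefficient, whichever source image has the larger magnitude. The small random arrays are synthetic stand-ins; real work would fuse registered MRI and SPECT slices:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n).reshape(-1, 1)
    M = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
    M[0] *= 1 / np.sqrt(2)
    return M * np.sqrt(2 / n)

def fuse(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Max-magnitude DCT-coefficient fusion of two same-size square images."""
    D = dct_matrix(a.shape[0])
    A, B = D @ a @ D.T, D @ b @ D.T            # 2-D DCT of each image
    F = np.where(np.abs(A) >= np.abs(B), A, B)  # keep the stronger coefficient
    return D.T @ F @ D                          # inverse 2-D DCT

rng = np.random.default_rng(0)
mri = rng.normal(size=(8, 8))
spect = rng.normal(size=(8, 8))
fused = fuse(mri, spect)
print(fused.shape)
```

Because the DCT matrix is orthonormal, fusing an image with itself reconstructs it exactly, a convenient sanity check on the transform pair.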
Table 9. Different segmentation and classification methods employing CT images.

Ref.   Year  Type       Segmentation     Feature Extraction  Feature Selection  Classification  Result
[122]  2011  CT         NN               WCT and WST         GA                 -               97.00%
[123]  2011  CT         FCM and k-means  GLCM and WCT        GA                 SVM             98.00%
[124]  2020  CT         semantic         -                   -                  GoogleNet       99.60%
[125]  2021  CT         -                -                   -                  CNN             96.00%
[126]  2022  SPECT/MRI  -                DCT                 -                  SVM             96.80%
6. Discussion
Most brain tumor segmentation and classification strategies are presented in this
review. The quantitative efficiency of numerous conventional ML- and DL-based algorithms
is covered in this article. Figure 15 displays the total number of publications published
between 2010 and 2022 used in this review. Figure 16 displays the total number of articles
published that perform classification, segmentation, or both.
Figure 15. Number of articles published from 2010 to 2022.
2022.
segmentation
segmentation
classification
classification
0 2 4 6 8 10 12 14 16 18 20 22 24 26
0 2 4 6 8 10 12 14 16 18 20 22 24 26
Figure 16. Number of articles published that perform classification, segmentation, or both.
Brain tumor segmentation uses traditional image segmentation methods like region
growth and unsupervised machine learning. Noise, low image quality, and the initial
seed point are its biggest challenges. The classification of pixels into multiple classes has
been accomplished in the second generation of segmentation methods using unsupervised
ML, such as FCM and K-means; these techniques are, nevertheless, quite noise sensitive.
Pixel-level classification-based segmentation approaches utilizing conventional supervised
ML have been presented to overcome this difficulty. Feature engineering, which extracts
the tumor-descriptive pieces of information for the model's training, is frequently used in
conjunction with these techniques. Additionally, postprocessing helps further improve the
results of supervised machine learning segmentation. Through the pipeline of its component
parts, the deep learning-based approach accomplishes an end-to-end segmentation of
tumors using an MRI image. These models frequently eliminate the requirement for
manually built features by automatically extracting tumor-descriptive information. However,
their application in the medical domain is limited by the need for a big dataset for training
the models and the complexity of understanding them.
In addition to the segmentation of the brain cancer region from the MRI scan, the
classification of the tumor into its appropriate type is crucial for diagnosis and treatment
planning, which in today’s medical practice necessitates a biopsy process. Several ap-
proaches that use shallow ML and DL have been put forth for classifying brain tumors.
Typical shallow ML techniques frequently include preprocessing, ROI identification, and
feature extraction steps. Extracting descriptive information is a difficult task because of the
inherent noise sensitivity associated with MRI image collection as well as differences in the
shape, size, and position of tumor tissue cells. As a result, deep learning algorithms are
currently the most advanced method for classifying many types of brain cancers, includ-
ing astrocytomas, gliomas, meningiomas, and pituitary tumors. This review has covered
several classifications of brain tumors.
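As a toy illustration of the feature-extraction step in such a shallow ML pipeline, the sketch below computes first-order intensity statistics (mean, variance, entropy) over a region of interest. The function name and the six-pixel ROI are invented for the example; real systems rely on far richer texture descriptors (e.g., the LBP features of [32]).

```python
from math import log2
from statistics import mean, pvariance

def first_order_features(roi):
    """First-order intensity statistics for a flattened region of interest."""
    n = len(roi)
    counts = {}
    for v in roi:
        counts[v] = counts.get(v, 0) + 1
    # Shannon entropy of the intensity histogram, in bits.
    entropy = -sum((c / n) * log2(c / n) for c in counts.values())
    return {"mean": mean(roi), "variance": pvariance(roi), "entropy": entropy}

# A hypothetical 6-pixel ROI: dark background plus a bright tumor-like region.
roi = [10, 10, 12, 200, 205, 210]
print(first_order_features(roi))
```

Such a feature vector would then be fed to a shallow classifier (SVM, k-NN, ANN) as in the approaches surveyed above.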
The noisy nature of an MRI image is one of the most frequent difficulties in ML-
based segmentation and classification of brain tumors. To increase the precision of brain
tumor segmentation and classification models, noise estimation and denoising MRI images
are vital preprocessing operations. As a result, several methods, including the median
filter [115], Wiener filter and DWT [30], and DL-based methods [117], have been suggested
for denoising MRI images.
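As a concrete sketch of one of these denoising steps, the pure-Python median filter below replaces each pixel with the median of its neighborhood, which suppresses salt-and-pepper noise. This is a toy stand-in for production implementations; the clamped border handling and 3 × 3 window are arbitrary choices for the example.

```python
from statistics import median

def median_filter(img, k=3):
    """Apply a k x k median filter to a 2D grayscale image (list of lists).
    Border pixels are handled by clamping coordinates to the image edge."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [
                img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in range(-r, r + 1)
                for dx in range(-r, r + 1)
            ]
            out[y][x] = median(window)
    return out

# A flat region corrupted by one salt-noise pixel (value 255).
noisy = [
    [10, 10, 10, 10],
    [10, 255, 10, 10],
    [10, 10, 10, 10],
]
clean = median_filter(noisy)
print(clean[1][1])  # the outlier is replaced by the neighborhood median: 10
```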
Large amounts of data are needed for DL models to operate effectively, but few suitable
datasets are available. Data augmentation aids in expanding small datasets and
creating a powerful generalized model. A common augmentation method for MRI images
has yet to be developed. Although many methods have been presented by researchers,
their primary goal is simply to increase the number of images; most of the time, they ignore
spatial and textural relationships. A standardized augmentation technique is required so
that comparative analyses can be conducted on a common foundation.
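A minimal sketch of the kind of geometric augmentation discussed here, using plain Python lists in place of MRI arrays; the helper names are invented for the example, and practical pipelines add rotations by arbitrary angles, elastic deformations, and intensity shifts while taking care to preserve anatomically meaningful structure.

```python
def hflip(img):
    """Mirror an image (list of rows) left-to-right."""
    return [row[::-1] for row in img]

def vflip(img):
    """Mirror an image top-to-bottom."""
    return img[::-1]

def rot90(img):
    """Rotate an image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Return the original image plus simple geometric variants."""
    return [img, hflip(img), vflip(img), rot90(img)]

patch = [[1, 2],
         [3, 4]]
for variant in augment(patch):
    print(variant)
```

Each variant is a plausible new training sample because a tumor's label is invariant under these transforms, which is exactly why geometric augmentation helps small datasets.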
One reviewed study found that stroke patients are more likely than patients with other
conditions to develop brain cancer. Another intriguing conclusion of that study is that
women between the ages of 40 and 60 and elderly stroke patients are more likely to
develop brain cancer.
8. Future Directions
The main applications of CADx systems are in education and training; clinical practice
is not yet one of them. CADx-based systems are still not widely used in clinics. The
absence of established techniques for assessing CADx systems in a practical environment
is one cause of this. The performance metrics outlined in this study provide a helpful and
necessary baseline for comparing algorithms, but because they are all so dependent on the
training set, more advanced tools are required.
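For instance, the Dice similarity coefficient, which coincides with the F1 score for binary segmentation masks (cf. [72]), is among the metrics such comparisons typically report. A minimal sketch, with invented flat-list masks standing in for real segmentations:

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks (flat sequences).
    Equals the F1 score when the masks are treated as positive/negative labels."""
    inter = sum(p and t for p, t in zip(pred, truth))  # true positives
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0  # both masks empty -> perfect

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(round(dice(pred, truth), 3))  # 2 overlapping voxels out of 3 + 3
```

Because such scores are computed against a specific annotated test set, they inherit every bias of that set, which is precisely why more advanced, deployment-oriented evaluation tools are needed.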
It is noteworthy that the image formats used to train the models were those characteristic
of the AI research field (PNG) rather than those of the radiology field (DICOM, NIfTI).
Many of the articles analyzed lacked authors with clinical backgrounds.
A different but related technical issue that may affect the performance of CADx
systems in practice is the need for physician training on interacting with and interpreting
the results of such systems for diagnostic decisions. This issue is not addressed in the
papers included in the review. In terms of research project relevance and the acceptance of
its findings, greater participation by doctors in the process would be advantageous.
9. Conclusions
A brain tumor is an abnormal growth of brain tissue that affects the brain’s ability
to function normally. The primary objective in medical image processing is to find ac-
curate and helpful information with the minimum possible errors by using algorithms.
The four steps involved in segmenting and categorizing brain tumors using MRI data are
preprocessing, image segmentation, feature extraction, and image classification. The
diagnosis, treatment strategy, and patient follow-up can all be greatly enhanced by au-
tomating the segmentation and categorization of brain tumors. It is still difficult to create
a fully autonomous system that can be deployed on clinical floors due to the appearance
of the tumor and its irregular size, form, and nature. The review’s primary goal is to
present the state-of-the-art in the field of brain cancer, which includes the pathophysiology
of the disease, imaging technologies, WHO classification standards for tumors, primary
methods of diagnosis, and CAD algorithms for brain tumor classifications using ML and
DL techniques. Automating the segmentation and categorization of brain tumors using
deep learning techniques has many advantages over region-growing and shallow ML
systems. DL algorithms’ powerful feature learning capabilities are primarily responsible for
this. Although DL techniques have made a substantial contribution, a general technique
is still needed. This study reviewed 53 studies that used ML and DL to classify brain
tumors based on MRI, and it examined the challenges and obstacles that CAD brain tumor
classification techniques now face in practical application and advancement, together with
a thorough examination of the variables that might impact classification accuracy. The MRI sequences
and web address of the online repository for the dataset are among the publicly available
databases that have been briefly listed in Table 4 and used in the experiments evaluated in
this paper.
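The four-step pipeline named above (preprocessing, segmentation, feature extraction, classification) can be caricatured end to end as follows. Every function here is a deliberately simplistic stand-in, intensity clamping, global thresholding, a single area feature, and a cutoff rule, invented for illustration rather than drawn from any method surveyed in this review.

```python
def preprocess(img):
    """Toy preprocessing: clamp intensities into the valid [0, 255] range."""
    return [[min(max(v, 0), 255) for v in row] for row in img]

def segment(img, thresh=128):
    """Toy segmentation: binary mask of bright (tumor-like) pixels."""
    return [[1 if v >= thresh else 0 for v in row] for row in img]

def extract_features(mask):
    """Toy feature: fraction of pixels flagged as tumor."""
    flat = [v for row in mask for v in row]
    return sum(flat) / len(flat)

def classify(area_fraction, cutoff=0.1):
    """Toy classifier operating on the single area feature."""
    return "tumor" if area_fraction > cutoff else "no tumor"

# A hypothetical 3 x 3 "scan" with a bright region in the upper right.
scan = [[20, 30, 200],
        [25, 210, 220],
        [15, 20, 30]]
label = classify(extract_features(segment(preprocess(scan))))
print(label)  # prints: tumor
```

Replacing each toy stage with its real counterpart (denoising, a CNN or clustering segmenter, texture features, a trained classifier) recovers the structure of the CAD systems reviewed here.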
References
1. Watson, C.; Kirkcaldie, M.; Paxinos, G. The Brain: An Introduction to Functional Neuroanatomy. 2010. Available online:
http://ci.nii.ac.jp/ncid/BB04049625 (accessed on 22 May 2023).
2. Jellinger, K.A. The Human Nervous System Structure and Function, 6th edn. Eur. J. Neurol. 2009, 16, e136. [CrossRef]
3. DeAngelis, L.M. Brain tumors. N. Engl. J. Med. 2001, 344, 114–123. [CrossRef]
4. Louis, D.N.; Perry, A.; Wesseling, P.; Brat, D.J.; Cree, I.A.; Figarella-Branger, D.; Hawkins, C.; Ng, H.K.; Pfister, S.M.; Reifenberger,
G.; et al. The 2021 WHO Classification of Tumors of the Central Nervous System: A summary. Neuro-Oncology 2021, 23, 1231–1251.
[CrossRef]
5. Hayward, R.M.; Patronas, N.; Baker, E.H.; Vézina, G.; Albert, P.S.; Warren, K.E. Inter-observer variability in the measurement of
diffuse intrinsic pontine gliomas. J. Neuro-Oncol. 2008, 90, 57–61. [CrossRef]
6. Mahaley, M.S., Jr.; Mettlin, C.; Natarajan, N.; Laws, E.R., Jr.; Peace, B.B. National survey of patterns of care for brain-tumor
patients. J. Neurosurg. 1989, 71, 826–836. [CrossRef] [PubMed]
7. Sultan, H.H.; Salem, N.M.; Al-Atabany, W. Multi-Classification of Brain Tumor Images Using Deep Neural Network. IEEE Access
2019, 7, 69215–69225. [CrossRef]
8. Johnson, D.R.; Guerin, J.B.; Giannini, C.; Morris, J.M.; Eckel, L.J.; Kaufmann, T.J. 2016 Updates to the WHO Brain Tumor
Classification System: What the Radiologist Needs to Know. RadioGraphics 2017, 37, 2164–2180. [CrossRef] [PubMed]
9. Buckner, J.C.; Brown, P.D.; O’Neill, B.P.; Meyer, F.B.; Wetmore, C.J.; Uhm, J.H. Central Nervous System Tumors. Mayo Clin. Proc.
2007, 82, 1271–1286. [CrossRef] [PubMed]
10. World Health Organization: WHO, “Cancer”. July 2019. Available online: https://www.who.int/health-topics/cancer
(accessed on 30 March 2022).
11. Amyot, F.; Arciniegas, D.B.; Brazaitis, M.P.; Curley, K.C.; Diaz-Arrastia, R.; Gandjbakhche, A.; Herscovitch, P.; Hinds, S.R.; Manley,
G.T.; Pacifico, A.; et al. A Review of the Effectiveness of Neuroimaging Modalities for the Detection of Traumatic Brain Injury. J.
Neurotrauma 2015, 32, 1693–1721. [CrossRef]
12. Pope, W.B. Brain metastases: Neuroimaging. Handb. Clin. Neurol. 2018, 149, 89–112. [CrossRef]
13. Abd-Ellah, M.K.; Awad, A.I.; Khalaf, A.A.; Hamed, H.F. A review on brain tumor diagnosis from MRI images: Practical
implications, key achievements, and lessons learned. Magn. Reson. Imaging 2019, 61, 300–318. [CrossRef] [PubMed]
14. Ammari, S.; Pitre-Champagnat, S.; Dercle, L.; Chouzenoux, E.; Moalla, S.; Reuze, S.; Talbot, H.; Mokoyoko, T.; Hadchiti, J.;
Diffetocq, S.; et al. Influence of Magnetic Field Strength on Magnetic Resonance Imaging Radiomics Features in Brain Imaging, an
In Vitro and In Vivo Study. Front. Oncol. 2021, 10, 541663. [CrossRef] [PubMed]
15. Sahoo, L.; Sarangi, L.; Dash, B.R.; Palo, H.K. Detection and Classification of Brain Tumor Using Magnetic Resonance Images.
In Advances in Electrical Control and Signal Systems: Select Proceedings of AECSS, Bhubaneswar, India, 8–9 November 2019; Springer:
Singapore, 2020; Volume 665, pp. 429–441. [CrossRef]
16. Kaur, R.; Doegar, A. Localization and Classification of Brain Tumor using Machine Learning & Deep Learning Techniques. Int. J.
Innov. Technol. Explor. Eng. 2019, 8, 59–66.
17. The Radiology Assistant: Multiple Sclerosis 2.0. 1 December 2021. Available online: https://radiologyassistant.nl/
neuroradiology/multiple-sclerosis/diagnosis-and-differential-diagnosis-3#mri-protocol-ms-brain-protocol
(accessed on 22 May 2023).
18. Savoy, R.L. Functional magnetic resonance imaging (fMRI). In Encyclopedia of Neuroscience; Elsevier: Charlestown, MA, USA, 1999.
19. Luo, Q.; Li, Y.; Luo, L.; Diao, W. Comparisons of the accuracy of radiation diagnostic modalities in brain tumor. Medicine 2018,
97, e11256. [CrossRef]
20. Positron Emission Tomography (PET). Johns Hopkins Medicine. 20 August 2021. Available online: https://www.
hopkinsmedicine.org/health/treatment-tests-and-therapies/positron-emission-tomography-pet (accessed on 20 May 2023).
21. Mayfield Brain and Spine. SPECT Scan. 2022. Available online: https://mayfieldclinic.com/pe-spect.htm (accessed on
22 May 2023).
22. Sastry, R.; Bi, W.L.; Pieper, S.; Frisken, S.; Kapur, T.; Wells, W.; Golby, A.J. Applications of Ultrasound in the Resection of Brain
Tumors. J. Neuroimaging 2016, 27, 5–15. [CrossRef]
23. Nasrabadi, N.M. Pattern recognition and machine learning. J. Electron. Imaging 2007, 16, 49901.
24. Erickson, B.J.; Korfiatis, P.; Akkus, Z.; Kline, T.L. Machine learning for medical imaging. Radiographics 2017, 37, 505–515. [CrossRef]
25. Mohan, M.R.M.; Sulochana, C.H.; Latha, T. Medical image denoising using multistage directional median filter. In Proceed-
ings of the 2015 International Conference on Circuits, Power and Computing Technologies [ICCPCT-2015], Nagercoil, India,
9–20 March 2015.
26. Borole, V.Y.; Nimbhore, S.S.; Kawthekar, S.S. Image processing techniques for brain tumor detection: A review. Int. J. Emerg.
Trends Technol. Comput. Sci. (IJETTCS) 2015, 4, 2.
27. Ziedan, R.H.; Mead, M.A.; Eltawel, G.S. Selecting the Appropriate Feature Extraction Techniques for Automatic Medical Images
Classification. Int. J. 2016, 4, 1–9.
28. Amin, J.; Sharif, M.; Yasmin, M.; Fernandes, S.L. A distinctive approach in brain tumor detection and classification using MRI.
Pattern Recognit. Lett. 2017, 139, 118–127. [CrossRef]
29. Islam, A.; Reza, S.M.; Iftekharuddin, K.M. Multifractal texture estimation for detection and segmentation of brain tumors. IEEE
Trans. Biomed. Eng. 2013, 60, 3204–3215. [CrossRef]
30. Gurbină, M.; Lascu, M.; Lascu, D. Tumor detection and classification of MRI brain image using different wavelet transforms
and support vector machines. In Proceedings of the 2019 42nd International Conference on Telecommunications and Signal
Processing (TSP), Budapest, Hungary, 1–3 July 2019; pp. 505–508.
31. Xu, X.; Zhang, X.; Tian, Q.; Zhang, G.; Liu, Y.; Cui, G.; Meng, J.; Wu, Y.; Liu, T.; Yang, Z.; et al. Three-dimensional texture features
from intensity and high-order derivative maps for the discrimination between bladder tumors and wall tissues via MRI. Int. J.
Comput. Assist. Radiol. Surg. 2017, 12, 645–656. [CrossRef]
32. Kaplan, K.; Kaya, Y.; Kuncan, M.; Ertunç, H.M. Brain tumor classification using modified local binary patterns (LBP) feature
extraction methods. Med. Hypotheses 2020, 139, 109696. [CrossRef]
33. Afza, F.; Khan, M.S.; Sharif, M.; Saba, T. Microscopic skin laceration segmentation and classification: A framework of statistical
normal distribution and optimal feature selection. Microsc. Res. Tech. 2019, 82, 1471–1488. [CrossRef]
34. Lakshmi, A.; Arivoli, T.; Rajasekaran, M.P. A Novel M-ACA-Based Tumor Segmentation and DAPP Feature Extraction with
PPCSO-PKC-Based MRI Classification. Arab. J. Sci. Eng. 2017, 43, 7095–7111. [CrossRef]
35. Adair, J.; Brownlee, A.; Ochoa, G. Evolutionary Algorithms with Linkage Information for Feature Selection in Brain Computer
Interfaces. In Advances in Computational Intelligence Systems; Springer Nature: Cham, Switzerland, 2016; pp. 287–307.
36. Arakeri, M.P.; Reddy, G.R.M. Computeraided diagnosis system for tissue characterization of brain tumor on magnetic resonance
images. Signal Image Video Process. 2015, 9, 409–425. [CrossRef]
37. Wang, S.; Zhang, Y.; Dong, Z.; Du, S.; Ji, G.; Yan, J.; Phillips, P. Feed-forward neural network optimized by hybridization of PSO
and ABC for abnormal brain detection. Int. J. Imaging Syst. Technol. 2015, 25, 153–164. [CrossRef]
38. Abbasi, S.; Tajeripour, F. Detection of brain tumor in 3D MRI images using local binary patterns and histogram orientation
gradient. Neurocomputing 2017, 219, 526–535. [CrossRef]
39. Zöllner, F.G.; Emblem, K.E.; Schad, L.R. SVM-based glioma grading: Optimization by feature reduction analysis. Z. Med. Phys.
2012, 22, 205–214. [CrossRef]
40. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501.
[CrossRef]
41. Bhatele, K.R.; Bhadauria, S.S. Brain structural disorders detection and classification approaches: A review. Artif. Intell. Rev. 2019,
53, 3349–3401. [CrossRef]
42. Schmidhuber, J. Deep Learning in Neural Networks: An Overview. Neural Netw. 2015, 61, 85–117. [CrossRef]
43. Hu, A.; Razmjooy, N. Brain tumor diagnosis based on metaheuristics and deep learning. Int. J. Imaging Syst. Technol. 2020, 31,
657–669. [CrossRef]
44. Tandel, G.S.; Balestrieri, A.; Jujaray, T.; Khanna, N.N.; Saba, L.; Suri, J.S. Multiclass magnetic resonance imaging brain tumor
classification using artificial intelligence paradigm. Comput. Biol. Med. 2020, 122, 103804. [CrossRef] [PubMed]
45. Sahaai, M.B. Brain tumor detection using DNN algorithm. Turk. J. Comput. Math. Educ. (TURCOMAT) 2021, 12, 3338–3345.
46. Hashemi, M. Enlarging smaller images before inputting into convolutional neural network: Zero-padding vs. interpolation. J. Big
Data 2019, 6, 98. [CrossRef]
47. Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep learning for healthcare: Review, opportunities and challenges. Briefings
Bioinform. 2017, 19, 1236–1246. [CrossRef]
48. Gorach, T. Deep convolutional neural networks—A review. Int. Res. J. Eng. Technol. (IRJET) 2018, 5, 439.
49. Ogundokun, R.O.; Maskeliunas, R.; Misra, S.; Damaševičius, R. Improved CNN Based on Batch Normalization and Adam
Optimizer. In Proceedings of the Computational Science and Its Applications–ICCSA 2022 Workshops, Malaga, Spain, 4–7 July
2022; Part V. pp. 593–604.
50. Ismael SA, A.; Mohammed, A.; Hefny, H. An enhanced deep learning approach for brain cancer MRI images classification using
residual networks. Artif. Intell. Med. 2020, 102, 101779. [CrossRef]
51. Baheti, P. A Comprehensive Guide to Convolutional Neural Networks. V7. Available online: https://www.v7labs.com/blog/
convolutional-neural-networks-guide (accessed on 24 April 2023).
52. Ramdlon, R.H.; Kusumaningtyas, E.M.; Karlita, T. Brain Tumor Classification Using MRI Images with K-Nearest Neighbor
Method. In Proceedings of the 2019 International Electronics Symposium (IES), Surabaya, Indonesia, 27–28 September 2019;
pp. 660–667. [CrossRef]
53. Gurusamy, R.; Subramaniam, V. A machine learning approach for MRI brain tumor classification. Comput. Mater. Contin. 2017, 53,
91–109.
54. Pohle, R.; Toennies, K.D. Segmentation of medical images using adaptive region growing. In Proceedings of the Medical Imaging
2001: Image Processing, San Diego, CA, USA, 4–10 November 2001; Volume 4322, pp. 1337–1346. [CrossRef]
55. Dey, N.; Ashour, A.S. Computing in medical image analysis. In Soft Computing Based Medical Image Analysis; Academic Press:
Cambridge, MA, USA, 2018; pp. 3–11.
56. Hooda, H.; Verma, O.P.; Singhal, T. Brain tumor segmentation: A performance analysis using K-Means, Fuzzy C-Means and
Region growing algorithm. In Proceedings of the 2014 IEEE International Conference on Advanced Communications, Control
and Computing Technologies, Ramanathapuram, India, 8–10 May 2014; pp. 1621–1626.
57. Sharif, M.; Tanvir, U.; Munir, E.U.; Khan, M.A.; Yasmin, M. Brain tumor segmentation and classification by improved binomial
thresholding and multi-features selection. J. Ambient. Intell. Humaniz. Comput. 2018, 1–20. [CrossRef]
58. Shanthi, K.J.; Kumar, M.S. Skull stripping and automatic segmentation of brain MRI using seed growth and threshold techniques.
In Proceedings of the 2007 International Conference on Intelligent and Advanced Systems, Kuala Lumpur, Malaysia, 25–28
November 2007; pp. 422–426. [CrossRef]
59. Zhang, F.; Hancock, E.R. New Riemannian techniques for directional and tensorial image data. Pattern Recognit. 2010, 43,
1590–1606. [CrossRef]
60. Singh, N.P.; Dixit, S.; Akshaya, A.S.; Khodanpur, B.I. Gradient Magnitude Based Watershed Segmentation for Brain Tumor
Segmentation and Classification. In Advances in Intelligent Systems and Computing; Springer Nature: Cham, Switzerland, 2017;
pp. 611–619. [CrossRef]
61. Couprie, M.; Bertrand, G. Topological gray-scale watershed transformation. Vis. Geom. VI 1997, 3168, 136–146. [CrossRef]
62. Khan, M.S.; Lali, M.I.U.; Saba, T.; Ishaq, M.; Sharif, M.; Saba, T.; Zahoor, S.; Akram, T. Brain tumor detection and classification: A
framework of marker-based watershed algorithm and multilevel priority features selection. Microsc. Res. Tech. 2019, 82, 909–922.
[CrossRef]
63. Lotufo, R.; Falcao, A.; Zampirolli, F. IFT-Watershed from gray-scale marker. In Proceedings of the XV Brazilian Symposium on
Computer Graphics and Image Processing, Fortaleza, Brazil, 10 October 2003. [CrossRef]
64. Dougherty, E.R. An Introduction to Morphological Image Processing; SPIE Optical Engineering Press: Bellingham, WA, USA, 1992.
65. Kaur, D.; Kaur, Y. Various image segmentation techniques: A review. Int. J. Comput. Sci. Mob. Comput. 2014, 3, 809–814.
66. Aslam, A.; Khan, E.; Beg, M.S. Improved Edge Detection Algorithm for Brain Tumor Segmentation. Procedia Comput. Sci. 2015, 58,
430–437. [CrossRef]
67. Egmont-Petersen, M.; de Ridder, D.; Handels, H. Image processing with neural networks—A review. Pattern Recognit. 2002, 35,
2279–2301. [CrossRef]
68. Cui, B.; Xie, M.; Wang, C. A Deep Convolutional Neural Network Learning Transfer to SVM-Based Segmentation Method for
Brain Tumor. In Proceedings of the 2019 IEEE 11th International Conference on Advanced Infocomm Technology (ICAIT), Jinan,
China, 18–20 October 2019; pp. 1–5. [CrossRef]
69. Pereira, S.; Pinto, A.; Alves, V.; Silva, C.A. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images.
IEEE Trans. Med. Imaging 2016, 35, 1240–1251. [CrossRef]
70. Ye, N.; Yu, H.; Chen, Z.; Teng, C.; Liu, P.; Liu, X.; Xiong, Y.; Lin, X.; Li, S.; Li, X. Classification of Gliomas and Germinomas of the
Basal Ganglia by Transfer Learning. Front. Oncol. 2022, 12, 844197. [CrossRef]
71. Biratu, E.S.; Schwenker, F.; Ayano, Y.M.; Debelee, T.G. A survey of brain tumor segmentation and classification algorithms. J.
Imaging 2021, 7, 179. [CrossRef]
72. Wikipedia Contributors. F Score. Wikipedia. 2023. Available online: https://en.wikipedia.org/wiki/F-score (accessed on
22 May 2023).
73. Brain Tumor Segmentation (BraTS) Challenge. Available online: http://www.braintumorsegmentation.org/ (accessed on
22 May 2023).
74. RIDER NEURO MRI—The Cancer Imaging Archive (TCIA) Public Access—Cancer Imaging Archive Wiki. Available online:
https://wiki.cancerimagingarchive.net/display/Public/RIDER+NEURO+MRI (accessed on 22 May 2023).
75. Harvard Medical School Data. Available online: http://www.med.harvard.edu/AANLIB/ (accessed on 16 March 2021).
76. The Cancer Genome Atlas. TCGA. Available online: https://wiki.cancerimagingarchive.net/display/Public/TCGA-GBM
(accessed on 22 May 2023).
77. The Cancer Genome Atlas. TCGA-LGG. Available online: https://wiki.cancerimagingarchive.net/display/Public/TCGA-LGG
(accessed on 22 May 2023).
78. Cheng, J. Figshare Brain Tumor Dataset. 2017. Available online: https://figshare.com/articles/dataset/brain_tumor_dataset/15
12427/5 (accessed on 13 May 2022).
79. IXI Dataset—Brain Development. Available online: https://brain-development.org/ixi-dataset/ (accessed on 22 May 2023).
80. Gordillo, N.; Montseny, E.; Sobrevilla, P. A new fuzzy approach to brain tumor segmentation. In Proceedings of the 2010 IEEE
International Conference, Barcelona, Spain, 18–23 July 2010; pp. 1–8. [CrossRef]
81. Rajendran; Dhanasekaran, R. A hybrid Method Based on Fuzzy Clustering and Active Contour Using GGVF for Brain Tumor
Segmentation on MRI Images. Eur. J. Sci. Res. 2011, 61, 305–313.
82. Reddy, K.K.; Solmaz, B.; Yan, P.; Avgeropoulos, N.G.; Rippe, D.J.; Shah, M. Confidence guided enhancing brain tumor segmenta-
tion in multi-parametric MRI. In Proceedings of the 9th IEEE International Symposium on Biomedical Imaging, Barcelona, Spain,
2–5 May 2012; pp. 366–369. [CrossRef]
83. Almahfud, M.A.; Setyawan, R.; Sari, C.A.; Setiadi, D.R.I.M.; Rachmawanto, E.H. An Effective MRI Brain Image Segmentation
using Joint Clustering (K-Means and Fuzzy C-Means). In Proceedings of the 2018 International Seminar on Research of
Information Technology and Intelligent Systems (ISRITI), Yogyakarta, Indonesia, 21–22 November 2018; pp. 11–16.
84. Chen, W.; Qiao, X.; Liu, B.; Qi, X.; Wang, R.; Wang, X. Automatic brain tumor segmentation based on features of separated local
square. In Proceedings of the 2017 Chinese Automation Congress (CAC), Jinan, China, 20–22 October 2017.
85. Gupta, N.; Mishra, S.; Khanna, P. Glioma identification from brain MRI using superpixels and FCM clustering. In Proceedings of
the 2018 Conference on Information and Communication Technology (CICT), Jabalpur, India, 26–28 October 2018. [CrossRef]
86. Razzak, M.I.; Imran, M.; Xu, G. Efficient Brain Tumor Segmentation with Multiscale Two-Pathway-Group Conventional Neural
Networks. IEEE J. Biomed. Health Inform. 2018, 23, 1911–1919. [CrossRef] [PubMed]
87. Myronenko, A.; Hatamizadeh, A. Robust Semantic Segmentation of Brain Tumor Regions from 3D MRIs. In Proceedings of the
International MICCAI Brainlesion Workshop, Singapore, 18 September 2020; pp. 82–89. [CrossRef]
88. Karayegen, G.; Aksahin, M.F. Brain tumor prediction on MR images with semantic segmentation by using deep learning network
and 3D imaging of tumor region. Biomed. Signal Process. Control. 2021, 66, 102458. [CrossRef]
89. Ullah, Z.; Usman, M.; Jeon, M.; Gwak, J. Cascade multiscale residual attention CNNs with adaptive ROI for automatic brain
tumor segmentation. Inf. Sci. 2022, 608, 1541–1556. [CrossRef]
90. Wisaeng, K.; Sa-Ngiamvibool, W. Brain Tumor Segmentation Using Fuzzy Otsu Threshold Morphological Algorithm. IAENG Int.
J. Appl. Math. 2023, 53, 1–12.
91. Zhang, Y.; Dong, Z.; Wu, L.; Wang, S. A hybrid method for MRI brain image classification. Expert Syst. Appl. 2011, 38, 10049–10053.
[CrossRef]
92. Yang, G.; Zhang, Y.; Yang, J.; Ji, G.; Dong, Z.; Wang, S.; Feng, C.; Wang, Q. Automated classification of brain images using
wavelet-energy and biogeography-based optimization. Multimed. Tools Appl. 2015, 75, 15601–15617. [CrossRef]
93. Tiwari, P.; Sachdeva, J.; Ahuja, C.K.; Khandelwal, N. Computer Aided Diagnosis System—A Decision Support System for Clinical
Diagnosis of Brain Tumours. Int. J. Comput. Intell. Syst. 2017, 10, 104–119. [CrossRef]
94. Sachdeva, J.; Kumar, V.; Gupta, I.; Khandelwal, N.; Ahuja, C.K. Segmentation, Feature Extraction, and Multiclass Brain Tumor
Classification. J. Digit. Imaging 2013, 26, 1141–1150. [CrossRef]
95. Jayachandran, A.; Dhanasekaran, R. Severity Analysis of Brain Tumor in MRI Images Using Modified Multitexton Structure
Descriptor and Kernel-SVM. Arab. J. Sci. Eng. 2014, 39, 7073–7086. [CrossRef]
96. El-Dahshan, E.-S.A.; Hosny, T.; Salem, A.-B.M. Hybrid intelligent techniques for MRI brain images classification. Digit. Signal
Process. 2010, 20, 433–441. [CrossRef]
97. Ullah, Z.; Farooq, M.U.; Lee, S.-H.; An, D. A hybrid image enhancement based brain MRI images classification technique. Med.
Hypotheses 2020, 143, 109922. [CrossRef] [PubMed]
98. Kang, J.; Ullah, Z.; Gwak, J. MRI-Based Brain Tumor Classification Using Ensemble of Deep Features and Machine Learning
Classifiers. Sensors 2021, 21, 2222. [CrossRef]
99. Díaz-Pernas, F.; Martínez-Zarzuela, M.; Antón-Rodríguez, M.; González-Ortega, D. A Deep Learning Approach for Brain Tumor
Classification and Segmentation Using a Multiscale Convolutional Neural Network. Healthcare 2021, 9, 153. [CrossRef] [PubMed]
100. Badža, M.M.; Barjaktarović, M. Classification of Brain Tumors from MRI Images Using a Convolutional Neural Network. Appl.
Sci. 2020, 10, 1999. [CrossRef]
101. Ertosun, M.G.; Rubin, D.L. Automated Grading of Gliomas using Deep Learning in Digital Pathology Images: A modular
approach with ensemble of convolutional neural networks. In Proceedings of the AMIA Annual Symposium, San Francisco, CA,
USA, 14–18 November 2015; Volume 2015, pp. 1899–1908.
102. Khan, H.A.; Jue, W.; Mushtaq, M.; Mushtaq, M.U. Brain tumor classification in MRI image using convolutional neural network.
Math. Biosci. Eng. 2020, 17, 6203–6216. [CrossRef]
103. Özcan, H.; Emiroğlu, B.G.; Sabuncuoğlu, H.; Özdoğan, S.; Soyer, A.; Saygı, T. A comparative study for glioma classification using
deep convolutional neural networks. Math. Biosci. Eng. MBE 2021, 18, 1550–1572. [CrossRef]
104. Hao, R.; Namdar, K.; Liu, L.; Khalvati, F. A Transfer Learning–Based Active Learning Framework for Brain Tumor Classification.
Front. Artif. Intell. 2021, 4, 635766. [CrossRef]
105. Yang, Y.; Yan, L.-F.; Zhang, X.; Han, Y.; Nan, H.-Y.; Hu, Y.-C.; Hu, B.; Yan, S.-L.; Zhang, J.; Cheng, D.-L.; et al. Glioma Grading on
Conventional MR Images: A Deep Learning Study with Transfer Learning. Front. Neurosci. 2018, 12, 804. [CrossRef]
106. El Hamdaoui, H.; Benfares, A.; Boujraf, S.; Chaoui, N.E.H.; Alami, B.; Maaroufi, M.; Qjidaa, H. High precision brain tumor
classification model based on deep transfer learning and stacking concepts. Indones. J. Electr. Eng. Comput. Sci. 2021, 24, 167–177.
[CrossRef]
107. Khazaee, Z.; Langarizadeh, M.; Ahmadabadi, M.E.S. Developing an Artificial Intelligence Model for Tumor Grading and
Classification, Based on MRI Sequences of Human Brain Gliomas. Int. J. Cancer Manag. 2022, 15, e120638. [CrossRef]
108. Amou, M.A.; Xia, K.; Kamhi, S.; Mouhafid, M. A Novel MRI Diagnosis Method for Brain Tumor Classification Based on CNN
and Bayesian Optimization. Healthcare 2022, 10, 494. [CrossRef] [PubMed]
109. Alanazi, M.; Ali, M.; Hussain, J.; Zafar, A.; Mohatram, M.; Irfan, M.; AlRuwaili, R.; Alruwaili, M.; Ali, N.T.; Albarrak, A.M.
Brain Tumor/Mass Classification Framework Using Magnetic-Resonance-Imaging-Based Isolated and Developed Transfer
Deep-Learning Model. Sensors 2022, 22, 372. [CrossRef] [PubMed]
110. Rizwan, M.; Shabbir, A.; Javed, A.R.; Shabbr, M.; Baker, T.; Al-Jumeily, D. Brain Tumor and Glioma Grade Classification Using
Gaussian Convolutional Neural Network. IEEE Access 2022, 10, 29731–29740. [CrossRef]
111. Isunuri, B.V.; Kakarla, J. Three-class brain tumor classification from magnetic resonance images using separable convolution
based neural network. Concurr. Comput. Pract. Exp. 2021, 34, e6541. [CrossRef]
112. Kaur, T.; Gandhi, T.K. Deep convolutional neural networks with transfer learning for automated brain image classification. J.
Mach. Vis. Appl. 2020, 31, 20. [CrossRef]
113. Rehman, A.; Naz, S.; Razzak, M.I.; Akram, F.; Imran, M. A Deep Learning-Based Framework for Automatic Brain Tumors
Classification Using Transfer Learning. Circuits Syst. Signal Process. 2019, 39, 757–775. [CrossRef]
114. Deepa, S.; Janet, J.; Sumathi, S.; Ananth, J.P. Hybrid Optimization Algorithm Enabled Deep Learning Approach Brain Tumor
Segmentation and Classification Using MRI. J. Digit. Imaging 2023, 36, 847–868. [CrossRef]
115. Ahmmed, R.; Swakshar, A.S.; Hossain, M.F.; Rafiq, M.A. Classification of tumors and it stages in brain MRI using support
vector machine and artificial neural network. In Proceedings of the 2017 International Conference on Electrical, Computer and
Communication Engineering (ECCE), Cox’s Bazar, Bangladesh, 16–18 February 2017.
116. Sathi, K.A.; Islam, S. Hybrid Feature Extraction Based Brain Tumor Classification using an Artificial Neural Network. In
Proceedings of the 2020 IEEE 5th International Conference on Computing Communication and Automation (ICCCA), Greater
Noida, India, 30–31 October 2020; pp. 155–160. [CrossRef]
117. Islam, R.; Imran, S.; Ashikuzzaman; Khan, M.A. Detection and Classification of Brain Tumor Based on Multilevel Segmentation
with Convolutional Neural Network. J. Biomed. Sci. Eng. 2020, 13, 45–53. [CrossRef]
118. Mohsen, H.; El-Dahshan, E.A.; El-Horbaty, E.M.; Salem, A.M. Classification using deep learning neural networks for brain tumors.
Future Comput. Inform. J. 2017, 3, 68–71. [CrossRef]
119. Babu, P.A.; Rao, B.S.; Reddy, Y.V.B.; Kumar, G.R.; Rao, J.N.; Koduru, S.K.R. Optimized CNN-based Brain Tumor Segmentation
and Classification using Artificial Bee Colony and Thresholding. Int. J. Comput. Commun. Control. 2023, 18, 577. [CrossRef]
120. Ansari, A.S. Numerical Simulation and Development of Brain Tumor Segmentation and Classification of Brain Tumor Using
Improved Support Vector Machine. Int. J. Intell. Syst. Appl. Eng. 2023, 11, 35–44.
121. Farajzadeh, N.; Sadeghzadeh, N.; Hashemzadeh, M. Brain tumor segmentation and classification on MRI via deep hybrid
representation learning. Expert Syst. Appl. 2023, 224, 119963. [CrossRef]
122. Padma, A.; Sukanesh, R. A wavelet based automatic segmentation of brain tumor in CT images using optimal statistical texture
features. Int. J. Image Process. 2011, 5, 552–563.
123. Padma, A.; Sukanesh, R. Automatic Classification and Segmentation of Brain Tumor in CT Images using Optimal Dominant Gray
level Run length Texture Features. Int. J. Adv. Comput. Sci. Appl. 2011, 2, 53–121. [CrossRef]
124. Ruba, T.; Tamilselvi, R.; Beham, M.P.; Aparna, N. Accurate Classification and Detection of Brain Cancer Cells in MRI and CT
Images using Nano Contrast Agents. Biomed. Pharmacol. J. 2020, 13, 1227–1237. [CrossRef]
125. Woźniak, M.; Siłka, J.; Wieczorek, M.W. Deep neural network correlation learning mechanism for CT brain tumor detection.
Neural Comput. Appl. 2021, 35, 14611–14626. [CrossRef]
126. Nanmaran, R.; Srimathi, S.; Yamuna, G.; Thanigaivel, S.; Vickram, A.S.; Priya, A.K.; Karthick, A.; Karpagam, J.; Mohanavel,
V.; Muhibbullah, M. Investigating the Role of Image Fusion in Brain Tumor Classification Models Based on Machine Learning
Algorithm for Personalized Medicine. Comput. Math. Methods Med. 2022, 2022, 7137524. [CrossRef]
127. Burns, A.; Iliffe, S. Alzheimer’s disease. BMJ 2009, 338, b158. [CrossRef]