
diagnostics

Review
A Review of Recent Advances in Brain Tumor Diagnosis Based
on AI-Based Classification
Reham Kaifi 1,2,3

1 Department of Radiological Sciences, College of Applied Medical Sciences, King Saud bin Abdulaziz
University for Health Sciences, Jeddah City 22384, Saudi Arabia; [email protected]
2 King Abdullah International Medical Research Center, Jeddah City 22384, Saudi Arabia
3 Medical Imaging Department, Ministry of the National Guard—Health Affairs,
Jeddah City 11426, Saudi Arabia

Abstract: Uncontrolled and fast cell proliferation is the cause of brain tumors. Early cancer detection is vitally important to save many lives. Brain tumors can be divided into several categories depending on the kind, place of origin, pace of development, and stage of progression; as a result, tumor classification is crucial for targeted therapy. Brain tumor segmentation aims to accurately delineate the areas of brain tumors. A specialist with a thorough understanding of brain illnesses is needed to manually identify the proper type of brain tumor. Additionally, processing many images takes time and is tiresome. Therefore, automatic segmentation and classification techniques are required to speed up and enhance the diagnosis of brain tumors. Tumors can be quickly and safely detected by brain scans using imaging modalities, including computed tomography (CT), magnetic resonance imaging (MRI), and others. Machine learning (ML) and artificial intelligence (AI) have shown promise in developing algorithms that aid in automatic classification and segmentation utilizing various imaging modalities. The right segmentation method must be used to precisely classify patients with brain tumors to enhance diagnosis and treatment. This review describes multiple types of brain tumors, publicly accessible datasets, enhancement methods, segmentation, feature extraction, classification, machine learning techniques, deep learning, and transfer learning for studying brain tumors. In this study, we attempted to synthesize brain cancer imaging modalities with automatic computer-assisted methodologies for brain cancer characterization in ML and DL frameworks. Identifying the current problems with the engineering methodologies in use and predicting a future paradigm are other goals of this article.

Citation: Kaifi, R. A Review of Recent Advances in Brain Tumor Diagnosis Based on AI-Based Classification. Diagnostics 2023, 13, 3007. https://doi.org/10.3390/diagnostics13183007

Keywords: brain tumors; magnetic resonance imaging; computed tomography; computer-aided diagnostic and detection; deep learning; machine learning

Academic Editors: Dechang Chen, Wan Azani Mustafa and Hiam Alquran

Received: 23 June 2023; Revised: 14 September 2023; Accepted: 19 September 2023; Published: 20 September 2023

Copyright: © 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction
The human brain, which serves as the control center for all the body's organs, is a highly developed organ that enables a person to adapt to and withstand various environmental situations [1]. The human brain allows people to express themselves in words, carry out activities, and express thoughts and feelings. Cerebrospinal fluid (CSF), white matter (WM), and gray matter (GM) are the three major tissue components of the human brain. The gray matter regulates brain activity and comprises neurons and glial cells. The cerebral cortex is connected to other brain areas through white matter fibers comprising several myelinated axons. The corpus callosum, a substantial band of white matter fibers, connects the left and right hemispheres of the brain [2]. A brain tumor is a brain cell growth that is out of control and aberrant. Any unanticipated development may affect human functioning since the human skull is a rigid and volume-restricted structure, depending on the area of the brain involved. Additionally, it might spread to other organs, further jeopardizing human functions [3]. Early cancer detection makes the ability to plan effective treatment possible, which is crucial for the healthcare sector [4]. Cancer is difficult to cure,


and the odds of survival are significantly reduced if it spreads to nearby cells. Undoubtedly,
many lives could be preserved if cancer was detected at its earliest stage using quick and
affordable diagnostic methods. Both invasive and noninvasive approaches may be utilized
to diagnose brain cancer. An incision is made during a biopsy to extract a lesion sample for
analysis. It is regarded as the gold standard for the diagnosis of cancer, where pathologists
examine several cell characteristics of the tumor specimen under a microscope to verify
the malignancy.
Noninvasive techniques include physical inspections of the body and imaging modal-
ities employed for imaging the brain [5]. In comparison to brain biopsy, other imaging
modalities, such as CT scans and MRI images, are more rapid and secure. Radiologists
use these imaging techniques to identify brain problems, evaluate the development of
diseases, and plan surgeries [6]. However, brain scans and image interpretation for diagnosing illnesses are prone to inter-reader variability, and accuracy depends on the medical practitioner's competency [5]. It is crucial to accurately identify the type of brain disorder to reduce diagnostic errors. Utilizing computer-aided diagnostic (CAD) technologies can improve accuracy. The fundamental idea behind CAD is to offer a computer result as an additional guide to help radiologists interpret images and shorten the reading time for images. This enhances the accuracy and stability of radiological diagnosis [7]. Several CAD-based artificial intelligence techniques, such as machine learning (ML) and deep learning (DL), are described in this review for diagnosing tissues and segmenting tumors. The segmentation process is a crucial aspect of image processing. This approach includes a procedure for extracting the area that helps determine whether a region is infected. Using MRI images to segment brain tumors presents various challenges, including image noise, low contrast, blurred borders, shifting intensities inside tissues, and tissue-type variation.
The most complex and crucial task in many medical image applications is detecting
and segmenting brain tumors because it often requires much data and information. Tumors
come in a variety of shapes and sizes. Automatic or semiautomatic detection/segmentation,
helped by AI, is currently crucial in medical diagnostics. Medical professionals must confirm the boundaries and extent of the brain cancer and ascertain precisely where it lies and the exact affected locations before therapies such as chemotherapy, radiation, or brain surgery. This review examines the output from various algorithms that are used in segmenting and detecting brain tumors.
The review is structured as follows: Types of brain tumors are described in Section 2.
The imaging modalities utilized in brain imaging are discussed in Section 3. The algorithms reviewed in this study are presented in Section 4. A review of the relevant state-of-
the-art is provided in Section 5. The review is discussed in Section 6. The work’s conclusion
is presented in Section 7.

2. Types of Brain Tumors


The main three parts of the brain are the brain stem, cerebrum, and cerebellum [1].
The cerebellum is the second-largest component of the brain and manages bodily motor
activities, including balance, posture, walking, and general coordination of movements. It
is positioned behind the brain and connected to the brain stem. Internal white matter, tiny
but deeply positioned volumes of gray matter, and a very thin gray matter outer cortex
can all be found in the cerebellum and cerebrum. The brainstem links to the spinal cord.
It is situated at the brain's base. Vital bodily processes, including motor, sensory, cardiac, respiratory, and reflex functions, are all under the control of the brainstem. Its three structural
components are the medulla oblongata, pons, and midbrain [2]. A brain tumor is the
medical term for an unexpected growth of brain cells [8]. According to the tumor's location, the kind of tissue involved, and whether it is malignant or benign, scientists have categorized several types of brain tumors based on the location of origin (primary or secondary) and additional contributing elements [9]. The World Health Organization (WHO) has categorized brain tumors into 120 kinds. This categorization is based on the cell's origin and behavior, ranging from less aggressive to more aggressive. Certain tumor forms are also graded, with grade I being the least malignant (e.g., meningiomas, pituitary tumors) and grade IV being the most malignant. Despite differences in grading systems that rely on the kind of tumor, the grade denotes the pace of growth [10]. The most frequent type of brain tumor in adults is glioma, which may be classified into high-grade glioma (HGG) and low-grade glioma (LGG). The WHO further categorizes LGG as grade I–II tumors and HGG as grade III–IV. To reduce diagnostic errors, accurate identification of the specific type of brain disorder is crucial for treatment planning. A summary of various types of brain tumors is provided in Table 1.

Table 1. Types of brain tumors.

Types of Tumors Based on   Type             Comment
Nature                     Benign           Less aggressive and grows slowly
                           Malignant        Life-threatening and rapidly expanding
Origin                     Primary tumor    Originates in the brain directly
                           Secondary tumor  Develops in another area of the body, such as the lung or breast, before migrating to the brain
Grading                    Grade I          Basically regular in shape; develop slowly
                           Grade II         Appear strange to the view; grow relatively slowly
                           Grade III        Grow more quickly than grade II tumors
                           Grade IV         Reproduce at a greater rate
Progression stage          Stage 0          Malignant but does not invade neighboring cells
                           Stages 1–3       Malignant and quickly spreading
                           Stage 4          The malignancy invades every part of the body

3. Imaging Modalities
For many years, the detection of brain abnormalities has involved the use of several
medical imaging methods. The two brain imaging approaches are structural and functional
scanning [11]. Different measurements relating to brain anatomy, tumor location, traumas,
and other brain illnesses compose structural imaging [12]. The finer-scale metabolic alter-
ations, lesions, and visualization of brain activity are all picked up by functional imaging
methods. Techniques including CT, MRI, single-photon emission computed tomography (SPECT), positron emission tomography (PET), functional MRI (fMRI), and ultrasound (US) are utilized to localize brain tumors and characterize their size, location, shape, and other features [13].

3.1. MRI
MRI is a noninvasive procedure that utilizes nonionizing, safe radiation [14] to display
the 3D anatomical structure of any region of the body without the need for cutting the
tissue. To acquire images, it employs RF pulses and an intense magnetic field [15].
The body is intended to be positioned within an intense magnetic field. The water
molecules of the human body are initially in their equilibrium state when the magnets
are off. The magnetic field is then activated by moving the magnets. The body’s water
molecules align with the magnetic field’s direction under the effect of this powerful mag-
netic field [14]. Protons are stimulated to spin opposing the magnetic field and realign
by the application of a high RF energy pulse to the body in the magnetic field's direction. When the RF energy pulse is stopped, the water molecules return to their state of equilibrium and align with the magnetic field once more [14]. This causes the water molecules to produce RF energy, which the scanner detects and transforms into visual images [16]. The tissue structure determines the amount of RF energy the water molecules can use. As we can see in Figure 1, a healthy brain has white matter (WM), gray matter (GM), and CSF, according to a structural MRI scan [17]. The primary difference between these tissues in a structural MRI scan is based on the amount of water they contain, with WM constituting 70% water and GM containing 80% water. The CSF is almost entirely composed of water, as shown in Figure 1.

Figure 1. Healthy brain MRI image showing white matter (WM), gray matter (GM), and CSF [17].

Figure 2 illustrates the fundamental MRI planes used to visualize the anatomy of the brain: axial, coronal, and sagittal. T1, T2, and FLAIR MRI sequences are most often employed for brain analysis [14]. A T1-weighted scan can distinguish between gray and white matter. T2-weighted imaging is water-content sensitive and is therefore ideally suited to conditions where water accumulates within the tissues of the brain.

Figure 2. Fundamental MRI planes: (a) coronal, (b) sagittal, and (c) axial.

In pathology, FLAIR is utilized to differentiate between CSF and abnormalities in the brain. Gray-level intensity values in pixel spaces form an image during an MRI scan. The values of the gray-level intensity are dependent on the cell density. On T1 and T2 images of a tumor brain, the intensity level of the tumorous tissues differs [16]. The properties of various MRI sequences are shown in Table 2.
Table 2. Properties of various MRI sequences.

              T1      T2      FLAIR
White Matter  Bright  Dark    Dark
Gray Matter   Gray    Dark    Dark
CSF           Dark    Bright  Dark
Tumor         Dark    Bright  Bright

Most tumors show low or medium gray intensity on T1-w. On T2-w, most tumors exhibit bright intensity [17]. Examples of MRI tumor intensity levels are shown in Figure 3.

Figure 3. MRI brain tumor: (a) FLAIR image, (b) T1 image, and (c) T2 image [17].
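The sequence-dependent intensities in Table 2 are exactly what simple rule-based segmentation exploits: a voxel that is dark on T1 but bright on T2 and FLAIR is a tumor candidate. A minimal NumPy sketch of that rule (the threshold value and the toy arrays are illustrative assumptions, not values from this review):

```python
import numpy as np

def tumor_candidates(t1, t2, flair, thresh=0.6):
    """Flag voxels matching Table 2's tumor pattern: dark on T1,
    bright on T2 and FLAIR. Inputs are intensity maps scaled to [0, 1];
    `thresh` is an illustrative cutoff, not a clinically validated value."""
    return (t1 < thresh) & (t2 > thresh) & (flair > thresh)

# Toy 4x4 "slices": exactly one voxel follows the tumor signature.
t1 = np.full((4, 4), 0.8)
t2 = np.full((4, 4), 0.2)
flair = np.full((4, 4), 0.2)
t1[1, 2], t2[1, 2], flair[1, 2] = 0.3, 0.9, 0.9   # tumor-like voxel

mask = tumor_candidates(t1, t2, flair)
print(mask.sum())  # → 1
```

In practice, thresholds like this are only a crude baseline; the ML and DL methods surveyed later learn such decision rules from data instead of hard-coding them.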

Another type of MRI, identified as functional magnetic resonance imaging (fMRI) [18], measures changes in blood oxygenation to interpret brain activity. An area of the brain that is more active begins to use more blood and oxygen. As a result, an fMRI correlates the location and mental process to map the continuing activity in the brain.
Gray Matter Gray Dark Dark
3.2. CT
CT scanners provide finely detailed images of the interior of the body using a revolving X-ray beam and a row of detectors. On a computer, specific algorithms are used to process the images captured from various angles to create cross-sectional images of the entire body [19]. However, a CT scan can offer more precise images of the skull, spine, and other bone structures close to a brain tumor, as shown in Figure 4. Patients typically receive contrast injections to highlight aberrant tissues. The patient may occasionally take dye to improve their image. When an MRI is unavailable and the patient has an implant such as a pacemaker, a CT scan may be performed to diagnose a brain tumor. The benefits of using CT scanning are low cost, improved tissue classification detection, quick imaging, and more widespread availability. However, the radiation risk in a CT scan is 100 times greater than in a standard X-ray diagnosis [19].

Figure 4. CT brain tumor.
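The "specific algorithms" that turn projections from many angles into cross-sections are tomographic reconstruction methods. As a rough illustration of the idea only (not the algorithm of any particular scanner), here is unfiltered back-projection with NumPy/SciPy: each 1D projection is smeared back across the image plane at its acquisition angle, and the smears accumulate into a (blurry) reconstruction; real scanners additionally apply filtering:

```python
import numpy as np
from scipy.ndimage import rotate

def back_project(sinogram, angles_deg):
    """Unfiltered back-projection: smear each 1D projection across the
    plane, rotate it to its acquisition angle, and accumulate."""
    n = sinogram.shape[1]
    recon = np.zeros((n, n))
    for proj, theta in zip(sinogram, angles_deg):
        smear = np.tile(proj, (n, 1))                # constant along the ray direction
        recon += rotate(smear, theta, reshape=False, order=1)
    return recon / len(angles_deg)

# Toy phantom: a bright square; simulate projections by rotating and summing.
phantom = np.zeros((64, 64))
phantom[28:36, 28:36] = 1.0
angles = np.arange(0.0, 180.0, 10.0)
sinogram = np.array([rotate(phantom, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

recon = back_project(sinogram, angles)
print(recon[32, 32] > recon[5, 5])  # → True: the phantom's center reconstructs brightest
```

The blur of plain back-projection is why clinical reconstruction applies a ramp filter to each projection first (filtered back-projection) or uses iterative methods.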

3.3. PET
Positron emission tomography (PET) is a nuclear medicine technique that analyzes the metabolic activity of biological tissues [20]. To help evaluate the tissue being studied, a small amount of a radioactive tracer is utilized throughout the procedure. Fluorodeoxyglucose (FDG) is a popular PET agent for imaging the brain. To provide more conclusive information on malignant (cancerous) tumors and other lesions, PET may also be utilized in conjunction with other diagnostic procedures like CT or MRI. PET scans an organ or tissue by utilizing a scanning device to find photons released by a radionuclide at that site [20]. The chemical compounds that are normally utilized by the specific organ or tissue throughout its metabolic process are combined with a radioactive atom to create the tracer used in PET scans, as shown in Figure 5.

Figure 5. PET brain tumor.

3.4. SPECT
A nuclear imaging examination called single-photon emission computed tomography (SPECT) combines CT with a radioactive tracer. The tracer is what enables medical professionals to observe the blood flow to tissues and organs [21]. A tracer is injected into the patient's bloodstream prior to the SPECT scan. The radiolabeled tracer generates gamma rays that the scanner can detect. The gamma-ray information is gathered by the computer and shown on the CT cross-sections. A 3D representation of the brain can be created by adding these cross-sections back together [21].
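Computationally, "adding these cross-sections back together" is just stacking 2D slices along a new axis; because slice spacing usually exceeds the in-plane pixel size, the stack is often resampled to isotropic voxels. A minimal sketch under those assumptions (NumPy/SciPy; the spacing numbers are made up for illustration):

```python
import numpy as np
from scipy.ndimage import zoom

def volume_from_slices(slices, slice_spacing_mm, pixel_mm):
    """Stack equal-shape 2D cross-sections into a 3D volume (slice index
    becomes the z axis), then stretch z so voxels are isotropic."""
    vol = np.stack(slices, axis=0)                 # shape (z, y, x)
    z_factor = slice_spacing_mm / pixel_mm         # how much coarser z is than x/y
    return zoom(vol, (z_factor, 1.0, 1.0), order=1)

# Five 8x8 slices acquired 4 mm apart with 1 mm in-plane pixels.
slices = [np.full((8, 8), float(z)) for z in range(5)]
volume = volume_from_slices(slices, slice_spacing_mm=4.0, pixel_mm=1.0)
print(volume.shape)  # → (20, 8, 8)
```

Linear interpolation along z (order=1) is a common, cheap choice; clinical software may use more sophisticated resampling.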
3.5. Ultrasound
An ultrasound is a specialized imaging technique that provides details that can be useful in cancer diagnosis, especially for soft tissues. It is frequently employed as the initial step in the typical cancer diagnostic procedure [22]. One advantage of ultrasound is that a test can be completed swiftly and affordably without subjecting the patient to radiation. However, ultrasound cannot independently confirm a cancer diagnosis and is unable to generate images with the level of resolution or detail of a CT or MRI scan. During a conventional ultrasound examination, a medical expert gently moves a transducer across the patient's skin over the region of the body being examined. A succession of high-frequency sound waves is generated by the transducer, which "bounce off" the patient's internal organs. The ensuing echoes return to the ultrasound device, which then transforms the sound waves into a 2D image that may be observed in real time on a monitor. According to [22], US probes have been applied in brain tumor resection. The shape and strength of ultrasonic echoes change according to the density of the tissue being assessed. An ultrasound can detect tumors that may be malignant because solid masses and fluid-filled cysts bounce sound waves differently.

4. Classification and Segmentation Method
As was stated in the introduction, brain tumors are a leading cause of death worldwide. Computer-aided detection and diagnosis refer to software that utilizes DL, ML, and computer vision for analyzing radiological and pathological images. It has been created to assist radiologists in diagnosing human disease in various body regions, including applications for brain tumors. This review explored different CAD-based artificial intelligence approaches, including ML and DL, for automatically classifying and segmenting tumors.
4.1. Classification Methods
A classification is an approach in which related datasets are grouped together according to common features. A classifier in classification is a model created for predicting the unique features of a class label. Predicting the desired class for each type of data is the fundamental goal of classification. Deep learning and machine learning techniques are used for the classification of medical images. The key distinction between the two types is the approach for obtaining the features used in the classification process.

4.1.1.
4.1.1.Machine
MachineLearning
Learning
ML is a branch of AI that allows computers to learn without being explicitly programmed. Classifying medical images, including lesions, into various groups using input features has become one of the latest applications of ML. There are two types of ML algorithms: supervised learning and unsupervised learning [23]. In supervised learning, ML algorithms learn from labeled data. Unsupervised learning is the process by which ML systems attempt to comprehend the interdata relationships using unlabeled data. ML has been employed to analyze brain cancers in the context of brain imaging [24]. The main stages of ML classification are image preprocessing, feature extraction, feature selection, and classification. Figure 6 illustrates the process architecture.

Figure 6. ML block diagram.

1. Data Acquisition
As previously noted, we can collect brain cancer images using several imaging modalities such as MRI, CT, and PET. These modalities effectively visualize aberrant brain tissues.
2. Preprocessing
Preprocessing is a very important stage in the medical field. Normally, noise enhancement or reduction in images occurs during preprocessing. Noise significantly reduces the quality of medical images, making them diagnostically inefficient. To properly classify medical images, the preprocessing stage must be effective enough to eliminate as much noise as possible without affecting essential image components [25]. This procedure is carried out using a variety of approaches, including cropping, image scaling, histogram equalization, filtering using a median filter, and image adjusting [26].
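As a concrete illustration of one of these approaches, a median filter can be sketched in a few lines of NumPy. This is a simplified, educational implementation (the function name, window size, and edge-padding choice are ours), not code taken from any of the cited works:

```python
import numpy as np

def median_filter(image: np.ndarray, size: int = 3) -> np.ndarray:
    """Replace each pixel with the median of its size-by-size neighbourhood."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")  # replicate borders
    out = np.empty_like(image)
    rows, cols = image.shape
    for i in range(rows):
        for j in range(cols):
            window = padded[i:i + size, j:j + size]
            out[i, j] = np.median(window)
    return out

# A flat region corrupted by one salt-noise pixel: the filter removes it.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0  # impulse noise
clean = median_filter(img)
print(clean[2, 2])  # -> 10.0
```

Because the median of a mostly uniform window ignores a single outlier, impulse ("salt-and-pepper") noise is suppressed without blurring edges as much as an averaging filter would.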
Diagnostics 2023, 13, 3007 8 of 32

3. Feature extraction
The process of converting images into features based on several image characteristics in
the medical field is known as feature extraction. These features carry the same information
as the original images but are entirely different. This technique has the advantages of
enhancing classifier accuracy, decreasing overfitting risk, allowing users to analyze data,
and speeding up training [27]. Texture, contrast, brightness, shape, gray level co-occurrence
matrix (GLCM) [28], Gabor transforms [29], wavelet-based features [30], 3D Haralick
features [31], and histogram of local binary patterns (LBP) [32] are some of the examples of
the various types of features.
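Several of these descriptors are straightforward to compute. As an illustration, the basic 3 × 3 local binary pattern histogram [32] could be sketched as follows; this is a minimal NumPy version whose neighborhood ordering and helper names are our own assumptions, not the exact formulation of the cited work:

```python
import numpy as np

def lbp_image(gray: np.ndarray) -> np.ndarray:
    """Basic 3x3 LBP: compare each pixel's 8 neighbours to the centre
    and pack the comparison results into an 8-bit code."""
    # Neighbour offsets, ordered clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    rows, cols = gray.shape
    codes = np.zeros((rows - 2, cols - 2), dtype=np.uint8)
    centre = gray[1:rows - 1, 1:cols - 1]
    for bit, (di, dj) in enumerate(offsets):
        neighbour = gray[1 + di:rows - 1 + di, 1 + dj:cols - 1 + dj]
        codes |= ((neighbour >= centre).astype(np.uint8) << bit)
    return codes

def lbp_histogram(gray: np.ndarray) -> np.ndarray:
    """Normalised 256-bin histogram of LBP codes -- the feature vector."""
    codes = lbp_image(gray)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

img = np.array([[10, 10, 10, 10],
                [10, 50, 50, 10],
                [10, 50, 50, 10],
                [10, 10, 10, 10]], dtype=np.int32)
features = lbp_histogram(img)
print(features.shape)  # (256,)
```

The resulting 256-dimensional histogram summarizes local texture regardless of where in the image each pattern occurs, which is why LBP histograms are popular inputs to the classifiers discussed below.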
4. Feature selection
This technique attempts to rank the features in order of importance or relevance, with the top-ranked features then employed in classification. As a result, multiple feature selection techniques, such as PCA [34], the genetic algorithm (GA) [35], and ICA [36], are needed to reduce redundant information and discriminate between relevant and irrelevant features [33].
5. ML algorithm
Machine learning aims to divide the input information into separate groups based
on common features or patterns of behavior. KNN [35], ANN [37], RF [38], and SVM [39]
are examples of supervised methods. These techniques include two stages: training and
testing. During training, the data are manually labeled using human involvement. The
model is first constructed in this step, after which it is utilized to determine the classes that
are unlabeled in the testing stage. The KNN algorithm works by finding the points that are closest to each other, computing the distance between them using one of several approaches, including the Hamming, Manhattan, Euclidean, and Minkowski distances [35].
The support vector machine (SVM) technique is frequently employed for classification
tasks. Every feature forming a data point in this approach, which represents a coordinate,
is formed in a distinct n-space. As a result, the objective of the SVM method is to identify
a boundary or line across a space with n dimensions, referred to as a hyperplane that
separates classes [39]. There are numerous ways to create different hyperplanes, but the
one with the maximum margin is the best. The maximum margin is the separation between
the most extreme data points inside a class, often known as the support vectors.

4.1.2. Extreme Learning Machine (ELM)


Another new field that uses less computing than neural networks is evolutionary
machine learning (EML). It is based on the real-time classification and regression technique
known as the single-layer feed-forward neural network (SLFFNN). The input-to-hidden
layer weights in the ELM are initialized randomly, whereas the hidden-to-output layer
weights are trained to utilize the Moore–Penrose inverse method [40] to obtain a least-
squares solution. As a result, classification accuracy is increased while net complexity,
training time, and learning speed are all reduced.
Additionally, the hidden layer weights provide the network the capacity to multitask
similar to other ML techniques such as KNN, SVM, and Bayesian networks [40]. As shown
in Figure 7, the ELM network is composed of three levels, all of which are connected.
Weights between the hidden and output layers can only vary, but the weights between the
input and hidden layers are initially fixed at random and remain so during training.
Diagnostics 2023, 13, x FOR PEER REVIEW 9 of 32
Diagnostics 2023,13,
Diagnostics2023, 13,3007
x FOR PEER REVIEW 99 of
of 32
32

Figure 7. Extreme learning machine.


Figure7.7.Extreme
Figure Extremelearning
learningmachine.
machine.
4.1.3. Deep Learning (DL)
4.1.3. Deep
DeepLearning
4.1.3.Beginning a few(DL)
Learning (DL)
years ago, deep learning, a branch of machine learning, has been
utilized extensively
Beginning
Beginning aafewfew to create
years
yearsago, automatic,
ago, deepdeep semiautomatic,
learning,
learning, a branch
a branchofand hybrid
machine
of machine models
learning,
learning, that
has canbeen
been
has ac‐
uti-
lized extensively
curately
utilized detect and
extensivelyto create
segmentautomatic,
to create tumors semiautomatic,
automatic,in the shortest and
semiautomatic, periodhybrid models
possible
and hybrid [41].thatDLcan
models can accurately
thatlearn
can theac‐
detect
featuresand segment
that are tumors
significant inforthe a shortest
problem period
by possible
utilizing a [41]. DL
training
curately detect and segment tumors in the shortest period possible [41]. DL can learn the can
corpus learnwiththe features
sufficient
that are significant
diversity
features and are
that for aDeep
quality. problem
significant for aby
learning utilizing
[42] has
problem byaachieved
training aexcellent
utilizing corpus
trainingwith
successsufficient
corpus in
with diversity
tackling the
sufficient
and quality.
issues
diversityof ML Deep
andby learning
combining
quality. [42]
Deep the has achieved
feature
learning excellent
extraction
[42] has achieved success
and selection in
excellentphasestackling
successinto the
in the issues
training
tackling of
the
ML by combining
process
issues of[43].
ML Deep the feature is
learning
by combining extraction
the featureand
motivated byselection phases
the comprehension
extraction and into the
selection training
of neural
phases into process
networks [43].
the trainingthat
Deep
exist
processlearning
within
[43].the is human
Deepmotivated
learning by
brain. the comprehension
DL models
is motivated byare often
the ofrepresented
neural networks
comprehension as neural
of thatnetworks
a sequence existofwithin
layers
that
the human
generated brain.
by a DL
weightedmodelssum are
of often represented
information from as
the a sequence
previous
exist within the human brain. DL models are often represented as a sequence of layers of
layer.layersThe generated
data are by
rep‐
aresented
weighted sum
generatedbybythe of information
first layer,sum
a weighted while from the previous
the output isfrom
of information layer.
representedThe
the previous data
by the are
layer. represented
lastThelayer data[44].by the
areDeep
rep‐
first layer,models
learning
resented while
by the the
firstoutput
can tackle is represented
layer,extremely
while thedifficult
outputby problems
therepresented
is last layer
while [44].
often
by Deep
the lastlearning
requiring layerless models
human
[44]. Deep
can tacklemodels
interaction
learning extremely
than can difficult
conventional problems
tackle extremely while often
ML techniques
difficult requiring
because
problems several
while less human
layers
often make interaction
requiring it less
possiblethan
human to
conventional
duplicate
interactioncomplexML techniques because
mapping functions.
than conventional several layers make it possible to
ML techniques because several layers make it possible to duplicate complex
mapping The functions.
duplicate most
complex commonmapping DL model
functions. used for the categorization and segmentation of im‐
The
ages The most common
is a convolution
most common DL
neural model
DL network used
model used forfor
(CNN). thethecategorization
In hierarchicaland
acategorization segmentation
manner,
and CNN analyzes
segmentation of images the
of im‐
isspatial
a convolution
ages is arelationshipneural
convolutionofneural network (CNN).
pixels.network
Convoluting In
(CNN).a hierarchical
theInimages manner,
with learned
a hierarchical CNN
manner, analyzes
filters
CNN the
creates
analyzesspatial
a hier‐
the
relationship
archy of pixels. Convoluting thethis
images with learned This filtersconvolution
creates a hierarchy ofis
spatialofrelationship
feature maps, whichConvoluting
of pixels. is how isthe
accomplished.
images with learned filters creates function
a hier‐
feature
performedmaps, which
in several is how this is
layers issuch accomplished.
that isthe This convolution
features are function is performed
archy of feature maps, which how this accomplished. Thistranslation‐
convolutionand distor‐
function is
in several layersand
tion‐invariant suchhence
that the features
accurate to are
a translation-
high degree and
[45]. distortion-invariant
Figure 8 illustrates and
the hence
main
performed in several layers such that the features are translation‐ and distor‐
accurate
process to DL.
in a high degree [45]. Figure 8 illustrates the main process in DL.
tion‐invariant and hence accurate to a high degree [45]. Figure 8 illustrates the main
process in DL.

Figure8.8.DL
Figure DLblock
blockdiagram.
diagram.
Figure 8. DL block diagram.
Diagnostics 2023, 13, 3007 10 of 32

Preprocessing is primarily used to eliminate unnecessary variation from the input


image and make training the model easier. More actions are required to extend beyond
neural network models’ limits, such as resizing normalization. All images must be resized
before being entered into CNN classification models since DL requires inputs of a constant
size [46]. Images that are greater than the desired size can be reduced by downscaling,
interpolation, or cutting the background pixels [46].
Many images are required for CNN-based classification. Data augmentation is one
of the most important data strategies for addressing issues with unequal distribution and
data paucity [47].
CNN’s architecture is composed of three primary layers: convolutional, pooling, and
fully connected. The first layer is the main layer that is able to extract image features such as
edges and boundaries. Based on the desired prediction results, this layer may automatically
learn many filters in parallel for the training dataset. The first layer creates features, but
the second layer oversees data reduction, which minimizes the size of those features and
reduces the demand for computing resources. Every neuron in the final layer, which is a
completely connected layer, is coupled to every neuron in the first layer. The layer serves as
a classifier to classify the acquired feature vector of previous layers [48,49]. The approach
that CNN uses is similar to how various neural networks work: it continually modifies its
weights by taking an error from the output and inserting it as output to improve filters and
weights. In addition, CNN standardizes the output utilizing a SoftMax function [50]. Many
types of CNN architecture exist, including ResNet, AlexNet, and cascade-CNN, among
others [51].

4.2. Segmentation Method


Brain tumor segmentation, which has been employed in some research, is an important
step in improving disease diagnosis, evaluation, treatment plans, and clinical trials. The
purpose of segmentation in tumor classification is to detect the tumor location from brain
scans, improve representation, and allow quantitative evaluations of image structures
during the feature extraction step [52]. Brain tumor segmentation can be accomplished in
two ways: manually and completely automatically [53].
Manual tumor segmentation from brain scans is a difficult and time-consuming proce-
dure. Furthermore, the artifacts created during the imaging procedure result in poor-quality
images that are difficult to analyze. Additionally, due to uneven lesions, geographical
flexibility, and unclear borders, manual detection of brain tumors is challenging. This sec-
tion discusses several automated brain tumor segmentation strategies to help radiologists
overcome these issues.

4.2.1. Region-Based Segmentation


A region in an image is a collection of related pixels that comply with specific ho-
mogeneity requirements, such as shape, texture, and pixel intensity values [54]. In a
region-based segmentation, the image is divided into disparate areas to precisely identify
the target region [55]. When grouping pixels together, the region-based segmentation
takes into consideration the pixel values, such as gray-level variance and difference, as
well as their spatial closeness, such as the Euclidean distance or region density. K-means
clustering [56] and FCM [56] are the most techniques used in this method.

4.2.2. Thresholding Methods


The thresholding approach is a straightforward and effective way to separate the
necessary region [57], but finding an optimum threshold in low-contrast images may
be challenging.
Based on picture intensity, threshold values are chosen using histogram analysis [58].
There are two types of thresholding techniques: local and global. The global thresholding
approach is the best choice for segmentation if the objects and the background have highly
uniform brightness or intensity. The Gaussian distribution approach may be used to
Diagnostics 2023, 13, 3007 11 of 32

obtain the ideal threshold value [59]. Otsu thresholding [38] is the popular method among
these techniques.

4.2.3. Watershed Techniques


The intensities of the image are analyzed using watershed techniques [60]. Topological
watershed [61], marker-based watershed [62], and image IFT watershed [63] are a few
examples of watershed algorithms.

4.2.4. Morphological-Based Method


The morphology technique relies on the morphology of image features. It is mostly
used for extracting details from images based on shape representation. Dougherty [64]
defines dilation and erosion as two basic operations. Dilation is used to increase the size of
an image. Erosion reduces the size of images.

4.2.5. Edge-Based Method


Edge detection is performed using variations in image intensity. Pixels at an edge are
those where the image’s function abruptly changes. Edge-based segmentation techniques
include those by Sobel, Roberts, Prewitt, and Canny [65]. Reference [66] offers an enhanced
edge detection approach for tumor segmentation. The development of an automated
image-dependent thresholding is combined with the Sobel operator to identify the edges of
the brain tumor.

4.2.6. Neural-Networks-Based Method


Neuronal network-based segmentation techniques employ computer models of artifi-
cial neural networks consisting of weighted connections between processing units (called
neurons). At the connections, the weights act as multipliers. To acquire the coefficient
values, training is necessary. The segmentation of medical images and other fields has
made use of a variety of neural network designs. Some of the techniques utilized in the
segmentation process include the multilayer perceptron (MLP), Hopfield neural networks
(HNN) [67], back-propagation learning algorithm, SVM-based segmentation [68], and
self-organizing maps (SOM) neural network [67].

4.2.7. DL-Based Segmentation


The primary strategy used in the DL-based segmentation of brain tumors technique
is to pass an image through a series of deep learning structures before performing input
image segmentation based on the deep features [69]. Many deep learning methods, such as
deep CNNs, CNN, and others, have been suggested for segmenting brain tumors.
A deep learning system called semantic segmentation [70] arranges pixels in an
image according to semantic categories. The objective is to create a dense pixel-by-pixel
segmentation map of the image, and each pixel is given an assigned category or entity.

4.3. Performance Evaluation


An important component of every research work involves evaluating the classification
and segmentation performance. The primary goal of this evaluation is to measure and
analyze the model’s capability for segmentation or diagnostic purposes. Segmentation is a
crucial step in improving the diagnostic process, as we mentioned before, but for this to
occur, the segmentation process must be as accurate as feasible. Additionally, to evaluate
the diagnostic approach utilized while taking complexity and time into account [71].
True positive (TP), true negative (TN), false positive (FP), and false negative (FN) are
the main four elements in any analysis or to evaluate any segmentation or classification
algorithm. A pixel that is accurately predicted to be assigned to the specified class in a
segmentation method is represented by TP and TN based on the ground truth. Furthermore,
FP is a result when the model predicts a pixel wrongly as not belonging to a specific class. A
Diagnostics 2023, 13, 3007 12 of 32

false negative (FN) results when the model wrongly predicts a pixel belonging to a certain
class [71].
TP in classification tasks refers to an image that is accurately categorized into a positive
category based on the ground truth. Similar to this, the TN result occurs when the model
properly classifies an image in the negative category. As opposed to that, FP results occur
when the model wrongly assigns an image in the positive class while the actual datum is
in the negative category. FN results occur when the model misclassifies an image while it
belongs in the positive category. Through the four elements mentioned above, different
performance measures enable us to expand the analysis.
Accuracy (ACC) measures a model’s ability to correctly categorize all pixels/classes,
whether they are positive or negative. Sensitivity (SEN) shows the percentage of accurately
predicted positive images/pixels among all actual positive samples. It evaluates a model’s
ability to recognize relevant samples or pixels. The percentage of actual negatives that were
predicted is known as specificity (SPE). It indicates a percentage of classes or pixels that
could not be accurately recognized [71].
The precision (PR) or positive predictive value (PPV) measures how frequently the
model correctly predicts the class or pixel. It provides the precise percentage of positively
expected results from models. The most often used statistic that combines SEN and
precision is the F1 score [72]. It refers to the two-dimensional harmonic mean.
The Jaccard index (JI), also known as intersection over union (IoU), calculates the
percentage of overlap between the model’s prediction output and the annotation ground-
truth mask.
The spatial overlap between the segmented region of the model and the ground-
truth tumor region is measured by the Dice similarity coefficient (DSC). A DSC value
of zero means there is no spatial overlap between the annotated model result and the
actual tumor location, whereas a value of one means there is complete spatial overlap. The
receiver characteristics curve is summarized by the area under the curve (AUC), which
compares SEN to the false positive rate as a measure of a classifier’s ability to discriminate
between classes.
The similarity between the segmentation produced by the model and the expert-
annotated ground truth is known as the similarity index (SI). It describes how the identifica-
tion of the tumor region is comparable to that of the input image [71]. Table 3 summarizes
different performance equations.

Table 3. Performance equation.

Parameter Equation
ACC ( TP + TN )/( TP + FN + FP + TN )
SEN TP/( TP + FN )
SPE TN/( TN + FP)
PR TP/( TP + FP)
F1_SCORE 2 ∗ PR ∗ SEN/( PR + SEN )
DCS 2 ∗ TP/(2 ∗ TP + FP + FN
Jaccard TP/( TP + FP + FN )

5. Literature Review
5.1. Article Selection
The major goal of this study is to review and understand brain tumor classification
and detection strategies developed worldwide between 2010 and 2023. This present study
aims to review the most popular techniques for detecting brain cancer that have been made
available globally, in addition to looking at how successful CAD systems are in this process.
We did not target any one publisher specifically, but we utilized articles from a variety
of sources to account for the diversity of knowledge in a particular field. We collected
Diagnostics 2023, 13, 3007 13 of 32

appropriate articles from several internet scientific research article libraries. We searched
the pertinent publications using IEEE Explore, Medline, ScienceDirect, Google Scholar,
and ResearchGate.
Each time, the filter choice for the year (2010 to 2023) was chosen so that only papers
from the chosen period were presented. Most frequently, we used terms like “detection
of MRI images using deep learning,” “classification of brain tumor from CT/MRI images
using deep learning,” “detection and classification of brain tumor using deep learning,”
“CT brain tumor,” “PET brain tumor,” etc. This study offers an analysis of 53 chosen
publications.

5.2. Publicly Available Datasets


The researchers tested the proposed methods on several publicly accessible datasets.
In this part, several significant and difficult datasets are covered. The most difficult MRI
datasets are BRATS. Table 4 presents a summary of the dataset names.

Table 4. Summary of the dataset.

Dataset MRI Sequences Source


BRATS T1, T2, FLAIR [73]
RIDER T1, T2, FLAIR [74]
Harvard T2 [75]
TCGA T1, T2, FLAIR [76,77]
Figshare T1 [78]
IXI T1, T2 [79]

5.3. Related Work


In addition to the several techniques for segmenting brain tumors that we already
highlighted, this section presents a summary of studies that use artificial intelligence to
classify brain tumors.

5.3.1. MRI Brain Tumor Segmentation


This section describes the various machine learning, deep learning, region growth,
thresholding, and literature-proposed brain tumor segmentation strategies.
To segment brain tumors, Gordillo et al. [80] utilized fuzzy logic structure, which they
built utilizing features extracted from MR images and expert knowledge. This system
learns unsupervised and is fully automated. With trials conducted on two different forms
of brain tumors, glioblastoma multiform and meningioma, the result of segmentation using
this approach is shown to be satisfactory, with the lowest accuracy of 71% and a maximum
of 93%.
Employing fuzzy c-means clustering on MRI, Rajendran [81] presented logic analyzing
for segmenting brain tumors. The region-based technique that iteratively progresses
toward the ultimate tumor border was initialized using the tumor type output of fuzzy
clustering. Using 15 MR images with manual segmentation ground truth available, tests
were conducted on this approach to determine its effectiveness. The overall result was
suitable, with a sensitivity of 96.37% and an average Jaccard coefficient value of 83.19%.
An SVM classifier was applied by Kishore et al. to categorize tumor pixels using
vectors of features from MR images, such as mean intensity and LBP. Level sets and region-
growing techniques were used for the segmentation. The experiments on their suggested
methods used MR images with tumor regions manually defined by 11 different participants.
Their suggested methods are effective, with a DSC score of 0.69 [82].
A framework for segmenting tumorous MRI 3D images was presented by Abbasi
and Tajeripour [38]. The first phase improves the input image’s contrast using bias field
correction. The data capacity is reduced using the multilevel Otsu technique in the second
Diagnostics 2023, 13, 3007 14 of 32

phase. LBP in three orthogonal planes and an enhanced histogram of images are employed
in the third stage, the feature extraction step. Lastly, the random forest is employed as a
classifier for distinguishing tumorous areas since it can work flawlessly with large inputs
and has a high level of segmentation accuracy. The overall outcome was acceptable, with a
mean Jaccard value of 87% and a DSC of 93%.
By combining two K-means and FCM-clustering approaches, Almahfud et al. [83]
suggest a technique for segmenting human brain MRI images to identify brain cancers.
Because K-means is more susceptible to color variations, it can rapidly and effectively
discover optima and local outliers. So that the cluster results are better and the calculation
procedure is simpler, the K-means results are clustered once more with FCM to categorize
the convex contour based on the border. To increase accuracy, morphology and noise
reduction procedures are also suggested. Sixty-two brain MRI scans were used in the study,
and the accuracy rate was 91.94%.
According to Pereira et al. [69], an automated segmentation technique based on CNN
architecture was proposed, which explores small three-by-three kernels. Given the smaller
number of weights in the network, using small kernels enables the creation of more intricate
architectures and helps prevent overfitting. Additionally, they looked at the use of intensity
normalizing as an initial processing step, which, when combined with data augmentation,
was highly successful in segmenting brain tumors in MRI images. Their suggestion was
verified using the BRATS database, yielding Dice similarity coefficient values of 0.88, 0.83,
and 0.77 for the Challenge dataset for the whole, core, and enhancing areas.
According to the properties of a separated local square, they suggested a unique
approach for segmenting brain tumors [84]. The suggested procedure essentially consists
of three parts. An image was divided into homogenous sections with roughly comparable
properties and sizes using the super-pixel segmentation technique in the first stage. The
second phase was the extraction of gray statistical features and textural information. In
the last phase of building the segmentation model, super-pixels were identified as either
tumor areas or nontumor regions using SVM. They used 20 images from the BRATS dataset,
where a DSC of 86.12% was attained, to test the suggested technique.
The CAD system suggested by Gupta et al. [85] offers a noninvasive method for
the accurate tumor segmentation and detection of gliomas. The system takes advantage
of the super pixels’ combined properties and the FCM-clustering technique. The sug-
gested CAD method recorded 98% accuracy for glioma detection in both low-grade and
high-grade tumors.
Brain tumor segmentation using the CNN-based data transfer to SVM classifier ap-
proach was proposed by Cui et al. [68]. Two cascaded phases comprise their algorithm.
They trained CNN in the initial step to understand the mapping of the image region to
the tumor label region. In the testing phase, they passed the testing image and CNN’s
anticipated label output to an SVM classifier for precise segmentation. Tests and evalua-
tions show that the suggested structure outperforms separate SVM-based or CNN-based
segmentation, while DSC achieved 86.12%.
The two-pathway-group CNN architecture described by Razzak et al. is a novel
approach for brain tumor segmentation that simultaneously takes advantage of local and
global contextual traits. This approach imposes equivariance in the 2PG-CNN model
to prevent instability and overfitting parameter sharing. The output of a basic CNN is
handled as an extra source and combined at the last layer of the 2PG CNN, where the
cascade architecture was included. When a group CNN was embedded into a two-route
architecture for model validation using BRATS datasets, the results were DSC 89.2%, PR
88.22%, and SEN 88.32% [86].
A semantic segmentation model for the segmentation of brain tumors from multi-
modal 3D MRIs for the BRATS dataset was published in [87]. After experimenting with
several normalizing techniques, they discovered that group-norm and instance-norm per-
formed equally well. Additionally, they have tested with more advanced methods of data
augmentation, such as random histogram pairing, linear image transformations, rotations,
Diagnostics 2023, 13, 3007 15 of 32

and random image filtering, but these have yet to show any significant benefit. Further,
raising the network depth had no positive effect on performance. However, increasing the
number of filters consistently produced better results. Their BRATS end testing dataset
values were 0.826, 0.882, and 0.837 for overall Dice coefficient or improved tumor core,
entire tumor, and tumor center, respectively.
CNN was used by Karayegen and Aksahin [88] to offer a semantic segmentation
approach for autonomously segmenting brain tumors on BRATS image datasets that
include images from four distinct imaging modalities (T1, T1C, T2, and FLAIR). This
technique was effectively used, and images were shown in a variety of planes, including
sagittal, coronal, and axial, to determine the precise tumor location and parameters such as
height, breadth, and depth. In terms of tumor prediction, evaluation findings of semantic
segmentation carried out using networks are incredibly encouraging. The mean IoU and
mean prediction ratio were both calculated to be 86.946 and 91.718, respectively.
A novel, completely automatic method for segmenting brain tumor regions was
proposed by Ullah et al. [89] using multiscale residual attention CNN (MRA-UNet). To
maintain the sequential information, MRA-UNet uses three sequential slices as its input.
By employing multiscale learning in a cascade path, it can make use of the adaptable
region of interest strategy and precisely segment improved and core tumor regions. In the
BRATS-2020 dataset, their method produced novel outcomes with an overall Dice score of
90.18%.
A new technique for segmenting brain tumors using the fuzzy Otsu thresholding
morphology (FOTM) approach was presented by Wisaeng and Sa-Ngiamvibool [90]. The
values from each single histogram in the original MRI image were modified by using a
color normalizing preprocessing method in conjunction with histogram specification. The
findings unambiguously demonstrate that image gliomas, image meningiomas, and image
pituitary have average accuracy indices of 93.77%, 94.32%, and 94.37%, respectively. A
summary of MRI brain tumor segmentation is provided in Table 5.

Table 5. MRI brain tumor segmentation.

Ref.   Scan   Year   Technique      Method                 Performance Metrics   Result
[80]   MRI    2010   region-based   FCM                    Acc                   93.00%
[81]   MRI    2011   region-based   FCM                    Jaccard               83.19%
[82]   MRI    2012   NN             LBP with SVM           DSC                   69.00%
[69]   MRI    2016   DL             CNN                    DSC                   88.00%
[84]   MRI    2017   NN             GLCM with SVM          DSC                   86.12%
[38]   MRI    2017   NN             LBP with RF            Jaccard and DSC       87% and 93%
[85]   MRI    2018   region-based   FCM                    Acc                   98.00%
[83]   MRI    2018   region-based   FCM and k-mean         Acc                   91.94%
[68]   MRI    2019   DL and NN      CNN with SVM           DSC                   88.00%
[86]   MRI    2019   DL             Two-path CNN           DSC                   89.20%
[87]   MRI    2019   DL             semantic               Acc                   88.20%
[88]   MRI    2021   DL             semantic               IoU                   91.72%
[89]   MRI    2022   DL             MRA-UNet               DSC                   90.18%
[90]   MRI    2023   region-based   Fuzzy Otsu Threshold   Acc                   94.37%
Diagnostics 2023, 13, 3007 16 of 32

5.3.2. MRI Brain Tumor Classification Using ML

The automated classification of brain cancers using MRI images has been the subject of several studies. Cleaning data, feature extraction, and feature selection are the basic steps in the machine learning (ML) process that have been used for this purpose. Building an ML model based on labeled samples is the last step. A summary of MRI brain tumor classification using ML is provided in Table 6.
An NN-based technique to categorize a given MR brain image as either normal or abnormal is presented in [91]. In this method, features were first extracted from images using the wavelet transform, and then the dimensionality of the features was reduced using PCA methodology. The reduced features were routed to a back-propagation NN that uses a scaled conjugate gradient (SCG) to determine the best weights for the NN. This technique was used on 66 images, 18 of which were normal and 48 abnormal. On training and test images, the classification accuracy was 100%.
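The wavelet-then-PCA pipeline of [91] starts from wavelet coefficients. A one-level Haar transform, the simplest member of the wavelet family, can be sketched as follows; this is illustrative only, not the authors' code:

```python
def haar_dwt_1d(signal):
    """One level of the Haar wavelet transform: approximation and detail coefficients."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

a, d = haar_dwt_1d([4, 2, 5, 5])
print(a, d)  # [3.0, 5.0] [1.0, 0.0]
```

For images, the transform is applied along rows and then columns; the resulting coefficients form the feature vector that PCA subsequently compresses.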
An automated and efficient CAD method based on ensemble classifiers was proposed by Arakeri and Reddy [36] for the classification of brain cancers on MRI images as benign or malignant. A tumor's texture, shape, and border properties were extracted and used as a representation. The ICA approach was used to select the most significant features. The ensemble classifier, consisting of SVM, ANN, and kNN classifiers, is trained using these features to describe the tumor. A dataset consisting of 550 patients' T1- and T2-weighted MR images was used for the experiments. With an accuracy of 99.09% (sensitivity 100% and specificity 98.21%), the experimental findings demonstrated that the suggested classification approach achieves strong agreement with the combined classifier and is extremely successful in the identification of brain tumors. Figure 9 illustrates the CAD method based on ensemble classifiers.

Figure 9. CAD method based on ensemble classifiers.


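The ensemble in [36] combines SVM, ANN, and kNN decisions. A common way to fuse such outputs is majority voting, shown here as a hedged sketch rather than the authors' exact combination rule:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier labels (e.g. from SVM, ANN, kNN) by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical labels from three base classifiers for one tumor image:
print(majority_vote(["benign", "malignant", "malignant"]))  # malignant
```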

In [92], the authors suggested a novel, wavelet-energy-based method for automatically classifying MR images of the human brain into normal or abnormal. The classifier was SVM, and biogeography-based optimization (BBO) was utilized to enhance the SVM's weights. They succeeded in achieving 99% precision and 97% accuracy.
Amin et al. [28] suggest an automated technique to distinguish between malignant and benign brain MRI images. A variety of methodologies were used for the segmentation of potential lesions. Then, considering shape, texture, and intensity, a feature set was selected for every candidate lesion. The SVM classifier is then used on the collection of features to compare the proposed framework's precision using various cross-validations. Three benchmark datasets, including Harvard, Rider, and Local, are used to verify the suggested technique. For the procedure, the average accuracy was 97.1%, the area under the curve was 0.98, the sensitivity was 91.9%, and the specificity was 98.0%.
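Cross-validation, as used in [28], repeatedly partitions the data into training and test folds so that every sample is tested exactly once. A minimal index-splitting sketch, illustrative and not tied to the authors' setup:

```python
def k_fold_indices(n_samples, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, test

for train, test in k_fold_indices(6, 3):
    print(test)  # each sample appears in exactly one test fold
```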
A suitable CAD approach toward classifying brain tumors is proposed in [93]. The database includes meningioma, astrocytoma, normal brain areas, and primary brain tumors. The radiologists selected 20 × 20 regions of interest (ROIs) for every image in the dataset. Altogether, these ROIs were used to extract 371 intensity and texture features. These three classes were divided using the ANN classifier. Overall classification accuracy was 92.43%.
Four hundred twenty-eight T1 MR images from 55 individuals were used in a varied dataset for multiclass brain tumor classification [94]. A content-based active contour model extracted 856 ROIs. These ROIs were used to extract 218 intensity and texture features. PCA was employed in this study to reduce the size of the feature space. The ANN was then used to classify these six categories. The classification accuracy was seen to have reached 85.5%.
A unique strategy for classifying brain tumors in MRI images was proposed in [95]
by employing improved structural descriptors and hybrid kernel-SVM. To better classify
the image and improve the texture feature extraction process using statistical parameters,
they used GLCM and histograms to derive the texture feature from every region. Different
kernels were combined to create a hybrid kernel SVM classifier to enhance the classification
process. They applied this technique only to axial T1 brain MRI images, and their suggested strategy achieved 93% accuracy.
A hybrid system composed of two ML techniques was suggested in [96] for classifying
brain tumors. For this, 70 brain MR images overall (60 abnormal, 10 normal) were taken
into consideration. DWT was used to extract features from the images. Using PCA, the
total number of features was decreased. Following feature extraction, feed-forward back-
propagation ANN and KNN were applied individually on the decreased features. The
back-propagation learning method for updating weights is covered by FP-ANN. KNN has
already been covered. Using KNN and FP-ANN, this technique achieves 97% and 98%
accuracy, respectively [96].
A strategy for classifying brain MRI images was presented in [97]. Initially, they used
an enhanced image improvement method that comprises two distinct steps: noise removal
and contrast enhancement using histogram equalization. Then, using a DWT to extract
features from an improved MR brain image, they further decreased these features by mean
and standard deviation. Finally, they developed a sophisticated deep neural network
(DNN) to classify the brain MRI images as abnormal or normal, and their strategy achieved 95.8% accuracy.

Table 6. MRI brain tumor classification using ML.

Ref.   Scan   Year   Feature Extraction      Feature Selection             Classification        Acc.
[96]   MRI    2010   GLCM                    PCA                           ANN and KNN           98% and 97%
[91]   MRI    2011   Wavelet                 PCA                           Back-propagation NN   100.00%
[94]   MRI    2013   Intensity and texture   PCA                           ANN                   85.50%
[95]   MRI    2014   GLCM                    -                             SVM                   93.00%
[36]   MRI    2015   Texture and shape       ICA                           SVM                   99.09%
[92]   MRI    2015   Wavelet                 -                             SVM                   97.00%
[28]   MRI    2017   Texture and shape       -                             SVM                   97.10%
[93]   MRI    2017   Intensity and texture   -                             ANN                   92.43%
[97]   MRI    2020   DWT                     Mean and standard deviation   DNN                   95.8%

5.3.3. MRI Brain Tumor Classification Using DL


Difficulties remain in categorizing brain cancers from an MRI scan, despite encour-
aging developments in the field of ML algorithms for the classification of brain tumors
into their different types. These difficulties mostly stem from ROI detection, and typical labor-intensive feature extraction methods are often not effective enough [98]. Owing
to the nature of deep learning, the categorization of brain tumors is now a data-driven
problem rather than a challenge based on manually created features [99]. CNN is one of
the deep learning models that is frequently utilized in brain tumor classification tasks and
has produced a significant result [100].
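At the core of every CNN discussed below is the discrete convolution of an image with a learned kernel. A toy "valid" convolution is sketched here for orientation; practical frameworks implement this far more efficiently, and the kernel values are learned rather than hand-set:

```python
def conv2d(image, kernel):
    """'Valid' 2D convolution (technically cross-correlation, as in CNN layers)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

edge = [[1, -1], [1, -1]]                 # crude vertical-edge detector
img = [[0, 0, 5], [0, 0, 5], [0, 0, 5]]   # bright column on the right
print(conv2d(img, edge))  # [[0, -10], [0, -10]]
```

Stacking many such filtered maps, interleaved with nonlinearities and pooling, is what lets a CNN learn tumor-discriminative features directly from pixels.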
According to a study [101], the CNN algorithm can be used to divide the severity of gliomas into two categories (low severity or high severity) as well as multiple grades of severity (Grades II, III, and IV). Accuracy rates of 71% and 96% were reached by the classifier.
A DL approach based on a CNN was proposed by Sultan et al. [7] to classify different kinds of brain tumors using two publicly available datasets. The proposed method's block diagram is presented in Figure 10. The first divides cancers into meningioma, pituitary, and glioma tumors. The other one distinguishes among Grade II, III, and IV gliomas. The first and second datasets, which have 233 and 73 patients, respectively, contain a combined total of 3064 and 516 T1 images. The suggested network configuration achieves the best overall accuracy, 96.13% and 98.7%, for the two studies, which results in significant performance [7].

Figure 10. A block schematic showing the suggested approach. Reprinted (adapted) with permission from [7]. Copyright 2019 IEEE.
Similarly, ref. [102] showed how to classify brain MRI scan images into malignant and benign using CNN algorithms in conjunction with augmenting data and image processing. They evaluated the effectiveness of their CNN model against pretrained VGG-16, Inception-v3, and ResNet-50 models using the transfer learning methodology. Even though the experiment was carried out on a relatively small dataset, the results reveal that the model's accuracy is quite strong with a very low complexity rate, as it obtained 100% accuracy, compared to VGG-16's 96%, ResNet-50's 89%, and Inception-V3's 75%. The structure of the suggested CNN architecture is shown in Figure 11.

Figure 11. Proposed method. Reprinted (adapted) with permission from [102]. Copyright 2020 Mathematical Biosciences and Engineering.
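Data augmentation of the kind used in [102] enlarges a small training set with label-preserving transforms; two of the simplest are flips and rotations. An illustrative sketch on a tiny 2D image (not the authors' augmentation pipeline):

```python
def hflip(img):
    """Horizontal flip of a 2D image given as a list of rows."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate an image 90 degrees counter-clockwise."""
    return [list(col) for col in zip(*img)][::-1]

img = [[1, 2], [3, 4]]
print(hflip(img))  # [[2, 1], [4, 3]]
print(rot90(img))  # [[2, 4], [1, 3]]
```

Each transform yields a new, equally valid training sample, since a flipped or rotated tumor is still the same tumor class.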
For accurate glioma grade prediction, researchers developed a customized CNN-based deep learning model [103] and evaluated the performance using AlexNet, GoogleNet, and SqueezeNet by transfer learning. Based on 104 clinical glioma patients (50 LGGs and 54 HGGs), they trained and evaluated the models. The training data was expanded using a variety of data augmentation methods. A five-fold cross-validation procedure was used to assess each model's performance. According to the study's findings, their specially created deep CNN model outperformed the pretrained models by an equal or greater percentage. The custom model's accuracy, sensitivity, F1 score, specificity, and AUC values were, respectively, 0.971, 0.980, 0.970, 0.963, and 0.989.
A novel transfer learning-based active learning paradigm for classifying brain tumors was proposed by Ruqian et al. [104]. Figure 12 describes the workflow for active learning. On the MRI training dataset of 203 patients and the baseline validation dataset of 66 patients, they used a 2D slice-based technique to train and fine-tune the model. Their suggested approach allowed the model to obtain an area under the ROC curve of 82.89%. The researchers built a balanced dataset and ran the same process on it to further investigate the robustness of their strategy. Compared to the baseline's AUC of 78.48%, the model's AUC was 82%.

Figure 12. Workflow of the suggested active learning framework based on transfer learning. Reprinted (adapted) with permission from [104]. Copyright 2021 Frontiers in Artificial Intelligence.
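The AUC values reported in [104] can be computed without plotting an ROC curve at all, via the rank-based (Mann-Whitney) formulation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A compact sketch with hypothetical scores:

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Each positive/negative pair contributes 1 if ranked correctly, 0.5 on ties.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc_score([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # 0.75
```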
A total of 131 patients with glioma were enrolled [105]. A rectangular ROI was used to segment tumor images, and this ROI contained around 80% of the tumor. The test dataset was then created by randomly selecting 20% of the patient-level data. Models previously trained on the expansive natural image database ImageNet were applied to MRI images, and then AlexNet and GoogleNet were developed from scratch and fine-tuned. Five-fold cross-validation (CV) was used on the patient-level split to evaluate the classification task. The averaged performance metrics for validation accuracy, test accuracy, and test AUC from the five-fold CV of GoogleNet were, respectively, 0.867, 0.909, and 0.939.
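The accuracy, sensitivity, and specificity figures quoted in [103,105] all derive from the binary confusion matrix. For reference, a small sketch with illustrative counts (not taken from either study):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), specificity, and F1 from confusion-matrix counts."""
    acc  = (tp + tn) / (tp + fp + tn + fn)
    sens = tp / (tp + fn)          # true-positive rate
    spec = tn / (tn + fp)          # true-negative rate
    prec = tp / (tp + fp)
    f1   = 2 * prec * sens / (prec + sens)
    return acc, sens, spec, f1

print(binary_metrics(tp=45, fp=5, tn=40, fn=10))
```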
Hamdaoui et al. [106] proposed an intelligent medical decision-support system for identifying and categorizing brain tumors using images from the risk of malignancy index. They employed deep transfer learning principles to overcome the scarcity of training data required to construct the CNN model. For this, they selected seven CNN architectures that had already been trained on an ImageNet dataset, which they carefully fitted on (MRI) data of brain tumors gathered from the BRATS database, as shown in Figure 13. Just the prediction that received the highest score among the predictions made by the seven pretrained CNNs is produced to increase their model's accuracy. They evaluated the effectiveness of the primary two-class model, which includes LGG and HGG brain cancers, using a ten-way cross-validation method. The test accuracy, F1 score, test precision, and test sensitivity for their suggested model were 98.67%, 98.06%, 98.33%, and 98.06%, respectively.

Figure 13. Proposed process for deep transfer learning. Reprinted (adapted) with permission from [106]. Copyright 2021 Indonesian Journal of Electrical Engineering and Computer Science.
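The selection rule in [106], keeping only the highest-scoring prediction among the seven pretrained CNNs, can be sketched as follows, with hypothetical class names and confidence scores:

```python
def best_prediction(model_outputs):
    """Pick the (label, confidence) pair with the highest score across all models."""
    label, conf = max(
        ((lbl, c) for probs in model_outputs for lbl, c in probs.items()),
        key=lambda pair: pair[1],
    )
    return label, conf

# Hypothetical outputs from three of the pretrained CNNs:
outputs = [{"LGG": 0.7, "HGG": 0.3}, {"LGG": 0.4, "HGG": 0.6}, {"LGG": 0.2, "HGG": 0.8}]
print(best_prediction(outputs))  # ('HGG', 0.8)
```

This max-confidence fusion differs from majority voting: a single very confident model can override the rest.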
A new AI diagnosis model called EfficientNetB0 was created by Khazaee et al. [107] to assess and categorize human brain gliomas utilizing sequences from MR images. They used a common dataset (BRATS-2019) to validate the new AI model, and they showed that the AI components (CNN and transfer learning) provided outstanding performance for categorizing and grading glioma images, with 98.8% accuracy.
In [70], the researchers developed a model using transfer learning and pretrained ResNet18 to identify basal ganglia germinomas more accurately. In this retrospective analysis, 73 patients with basal ganglia germinoma were enrolled. Brain tumors were manually segmented based on both T1 and T2 data. The T1 sequence was utilized to create the tumor classification model. Transfer learning and a 2D convolutional network were used. Five-fold cross-validation was used to train the model, and it resulted in a mean AUC of 88%.
Researchers suggested an effective hyperparameter optimization method for CNN based on Bayesian optimization [108]. This method was assessed by categorizing 3064 T1 images into three types of brain cancers (glioma, pituitary, and meningioma). Five popular deep pretrained models are compared to the improved CNN's performance using transfer learning. Their CNN achieved 98.70% validation accuracy after applying Bayesian optimization.
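Bayesian optimization, as used in [108], searches the hyperparameter space by fitting a surrogate model to past evaluations and proposing promising configurations. The sketch below shows only the simpler random-search baseline that such a surrogate replaces; the objective and search space are toy stand-ins, not the study's actual settings:

```python
import random

def random_search(objective, space, n_trials=20, seed=0):
    """Baseline hyperparameter search; Bayesian optimization replaces the random
    sampler with a surrogate model that proposes promising configurations."""
    rng = random.Random(seed)
    best_cfg, best_val = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(choices) for name, choices in space.items()}
        val = objective(cfg)
        if val > best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

# Toy objective standing in for validation accuracy:
space = {"lr": [1e-1, 1e-2, 1e-3], "filters": [16, 32, 64]}
obj = lambda c: -abs(c["lr"] - 1e-2) + c["filters"] / 100
print(random_search(obj, space))
```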
A novel generated transfer DL model was developed by Alanazi et al. [109] for the early diagnosis of brain cancers into their different categories, such as meningioma, pituitary, and glioma. Several layers of the models were first constructed from scratch to test the performance of standalone CNN models on brain MRI images. The weights of the neurons were then revised using the transfer learning approach to categorize brain MRI images into tumor subclasses using the 22-layer, isolated CNN model. Consequently, the transfer-learned model that was created had an accuracy rate of 95.75%.
Rizwan et al. [110] suggested a method to identify various BT classes using Gaussian-CNN on two datasets. One of the datasets is employed to categorize lesions into pituitary, glioma, and meningioma. The other distinguishes between the three glioma classes (II, III, and IV). The first and second datasets, respectively, have 233 and 73 patients, with a total of 3064 and 516 T1 enhanced images. For the two datasets, the suggested method has an accuracy of 99.8% and 97.14%.
A seven-layer CNN was suggested in [111] to assist with the three-class categorization of brain MR images. To decrease computing time, separable convolution was used. The suggested separable CNN model achieved 97.52% accuracy on a publicly available dataset of 3064 images.

Several pretrained CNNs were utilized in [112], including GoogleNet, Alexnet, Resnet50,
Resnet101, VGG-16, VGG-19, InceptionResNetV2, and Inceptionv3. To accommodate
additional image categories, the final few layers of these networks were modified. Data
from the clinical, Harvard, and Figshare repositories were widely used to assess these
models. The dataset was divided into training and testing sets in a 60:40 ratio. The
validation on the test set demonstrates that, compared to other proposed models, the
Alexnet with transfer learning demonstrated the best performance in the shortest time. The
suggested method obtained accuracies of 100%, 94%, and 95.92% using three datasets and
is more generic because it does not require any manually created features.
The suggested framework [113] describes three experiments that classified brain
malignancies such as meningiomas, gliomas, and pituitary tumors using three designs of
CNN (AlexNet, VGGNet, and GoogleNet). Using the MRI slices of the brain tumor dataset
from Figshare, each study then investigates transfer learning approaches like fine-tuning
and freezing. The data augmentation approaches are applied to the MRI slices for results
generalization, increasing dataset samples, and minimizing the risk of overfitting. The fine-
tuned VGG16 architecture attained the best accuracy at 98.69% in terms of categorization
in the proposed studies.
An effective hybrid optimization approach was used in [114] for the segmentation and
classification of brain tumors. To improve categorization, the CNN features were extracted.
The suggested chronological Jaya honey badger algorithm (CJHBA) was used to train the
deep residual network (DRN), which was used to conduct the classification by using the
retrieved features as input. The Jaya algorithm, the honey badger algorithm (HBA), and
the chronological notion are all combined in the proposed CJHBA. Using BRATS-2018, the
performance is assessed. The highest accuracy is 92.10%. A summary of MRI brain tumor
classification using DL is provided in Table 7.

Table 7. MRI brain tumor classification using DL.

Ref.    Scan   Year   Technique   Method                            Result                           Performance Metrics
[101]   MRI    2015   DL          Custom-CNN                        96.00%                           Acc
[7]     MRI    2019   DL          Custom-CNN                        98.70%                           Acc
[102]   MRI    2020   DL          VGG-16, Inception-v3, ResNet-50   96%, 75%, 89%                    Acc
[103]   MRI    2021   DL          AlexNet, GoogleNet, SqueezeNet    97.10%                           Acc
[104]   MRI    2021   DL          Custom-CNN                        82.89%                           AUC (ROC)
[105]   MRI    2018   DL          AlexNet                           90.90%                           Test acc
[106]   MRI    2021   DL          multi-CNN structure               98.67%, 98.06%, 98.33%, 98.06%   Acc, F1 score, precision, sensitivity
[107]   MRI    2022   DL          EfficientNetB0                    98.80%                           Acc
[70]    MRI    2022   DL          ResNet18                          88.00%                           AUC
[108]   MRI    2022   DL          Custom-CNN                        98.70%                           Acc
[109]   MRI    2022   DL          Custom-CNN                        95.75%                           Acc
[110]   MRI    2022   DL          Gaussian-CNN                      99.80%                           Acc
[111]   MRI    2020   DL          seven-layer CNN                   97.52%                           Acc
[112]   MRI    2021   DL          Alexnet                           100.00%                          Acc
[113]   MRI    2019   DL          VGG16                             98.69%                           Acc
[114]   MRI    2023   DL          CNN                               92.10%                           Acc

5.3.4. Hybrid Techniques


Hybrid strategies use multiple approaches to achieve high accuracy, emphasizing
each approach’s benefits while minimizing the drawbacks. The first method employed a
segmentation technique to identify the part of the brain that was infected, and the second
method for classification. Hybrid techniques are summarized in Table 8.
The proposed integrated SVM and ANN-based method for classification can be found in [115]. The FCM method is used to segment the brain MRI images initially, where
the updated membership and k value diverge from the standard method. Two types of
characteristics have been retrieved from segmented images to distinguish and categorize tu-
mors. Using SVM, the first category of statistical features was used to differentiate between
normal or abnormal brain MRI images. This SVM technique has an accuracy rate of 97.44%.
Area, perimeter, orientation, and eccentricity were additional criteria used to distinguish
between the tumor and various malignant stages I through IV. The tumor categories and
stages of malignant tumors are classified through the ANN back-propagation technique.
This suggested strategy has a 97.37% accuracy rate for categorizing tumor stages.
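To make the segmentation step concrete, the FCM update equations referred to above can be sketched on one-dimensional pixel intensities. This is a minimal illustrative NumPy sketch, not the modified-membership variant of [115]; the function name and toy intensities are invented for demonstration.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy C-means on 1-D intensities: returns centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                                  # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1)               # weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))                     # inverse-distance memberships
        u /= u.sum(axis=0)
    return centers, u

# Toy "image": two intensity populations (e.g., tumor vs. background)
x = np.array([0.10, 0.12, 0.09, 0.11, 0.90, 0.88, 0.91, 0.89])
centers, u = fuzzy_c_means(x)
labels = u.argmax(axis=0)                               # hard assignment per pixel
```

The soft memberships `u` are what distinguish FCM from plain K-means; thresholding them (here via `argmax`) yields the segmentation mask.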
A hybrid segmentation strategy using ANN was suggested in [116] to enhance the
brain tumor’s classification outcomes. First, the tumor region was segmented using
skull stripping and thresholding. The segmented tumor was subsequently recognized
using the canny algorithm, and the features of the identified tumor cell region were then
used as the input of the ANN for classification; 98.9% accuracy can be attained with the
provided strategy.
A system that can detect, identify, and categorize the different types of tumors in T1 and T2 image sequences was proposed by Ramdlon et al. [52]. Only the axial section of the MRI results, divided into three classes (glioblastoma, astrocytoma, and oligodendroglioma), is used for the data analysis in this method. Basic image processing techniques were used to identify the tumor region, including image enhancement, binarization, morphology, and watershed. Following shape-feature extraction from the segmented region, the KNN classifier was used to classify tumors; 89.5% of tumors were correctly classified.
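The KNN classification step reduces to a majority vote among the nearest training feature vectors. The following is a generic NumPy sketch assuming Euclidean distance over shape features; the toy feature values are illustrative, not those of [52].

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test vector by majority vote among its k nearest neighbours."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]              # indices of the k closest samples
    return np.array([np.bincount(y_train[row]).argmax() for row in nearest])

# Toy shape features (say, area and eccentricity) for two tumor classes
X_train = np.array([[1.0, 0.1], [1.1, 0.2], [5.0, 0.9], [5.2, 0.8]])
y_train = np.array([0, 0, 1, 1])
pred = knn_predict(X_train, y_train, np.array([[1.05, 0.15], [5.1, 0.85]]), k=3)
```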
Gurbină et al. [30] described integrated DWT and SVM classification methods. The initial segmentation of the brain MRI images was performed using Otsu's method. DWT features were obtained from the segmented images to identify and categorize tumors, and an SVM classifier divided the brain MRI images into benign and malignant categories. This SVM method has a 99% accuracy rate.
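Otsu's method, used here for the initial segmentation, selects the intensity threshold that maximizes between-class variance of the histogram. A compact NumPy sketch follows; the synthetic bimodal intensities are illustrative only.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Return the threshold maximizing between-class variance of the histogram."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                       # cumulative weight of the "dark" class
    mu = np.cumsum(p * centers)             # cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(between)]

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.2, 0.02, 500),    # background intensities
                      rng.normal(0.8, 0.02, 500)])   # tumor intensities
t = otsu_threshold(img)                              # lands between the two modes
```

Pixels above `t` would form the candidate tumor mask passed on to feature extraction.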
The objective of the study in [117] is multilevel segmentation for effective feature extraction and brain tumor classification from MRI data. The authors used thresholding, the watershed algorithm, and morphological methods for segmentation after preprocessing the MRI image data. Features are extracted through a CNN, and an SVM classified the tumor images as cancerous or noncancerous. The proposed algorithm has an overall accuracy of 87.4%.
The classification of brain tumors into three types—glioblastoma, sarcoma, and
metastatic—has been proposed by the authors of [118]. The authors first used FCM cluster-
ing to segment the brain tumor and then DWT to extract the features. PCA was then used
to minimize the characteristics. Using six layers of DNN, categorization was completed.
The suggested method displays 98% accuracy.
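The PCA step in [118] amounts to projecting the DWT feature vectors onto the leading principal components before classification. A minimal SVD-based sketch (generic, not the authors' implementation):

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X (samples x features) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                          # center the features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]                     # reduced data and the components

rng = np.random.default_rng(0)
# Toy feature matrix: nearly all variance lies along the first axis
X = np.column_stack([np.linspace(0.0, 10.0, 60),
                     0.01 * rng.standard_normal(60)])
Z, components = pca_reduce(X, k=1)
```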
The method presented by Babu et al. [119] focused on categorizing and segmenting
brain cancers from MRI images. Four processes compose the procedure: image denoising,
segmentation of tumor, extracting features, and hybrid classification. They used the wavelet-
based method to extract features after employing the thresholding process to remove tumors
from brain MRI images. The final hybrid categorization was performed using CNN. The
experiment’s findings showed that the approach had a segmentation accuracy of 95.23%,
but the suggested optimized CNN had a classification accuracy of 99%.
An improved SVM was suggested as a novel algorithm by Ansari [120]. They recommended four steps for identifying and classifying brain tumors using MRI data: preprocessing, image segmentation, feature extraction, and image categorization. Tumors were segmented using a fuzzy clustering approach, and key features were extracted using GLCM. The improved SVM was finally used in the classification stage. The suggested approach has an 88% accuracy rate.
A fully automated system for segmenting and diagnosing brain tumors was proposed
by Farajzadeh et al. [121]. This is accomplished by first applying five distinct preprocessing
techniques to an MR image, passing the images through a DWT, and then extracting six
local attributes from the image. The processed images are then delivered to an NN, which
subsequently extracts higher-order attributes from them. Another NN then weighs the
features and concatenates them with the initial MR image. The hybrid U-Net is then fed with
the concatenated data to segment the tumor and classify the image. For segmenting and
categorizing brain tumors, they attained accuracy rates of 98.93% and 98.81%, respectively.

Table 8. Hybrid techniques.

Ref.    Year   Segmentation Method          Feature Extraction      Classifier      Accuracy
[115]   2017   FCM                          shape and statistical   SVM and ANN     97.44% and 97.37%
[118]   2017   FCM                          DWT and PCA             CNN             98.00%
[52]    2019   watershed                    shape                   KNN             89.50%
[30]    2019   Otsu's                       DWT                     SVM             99.00%
[117]   2020   thresholding and watershed   CNN                     SVM             87.40%
[116]   2020   canny                        GLCM and Gabor          ANN             98.90%
[119]   2023   thresholding                 wavelet                 CNN             99.00%
[120]   2023   fuzzy clustering             GLCM                    Improved SVM    88.00%
[121]   2023   U-Net                        DWT                     CNN             98.93%

5.3.5. Various Segmentation and Classification Methods Employing CT Images


Wavelet statistical texture features (WST) and wavelet co-occurrence texture features
(WCT) were combined to segment brain tumors in CT images [122] automatically. After
utilizing GA to choose the best texture features, two different NN classifiers were tested to
segment the region of a tumor. This approach is shown to provide good outcomes with an
accuracy rate of above 97%. The architecture of the NN is shown in Figure 14.
For the segmentation and classification of cancers in brain CT images utilizing SVM
with GA feature selection, a novel dominating feature extraction methodology was pre-
sented in [123]. They used FCM and K-means during the segmentation step and GLCM
and WCT during the feature extraction stage. This approach is shown to provide positive
results with an accuracy rate of above 98%.
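GLCM texture features such as contrast and energy, as used in [123], are derived from a normalized co-occurrence matrix of quantized grey levels. A small illustrative sketch for a single pixel offset (the quantization scheme and toy image are assumptions for demonstration, not the paper's exact setup):

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for one pixel offset (dx, dy)."""
    q = np.minimum((img * levels).astype(int), levels - 1)   # quantize to grey levels
    M = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            M[q[i, j], q[i + dy, j + dx]] += 1               # count level pairs
    return M / M.sum()

# Toy image with a vertical edge; two common GLCM texture features
img = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
P = glcm(img, levels=2)
contrast = sum(P[a, b] * (a - b) ** 2 for a in range(2) for b in range(2))
energy = (P ** 2).sum()
```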
An improved semantic segmentation model for CT images was suggested in [124].
Additionally, classification is used in the suggested work. In the suggested architecture, the
semantic segmentation network, which has several convolutional layers and pooling layers,
was used to first segment the brain image. Then, using the GoogleNet model, the tumor
was divided into three groups: meningioma, glioma, and pituitary tumor. The overall
accuracy achieved with this strategy was 99.6%.
Figure 14. Architecture of NN.

A unique correlation learning mechanism (CLM) utilizing CNN and ANN was proposed by Woźniak et al. [125]. The CNN used a support neural network to determine the best filters for the convolution and pooling layers. Consequently, the main neural classifier became more efficient and learned more quickly. Results indicated that the CLM model can achieve 96% accuracy, 95% precision, and 95% recall.
The contribution of image fusion to an enhanced brain tumor classification framework
was examined by Nanmaran et al. [126]; this new fusion-based tumor categorization model can be more successfully applied to personalized therapy. A discrete cosine transform (DCT)-based fusion technique is utilized to combine MRI and SPECT images of
benign and malignant class brain tumors. With the help of the features extracted from fused
images, SVM, KNN, and decision trees were set to test. When using features extracted from
fused images, the SVM classifier outperformed KNN and decision tree classifiers with an
overall accuracy of 96.8%, specificity of 93%, recall of 94%, precision of 95%, and F1 score
of 91%. Table 9 provides different segmentation and classification methods employing
CT images.

Table 9. Various segmentation and classification methods employing CT images.

Ref.    Year   Type        Segmentation     Feature Extraction   Feature Selection   Classification   Result
[122]   2011   CT          NN               WCT and WST          GA                  -                97.00%
[123]   2011   CT          FCM and k-means  GLCM and WCT         GA                  SVM              98.00%
[124]   2020   CT          Semantic         -                    -                   GoogleNet        99.60%
[125]   2021   CT          -                -                    -                   CNN              96.00%
[126]   2022   SPECT/MRI   -                DCT                  -                   SVM              96.80%

6. Discussion
Most brain tumor segmentation and classification strategies are presented in this
review. The quantitative efficiency of numerous conventional ML- and DL-based algorithms
is covered in this article. Figure 15 displays the total number of publications published
between 2010 and 2022 used in this review. Figure 16 displays the total number of articles
published that perform classification, segmentation, or both.
Figure 15. Number of articles published from 2010 to 2022.

Figure 16. Number of articles published that perform classification, segmentation, or both.
Brain tumor segmentation uses traditional image segmentation methods like region growth and unsupervised machine learning. Noise, low image quality, and the initial seed point are its biggest challenges. The classification of pixels into multiple classes has been accomplished in the second generation of segmentation methods using unsupervised ML, such as FCM and K-means. These techniques are, nevertheless, quite noise sensitive. Pixel-level classification-based segmentation approaches utilizing conventional supervised ML have been presented to overcome this difficulty. Feature engineering, which extracts the tumor-descriptive pieces of information for the model's training, is frequently used in conjunction with these techniques. Additionally, postprocessing helps further improve the results of supervised machine learning segmentation. Through the pipeline of its component parts, the deep learning-based approach accomplishes an end-to-end segmentation of tumors using an MRI image. These models frequently eliminate the requirement for manually built features by automatically extracting tumor-descriptive information. However, their application in the medical domain is limited by the need for a big dataset for training the models and the complexity of understanding them.
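The K-means variant mentioned above can be sketched in a few lines on one-dimensional intensities (illustrative only); because each center is a plain mean of its assigned pixels, outlying noisy pixels pull the centers, which is the noise sensitivity noted here.

```python
import numpy as np

def kmeans_1d(x, k=2, n_iter=25, seed=1):
    """Plain K-means on 1-D intensities: alternate assignment and mean update."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)   # initialize from data points
    labels = np.zeros(x.size, dtype=int)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):                  # guard against empty clusters
                centers[j] = x[labels == j].mean()
    return centers, labels

x = np.array([0.10, 0.11, 0.09, 0.90, 0.91, 0.89])
centers, labels = kmeans_1d(x)
```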
In addition to the segmentation of the brain cancer region from the MRI scan, the
classification of the tumor into its appropriate type is crucial for diagnosis and treatment
planning, which in today’s medical practice necessitates a biopsy process. Several ap-
proaches that use shallow ML and DL have been put forth for classifying brain tumors.
Shallow ML techniques frequently include preprocessing, ROI identification, and
feature extraction steps. Extracting descriptive information is a difficult task because of the
inherent noise sensitivity associated with MRI image collection as well as differences in the
shape, size, and position of tumor tissue cells. As a result, deep learning algorithms are
currently the most advanced method for classifying many types of brain cancers, includ-
ing astrocytomas, gliomas, meningiomas, and pituitary tumors. This review has covered
several classifications of brain tumors.
The noisy nature of an MRI image is one of the most frequent difficulties in ML-
based segmentation and classification of brain tumors. To increase the precision of brain
tumor segmentation and classification models, noise estimation and denoising MRI images
is a vital preprocessing operation. As a result, several methods, including the median
filter [115], Wiener filter and DWT [30], and DL-based methods [117], have been suggested
for denoising MRI images.
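Of these, the median filter is the simplest to illustrate: it removes impulsive (salt-and-pepper) noise while preserving edges. A direct, unoptimized sketch:

```python
import numpy as np

def median_filter(img, size=3):
    """Replace each pixel by the median of its size x size neighbourhood."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")           # replicate border pixels
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

noisy = np.zeros((5, 5))
noisy[2, 2] = 1.0                                    # a single salt-noise pixel
clean = median_filter(noisy)                         # the impulse is removed
```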
Large amounts of data are needed for DL models to operate effectively, but few datasets are available. Data augmentation aids in expanding small datasets and creating a powerful generalized model. A common augmentation method for MRI images has yet to be developed. Although many methods have been presented by researchers, their primary goal is to increase the number of images; most of the time, they ignore the connections between space and texture. A standardized augmentation technique is required so that comparative analyses can be conducted on a common foundation.
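A minimal sketch of the kind of geometric augmentation discussed (random flips and right-angle rotations, which leave intensity statistics untouched; purely illustrative):

```python
import numpy as np

def augment(img, rng):
    """Random horizontal flip plus a random rotation by a multiple of 90 degrees."""
    if rng.random() < 0.5:
        img = np.fliplr(img)
    return np.rot90(img, k=int(rng.integers(0, 4)))

rng = np.random.default_rng(0)
img = np.arange(16.0).reshape(4, 4)   # toy 4 x 4 "slice"
aug = augment(img, rng)               # same pixels, new orientation
```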

7. General Problems and Challenges


Features are first manually extracted for ML and then fed into the ML-based differentiation system. Continuous variation within image classes makes utilizing ML-based algorithms for image classification challenging. Furthermore, the distance metrics used by the feature extraction methods make it difficult to reliably determine the similarity between two images.
Deep learning analyzes several parameters and optimizes them to extract and select features on its own. However, the system lacks intelligence in feature selection and typically applies pooling, which reduces the number of parameters but may eliminate features that could be useful to the entire system.
Furthermore, DL models need large amounts of data and are coupled with millions or even trillions of parameters. Enormous amounts of memory and GPU-based computers are therefore required in the current environment. However, because of their high cost, these devices are not available to everyone. Consequently, many researchers must create models that fit within their available budgets, which significantly impacts the quality of their studies.
The noisy nature of an MRI image is one of the most frequent difficulties in ML-based
brain tumor detection and classification. Preprocessing is necessary to remove all forms of
noise from data and make it more suitable for the task at hand. Preprocessing difficulties
exist in all the available datasets; the BraTS datasets, for example, have problems such as motion artifacts and noise. There is currently no established preprocessing standard, and subpar application software can cause image quality to decrease rather than improve.

7.1. Brain Cancer and Other Brain Disorders


7.1.1. Stroke
Hemorrhagic strokes result from blood vessel injury or aberrant vascular structure, while ischemic strokes occur when the brain's blood supply is cut off. Although strokes and brain tumors are two distinct illnesses, the connections between them have been studied [127].

They discovered that stroke patients are more likely than other cancer types to acquire
brain cancer. Another intriguing conclusion of the study is that women between the ages
of 40 and 60 and elderly stroke patients are more likely to acquire brain cancer.

7.1.2. Alzheimer’s Disease


Short-term loss of memory is an initial symptom of Alzheimer’s disease (AD), a chronic
neurodegenerative illness that may become worse over time as the disease progresses [108].
Despite AD and cancer being two distinct diseases, several studies have found a connection
between them. According to the research, there is an inverse association between cancer
and Alzheimer’s disease. They discovered that patients who had cancer had a 33% lower
risk of Alzheimer’s disease than individuals who had not had cancer throughout the course
of a mean follow-up of 10 years. Another intriguing finding of the study was that people
with AD had a 61% lower risk of developing cancer.

8. Future Directions
The main applications of CADx systems are in educating and training; clinical practice
is not one of them. CADx-based systems have yet to be widely adopted in clinics. The
absence of established techniques for assessing CADx systems in a practical environment
is one cause of this. The performance metrics outlined in this study provide a helpful and
necessary baseline for comparing algorithms, but because they are all so dependent on the
training set, more advanced tools are required.
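For reference, the baseline performance metrics mentioned here reduce to confusion-matrix counts. A dependency-free sketch for the binary case:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 score from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if (tp + fp) else 0.0
    rec = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
    return acc, prec, rec, f1

# One true positive missed out of two: accuracy 0.75, precision 1.0, recall 0.5
acc, prec, rec, f1 = binary_metrics([1, 1, 0, 0], [1, 0, 0, 0])
```

Because all four quantities depend on the composition of the test set, comparisons across papers are only meaningful when the evaluation data are comparable, which is the point made above.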
It is noteworthy that the image formats utilized to train the models were those characteristic of the AI research field (PNG) rather than those of the radiology field (DICOM, NIfTI). Many of the articles analyzed lacked authors with clinical backgrounds.
A different but related technical issue that may affect the performance of CADx
systems in practice is the need for physician training on interacting with and interpreting
the results of such systems for diagnostic decisions; this issue remains to be addressed in the papers included in the review. In terms of research project relevance and the acceptance of findings, greater participation by doctors in the process would be advantageous.

9. Conclusions
A brain tumor is an abnormal growth of brain tissue that affects the brain’s ability
to function normally. The primary objective in medical image processing is to find ac-
curate and helpful information with the minimum possible errors by using algorithms.
The four steps involved in segmenting and categorizing brain tumors using MRI data are
preprocessing, image segmentation, feature extraction, and image classification. The
diagnosis, treatment strategy, and patient follow-up can all be greatly enhanced by au-
tomating the segmentation and categorization of brain tumors. It is still difficult to create
a fully autonomous system that can be deployed on clinical floors due to the appearance
of the tumor and its irregular size, form, and nature. The review’s primary goal is to
present the state-of-the-art in the field of brain cancer, which includes the pathophysiology
of the disease, imaging technologies, WHO classification standards for tumors, primary
methods of diagnosis, and CAD algorithms for brain tumor classifications using ML and
DL techniques. Automating the segmentation and categorization of brain tumors using
deep learning techniques has many advantages over region-growing and shallow ML
systems. DL algorithms' powerful feature learning capabilities are primarily responsible for this. Although DL techniques have made a substantial contribution, a general technique
is still needed. This study reviewed 53 studies that used ML and DL to classify brain
tumors based on MRI, and it examined the challenges and obstacles that CAD brain tumor
classification techniques now face in practical application and advancement, including a thorough examination of the variables that might impact classification accuracy. The MRI sequences
and web address of the online repository for the dataset are among the publicly available
databases that have been briefly listed in Table 4 and used in the experiments evaluated in
this paper.

Funding: This research received no external funding.


Institutional Review Board Statement: Not applicable.
Conflicts of Interest: The author declares no conflict of interest.

References
1. Watson, C.; Kirkcaldie, M.; Paxinos, G. The Brain: An Introduction to Functional Neuroanatomy. 2010. Available online:
http://ci.nii.ac.jp/ncid/BB04049625 (accessed on 22 May 2023).
2. Jellinger, K.A. The Human Nervous System Structure and Function, 6th edn. Eur. J. Neurol. 2009, 16, e136. [CrossRef]
3. DeAngelis, L.M. Brain tumors. N. Engl. J. Med. 2001, 344, 114–123. [CrossRef]
4. Louis, D.N.; Perry, A.; Wesseling, P.; Brat, D.J.; Cree, I.A.; Figarella-Branger, D.; Hawkins, C.; Ng, H.K.; Pfister, S.M.; Reifenberger,
G.; et al. The 2021 WHO Classification of Tumors of the Central Nervous System: A summary. Neuro-Oncology 2021, 23, 1231–1251.
[CrossRef]
5. Hayward, R.M.; Patronas, N.; Baker, E.H.; Vézina, G.; Albert, P.S.; Warren, K.E. Inter-observer variability in the measurement of
diffuse intrinsic pontine gliomas. J. Neuro-Oncol. 2008, 90, 57–61. [CrossRef]
6. Mahaley, M.S., Jr.; Mettlin, C.; Natarajan, N.; Laws, E.R., Jr.; Peace, B.B. National survey of patterns of care for brain-tumor
patients. J. Neurosurg. 1989, 71, 826–836. [CrossRef] [PubMed]
7. Sultan, H.H.; Salem, N.M.; Al-Atabany, W. Multi-Classification of Brain Tumor Images Using Deep Neural Network. IEEE Access
2019, 7, 69215–69225. [CrossRef]
8. Johnson, D.R.; Guerin, J.B.; Giannini, C.; Morris, J.M.; Eckel, L.J.; Kaufmann, T.J. 2016 Updates to the WHO Brain Tumor
Classification System: What the Radiologist Needs to Know. RadioGraphics 2017, 37, 2164–2180. [CrossRef] [PubMed]
9. Buckner, J.C.; Brown, P.D.; O’Neill, B.P.; Meyer, F.B.; Wetmore, C.J.; Uhm, J.H. Central Nervous System Tumors. Mayo Clin. Proc.
2007, 82, 1271–1286. [CrossRef] [PubMed]
10. World Health Organization: WHO, “Cancer”. July 2019. Available online: https://www.who.int/health-topics/cancer
(accessed on 30 March 2022).
11. Amyot, F.; Arciniegas, D.B.; Brazaitis, M.P.; Curley, K.C.; Diaz-Arrastia, R.; Gandjbakhche, A.; Herscovitch, P.; Hinds, S.R.; Manley,
G.T.; Pacifico, A.; et al. A Review of the Effectiveness of Neuroimaging Modalities for the Detection of Traumatic Brain Injury. J.
Neurotrauma 2015, 32, 1693–1721. [CrossRef]
12. Pope, W.B. Brain metastases: Neuroimaging. Handb. Clin. Neurol. 2018, 149, 89–112. [CrossRef]
13. Abd-Ellah, M.K.; Awad, A.I.; Khalaf, A.A.; Hamed, H.F. A review on brain tumor diagnosis from MRI images: Practical
implications, key achievements, and lessons learned. Magn. Reson. Imaging 2019, 61, 300–318. [CrossRef] [PubMed]
14. Ammari, S.; Pitre-Champagnat, S.; Dercle, L.; Chouzenoux, E.; Moalla, S.; Reuze, S.; Talbot, H.; Mokoyoko, T.; Hadchiti, J.;
Diffetocq, S.; et al. Influence of Magnetic Field Strength on Magnetic Resonance Imaging Radiomics Features in Brain Imaging, an
In Vitro and In Vivo Study. Front. Oncol. 2021, 10, 541663. [CrossRef] [PubMed]
15. Sahoo, L.; Sarangi, L.; Dash, B.R.; Palo, H.K. Detection and Classification of Brain Tumor Using Magnetic Resonance Images.
In Advances in Electrical Control and Signal Systems: Select Proceedings of AECSS, Bhubaneswar, India, 8–9 November 2019; Springer:
Singapore, 2020; Volume 665, pp. 429–441. [CrossRef]
16. Kaur, R.; Doegar, A. Localization and Classification of Brain Tumor using Machine Learning & Deep Learning Techniques. Int. J.
Innov. Technol. Explor. Eng. 2019, 8, 59–66.
17. The Radiology Assistant: Multiple Sclerosis 2.0. 1 December 2021. Available online: https://radiologyassistant.nl/
neuroradiology/multiple-sclerosis/diagnosis-and-differential-diagnosis-3#mri-protocol-ms-brain-protocol
(accessed on 22 May 2023).
18. Savoy, R.L. Functional magnetic resonance imaging (fMRI). In Encyclopedia of Neuroscience; Elsevier: Charlestown, MA, USA, 1999.
19. Luo, Q.; Li, Y.; Luo, L.; Diao, W. Comparisons of the accuracy of radiation diagnostic modalities in brain tumor. Medicine 2018,
97, e11256. [CrossRef]
20. Positron Emission Tomography (PET). Johns Hopkins Medicine. 20 August 2021. Available online: https://www.
hopkinsmedicine.org/health/treatment-tests-and-therapies/positron-emission-tomography-pet (accessed on 20 May 2023).
21. Mayfield Brain and Spine. SPECT Scan. 2022. Available online: https://mayfieldclinic.com/pe-spect.htm (accessed on
22 May 2023).
22. Sastry, R.; Bi, W.L.; Pieper, S.; Frisken, S.; Kapur, T.; Wells, W.; Golby, A.J. Applications of Ultrasound in the Resection of Brain
Tumors. J. Neuroimaging 2016, 27, 5–15. [CrossRef]
23. Nasrabadi, N.M. Pattern recognition and machine learning. J. Electron. Imaging 2007, 16, 49901.
24. Erickson, B.J.; Korfiatis, P.; Akkus, Z.; Kline, T.L. Machine learning for medical imaging. Radiographics 2017, 37, 505–515. [CrossRef]
25. Mohan, M.R.M.; Sulochana, C.H.; Latha, T. Medical image denoising using multistage directional median filter. In Proceed-
ings of the 2015 International Conference on Circuits, Power and Computing Technologies [ICCPCT-2015], Nagercoil, India,
9–20 March 2015.
26. Borole, V.Y.; Nimbhore, S.S.; Kawthekar, S.S. Image processing techniques for brain tumor detection: A review. Int. J. Emerg.
Trends Technol. Comput. Sci. (IJETTCS) 2015, 4, 2.

27. Ziedan, R.H.; Mead, M.A.; Eltawel, G.S. Selecting the Appropriate Feature Extraction Techniques for Automatic Medical Images
Classification. Int. J. 2016, 4, 1–9.
28. Amin, J.; Sharif, M.; Yasmin, M.; Fernandes, S.L. A distinctive approach in brain tumor detection and classification using MRI.
Pattern Recognit. Lett. 2017, 139, 118–127. [CrossRef]
29. Islam, A.; Reza, S.M.; Iftekharuddin, K.M. Multifractal texture estimation for detection and segmentation of brain tumors. IEEE
Trans. Biomed. Eng. 2013, 60, 3204–3215. [CrossRef]
30. Gurbină, M.; Lascu, M.; Lascu, D. Tumor detection and classification of MRI brain image using different wavelet transforms
and support vector machines. In Proceedings of the 2019 42nd International Conference on Telecommunications and Signal
Processing (TSP), Budapest, Hungary, 1–3 July 2019; pp. 505–508.
31. Xu, X.; Zhang, X.; Tian, Q.; Zhang, G.; Liu, Y.; Cui, G.; Meng, J.; Wu, Y.; Liu, T.; Yang, Z.; et al. Three-dimensional texture features
from intensity and high-order derivative maps for the discrimination between bladder tumors and wall tissues via MRI. Int. J.
Comput. Assist. Radiol. Surg. 2017, 12, 645–656. [CrossRef]
32. Kaplan, K.; Kaya, Y.; Kuncan, M.; Ertunç, H.M. Brain tumor classification using modified local binary patterns (LBP) feature
extraction methods. Med. Hypotheses 2020, 139, 109696. [CrossRef]
33. Afza, F.; Khan, M.S.; Sharif, M.; Saba, T. Microscopic skin laceration segmentation and classification: A framework of statistical
normal distribution and optimal feature selection. Microsc. Res. Tech. 2019, 82, 1471–1488. [CrossRef]
34. Lakshmi, A.; Arivoli, T.; Rajasekaran, M.P. A Novel M-ACA-Based Tumor Segmentation and DAPP Feature Extraction with
PPCSO-PKC-Based MRI Classification. Arab. J. Sci. Eng. 2017, 43, 7095–7111. [CrossRef]
35. Adair, J.; Brownlee, A.; Ochoa, G. Evolutionary Algorithms with Linkage Information for Feature Selection in Brain Computer
Interfaces. In Advances in Computational Intelligence Systems; Springer Nature: Cham, Switzerland, 2016; pp. 287–307.
36. Arakeri, M.P.; Reddy, G.R.M. Computeraided diagnosis system for tissue characterization of brain tumor on magnetic resonance
images. Signal Image Video Process. 2015, 9, 409–425. [CrossRef]
37. Wang, S.; Zhang, Y.; Dong, Z.; Du, S.; Ji, G.; Yan, J.; Phillips, P. Feed-forward neural network optimized by hybridization of PSO
and ABC for abnormal brain detection. Int. J. Imaging Syst. Technol. 2015, 25, 153–164. [CrossRef]
38. Abbasi, S.; Tajeripour, F. Detection of brain tumor in 3D MRI images using local binary patterns and histogram orientation
gradient. Neurocomputing 2017, 219, 526–535. [CrossRef]
39. Zöllner, F.G.; Emblem, K.E.; Schad, L.R. SVM-based glioma grading: Optimization by feature reduction analysis. Z. Med. Phys.
2012, 22, 205–214. [CrossRef]
40. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501.
[CrossRef]
41. Bhatele, K.R.; Bhadauria, S.S. Brain structural disorders detection and classification approaches: A review. Artif. Intell. Rev. 2019,
53, 3349–3401. [CrossRef]
42. Schmidhuber, J. Deep Learning in Neural Networks: An Overview. Neural Netw. 2015, 61, 85–117. [CrossRef]
43. Hu, A.; Razmjooy, N. Brain tumor diagnosis based on metaheuristics and deep learning. Int. J. Imaging Syst. Technol. 2020, 31,
657–669. [CrossRef]
44. Tandel, G.S.; Balestrieri, A.; Jujaray, T.; Khanna, N.N.; Saba, L.; Suri, J.S. Multiclass magnetic resonance imaging brain tumor
classification using artificial intelligence paradigm. Comput. Biol. Med. 2020, 122, 103804. [CrossRef] [PubMed]
45. Sahaai, M.B. Brain tumor detection using DNN algorithm. Turk. J. Comput. Math. Educ. (TURCOMAT) 2021, 12, 3338–3345.
46. Hashemi, M. Enlarging smaller images before inputting into convolutional neural network: Zero-padding vs. interpolation. J. Big
Data 2019, 6, 98. [CrossRef]
47. Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep learning for healthcare: Review, opportunities and challenges. Briefings
Bioinform. 2017, 19, 1236–1246. [CrossRef]
48. Gorach, T. Deep convolutional neural networks—A review. Int. Res. J. Eng. Technol. (IRJET) 2018, 5, 439.
49. Ogundokun, R.O.; Maskeliunas, R.; Misra, S.; Damaševičius, R. Improved CNN Based on Batch Normalization and Adam
Optimizer. In Proceedings of the Computational Science and Its Applications–ICCSA 2022 Workshops, Malaga, Spain, 4–7 July
2022; Part V. pp. 593–604.
50. Ismael, S.A.A.; Mohammed, A.; Hefny, H. An enhanced deep learning approach for brain cancer MRI images classification using
residual networks. Artif. Intell. Med. 2020, 102, 101779. [CrossRef]
51. Baheti, P. A Comprehensive Guide to Convolutional Neural Networks. V7. Available online: https://www.v7labs.com/blog/
convolutional-neural-networks-guide (accessed on 24 April 2023).
52. Ramdlon, R.H.; Kusumaningtyas, E.M.; Karlita, T. Brain Tumor Classification Using MRI Images with K-Nearest Neighbor
Method. In Proceedings of the 2019 International Electronics Symposium (IES), Surabaya, Indonesia, 27–28 September 2019;
pp. 660–667. [CrossRef]
53. Gurusamy, R.; Subramaniam, V. A machine learning approach for MRI brain tumor classification. Comput. Mater. Contin. 2017, 53,
91–109.
54. Pohle, R.; Toennies, K.D. Segmentation of medical images using adaptive region growing. In Proceedings of the Medical Imaging
2001: Image Processing, San Diego, CA, USA, 4–10 November 2001; Volume 4322, pp. 1337–1346. [CrossRef]
55. Dey, N.; Ashour, A.S. Computing in medical image analysis. In Soft Computing Based Medical Image Analysis; Academic Press:
Cambridge, MA, USA, 2018; pp. 3–11.
Diagnostics 2023, 13, 3007 30 of 32
56. Hooda, H.; Verma, O.P.; Singhal, T. Brain tumor segmentation: A performance analysis using K-Means, Fuzzy C-Means and
Region growing algorithm. In Proceedings of the 2014 IEEE International Conference on Advanced Communications, Control
and Computing Technologies, Ramanathapuram, India, 8–10 May 2014; pp. 1621–1626.
57. Sharif, M.; Tanvir, U.; Munir, E.U.; Khan, M.A.; Yasmin, M. Brain tumor segmentation and classification by improved binomial
thresholding and multi-features selection. J. Ambient. Intell. Humaniz. Comput. 2018, 1–20. [CrossRef]
58. Shanthi, K.J.; Kumar, M.S. Skull stripping and automatic segmentation of brain MRI using seed growth and threshold techniques.
In Proceedings of the 2007 International Conference on Intelligent and Advanced Systems, Kuala Lumpur, Malaysia, 25–28
November 2007; pp. 422–426. [CrossRef]
59. Zhang, F.; Hancock, E.R. New Riemannian techniques for directional and tensorial image data. Pattern Recognit. 2010, 43,
1590–1606. [CrossRef]
60. Singh, N.P.; Dixit, S.; Akshaya, A.S.; Khodanpur, B.I. Gradient Magnitude Based Watershed Segmentation for Brain Tumor
Segmentation and Classification. In Advances in Intelligent Systems and Computing; Springer Nature: Cham, Switzerland, 2017;
pp. 611–619. [CrossRef]
61. Couprie, M.; Bertrand, G. Topological gray-scale watershed transformation. Vis. Geom. VI 1997, 3168, 136–146. [CrossRef]
62. Khan, M.S.; Lali, M.I.U.; Saba, T.; Ishaq, M.; Sharif, M.; Zahoor, S.; Akram, T. Brain tumor detection and classification: A
framework of marker-based watershed algorithm and multilevel priority features selection. Microsc. Res. Tech. 2019, 82, 909–922.
[CrossRef]
63. Lotufo, R.; Falcao, A.; Zampirolli, F. IFT-Watershed from gray-scale marker. In Proceedings of the XV Brazilian Symposium on
Computer Graphics and Image Processing, Fortaleza, Brazil, 10 October 2003. [CrossRef]
64. Dougherty, E.R. An Introduction to Morphological Image Processing; SPIE Optical Engineering Press: Bellingham, WA, USA, 1992.
65. Kaur, D.; Kaur, Y. Various image segmentation techniques: A review. Int. J. Comput. Sci. Mob. Comput. 2014, 3, 809–814.
66. Aslam, A.; Khan, E.; Beg, M.S. Improved Edge Detection Algorithm for Brain Tumor Segmentation. Procedia Comput. Sci. 2015, 58,
430–437. [CrossRef]
67. Egmont-Petersen, M.; de Ridder, D.; Handels, H. Image processing with neural networks—A review. Pattern Recognit. 2002, 35,
2279–2301. [CrossRef]
68. Cui, B.; Xie, M.; Wang, C. A Deep Convolutional Neural Network Learning Transfer to SVM-Based Segmentation Method for
Brain Tumor. In Proceedings of the 2019 IEEE 11th International Conference on Advanced Infocomm Technology (ICAIT), Jinan,
China, 18–20 October 2019; pp. 1–5. [CrossRef]
69. Pereira, S.; Pinto, A.; Alves, V.; Silva, C.A. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images.
IEEE Trans. Med. Imaging 2016, 35, 1240–1251. [CrossRef]
70. Ye, N.; Yu, H.; Chen, Z.; Teng, C.; Liu, P.; Liu, X.; Xiong, Y.; Lin, X.; Li, S.; Li, X. Classification of Gliomas and Germinomas of the
Basal Ganglia by Transfer Learning. Front. Oncol. 2022, 12, 844197. [CrossRef]
71. Biratu, E.S.; Schwenker, F.; Ayano, Y.M.; Debelee, T.G. A survey of brain tumor segmentation and classification algorithms. J.
Imaging 2021, 7, 179. [CrossRef]
72. Wikipedia Contributors. F Score. Wikipedia. 2023. Available online: https://en.wikipedia.org/wiki/F-score (accessed on
22 May 2023).
73. Brain Tumor Segmentation (BraTS) Challenge. Available online: http://www.braintumorsegmentation.org/ (accessed on
22 May 2023).
74. RIDER NEURO MRI—The Cancer Imaging Archive (TCIA) Public Access—Cancer Imaging Archive Wiki. Available online:
https://wiki.cancerimagingarchive.net/display/Public/RIDER+NEURO+MRI (accessed on 22 May 2023).
75. Harvard Medical School Data. Available online: http://www.med.harvard.edu/AANLIB/ (accessed on 16 March 2021).
76. The Cancer Genome Atlas. TCGA. Available online: https://wiki.cancerimagingarchive.net/display/Public/TCGA-GBM
(accessed on 22 May 2023).
77. The Cancer Genome Atlas. TCGA-LGG. Available online: https://wiki.cancerimagingarchive.net/display/Public/TCGA-LGG
(accessed on 22 May 2023).
78. Cheng, J. Figshare Brain Tumor Dataset. 2017. Available online: https://figshare.com/articles/dataset/brain_tumor_dataset/1512427/5
(accessed on 13 May 2022).
79. IXI Dataset—Brain Development. Available online: https://brain-development.org/ixi-dataset/ (accessed on 22 May 2023).
80. Gordillo, N.; Montseny, E.; Sobrevilla, P. A new fuzzy approach to brain tumor segmentation. In Proceedings of the 2010 IEEE
International Conference, Barcelona, Spain, 18–23 July 2010; pp. 1–8. [CrossRef]
81. Rajendran; Dhanasekaran, R. A Hybrid Method Based on Fuzzy Clustering and Active Contour Using GGVF for Brain Tumor
Segmentation on MRI Images. Eur. J. Sci. Res. 2011, 61, 305–313.
82. Reddy, K.K.; Solmaz, B.; Yan, P.; Avgeropoulos, N.G.; Rippe, D.J.; Shah, M. Confidence guided enhancing brain tumor segmenta-
tion in multi-parametric MRI. In Proceedings of the 9th IEEE International Symposium on Biomedical Imaging, Barcelona, Spain,
2–5 May 2012; pp. 366–369. [CrossRef]
83. Almahfud, M.A.; Setyawan, R.; Sari, C.A.; Setiadi, D.R.I.M.; Rachmawanto, E.H. An Effective MRI Brain Image Segmentation
using Joint Clustering (K-Means and Fuzzy C-Means). In Proceedings of the 2018 International Seminar on Research of
Information Technology and Intelligent Systems (ISRITI), Yogyakarta, Indonesia, 21–22 November 2018; pp. 11–16.
84. Chen, W.; Qiao, X.; Liu, B.; Qi, X.; Wang, R.; Wang, X. Automatic brain tumor segmentation based on features of separated local
square. In Proceedings of the 2017 Chinese Automation Congress (CAC), Jinan, China, 20–22 October 2017.
85. Gupta, N.; Mishra, S.; Khanna, P. Glioma identification from brain MRI using superpixels and FCM clustering. In Proceedings of
the 2018 Conference on Information and Communication Technology (CICT), Jabalpur, India, 26–28 October 2018. [CrossRef]
86. Razzak, M.I.; Imran, M.; Xu, G. Efficient Brain Tumor Segmentation with Multiscale Two-Pathway-Group Conventional Neural
Networks. IEEE J. Biomed. Health Inform. 2018, 23, 1911–1919. [CrossRef] [PubMed]
87. Myronenko, A.; Hatamizadeh, A. Robust Semantic Segmentation of Brain Tumor Regions from 3D MRIs. In Proceedings of the
International MICCAI Brainlesion Workshop, Singapore, 18 September 2020; pp. 82–89. [CrossRef]
88. Karayegen, G.; Aksahin, M.F. Brain tumor prediction on MR images with semantic segmentation by using deep learning network
and 3D imaging of tumor region. Biomed. Signal Process. Control. 2021, 66, 102458. [CrossRef]
89. Ullah, Z.; Usman, M.; Jeon, M.; Gwak, J. Cascade multiscale residual attention CNNs with adaptive ROI for automatic brain
tumor segmentation. Inf. Sci. 2022, 608, 1541–1556. [CrossRef]
90. Wisaeng, K.; Sa-Ngiamvibool, W. Brain Tumor Segmentation Using Fuzzy Otsu Threshold Morphological Algorithm. IAENG Int.
J. Appl. Math. 2023, 53, 1–12.
91. Zhang, Y.; Dong, Z.; Wu, L.; Wang, S. A hybrid method for MRI brain image classification. Expert Syst. Appl. 2011, 38, 10049–10053.
[CrossRef]
92. Yang, G.; Zhang, Y.; Yang, J.; Ji, G.; Dong, Z.; Wang, S.; Feng, C.; Wang, Q. Automated classification of brain images using
wavelet-energy and biogeography-based optimization. Multimed. Tools Appl. 2015, 75, 15601–15617. [CrossRef]
93. Tiwari, P.; Sachdeva, J.; Ahuja, C.K.; Khandelwal, N. Computer Aided Diagnosis System—A Decision Support System for Clinical
Diagnosis of Brain Tumours. Int. J. Comput. Intell. Syst. 2017, 10, 104–119. [CrossRef]
94. Sachdeva, J.; Kumar, V.; Gupta, I.; Khandelwal, N.; Ahuja, C.K. Segmentation, Feature Extraction, and Multiclass Brain Tumor
Classification. J. Digit. Imaging 2013, 26, 1141–1150. [CrossRef]
95. Jayachandran, A.; Dhanasekaran, R. Severity Analysis of Brain Tumor in MRI Images Using Modified Multitexton Structure
Descriptor and Kernel-SVM. Arab. J. Sci. Eng. 2014, 39, 7073–7086. [CrossRef]
96. El-Dahshan, E.-S.A.; Hosny, T.; Salem, A.-B.M. Hybrid intelligent techniques for MRI brain images classification. Digit. Signal
Process. 2010, 20, 433–441. [CrossRef]
97. Ullah, Z.; Farooq, M.U.; Lee, S.-H.; An, D. A hybrid image enhancement based brain MRI images classification technique. Med.
Hypotheses 2020, 143, 109922. [CrossRef] [PubMed]
98. Kang, J.; Ullah, Z.; Gwak, J. MRI-Based Brain Tumor Classification Using Ensemble of Deep Features and Machine Learning
Classifiers. Sensors 2021, 21, 2222. [CrossRef]
99. Díaz-Pernas, F.; Martínez-Zarzuela, M.; Antón-Rodríguez, M.; González-Ortega, D. A Deep Learning Approach for Brain Tumor
Classification and Segmentation Using a Multiscale Convolutional Neural Network. Healthcare 2021, 9, 153. [CrossRef] [PubMed]
100. Badža, M.M.; Barjaktarović, M. Classification of Brain Tumors from MRI Images Using a Convolutional Neural Network. Appl.
Sci. 2020, 10, 1999. [CrossRef]
101. Ertosun, M.G.; Rubin, D.L. Automated Grading of Gliomas using Deep Learning in Digital Pathology Images: A modular
approach with ensemble of convolutional neural networks. In Proceedings of the AMIA Annual Symposium, San Francisco, CA,
USA, 14–18 November 2015; Volume 2015, pp. 1899–1908.
102. Khan, H.A.; Jue, W.; Mushtaq, M.; Mushtaq, M.U. Brain tumor classification in MRI image using convolutional neural network.
Math. Biosci. Eng. 2020, 17, 6203–6216. [CrossRef]
103. Özcan, H.; Emiroğlu, B.G.; Sabuncuoğlu, H.; Özdoğan, S.; Soyer, A.; Saygı, T. A comparative study for glioma classification using
deep convolutional neural networks. Math. Biosci. Eng. MBE 2021, 18, 1550–1572. [CrossRef]
104. Hao, R.; Namdar, K.; Liu, L.; Khalvati, F. A Transfer Learning–Based Active Learning Framework for Brain Tumor Classification.
Front. Artif. Intell. 2021, 4, 635766. [CrossRef]
105. Yang, Y.; Yan, L.-F.; Zhang, X.; Han, Y.; Nan, H.-Y.; Hu, Y.-C.; Hu, B.; Yan, S.-L.; Zhang, J.; Cheng, D.-L.; et al. Glioma Grading on
Conventional MR Images: A Deep Learning Study with Transfer Learning. Front. Neurosci. 2018, 12, 804. [CrossRef]
106. El Hamdaoui, H.; Benfares, A.; Boujraf, S.; Chaoui, N.E.H.; Alami, B.; Maaroufi, M.; Qjidaa, H. High precision brain tumor
classification model based on deep transfer learning and stacking concepts. Indones. J. Electr. Eng. Comput. Sci. 2021, 24, 167–177.
[CrossRef]
107. Khazaee, Z.; Langarizadeh, M.; Ahmadabadi, M.E.S. Developing an Artificial Intelligence Model for Tumor Grading and
Classification, Based on MRI Sequences of Human Brain Gliomas. Int. J. Cancer Manag. 2022, 15, e120638. [CrossRef]
108. Amou, M.A.; Xia, K.; Kamhi, S.; Mouhafid, M. A Novel MRI Diagnosis Method for Brain Tumor Classification Based on CNN
and Bayesian Optimization. Healthcare 2022, 10, 494. [CrossRef] [PubMed]
109. Alanazi, M.; Ali, M.; Hussain, J.; Zafar, A.; Mohatram, M.; Irfan, M.; AlRuwaili, R.; Alruwaili, M.; Ali, N.T.; Albarrak, A.M.
Brain Tumor/Mass Classification Framework Using Magnetic-Resonance-Imaging-Based Isolated and Developed Transfer
Deep-Learning Model. Sensors 2022, 22, 372. [CrossRef] [PubMed]
110. Rizwan, M.; Shabbir, A.; Javed, A.R.; Shabbr, M.; Baker, T.; Al-Jumeily, D. Brain Tumor and Glioma Grade Classification Using
Gaussian Convolutional Neural Network. IEEE Access 2022, 10, 29731–29740. [CrossRef]
111. Isunuri, B.V.; Kakarla, J. Three-class brain tumor classification from magnetic resonance images using separable convolution
based neural network. Concurr. Comput. Pract. Exp. 2021, 34, e6541. [CrossRef]
112. Kaur, T.; Gandhi, T.K. Deep convolutional neural networks with transfer learning for automated brain image classification. J.
Mach. Vis. Appl. 2020, 31, 20. [CrossRef]
113. Rehman, A.; Naz, S.; Razzak, M.I.; Akram, F.; Imran, M. A Deep Learning-Based Framework for Automatic Brain Tumors
Classification Using Transfer Learning. Circuits Syst. Signal Process. 2019, 39, 757–775. [CrossRef]
114. Deepa, S.; Janet, J.; Sumathi, S.; Ananth, J.P. Hybrid Optimization Algorithm Enabled Deep Learning Approach Brain Tumor
Segmentation and Classification Using MRI. J. Digit. Imaging 2023, 36, 847–868. [CrossRef]
115. Ahmmed, R.; Swakshar, A.S.; Hossain, M.F.; Rafiq, M.A. Classification of tumors and it stages in brain MRI using support
vector machine and artificial neural network. In Proceedings of the 2017 International Conference on Electrical, Computer and
Communication Engineering (ECCE), Cox’s Bazar, Bangladesh, 16–18 February 2017.
116. Sathi, K.A.; Islam, S. Hybrid Feature Extraction Based Brain Tumor Classification using an Artificial Neural Network. In
Proceedings of the 2020 IEEE 5th International Conference on Computing Communication and Automation (ICCCA), Greater
Noida, India, 30–31 October 2020; pp. 155–160. [CrossRef]
117. Islam, R.; Imran, S.; Ashikuzzaman; Khan, M.A. Detection and Classification of Brain Tumor Based on Multilevel Segmentation
with Convolutional Neural Network. J. Biomed. Sci. Eng. 2020, 13, 45–53. [CrossRef]
118. Mohsen, H.; El-Dahshan, E.A.; El-Horbaty, E.M.; Salem, A.M. Classification using deep learning neural networks for brain tumors.
Future Comput. Inform. J. 2017, 3, 68–71. [CrossRef]
119. Babu, P.A.; Rao, B.S.; Reddy, Y.V.B.; Kumar, G.R.; Rao, J.N.; Koduru, S.K.R. Optimized CNN-based Brain Tumor Segmentation
and Classification using Artificial Bee Colony and Thresholding. Int. J. Comput. Commun. Control. 2023, 18, 577. [CrossRef]
120. Ansari, A.S. Numerical Simulation and Development of Brain Tumor Segmentation and Classification of Brain Tumor Using
Improved Support Vector Machine. Int. J. Intell. Syst. Appl. Eng. 2023, 11, 35–44.
121. Farajzadeh, N.; Sadeghzadeh, N.; Hashemzadeh, M. Brain tumor segmentation and classification on MRI via deep hybrid
representation learning. Expert Syst. Appl. 2023, 224, 119963. [CrossRef]
122. Padma, A.; Sukanesh, R. A wavelet based automatic segmentation of brain tumor in CT images using optimal statistical texture
features. Int. J. Image Process. 2011, 5, 552–563.
123. Padma, A.; Sukanesh, R. Automatic Classification and Segmentation of Brain Tumor in CT Images using Optimal Dominant Gray
level Run length Texture Features. Int. J. Adv. Comput. Sci. Appl. 2011, 2, 53–121. [CrossRef]
124. Ruba, T.; Tamilselvi, R.; Beham, M.P.; Aparna, N. Accurate Classification and Detection of Brain Cancer Cells in MRI and CT
Images using Nano Contrast Agents. Biomed. Pharmacol. J. 2020, 13, 1227–1237. [CrossRef]
125. Woźniak, M.; Siłka, J.; Wieczorek, M.W. Deep neural network correlation learning mechanism for CT brain tumor detection.
Neural Comput. Appl. 2021, 35, 14611–14626. [CrossRef]
126. Nanmaran, R.; Srimathi, S.; Yamuna, G.; Thanigaivel, S.; Vickram, A.S.; Priya, A.K.; Karthick, A.; Karpagam, J.; Mohanavel,
V.; Muhibbullah, M. Investigating the Role of Image Fusion in Brain Tumor Classification Models Based on Machine Learning
Algorithm for Personalized Medicine. Comput. Math. Methods Med. 2022, 2022, 7137524. [CrossRef]
127. Burns, A.; Iliffe, S. Alzheimer’s disease. BMJ 2009, 338, b158. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.