Brain Tumor Classification
Paramjeet Singh
Department of Computer Science and Engineering,
Apex Institute of Technology, Chandigarh University, Mohali,
Punjab, India
[email protected]
Pranjal Sharma
Department of Computer Science and Engineering,
Apex Institute of Technology, Chandigarh University, Mohali,
Punjab, India
[email protected]
Mansi Kajal (Associate Pofessor)
Department of Computer Science and Engineering,
Apex Institute of Technology, Chandigarh University, Mohali,
Punjab, India
[email protected]Abstract— Brain tumors pose a significant challenge in the field Keywords— Brain Tumor classification, Computer-aided
of oncology, necessitating accurate and timely classification for diagnosis, Convolutional neural network, Deep Learning, Brain
effective treatment planning. This research paper presents a image
comprehensive approach to brain tumor classification, leveraging
advanced imaging techniques and machine learning algorithms. I. INTRODUCTION
The study explores the integration of multimodal data, including Brain Tumors, a complex and diverse group of neoplasms,
magnetic resonance imaging (MRI), computed tomography (CT), pose significant challenges in the field of oncology. Their
and molecular biomarkers, to enhance the accuracy of accurate and timely classification is crucial for effective
classification. The proposed methodology involves preprocessing
treatment planning and prognosis. The traditional methods of
raw imaging data, extracting relevant features, and utilizing
state-of-the-art machine learning models for robust tumor
brain tumor classification, which primarily rely on
classification. A key focus is placed on the incorporation of deep histopathological techniques, have limitations in terms of
learning architectures, such as convolutional neural networks accuracy, reproducibility and predictive power. The advent of
(CNNs) and recurrent neural networks (RNNs), to capture advanced imaging techniques and machine learning algorithms
intricate patterns within the imaging data and improve has opened up new avenues for brain tumor classification.
classification accuracy. The paper explores the application of These technologies have the potential to overcome the
machine learning techniques for brain tumor classification. limitations of traditional methods and significantly enhance the
Supervised learning algorithms, including support vector accuracy of tumor classification. Medical imaging modalities
machines (SVM), random forests (RF), and artificial neural such as magnetic resonance imaging (MRI) and computed
networks (ANN), are examined in terms of their ability to extract tomography (CT) provide detailed anatomical information
meaningful features from multimodal imaging data. about brain tumors, enabling the extraction of quantitative
Unsupervised learning techniques, such as clustering algorithms, features related to tumor morphology, texture, and spatial
are also explored for uncovering hidden patterns within tumor distribution. Machine learning algorithms, particularly deep
datasets. Furthermore, the research paper investigates the impact learning models, have demonstrated remarkable capabilities in
of incorporating advanced imaging features, such as texture
learning complex patterns from imaging data and accurately
analysis, diffusion tensor imaging (DTI), and perfusion imaging,
classifying tumors into distinct subtypes based on these
into the classification framework. These features provide
additional information about tumor heterogeneity, features. This research paper aims to explore the
microstructural changes, and blood flow characteristics, methodologies, challenges, and advancements in brain tumor
respectively, thereby enhancing the discriminative power of the classification using medical imaging and machine learning
classification models. The paper addresses the challenges techniques. By reviewing existing literature and discussing
associated with brain tumor classification, including the limited recent developments in the field, this paper will provide
availability of annotated datasets, inter-observer variability, and insights into the current state-of-the-art approaches for brain
the presence of rare tumor subtypes. Strategies for mitigating tumor classification. Additionally, it will highlight the potential
these challenges, such as data augmentation techniques and benefits of non-invasive and automated classification systems
ensemble learning approaches, are discussed.
XXX-X-XXXX-XXXX-X/XX/$XX.00 ©20XX IEEE
in improving diagnostic accuracy, treatment efficacy, and imaging studies, capturing dynamic changes in tumor
patient outcomes in neuro-oncology. morphology over time. Despite these advancements,
limitations include the need for large annotated datasets,
II. RESEARCH CHALLENEGES challenges in feature extraction and interpretation,
Despite the promising potential of deep learning, computational complexity, and model overfitting. Further
support vector machines (SVM), artificial neural networks research is needed to address these limitations and enhance the
(ANN), recurrent neural networks (RNN), and convolutional robustness and clinical applicability of brain tumor
neural networks (CNN) in brain tumor classification, several classification methods.
challenges persist in their implementation and optimization.
One significant challenge is the availability and quality of A. Abbreviations and Acronyms
labeled data for training and validating machine learning Define abbreviations and acronyms the first time they are
models. Building robust datasets with diverse tumor types, used in the text, even after they have been defined in the
sizes, and imaging characteristics is essential for ensuring the abstract. Abbreviations such as IEEE, SI, MKS, CGS, sc, dc,
generalization and reliability of classification algorithms. and rms do not have to be defined. Do not use abbreviations in
However, acquiring large-scale annotated datasets can be the title or heads unless they are unavoidable.
time-consuming, expensive, and subject to variability in B. Units
labeling criteria among different experts. Use either SI (MKS) or CGS as primary units. (SI units
Another challenge lies in feature extraction and are encouraged.) English units may be used as
representation from medical imaging data. While deep secondary units (in parentheses). An exception would
learning models have demonstrated the ability to automatically be the use of English units as identifiers in trade, such
learn discriminative features from raw image data, designing as “3.5-inch disk drive”.
effective feature extraction pipelines for traditional machine
learning algorithms like SVM and ANN remains a complex Avoid combining SI and CGS units, such as current in
amperes and magnetic field in oersteds. This often leads
task. Extracting informative features that capture relevant
to confusion because equations do not balance
tumor characteristics while minimizing noise and irrelevant
dimensionally. If you must use mixed units, clearly state
information is crucial for the performance of classification the units for each quantity that you use in an equation.
models.
Additionally, model interpretability and transparency are Do not mix complete spellings and abbreviations of
significant challenges in deep learning-based approaches. The units: “Wb/m2” or “webers per square meter”, not
black-box nature of deep neural networks makes it challenging “webers/m2”. Spell out units when they appear in text:
to understand how decisions are made, limiting their adoption “. . . a few henries”, not “. . . a few H”.
in clinical settings where interpretability is paramount. Use a zero before decimal points: “0.25”, not “.25”. Use
Developing techniques for explaining model predictions and “cm3”, not “cc”. (bullet list)
visualizing learned features is essential for building trust and
facilitating the integration of deep learning models into C. Equations
clinical practice. The equations are an exception to the prescribed
The computational complexity and resource specifications of this template. You will need to determine
requirements of deep learning models pose challenges for real- whether or not your equation should be typed using either the
time and resource-constrained applications. Training deep Times New Roman or the Symbol font (please no other font).
neural networks often requires substantial computational To create multileveled equations, it may be necessary to treat
resources and time, making it impractical for deployment in the equation as a graphic and insert it into the text after your
low-resource settings or on devices with limited processing paper is styled.
capabilities. Number equations consecutively. Equation numbers, within
parentheses, are to position flush right, as in (1), using a right
III. RELATED WORK tab stop. To make your equations more compact, you may use
Several studies have investigated the application of deep the solidus ( / ), the exp function, or appropriate exponents.
learning, support vector machines (SVM), artificial neural Italicize Roman symbols for quantities and variables, but not
networks (ANN), recurrent neural networks (RNN), and Greek symbols. Use a long dash rather than a hyphen for a
convolutional neural networks (CNN) for brain tumor minus sign. Punctuate equations with commas or periods when
classification. Deep learning models, particularly CNNs, have they are part of a sentence, as in:
shown promising results in automatically learning
discriminative features from medical imaging data, achieving ab
high accuracy in tumor classification tasks. SVM and ANN
approaches have also been widely explored, leveraging Note that the equation is centered using a center tab stop.
Be sure that the symbols in your equation have been defined
handcrafted features extracted from imaging data to classify
before or immediately following the equation. Use “(1)”, not
brain tumors with notable success. Additionally, RNNs have
“Eq. (1)” or “equation (1)”, except at the beginning of a
been employed for temporal sequence analysis in longitudinal sentence: “Equation (1) is . . .”
D. Some Common Mistakes patient survival outcomes, further pushing the boundaries of
The word “data” is plural, not singular. brain tumor classification research.
By providing a standardized platform for algorithm
The subscript for the permeability of vacuum 0, and development and evaluation, BRATS2019 facilitates
other common scientific constants, is zero with collaboration and benchmarking among researchers
subscript formatting, not a lowercase letter “o”. worldwide. The dataset empowers the development of novel
In American English, commas, semicolons, periods, deep learning, support vector machine (SVM), artificial neural
question and exclamation marks are located within network (ANN), recurrent neural network (RNN), and
quotation marks only when a complete thought or name convolutional neural network (CNN) models, leading to
is cited, such as a title or full quotation. When quotation advancements in diagnostic accuracy, treatment planning, and
marks are used, instead of a bold or italic typeface, to patient outcomes in neuro-oncology.
highlight a word or phrase, punctuation should appear Despite its strengths, BRATS2019 also presents challenges,
outside of the quotation marks. A parenthetical phrase including the need for robust feature extraction methods,
or statement at the end of a sentence is punctuated model interpretability, and scalability to heterogeneous tumor
outside of the closing parenthesis (like this). (A types. Addressing these challenges requires interdisciplinary
parenthetical sentence is punctuated within the collaboration and innovation, paving the way for more
parentheses.) accurate, efficient, and clinically relevant brain tumor
A graph within a graph is an “inset”, not an “insert”. classification algorithms.
The word alternatively is preferred to the word
“alternately” (unless you really mean something that A. Authors and Affiliations
alternates). The template is designed for, but not limited to, six
authors. A minimum of one author is required for all
Do not use the word “essentially” to mean conference articles. Author names should be listed starting
“approximately” or “effectively”. from left to right and then moving down to the next line. This is
In your paper title, if the words “that uses” can the author sequence that will be used in future citations and by
accurately replace the word “using”, capitalize the “u”; indexing services. Names should not be listed in columns nor
if not, keep using lower-cased. group by affiliation. Please keep your affiliations as succinct as
possible (for example, do not differentiate among departments
Be aware of the different meanings of the homophones of the same organization).
“affect” and “effect”, “complement” and “compliment”,
“discreet” and “discrete”, “principal” and “principle”. 1) For papers with more than six authors: Add author
names horizontally, moving to a third row if needed for more
Do not confuse “imply” and “infer”.
than 8 authors.
The prefix “non” is not a word; it should be joined to 2) For papers with less than six authors: To change the
the word it modifies, usually without a hyphen. default, adjust the template as follows.
There is no period after the “et” in the Latin a) Selection: Highlight all author and affiliation lines.
abbreviation “et al.”. b) Change number of columns: Select the Columns icon
The abbreviation “i.e.” means “that is”, and the from the MS Word Standard toolbar and then select the correct
abbreviation “e.g.” means “for example”. number of columns from the selection palette.
c) Deletion: Delete the author and affiliation lines for
An excellent style manual for science writers is [7]. the extra authors.
IV. INTRODUCING THE DATASET
The Brain Tumor Classification Benchmark B. Identify the Headings
(BRATS) 2019 dataset serves as a cornerstone in advancing Headings, or heads, are organizational devices that guide
the field of brain tumor classification. BRATS2019 provides a the reader through your paper. There are two types: component
comprehensive collection of multimodal brain MRI scans, heads and text heads.
annotated with tumor regions and subtypes, enabling
Component heads identify the different components of your
researchers to develop and evaluate cutting-edge algorithms
paper and are not topically subordinate to each other. Examples
for brain tumor classification.
include Acknowledgments and References and, for these, the
BRATS2019 builds upon previous iterations of the correct style to use is “Heading 5”. Use “figure caption” for
benchmark by incorporating improvements in data quality, your Figure captions, and “table head” for your table title. Run-
diversity, and annotation consistency. The dataset comprises in heads, such as “Abstract”, will require you to apply a style
T1-weighted, T2-weighted, FLAIR, and post-contrast T1- (in this case, italic) in addition to the style provided by the drop
weighted MRI images, capturing various aspects of tumor down menu to differentiate the head from the text.
morphology and tissue characteristics. Additionally,
BRATS2019 introduces new challenges, such as the Text heads organize the topics on a relational, hierarchical
classification of non-enhancing gliomas and the prediction of basis. For example, the paper title is the primary text head
because all subsequent material relates and elaborates on this treatment strategies, ultimately leading to better patient
one topic. If there are two or more sub-topics, the next level outcomes and quality of life..
head (uppercase Roman numerals) should be used and,
conversely, if there are not at least two sub-topics, then no REFERENCES
subheads should be introduced. Styles named “Heading 1”,
“Heading 2”, “Heading 3”, and “Heading 4” are prescribed.
1. Clark, K., Vendt, B., Smith, K., et al. (2013). The Cancer Imaging
C. Figures and Tables Archive (TCIA): Maintaining and Operating a Public Information
Repository. Journal of Digital Imaging, 26(6), 1045–1060.
a) Positioning Figures and Tables: Place figures and
tables at the top and bottom of columns. Avoid placing them 2. Akkus, Z., Galimzianova, A., Hoogi, A., & Rubin, D. L. (2017).
in the middle of columns. Large figures and tables may span Deep Learning for Brain Tumor Classification. Proceedings of the
across both columns. Figure captions should be below the 2017 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), 2567-2574.
figures; table heads should appear above the tables. Insert
figures and tables after they are cited in the text. Use the 3. Menze, B. H., Jakab, A., Bauer, S., et al. (2015). The Multimodal
abbreviation “Fig. 1”, even at the beginning of a sentence. Brain Tumor Image Segmentation Benchmark (BRATS). IEEE
Transactions on Medical Imaging, 34(10), 1993-2024.
TABLE I. TABLE TYPE STYLES 4. Prasanna, P., Tiwari, P., & Madabhushi, A. (2016). Co-occurrence
of Local Anisotropic Gradient Orientations (CoLlAGe): A new
Table Table Column Head
radiomics descriptor. Scientific Reports, 6, 37241.
Head Table column subhead Subhead Subhead
5. Huang, Y., Liu, Z., He, L., et al. (2016). Radiomics signature: A
copy More table copya
potential biomarker for the prediction of disease-free survival in
a.
Sample of a Table footnote. (Table footnote) early-stage (I or II) non-small cell lung cancer. Radiology, 281(3),
947-957.
Fig. 1. Example of a figure caption. (figure caption)
6. Havaei, M., Davy, A., Warde-Farley, D., et al. (2017). Brain tumor
Figure Labels: Use 8 point Times New Roman for Figure segmentation with Deep Neural Networks. Medical Image
labels. Use words rather than symbols or abbreviations when Analysis, 35, 18-31.
writing Figure axis labels to avoid confusing the reader. As an 7. Litjens, G., Kooi, T., Bejnordi, B. E., et al. (2017). A survey on
example, write the quantity “Magnetization”, or deep learning in medical image analysis. Medical Image Analysis,
“Magnetization, M”, not just “M”. If including units in the 42, 60-88.
label, present them within parentheses. Do not label axes only
with units. In the example, write “Magnetization (A/m)” or 8. Bejnordi, B. E., Veta, M., van Diest, P. J., et al. (2017). Diagnostic
Assessment of Deep Learning Algorithms for Detection of Lymph
“Magnetization {A[m(1)]}”, not just “A/m”. Do not label axes Node Metastases in Women With Breast Cancer. JAMA, 318(22),
with a ratio of quantities and units. For example, write 2199-2210.
“Temperature (K)”, not “Temperature/K”.
9. Ismael, A. M., & Kim, H. (2018). A comprehensive review of deep
V. CONCLUSION learning in image informatics. Journal of Nuclear Medicine, 59(5),
864-872.
The application of deep learning techniques, including
Recurrent Neural Networks (RNNs), Convolutional Neural 10. Ellingson, B. M., Bendszus, M., Boxerman, J., et al. (2021).
Networks (CNNs), and Artificial Neural Networks (ANNs), Consensus recommendations for a standardized Brain Tumor
Imaging Protocol in clinical trials. Neuro-Oncology, 23(2), 190-
has shown significant promise in the classification of brain 205.
tumors. Through the integration of these advanced algorithms,
researchers have achieved remarkable results in accurately 11. Bakas, S., Akbari, H., Sotiras, A., et al. (2017). Advancing The
identifying and categorizing brain tumors based on various Cancer Genome Atlas glioma MRI collections with expert
segmentation labels and radiomic features. Scientific Data, 4,
imaging modalities such as MRI and CT scans. 170117.
RNNs have demonstrated their efficacy in capturing
12. Cha, K. H., Hadjiiski, L., & Chan, H. P. (2018). Blinded
sequential patterns and temporal dependencies present in Evaluation of High-Grade Gliomas on 3T MR Images:
imaging data, thus enabling the extraction of meaningful Comparison of Deep Learning with an Existing Radiomics Model
features crucial for tumor classification. CNNs, on the other and Performance of Interpreting Radiologists. Radiology, 287(3),
hand, excel in extracting hierarchical spatial features, enabling 933-939.
precise localization and characterization of tumor regions 13. Wiestler, B., Kluge, A., Lukas, M., et al. (2019). Multiparametric
within medical images. ANNs have provided a robust MRI-based differentiation of WHO grade II/III glioma and WHO
framework for integrating diverse data sources and optimizing grade IV glioblastoma. Scientific Reports, 9, 2039.
classification models for enhanced accuracy.
14. Ardila, D., Kiraly, A. P., Bharadwaj, S., et al. (2019). End-to-end
The synergistic combination of RNNs, CNNs, and ANNs lung cancer screening with three-dimensional deep learning on
has paved the way for more accurate, efficient, and low-dose chest computed tomography. Nature Medicine, 25(6),
954-961.
interpretable brain tumor classification systems. These
advancements hold great promise for improving clinical 15. Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net:
decision-making, facilitating early detection, and personalizing Convolutional Networks for Biomedical Image Segmentation.
International Conference on Medical Image Computing and
Computer-Assisted Intervention, 234-241. with expert segmentation labels and radiomic features. Sci Data.
2017 Sep;4:170117.
16. Louis DN, Perry A, Reifenberger G, von Deimling A, Figarella-
Branger D, Cavenee WK, et al. The 2016 World Health 25. Verma R, Zacharaki EI, Ou Y, Cai H, Chawla S, Lee SK, et al.
Organization Classification of Tumors of the Central Nervous Multiparametric tissue characterization of brain neoplasms and
System: a summary. Acta Neuropathol. 2016 Jun;131(6):803–20. their recurrence using pattern classification of MR images. Acad
Radiol. 2008 Feb;15(2):966–77.
17. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian
M, et al. A survey on deep learning in medical image analysis. 26. Huang Y-Q, Liang C-H, He L, Tian J, Liang CS, Chen X, et al.
Med Image Anal. 2017 Dec;42:60–88. Development and Validation of a Radiomics Nomogram for
Preoperative Prediction of Lymph Node Metastasis in Colorectal
18. Chang P, Grinband J, Weinberg BD, Bardis M, Khy M, Cadena G, Cancer. J Clin Oncol. 2016 Nov;34(18):2157–64.
et al. Deep-Learning Convolutional Neural Networks Accurately
Classify Genetic Mutations in Gliomas. AJNR Am J Neuroradiol. 27. Zhou M, Scott J, Chaudhury B, Hall L, Goldgof D, Yeom KW, et
2018 Oct;39(10):1691–7. al. Radiomics in Brain Tumor: Image Assessment, Quantitative
Feature Descriptors, and Machine-Learning Approaches. AJNR
19. Akkus Z, Galimzianova A, Hoogi A, Rubin DL, Erickson BJ. Deep Am J Neuroradiol. 2018 Feb;39(2):208–16.
Learning for Brain MRI Segmentation: State of the Art and Future
Directions. J Digit Imaging. 2017 Feb;30(4):449–59. 28. Chang K, Bai HX, Zhou H, Su C, Bi WL, Agbodza E, et al.
Residual Convolutional Neural Network for the Determination of
20. Nie D, Zhang H, Adeli E, Liu L, Shen D. 3D deep learning for IDH Status in Low- and High-Grade Gliomas from MR Imaging.
multi-modal imaging-guided survival time prediction of brain Clin Cancer Res. 2018 Jul;24(5):1073–81.
tumor patients. Med Image Comput Comput Assist Interv. 2016
Oct;9901:212–20. 29. Kickingereder P, Bonekamp D, Nowosielski M, Kratz A, Sill M,
Burth S, et al. Radiogenomics of Glioblastoma: Machine
21. Ellingson BM, Bendszus M, Boxerman J, Barboriak D, Erickson Learning–based Classification of Molecular Characteristics by
BJ, Smits M, et al. Consensus recommendations for a standardized Using Multiparametric and Multiregional MR Imaging Features.
Brain Tumor Imaging Protocol in clinical trials. Neuro Oncol. Radiology. 2016 Nov;281(3):907–18.
2015 Feb;17(9):1188–98.
30. Zikic D, Glocker B, Konukoglu E, Criminisi A, Demiralp C,
22. Menze BH, Jakab A, Bauer S, Kalpathy-Cramer J, Farahani K, Shotton J, et al. Decision forests for tissue-specific segmentation of
Kirby J, et al. The Multimodal Brain Tumor Image Segmentation high-grade gliomas in multi-channel MR. In: Proceedings of the
Benchmark (BRATS). IEEE Trans Med Imaging. 2015 15th International Conference on Medical Image Computing and
Nov;34(10):1993–2024. Computer-Assisted Intervention – MICCAI 2012. Berlin,
Heidelberg: Springer Berlin Heidelberg; 2012. p. 369–76.
23. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional
Networks for Biomedical Image Segmentation. In: Navab N, IEEE conference templates contain guidance text for
Hornegger J, Wells W, Frangi A, editors. Medical Image composing and formatting conference papers. Please
Computing and Computer-Assisted Intervention – MICCAI 2015. ensure that all template text is removed from your
Cham: Springer International Publishing; 2015. p. 234–41. conference paper prior to submission to the conference.
Failure to remove template text from your paper may
24. Bakas S, Akbari H, Sotiras A, Bilello M, Rozycki M, Kirby JS, et result in your paper not being published.
al. Advancing The Cancer Genome Atlas glioma MRI collections