<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Brain Tumor Classification in Magnetic Resonance Imaging using Convolutional Neural Networks and Transfer Learning⋆</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>CHAHBAR Fatma</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>MERATI Medjeded</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>MAHMOUDI Said</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>BAGHDADI Mohamed</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>LEBANI Ali Zakaria</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Science Department of IBN Khaldoun University</institution>
          ,
          <addr-line>Tiaret</addr-line>
          ,
          <country country="DZ">Algeria</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
<institution>Computer Science Department of Mons University</institution>
          ,
          <addr-line>Mons, Belgium</addr-line>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>LIM Research Laboratory of IBN Khaldoun University</institution>
          ,
          <addr-line>Tiaret</addr-line>
          ,
          <country country="DZ">Algeria</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Brain tumors pose a significant threat, with the potential to disrupt critical brain functions and produce neurological symptoms, warranting the highest concern. The evaluation of these tumors relies on various imaging methods, including Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and ultrasound. MRI of the brain, in particular, is renowned for its capability to provide vital insights into brain structure and tissue irregularities. This study harnesses the transformative influence of technology, notably artificial intelligence (AI) and deep learning (DL), to address this challenge. The proposed approach integrates Convolutional Neural Networks (CNNs) with transfer learning from VGG19 and ResNet. The primary objective is the classification of brain tumors into four distinct categories: meningioma, glioma, pituitary adenoma, and cases without tumors. The CNN model in isolation achieves an accuracy of 97.23%; when integrated with VGG19 and ResNet, the accuracy rises to 98.26%. This combination of technologies holds promise for enhancing the precision of brain tumor classification, potentially reshaping the landscape of neuroimaging and healthcare.</p>
      </abstract>
      <kwd-group>
<kwd>Brain Tumor</kwd>
        <kwd>Transfer Learning</kwd>
        <kwd>Classification</kwd>
<kwd>CNN</kwd>
        <kwd>VGG19</kwd>
<kwd>ResNet50</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>The human brain serves as the central control hub for a multitude of bodily functions, including motor coordination, sensory processing, and vital physiological processes [<xref ref-type="bibr" rid="ref1">1</xref>]. Any disruption within the brain, such as the emergence of a tumor, has the potential to interfere with its normal operations.</p>
      <p>A brain tumor comprises an abnormal cluster of cells within the brain or the cranial cavity. These tumors vary widely in nature, ranging from benign to potentially life-threatening, and are categorized as primary tumors (originating within the brain) or metastatic tumors (originating elsewhere in the body and spreading to the brain). Treatment approaches depend on factors such as type, size, and location. To facilitate discussions related to brain tumors, treatment planning, and prognosis, the World Health Organization (https://www.who.int/) has devised a classification and grading system that categorizes tumors based on the type of cells they consist of or their primary site of origin [<xref ref-type="bibr" rid="ref2">2</xref>].</p>
      <p>Brain MRI images hold a pivotal role in the detection of tumors and the modeling of their progression, providing essential guidance for treatment decisions. Compared to alternative imaging techniques such as CT scans or ultrasound, MRI scans offer a wealth of comprehensive data, enabling the detailed examination of brain structure and the precise identification of anomalies within brain tissue [<xref ref-type="bibr" rid="ref3">3</xref>].</p>
      <p>6th International Hybrid Conference On Informatics And Applied Mathematics, December 6-7, 2023, Guelma, Algeria. These authors contributed equally. Corresponding author: fatma.chahbar@univ-tiaret.dz (C. Fatma); medjeded.merat@univ-tiaret.dz (M. Medjeded); said.mahmoudi@umons.ac.be (M. Said); baghdadimohamed@gmail.com (B. Mohamed); alizakaria.lebani@uni-tiaret.dz (L. A. Zakaria). © 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
<p>The impact of technology, especially artificial intelligence (AI) and deep learning (DL), on the field of medicine is undeniable, and MRI image processing exemplifies this transformation. DL, with a special focus on Convolutional Neural Networks (CNNs), offers distinct advantages: automated feature extraction, heightened accuracy in identifying subtle patterns and irregularities, scalability to vast datasets, and the ability to continuously enhance performance through retraining with new data. These attributes position CNNs as powerful tools for the processing and diagnosis of brain tumors in MRI images [<xref ref-type="bibr" rid="ref4">4</xref>]. CNNs represent a class of deep learning models specifically tailored for data structured in grids, such as images. They draw inspiration from the visual processing system in the animal brain, allowing them to preserve spatial information while capturing local image features [<xref ref-type="bibr" rid="ref5">5</xref>]. This method is highly effective, primarily due to its strong feature extraction capabilities [<xref ref-type="bibr" rid="ref6">6</xref>]. The evolution of CNNs has given rise to various architectural models such as Residual Network (ResNet), Network in Network (NiN), VGG, and GoogleNet [<xref ref-type="bibr" rid="ref7">7</xref>]. Transfer learning, a technique that transfers knowledge from one domain to another, has demonstrated its value across diverse domains, applications, and data distributions in both research and training [<xref ref-type="bibr" rid="ref7">7</xref>].</p>
      <p>In this paper, we present a comprehensive approach for classifying brain tumors into four distinct categories: meningioma, glioma, pituitary adenoma, and cases without tumors. Our approach involves the development of a CNN model, followed by the training of two transfer learning models, VGG19 and ResNet, on the same dataset. The innovation lies in integrating these three models into a unified framework, with the primary objective of enhancing the accuracy and precision of brain tumor categorization in MRI scans. This technique holds promise as an alternative for more effective MRI diagnosis and treatment planning.</p>
    </sec>
    <sec id="sec-related">
      <title>2. Related Work</title>
      <p>The process of manually identifying and categorizing brain tumors in large databases of medical images during routine clinical tasks incurs substantial costs in terms of effort and time. Contemporary solutions have emerged that leverage Machine Learning (ML) and Deep Learning (DL) methodologies for brain tumor segmentation, detection, and classification [<xref ref-type="bibr" rid="ref8">8</xref>], employing Convolutional Neural Network (CNN) architectures to analyze medical images.</p>
      <p>In [<xref ref-type="bibr" rid="ref9">9</xref>], the authors introduced a method for brain tumor detection using a publicly available brain tumor MRI dataset comprising data from 233 patients. A preprocessing step was utilized to enhance image quality, and two pre-trained deep learning models were used to extract powerful features, which were then combined into a hybrid vector using the partial least squares (PLS) method. With the aid of agglomerative clustering, the technique achieved 98.95% classification accuracy.</p>
      <p>Ramzan et al. [<xref ref-type="bibr" rid="ref10">10</xref>] directed their efforts toward developing four sequential CNN models for classifying brain tumors in MRI images. The experiments were conducted on a Kaggle dataset comprising 3,000 MRI images. The study involved two key steps: data preprocessing and automatic classification into two classes - tumor and normal - using a CNN. The model attained an accuracy of 98.27%.</p>
      <p>Hossain et al. [<xref ref-type="bibr" rid="ref11">11</xref>] proposed a method to extract brain tumors from 2D MRI images using Fuzzy C-Means clustering, traditional classifiers, and a convolutional neural network. The experimental study utilized a benchmark dataset (BraTS) with various tumor characteristics. Differentiating between normal and abnormal pixels based on texture and statistical features, the CNN achieved an accuracy of 97.87%.</p>
      <p>Sultan et al. [<xref ref-type="bibr" rid="ref12">12</xref>] introduced a DL model based on a CNN for classifying various brain tumor types using two publicly available datasets. The first dataset classifies tumors into meningioma, glioma, and pituitary tumors, encompassing 233 cases and a total of 3064 T1-weighted Contrast-Enhanced (CE) images. The second dataset differentiates among three glioma grades (Grade II, Grade III, and Grade IV), involving 73 patients and 516 T1-weighted CE images. The proposed 16-layer network achieves accuracies of 96.13% and 98.7% on the two datasets, respectively.</p>
      <p>Similarly, Nayak et al. [<xref ref-type="bibr" rid="ref13">13</xref>] introduced a CNN-based dense EfficientNet model with min-max normalization for classifying 3260 T1-weighted contrast-enhanced brain MRI images collected from Kaggle into four categories (glioma, meningioma, pituitary, and no tumor). The experimental results demonstrated strong performance, with a training accuracy of 99.97% and a testing accuracy of 98.78%.</p>
<p>Khan et al. [<xref ref-type="bibr" rid="ref14">14</xref>] introduced an automated brain tumor classification system that employs two DL models. The system classifies brain tumors into binary categories (normal and abnormal) using a publicly available CE-MRI dataset of 3064 MRI images. Additionally, it classifies tumors into multiclass categories (meningioma, glioma, and pituitary tumors) using a second dataset of 152 MRI images collected from the Harvard repository. When dealing with the limited volume of data in the second dataset, the proposed '23-layer CNN' architecture faced an overfitting problem; to address this, the authors applied transfer learning by combining the VGG16 architecture with the '23-layer CNN'. The experimental results demonstrate the effectiveness of the proposed models, achieving an accuracy of 97.8%.</p>
<p>In another study, Aurna et al. [15] introduced an accurate and automated brain tumor classification approach using three distinct MRI datasets and a merged dataset. These datasets include images of three types of brain tumors (meningioma, glioma, and pituitary tumors) as well as normal brain images. The study selects the best models and concatenates them in two stages for feature extraction. The most significant features are chosen using Principal Component Analysis (PCA) and fed into the selected classifier. The proposed ensemble model achieves an average accuracy of 99.13%.</p>
      <p>Raza et al. [16] introduced a hybrid deep learning model, named DeepTumorNet, for classifying three types of brain tumors using a basic CNN architecture. The GoogLeNet architecture served as the foundation, with the last 5 layers replaced by 15 new layers in the development of the hybrid DeepTumorNet approach. The proposed model was assessed on the publicly available CE-MRI dataset, which consists of 3062 MRI images from 233 patients representing three distinct types of brain tumors. The evaluation yielded an accuracy of 99.67%.</p>
      <p>In [17], deep learning architectures, including CNN, DNN, LIM (LeNet Inspired Model), AlexNet, and ResNet, were employed to classify brain MRI images as normal or abnormal, with gender and age considered as additional attributes to enhance classification accuracy. Multiple datasets were utilized, including those from Figshare, Brainweb, and Radiopaedia: the Figshare dataset comprised 1130 abnormal brain MRI images, the Brainweb dataset contained T1-weighted data with 181 slices of normal and abnormal data, and the Radiopaedia dataset included 768 T1 images and FLAIR data. Experimental findings indicated that the LIM model demonstrated superior performance compared to SVM, AlexNet, and ResNet, with an accuracy of 82%.</p>
      <p>In addition, a segmentation and classification system based on transfer learning is presented in [18]. It uses pre-trained CNNs (AlexNet and VGG-19) for classification, and threshold and quick bounded box algorithms for segmentation. The evaluation on the Kaggle and Figshare datasets showed that the transferred VGG-19 and AlexNet models achieved high accuracies: the VGG-19 model obtained 99.75% and 98.50% accuracy, while the AlexNet model achieved 98.89% and 97.25% accuracy, respectively, confirming the superior performance of the VGG-19 model.</p>
      <p>Recently, Gómez-Guzmán et al. [19] utilized the Msoud dataset, which consists of the Figshare, SARTAJ, and Br35H datasets, totaling 7023 MRI images across four classes: three brain tumor types and healthy brain images. The CNN models, including a generic CNN and six pre-trained models (ResNet50, InceptionV3, InceptionResNetV2, Xception, MobileNetV2, and EfficientNetB0), were trained with preprocessed MRI images using various strategies. Among all the models, InceptionV3 demonstrated superior performance, achieving an average accuracy of 97.12%.</p>
      <p>Furthermore, Saeedi et al. [20] employed a dataset of 3264 T1-weighted contrast-enhanced MRI brain images, encompassing three types of brain tumors and healthy brains. The research commenced with the application of preprocessing and augmentation algorithms to the MRI brain images. Subsequently, a 2D CNN and a convolutional auto-encoder network were developed and trained with predetermined hyperparameters; the 2D CNN featured several convolutional layers, all utilizing a 2x2 kernel. Additionally, six machine-learning techniques were employed and compared for brain tumor classification. The results indicated a training accuracy of 96.47% for the proposed 2D CNN and 95.63% for the proposed auto-encoder network.</p>
    </sec>
    <sec id="sec-method">
      <title>3. Proposed Methodology</title>
      <p>In our suggested approach, we propose using a pair of models to classify brain tumors into four categories: Pituitary, Meningioma, Glioma, and No Tumor. The initial model utilizes a simple CNN, while the second improves precision by incorporating transfer learning from the pre-trained VGG19 and ResNet50 models.</p>
    </sec>
    <sec id="sec-31">
      <title>3.1. Proposed Methodology of Tumor Classification Using CNN</title>
      <p>Convolutional Neural Networks (CNNs) are essential for improving the accuracy of tumor identification in medical image processing. Our goal is to develop a model that accurately detects tumors from two-dimensional brain MRI data. CNNs are preferred over fully-connected neural networks for tumor detection due to their effective parameter sharing and sparse connectivity, which maximize accuracy and computational efficiency by exploiting the local features found in medical images.</p>
      <p>We have integrated and implemented 6 CNN layers specifically designed for tumor detection and classification, as illustrated in Figure 1. This combined model, comprising 8 stages and involving the integration of hidden layers, has demonstrated the most remarkable outcomes in the context of tumor identification. In the following paragraphs, we provide an overview of the suggested technique and a brief explanation of each of its components.</p>
      <p>Starting with a convolutional layer as the initial step, the input MRI images are shaped into a uniform dimension of 224x224x3, ensuring consistency across all images. Once the images are standardized, a convolutional kernel is constructed to interact with the input layer. This kernel employs 32 convolutional filters, each of size 3x3, operating on 3-channel tensors. The purpose is to extract low-level features from the MRI data efficiently without overparameterizing, considering the complexity and quantity of the data. We specifically use the Rectified Linear Unit (ReLU) activation function to introduce non-linearity and prevent the layer from aligning too closely with the output.</p>
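<p>To make the size bookkeeping concrete, the following plain-Python sketch (ours, not from the paper) traces one spatial dimension through the convolution and pooling stages described in this section. The 111x111x32 figure matches an unpadded 3x3 convolution followed by 2x2 pooling; the 7x7x256 map quoted for the final block implies 'same'-padded convolutions in the deeper blocks, where each pooling stage simply halves the side. That padding mix is our inference, not a statement by the authors.</p>

```python
# Output-size arithmetic for the conv/pool stages described in the text
# (3x3 convolutions, 2x2 max pooling with stride 2).

def conv_out(n, kernel=3, padding=0, stride=1):
    """Output size of a convolution along one spatial dimension."""
    return (n + 2 * padding - kernel) // stride + 1

def pool_out(n, kernel=2, stride=2):
    """Output size of a max-pooling stage along one spatial dimension."""
    return (n - kernel) // stride + 1

side = conv_out(224)      # unpadded 3x3 conv: 224 -> 222
side = pool_out(side)     # 2x2 max pooling:   222 -> 111 (the 111x111x32 map)
print(side)               # 111

# With 'same'-padded convolutions, five pooling stages halve 224 down to 7,
# matching the 7x7x256 representation quoted for the final block.
print(224 // 2**5)        # 7

# Flattening 7x7x256 yields the vector fed to the 512- and 256-unit
# dense layers before the final 4-way softmax.
print(7 * 7 * 256)        # 12544
```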
<p>The ConvNet architecture undergoes a systematic reduction in spatial dimensions, effectively reducing parameter count and computational load. A Max Pooling layer is valuable for curbing the overfitting concerns tied to brain MRI images; MaxPooling2D is employed to downsample the spatial information that accompanies the input images. A pivotal convolutional stage operates at dimensions of 111x111x32: a pooling size of (2, 2) downsizes the input both vertically and horizontally, acting across both spatial dimensions.</p>
<p>The network comprises multiple convolutional blocks, progressively increasing the filter count to 64, 128, and 256 in subsequent layers. This strategic augmentation aims to capture intricate features in the input MRI images, while interspersed Max Pooling layers mitigate overfitting concerns. The architecture concludes with a spatial dimension of 7x7x256, an abstract representation for downstream tasks. This design fosters computational efficiency and addresses standardization concerns for diverse MRI inputs, underscoring a commitment to non-linearity and the prevention of overfitting in the network's learned representations.</p>
      <p>After the pooling layers, we obtain a pooled feature map. Flattening becomes crucial at this point: we reshape the entire matrix representing the input images into a single-column vector, a modification necessary for further processing. We then feed this flattened vector into the fully connected part of the network.</p>
<p>In our methodology, we incorporate three dense layers: the first comprises 512 hidden units, the second 256 hidden units, and the third serves as the final layer. This sequence of 512, 256, and 4 is tailored to match the complexity of our classification task. To address potential overfitting risks, we introduce a dropout rate of 50% between these hidden layers. For our multiclass (4-class) classification task, we opt for the softmax activation function in the final layer, as it consistently demonstrates superior accuracy compared to other options. We also employ the "categorical crossentropy" loss function. Our optimization approach of choice is "Adam", an abbreviation for "Adaptive Moment Estimation". Adam builds upon the foundations of gradient descent and integrates concepts from the Adaptive Gradient Algorithm (Adagrad), dynamically adapting step sizes for each parameter during training using a decaying average of partial gradients.</p>
      <p>The model underwent training for 32 epochs. However, in this configuration, the accuracy falls short of our expectations. Consequently, we decided to enhance our approach by incorporating transfer learning through the ResNet50 and VGG19 architectures, with the objective of fortifying the accuracy of our model and further elevating its overall performance.</p>
    </sec>
    <sec id="sec-32">
      <title>3.2. Proposed Methodology Using Transfer Learning</title>
      <p>Within this section, we utilize two separate models, VGG-19 and ResNet-50, to tackle the complexities associated with brain tumor detection and classification. We provide detailed explanations of these two models in the subsequent sections.</p>
    </sec>
    <sec id="sec-321">
      <title>3.2.1. VGG-19</title>
      <p>The VGG network, created by Simonyan and Zisserman at the University of Oxford in 2014 [21], is a widely recognized pre-trained CNN model. Trained on the extensive ImageNet ILSVRC dataset containing 1.3 million images divided into 1000 classes, it consists of 19 layers, including 16 convolutional layers and 3 fully connected layers, together with 5 pooling stages. Max pooling, rather than average pooling, is used for downsampling. The fully connected layers consist of two sets, each with 4096 channels, followed by a final fully connected layer with 1000 channels for label prediction, to which a softmax layer is attached for classification.</p>
    </sec>
    <sec id="sec-322">
      <title>3.2.2. ResNet-50</title>
      <p>ResNet-50, an abbreviation for residual neural network, is a convolutional neural network featuring a depth of 50 layers. The model was developed and trained by He et al. [22] in 2016. Similar to VGG-19, it is able to classify a wide range of objects across 1000 categories, and its training capitalized on more than 1 million images sourced from the ImageNet database.</p>
    </sec>
    <sec id="sec-2">
      <title>4. Experimental Results</title>
      <p>In our exploration of brain tumor detection, we
extensively leveraged various brain MRI image databases to
3.2.3. Proposed CNN Architecture using Transfer construct a comprehensive dataset for the training,
valiLearning dation, and testing phases of our Convolutional Neural
Network (CNN) models. The dataset utilized is curated
Transfer learning plays a crucial role in augmenting the from the Br35H dataset and the Chen Jung dataset [23].
base CNN model, utilizing the feature maps generated It’s important to note that the Chen Jung dataset
comby the pre-trained VGG19 and ResNet50 models. Both prises three tumor classes (glioma, meningioma,
pituVGG19 and ResNet50 models undergo weight fetching, itary), whereas the class without a tumor was sourced
retaining the original weights acquired during the ini- from the Br35H dataset. The latter dataset originally
tial training. Specifically, only the last four layers intro- contains only two classes (tumor, non-tumor). We
specifduced in the subsequent training session remain train- ically extracted the non-tumor class after preprocessing
able. This strategic approach ensures the preservation and analysis to seamlessly integrate it with Cheng Jun’s
of pre-existing knowledge from the initial training, with dataset. This meticulous curation ensures a
comprehenifne-tuning focused on the recently added layers for op- sive and diverse representation of brain MRI images for
timized performance in targeted classification tasks. On our research.
the flip side, we loaded pre-existing saved files from the The dataset used for training and testing our models
pre-trained models (VGG19 and RESNET50 Model). Sub- consists of around 3027 T1-weighted MRI images in JPEG
sequently, we concatenated these models with the pro- format.These images were thoughtfully classified into
posed CNN Model to create a new model named "Con- four distinct classes: glioma, meningioma, pituitary, and
catenated Model", that generates output by averaging no tumor. The following Figure 3 illustrates examples
predictions from the three individual models. This ensem- from each class.
ble technique is designed to improve overall prediction We partitioned the dataset into three subsets:
trainperformance by capitalizing on the diverse strengths of ing, validation, and testing, with respective percentages
each base model. This contributes to heightened robust- of 80%, 10%, and 10%. However, the initial image count
ness and generalization capabilities across a range of data proved insuficient for efective neural network training.
types. Figure 2 presents a comprehensive illustration of To address this limitation, we implemented a practical
the model. solution: data augmentation. This image-processing
technique enabled us to generate additional data and images
from the original dataset. In the training phase, the initial
count of 2419 images was augmented, resulting in a total
of 4838 images.
4.2. Evaluation metrics
The algorithm’s performance measures, including
accuracy, were assessed using equation-defined TP (true
positive), TN (true negative), FP (false positive), and FN (false
negative) values.</p>
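<p>The doubling from 2419 to 4838 training images can be sketched with a single horizontal-flip augmentation, as below. This is a minimal NumPy illustration using small stand-in arrays; the paper does not specify its exact augmentation pipeline, so the flip (and the tiny image size used here) are assumptions for illustration only.</p>

```python
import numpy as np

# Stand-in for the 2419-image training split described in the text.
# Tiny 8x8x3 random arrays keep the sketch fast; in the paper the
# images are 224x224x3 MRIs.
rng = np.random.default_rng(0)
train_images = rng.random((2419, 8, 8, 3))

# One simple augmentation: horizontal flips, doubling the training set
# from 2419 to 4838 images, matching the count reported in the text.
flipped = train_images[:, :, ::-1, :]
augmented = np.concatenate([train_images, flipped], axis=0)
print(augmented.shape[0])  # 4838
```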
    </sec>
    <sec id="sec-42">
      <title>4.2. Evaluation Metrics</title>
      <p>The algorithm's performance measures, including accuracy, were assessed using TP (true positive), TN (true negative), FP (false positive), and FN (false negative) counts. Accuracy is defined as: Accuracy = (TP + TN) / (TP + TN + FP + FN). (1)</p>
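<p>As a worked illustration of Equation (1) together with the prediction-averaging ensemble of Section 3.2.3, the sketch below averages hypothetical class-probability outputs from three models and scores the result. All probability values are invented for illustration; in the multiclass setting, accuracy is computed here as the fraction of correct predictions.</p>

```python
import numpy as np

# Hypothetical class-probability outputs from the three models
# (CNN, VGG19, ResNet50) for 4 samples over the 4 tumor classes.
# Values are invented purely for illustration.
p_cnn    = np.array([[0.7, 0.1, 0.1,  0.1 ],
                     [0.2, 0.5, 0.2,  0.1 ],
                     [0.1, 0.2, 0.6,  0.1 ],
                     [0.3, 0.3, 0.2,  0.2 ]])
p_vgg    = np.array([[0.6, 0.2, 0.1,  0.1 ],
                     [0.1, 0.6, 0.2,  0.1 ],
                     [0.2, 0.1, 0.6,  0.1 ],
                     [0.1, 0.2, 0.2,  0.5 ]])
p_resnet = np.array([[0.8, 0.1, 0.05, 0.05],
                     [0.3, 0.4, 0.2,  0.1 ],
                     [0.1, 0.1, 0.7,  0.1 ],
                     [0.1, 0.1, 0.3,  0.5 ]])

# The concatenated model averages the three prediction vectors,
# then takes the most probable class.
p_avg = (p_cnn + p_vgg + p_resnet) / 3.0
y_pred = p_avg.argmax(axis=1)
y_true = np.array([0, 1, 2, 3])

# Accuracy per Eq. (1): correct predictions over all predictions.
accuracy = (y_pred == y_true).mean()
print(y_pred.tolist(), accuracy)
```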
    </sec>
    <sec id="sec-43">
      <title>4.3. Discussions and Comparisons</title>
      <p>Various experimental assessments were conducted to validate the proposed dense CNN model. All experiments were performed in a Python programming environment with GPU support. Initially, image preprocessing involved augmenting the images for training, enhancing the model's accuracy in detecting tumors in augmented images. The proposed model achieved an accuracy of 97.23% on the training dataset and 97.75% on the validation dataset, as illustrated in Figures 4 and 5.</p>
<p>The experiments were conducted over 100 epochs, with a batch size of 32 and the image size set at (224x224x3). In terms of accuracy, the initial validation accuracy started below 61% but rapidly increased to nearly 68% after the first epoch. Similarly, the initial validation loss was above 1.05 but decreased to below 0.80 after the first epoch. Figure 5 depicts the positive trend in improving accuracy and reducing loss: the validation accuracy, initially low, progressively improved to almost 97.23%.</p>
      <p>The subsequent experiments involved the ResNet50 model, the VGG19 model, and a concatenation of all three models. Figure 6 illustrates the training process of the ResNet model, encompassing 32 epochs and an image size of (224x224x3). The initial validation accuracy commenced below 69%, rising swiftly to nearly 77% after the first epoch, while the initial validation loss surpassed 1.01 but diminished to below 0.69 following the first epoch. Figure 6 captures this favorable accuracy enhancement and loss reduction: the validation accuracy, initially modest, progressively improved to 88.7%.</p>
      <p>Similarly, the VGG19 model underwent training for 32 epochs with an image size of (224x224x3). The initial training accuracy commenced below 76%, increasing to nearly 85% after the first epoch, and the initial training loss exceeded 0.74 but decreased to below 0.49 following the first epoch. Figure 7 illustrates the positive trend of enhancing accuracy and reducing loss: the validation accuracy progressively improved, reaching almost 94.58%.</p>
      <p>Finally, Figure 8 illustrates the positive trend in improving accuracy and reducing loss for the concatenated model involving the three architectures - VGG19, ResNet50, and the simple CNN model - trained on the same dataset and concatenated together.</p>
<p>The concatenated model underwent training for 32 epochs, with a batch size of 32 and an image size of (224x224x3). The initial training accuracy started below 96%, increasing to nearly 97% after the first epoch, and the initial training loss exceeded 0.25 but decreased to below 0.22 following the first epoch. After the last epoch, the training accuracy reached 98.37%, with a validation accuracy of 99.34%.</p>
      <p>The CNN model achieves a testing accuracy of 98.69%
with a testing loss of 0.13. In comparison, the VGG19
model attains a testing accuracy of 95.76% and a
testing loss of 0.10. The ResNet50 model demonstrates a
testing accuracy of 96.94% with a testing loss of 0.1339.</p>
      <p>Remarkably, when concatenating the three models (CNN,
VGG19, and ResNet50), an outstanding testing accuracy
of 99.34% is achieved, accompanied by a test loss of 0.17.</p>
<p>A detailed comparison of test accuracy and loss among the different models is presented in Table 1.</p>
<p>The proposed model's accuracy is also evaluated in comparison with other developed models designed for brain tumor classification. These models are assessed across three types of classification: the binary classification into 02 classes (normal or abnormal brain), the classification into 03 classes encompassing the three types of brain tumors (Meningioma, Glioma, Pituitary), and the 04-class classification used in our studies, which includes the three tumor types along with a class denoted as "no Tumor". Various datasets are employed for this comparative analysis. Table 2 presents the accuracy achieved by each model, with accuracies ranging from 82% to 98.78%. Notably, all these values fall below the accuracy attained by our model, which stands at 99.34%.</p>
<table-wrap id="tab2">
        <label>Table 2</label>
        <caption><p>Accuracy achieved by each model in the comparative analysis.</p></caption>
        <table>
          <thead>
            <tr><th>Dataset</th><th>Tumor Classes</th><th>Technique Used</th><th>Accuracy</th></tr>
          </thead>
          <tbody>
            <tr><td>Br35H Dataset</td><td>03 classes</td><td>Closed-form metric learning algorithm (CFML)</td><td>94.68%</td></tr>
            <tr><td>T1-weighted contrast-enhanced MRI images</td><td>04 classes</td><td>EfficientNet CNN</td><td>98.78%</td></tr>
            <tr><td>T1-weighted Contrast-Enhanced (CE)</td><td>03 classes</td><td>16-layer CNN</td><td>96.13%</td></tr>
            <tr><td>Multiple datasets (Figshare, Brainweb, and Radiopaedia)</td><td>02 classes</td><td>LIM model</td><td>82%</td></tr>
            <tr><td>T1-weighted dataset</td><td>04 classes</td><td>CNN + Transfer Learning (VGG19, ResNet50)</td><td>99.34%</td></tr>
          </tbody>
        </table>
      </table-wrap>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Putra</surname>
          </string-name>
          ,
          <article-title>Analisis citra otak pada color-task dan word-task dalam stroop task dengan menggunakan electroencephalography (eeg), Universitas Gadjah Mada</article-title>
          , Yogyakarta (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Pattanaik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Anitha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rathore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Biswas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Sethy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Behera</surname>
          </string-name>
          ,
          <article-title>Brain tumor magnetic resonance images classification based machine learning paradigms</article-title>
          ,
          <source>Contemporary Oncology/Współczesna Onkologia</source>
          <volume>27</volume>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Seetha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Raja</surname>
          </string-name>
          ,
          <article-title>Brain tumor classification using convolutional neural networks</article-title>
          ,
          <source>Biomedical &amp; Pharmacology Journal</source>
          <volume>11</volume>
          (
          <year>2018</year>
          )
          <fpage>1457</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] <string-name><given-names>N.</given-names> <surname>Kumari</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Saxena</surname></string-name>, <article-title>Review of brain tumor segmentation and classification</article-title>, in: <source>2018 International Conference on Current Trends towards Converging Technologies (ICCTCT)</source>, IEEE, <year>2018</year>, pp. <fpage>1</fpage>-<lpage>6</lpage>.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] <string-name><given-names>K. N.</given-names> <surname>Qodri</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Soesanti</surname></string-name>, <string-name><given-names>H. A.</given-names> <surname>Nugroho</surname></string-name>, <article-title>Image analysis for MRI-based brain tumor classification using deep learning</article-title>, <source>IJITEE (International Journal of Information Technology and Electrical Engineering)</source> <volume>5</volume> (<year>2021</year>) <fpage>21</fpage>-<lpage>28</lpage>.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] <string-name><given-names>N. F.</given-names> <surname>Aurna</surname></string-name>, <string-name><given-names>M. A.</given-names> <surname>Yousuf</surname></string-name>, <string-name><given-names>K. A.</given-names> <surname>Taher</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Azad</surname></string-name>, <string-name><given-names>M. A.</given-names> <surname>Moni</surname></string-name>, <article-title>A classification of MRI brain tumor based on two stage feature level ensemble of deep CNN models</article-title>, <source>Computers in Biology and Medicine</source> <volume>146</volume> (<year>2022</year>) <fpage>105539</fpage>.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] <string-name><given-names>M.</given-names> <surname>Mahmud</surname></string-name>, <string-name><given-names>M. S.</given-names> <surname>Kaiser</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Hussain</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Vassanelli</surname></string-name>, <article-title>Applications of deep learning and reinforcement learning to biological data</article-title>, <source>IEEE Transactions on Neural Networks and Learning Systems</source> <volume>29</volume> (<year>2018</year>) <fpage>2063</fpage>-<lpage>2079</lpage>.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] <string-name><given-names>A.</given-names> <surname>Raza</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Ayub</surname></string-name>, <string-name><given-names>J. A.</given-names> <surname>Khan</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Ahmad</surname></string-name>, <string-name><given-names>A. S.</given-names> <surname>Salama</surname></string-name>, <string-name><given-names>Y. I.</given-names> <surname>Daradkeh</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Javeed</surname></string-name>, <string-name><given-names>A. Ur</given-names> <surname>Rehman</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Hamam</surname></string-name>, <article-title>A hybrid deep learning-based approach for brain tumor classification</article-title>, <source>Electronics</source> <volume>11</volume> (<year>2022</year>) <fpage>1146</fpage>.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] <string-name><given-names>S. J.</given-names> <surname>Pan</surname></string-name>, <string-name><given-names>Q.</given-names> <surname>Yang</surname></string-name>, <article-title>A survey on transfer learning</article-title>, <source>IEEE Transactions on Knowledge and Data Engineering</source> <volume>22</volume> (<year>2009</year>) <fpage>1345</fpage>-<lpage>1359</lpage>.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] <string-name><given-names>I.</given-names> <surname>Wahlang</surname></string-name>, <string-name><given-names>A. K.</given-names> <surname>Maji</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Saha</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Chakrabarti</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Jasinski</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Leonowicz</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Jasinska</surname></string-name>, <article-title>Brain magnetic resonance imaging classification using deep learning architectures with gender and age</article-title>, <source>Sensors</source> <volume>22</volume> (<year>2022</year>) <fpage>1766</fpage>.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] <string-name><given-names>F. J.</given-names> <surname>Díaz-Pernas</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Martínez-Zarzuela</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Antón-Rodríguez</surname></string-name>, <string-name><given-names>D.</given-names> <surname>González-Ortega</surname></string-name>, <article-title>A deep learning approach for brain tumor classification and segmentation using a multiscale convolutional neural network</article-title>, <source>Healthcare</source> <volume>9</volume> (<year>2021</year>) <fpage>153</fpage>.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] <string-name><given-names>S.</given-names> <surname>Gull</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Akbar</surname></string-name>, <string-name><given-names>I. A.</given-names> <surname>Shoukat</surname></string-name>, <article-title>A deep transfer learning approach for automated detection of brain tumor through magnetic resonance imaging</article-title>, in: <source>2021 International Conference on Innovative Computing (ICIC)</source>, IEEE, <year>2021</year>, pp. <fpage>1</fpage>-<lpage>6</lpage>.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] <string-name><given-names>M.</given-names> <surname>Aamir</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Rahman</surname></string-name>, <string-name><given-names>Z. A.</given-names> <surname>Dayo</surname></string-name>, <string-name><given-names>W. A.</given-names> <surname>Abro</surname></string-name>, <string-name><given-names>M. I.</given-names> <surname>Uddin</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Khan</surname></string-name>, <string-name><given-names>A. S.</given-names> <surname>Imran</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Ali</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Ishfaq</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Guan</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Hu</surname></string-name>, <article-title>A deep learning approach for brain tumor classification using MRI images</article-title>, <source>Computers and Electrical Engineering</source> <volume>101</volume> (<year>2022</year>) <fpage>108105</fpage>. doi:https://doi.org/10.1016/j.compeleceng.2022.108105.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] <string-name><given-names>M. A.</given-names> <surname>Gómez-Guzmán</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Jiménez-Beristaín</surname></string-name>, <string-name><given-names>E. E.</given-names> <surname>García-Guerrero</surname></string-name>, <string-name><given-names>O. R.</given-names> <surname>López-Bonilla</surname></string-name>, <string-name><given-names>U. J.</given-names> <surname>Tamayo-Perez</surname></string-name>, <string-name><given-names>J. J.</given-names> <surname>Esqueda-Elizondo</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Palomino-Vizcaino</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Inzunza-González</surname></string-name>, <article-title>Classifying brain tumors on magnetic resonance imaging by using convolutional neural networks</article-title>, <source>Electronics</source> <volume>12</volume> (<year>2023</year>) <fpage>955</fpage>.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] <string-name><given-names>N.</given-names> <surname>Remzan</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Tahiry</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Farchi</surname></string-name>, <article-title>Brain tumor classification in magnetic resonance imaging images using convolutional neural network</article-title>, <source>International Journal of Electrical &amp; Computer Engineering (2088-8708)</source> <volume>12</volume> (<year>2022</year>).</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] <string-name><given-names>S.</given-names> <surname>Saeedi</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Rezayi</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Keshavarz</surname></string-name>, <string-name><given-names>S. R.</given-names> <surname>Niakan Kalhori</surname></string-name>, <article-title>MRI-based brain tumor detection using convolutional deep learning methods and chosen machine learning techniques</article-title>, <source>BMC Medical Informatics and Decision Making</source> <volume>23</volume> (<year>2023</year>) <fpage>1</fpage>-<lpage>17</lpage>.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] <string-name><given-names>T.</given-names> <surname>Hossain</surname></string-name>, <string-name><given-names>F. S.</given-names> <surname>Shishir</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Ashraf</surname></string-name>, <string-name><given-names>M. A.</given-names> <surname>Al Nasim</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Muhammad Shah</surname></string-name>, <article-title>Brain tumor detection using convolutional neural network</article-title>, <source>IEEE</source> (<year>2019</year>) <fpage>1</fpage>-<lpage>6</lpage>.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] <string-name><given-names>K.</given-names> <surname>Simonyan</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Zisserman</surname></string-name>, <article-title>Very deep convolutional networks for large-scale image recognition</article-title>, <source>arXiv preprint arXiv:1409.1556</source> (<year>2014</year>).</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] <string-name><given-names>H. H.</given-names> <surname>Sultan</surname></string-name>, <string-name><given-names>N. M.</given-names> <surname>Salem</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Al-Atabany</surname></string-name>, <article-title>Multi-classification of brain tumor images using deep neural network</article-title>, <source>IEEE Access</source> <volume>7</volume> (<year>2019</year>) <fpage>69215</fpage>-<lpage>69225</lpage>. doi:10.1109/ACCESS.2019.2919122.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] <string-name><given-names>K.</given-names> <surname>He</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Ren</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Sun</surname></string-name>, <article-title>Deep residual learning for image recognition</article-title>, in: <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>, <year>2016</year>, pp. <fpage>770</fpage>-<lpage>778</lpage>.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] <string-name><given-names>D. R.</given-names> <surname>Nayak</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Padhy</surname></string-name>, <string-name><given-names>P. K.</given-names> <surname>Mallick</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Zymbler</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Kumar</surname></string-name>, <article-title>Brain tumor classification using dense efficient-net</article-title>, <source>Axioms</source> <volume>11</volume> (<year>2022</year>) <fpage>34</fpage>.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] <string-name><given-names>J.</given-names> <surname>Cheng</surname></string-name>, <article-title>Brain tumor dataset</article-title>, <source>figshare. Dataset</source>, <year>2017</year>. https://doi.org/10.6084/m9.figshare.1512427.v5.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] <string-name><given-names>M. S. I.</given-names> <surname>Khan</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Rahman</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Debnath</surname></string-name>, <string-name><given-names>M. R.</given-names> <surname>Karim</surname></string-name>, <string-name><given-names>M. K.</given-names> <surname>Nasir</surname></string-name>, <string-name><given-names>S. S.</given-names> <surname>Band</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Mosavi</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Dehzangi</surname></string-name>, <article-title>Accurate brain tumor detection using deep convolutional neural network</article-title>, <source>Computational and Structural Biotechnology Journal</source> <volume>20</volume> (<year>2022</year>) <fpage>4733</fpage>-<lpage>4745</lpage>. doi:https://doi.org/10.1016/j.csbj.2022.08.039.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] <string-name><given-names>J.</given-names> <surname>Cheng</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Yang</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Huang</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Huang</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Jiang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Zhou</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Yang</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Zhao</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Feng</surname></string-name>, <string-name><given-names>Q.</given-names> <surname>Feng</surname></string-name>, et al., <article-title>Retrieval of brain tumors by adaptive spatial pooling and fisher vector representation</article-title>, <source>PloS one</source> <volume>11</volume> (<year>2016</year>) <fpage>e0157112</fpage>.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>