<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>November</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Transfer Learning based Framework of VGG16 to Detect Breast Cancer using Histopathological Images</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Zion G. Divya</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>B. K. Tripathy</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Computer Science Engineering and Information Systems</institution>
          ,
          <addr-line>VIT, Vellore-632014, TN</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>School of Computer Science and Engineering</institution>
          ,
          <addr-line>VIT, Vellore-632014, TN</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>2</volume>
      <fpage>8</fpage>
      <lpage>29</lpage>
      <abstract>
        <p>Breast cancer detection presents a prominent challenge for researchers and clinical experts, as it is one of the major public health issues and a leading cause of cancer-related deaths among women worldwide. Early diagnosis is crucial because it improves the chances of survival, but standard breast cancer techniques rely on operative, open procedures and time-consuming analyses. These limitations create a demand for accurate solutions built on efficient algorithms. This paper addresses these challenges with automated cancer detection: an ensemble deep learning approach that applies transfer learning to histopathological images in order to classify benign and malignant tissues. Transfer learning reuses the knowledge a model obtains on one task as the input to another task, thereby improving performance; here the VGG16 architecture, pretrained on ImageNet, is used as the pretrained model. An ensemble strategy is then applied by taking the average of the predicted probabilities, out of which the VGG16 model offers an overall best accuracy of 98.83%.</p>
      </abstract>
      <kwd-group>
        <kwd>CNN</kwd>
        <kwd>Deep Learning</kwd>
        <kwd>Transfer Learning</kwd>
        <kwd>VGG16</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Artificial neural networks (ANNs) are among the most advanced and progressive machine learning models in
data science. Their performance can be striking: even a model with a single hidden layer can
approximate a desired function to a useful level of precision. Deep learning, a concept inspired by the
human brain, allows machines to discover patterns and knowledge in data and to handle unseen data.
Machine learning and deep learning are worth considering when the traditional approach fails to solve
a problem. The traditional approach solves a specific problem using algorithms that are programmed
manually by a human, or using a set of predefined rules. In contrast, machine learning trains a model
on a dataset so that the model learns how to solve the problem on its own. Transfer learning, a branch
of machine learning, focuses on storing the knowledge gained by a model on one problem and applying it
to related problems; transfer learning can therefore train on a large dataset and then fine-tune on a
smaller one. Globally, millions of women are diagnosed with breast cancer every year, and the death
rate increases year by year. Women of every country and of any age can develop breast cancer. Common
risk factors include consumption of alcohol, consumption of tobacco, a family history of breast cancer,
increasing age, exposure to radiation, and hormonal issues. Breast cancer is seen mostly in women aged
40 years and above, and sometimes develops based on family history; however, the absence of cancer in
the family history does not imply a lower risk. Early detection is essential, as many people have no
symptoms while the cancer is in an early stage. In an advanced state of breast cancer the symptoms include:
• A noticeable mass or area of thickened tissue in the breast, typically painless.
• Alteration in the breast’s size, contour, or overall look.
• Skin changes such as dimpling, redness, or a texture resembling an orange peel.
• Modifications in the appearance of the nipple or the surrounding areola.
• Unusual discharge from the nipple, which may include blood.
Women with any symptom of an abnormal breast lump should have a medical checkup, even though the
presence of a lump inside the breast does not by itself indicate a problem.</p>
      <p>Not every lump inside the breast is malignant; however, even small breast lumps can be
malignant, so early preventive measures can reduce the chances of the cancer spreading to other parts
of the body. Cancerous tissue may affect other organs of the body and lead to further complications.
Normally, the cells in the human body grow, get old, and die in a regulated way. When cancer cells
spread to other parts of the body, healthy cells can be affected too, so the rest of the body does not
stay healthy. Some cancerous cells grow and spread quickly, whereas others grow slowly or spread to
other parts of the body suddenly. An abnormal growth of cells is called a tumor, which can be benign
or malignant (cancerous). Tumors that grow slowly are usually benign: they do not invade the tissues
around them and do not affect other parts of the body. Benign lumps are not cancer. Cancerous tumors,
in contrast, can develop rapidly and grow without regulation, invading nearby tissues and structures.
Cancer cells can move through the bloodstream or lymphatic system and establish new growths in other
parts of the body, and the symptoms caused by these tumors often differ based on their location.
Breast cancer tissue can be analyzed down to its lowest magnification level in microscopic images
known as histopathological images. The quality of histopathological images can be improved by applying
efficient preprocessing algorithms that perform color transformation, transitioning, and normalization.
Histopathological images play a crucial role in breast cancer datasets, as they are fundamental for
understanding, diagnosing, and researching breast cancer in terms of its morphological and biological
characteristics. Their inclusion fosters advancements in diagnosis, therapy, and the development of
personalized treatment approaches.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Literature Review</title>
      <p>
        The work related to breast cancer is discussed here. In paper [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] the authors used statistical methods
to analyze breast cancer in pathological images. The models they considered perform detection,
segmentation, and classification on pathological images, and the accuracy achieved by the deep learning
algorithms provided reliable grounds for recommending deep learning techniques for different
applications. In paper [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] the deep learning models ResNet50 and Transformer, together with Hover-Net, are
applied to breast cancer diagnosis, treatment, and prognosis prediction; the accuracy and
efficiency of these models in predicting breast cancer metastasis is progressively improving.
The review [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] explains the profound impact of artificial intelligence on breast cancer care, identifying breast
tumors and lymph nodes by processing large images. AI-based frameworks classify
breast tumors where traditional methods struggle, contributing enhanced accuracy,
efficiency, and standardization. The authors of [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] proposed a model that
distinguishes breast cancer samples into benign and malignant categories by applying different
algorithms. Algorithms such as K-nearest Neighbors (KNN), convolutional neural networks
(CNN), and the ResNet50V2 architecture have proven effective on a variety of image classification
tasks; applied to a breast cancer histopathology image dataset, the models performed well with an
accuracy of 95%. In paper [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] VGG16 is demonstrated on skin cancer detection, improving
image quality for better accuracy and suggesting future improvements
in the field. Paper [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] presents the classification of breast cancer histopathology images using
deep learning; the pre-trained models VGG16 and VGG19 are applied and prove effective in classifying
histopathology images of breast cancer. Accurate classification and unbiased prediction of breast lesions
is done in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], where a deep learning approach using CNN architectures such as MobileNetV2, VGG16, and
EfficientNetB7 is applied along with transfer learning; the models were also tested against
erroneous outputs on noisy images and variations in the input data. In paper [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] the DenTnet algorithm classifies breast cancer
using histopathological images, where transfer learning solves the problem of feature extraction,
and the proposed DenTnet method is compared with other deep learning methods in terms of detection
accuracy. The work in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] performs the classification of breast cancer samples by applying a transfer
learning technique based on the lightweight SqueezeNet architecture, a variant of CNN. Both Gradient
Color Activation Mapping (Grad-CAM) and an image coloring mechanism are used for fine-tuning,
and satisfactory results are achieved.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>Transfer learning in machine learning means that the knowledge abstracted by a machine from a
previous task is used to improve performance on another task. The knowledge acquired by an already
trained model is given as an input to another model when the problems are related. For example, if a
model is trained to classify whether an image contains a cat, then during training the model grasps
properties or features of cats such as fur patterns, whiskers, ear shapes, and eyes. If the pre-trained
model’s knowledge is then reused, which is what transfer learning refers to, to classify an image that
contains a dog, it may perform well, because dogs and cats share similar features such as fur, ear
shapes, and general body structure. A transfer learning model reuses the knowledge gained while being
trained for another task instead of starting from scratch: what has been achieved during the training
of one task improves, or is advantageous for, work on another task.</p>
      <p>Transfer learning is applied to breast cancer diagnosis in two ways, depending on the source of
the pre-training data. One method is cross-domain, where the model is pre-trained on natural images;
the second is cross-modal, where the model is pre-trained on medical images. Feature extraction and
fine-tuning are the two main techniques in transfer learning, shown in Figure 1. The feature
extraction technique extracts meaningful features from the pre-trained model based on the input data,
such as image data or text data, without affecting the existing model’s parameters. Only the last
layers of the pre-trained model are replaced and trained on the new dataset. Feature extraction works
through the following steps:
• A pre-trained model such as ResNet, VGG, or BERT is taken, which has already acquired general
features from a large dataset (e.g., ImageNet for images, or a large text corpus for language models).
• The model’s final classification layers are removed, as these are tied to the original task.
• The final classification layers are replaced with new layers tailored to the target task.
• Only the new layers are trained on the smaller target dataset, while the rest of the model remains
frozen.</p>
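The steps above can be sketched in plain NumPy. This is a minimal, hedged illustration, not the paper's code: a fixed random projection stands in for the frozen VGG16 backbone, and only a newly added logistic-regression head is trained on the extracted features.

```python
import numpy as np

# Hypothetical stand-in for a frozen pre-trained backbone: a fixed random
# projection whose weights are never updated, mirroring the frozen VGG16 base.
rng = np.random.default_rng(0)
W_frozen = rng.normal(size=(20, 32))           # frozen backbone weights

def extract_features(x):
    # Frozen feature extractor: these parameters stay constant during training.
    return np.maximum(x @ W_frozen, 0.0)       # ReLU features

# Toy target task: 100 labelled samples.
X = rng.normal(size=(100, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

F = extract_features(X)                        # features computed once
w_head, b_head = np.zeros(32), 0.0             # new trainable head

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(200):                           # train ONLY the new head
    p = sigmoid(F @ w_head + b_head)
    w_head -= lr * F.T @ (p - y) / len(y)      # logistic-loss gradient
    b_head -= lr * np.mean(p - y)

acc = np.mean((sigmoid(F @ w_head + b_head) > 0.5) == y)
print(f"head-only training accuracy: {acc:.2f}")
```

The frozen weights `W_frozen` are never touched by the update loop, which is exactly the "rest of the model remains frozen" step in the list above.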
      <p>Fine-tuning involves unfreezing the pre-trained model, either partially or fully, and updating
its weights along with the newly added layers during training on the new dataset. The model thereby
adapts its features to the requirements of the target task. Fine-tuning works through the following
steps:
• A pre-trained model from the feature extraction step is taken.
• Some or all of the layers of the model are unfrozen.
• Training is performed on the new dataset, using a smaller learning rate for the pre-trained layers
so as to avoid drastically overwriting the learned features.
• Both the pre-trained layers and the new layers are updated to improve performance on the
target task.</p>
      <p>
        The pre-trained model [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] used for feature extraction is VGG16. The histopathological
image data and text data from accessible sources are passed as input before moving
on to the pre-processing stage. VGG16 is a pre-trained transfer learning model that is trained, tested,
and validated on the histopathology image dataset; by analyzing and processing the results of the
transfer learning, breast cancer can be detected. A convolutional neural network
(CNN) is a kind of artificial neural network that contains an input layer, an output
layer, and possibly many hidden layers. VGG16 is one type of CNN and is a proficient architecture
for image classification with transfer learning.
      </p>
      <p>The VGG16 architecture, shown in Figure 2, takes its name from its 16 weight layers: 13
convolutional layers and 3 dense layers, which together with the 5 max pooling layers add up to 21
layers in total. The 16 weight layers are the layers with learnable parameters. VGG16 uses a simple
design with a fixed filter size of 3×3 and a step size (stride) of 1, with padding chosen so that the
output size matches the input size after convolution. The max pooling layers use 2×2 filters with a
stride of 2 in order to reduce the size of the feature maps while keeping the important features.
Throughout the network the convolution and max pooling layers repeat the same pattern, making the
structure predictable and systematic. Each block of convolution layers has a fixed number of filters,
which detect patterns such as edges or textures.</p>
      <p>• Conv Block 1: 64 filters, used to find simple patterns like edges.
• Conv Block 2: 128 filters, used to detect more detailed patterns.
• Conv Block 3: 256 filters, used to find complex patterns.
• Conv Blocks 4 and 5: 512 filters each, used to find highly complex patterns.</p>
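The two building blocks described above, 3×3 convolution with stride 1 and "same" padding followed by 2×2 max pooling with stride 2, can be demonstrated on a single-channel input with NumPy (a simplified sketch; the real network applies many filters per layer):

```python
import numpy as np

def conv3x3_same(img, kernel):
    # Zero-pad by 1 pixel so a 3x3 filter at stride 1 preserves the size.
    padded = np.pad(img, 1)
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def maxpool2x2(img):
    # Maximum over non-overlapping 2x2 windows (stride 2) halves each side.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.arange(224.0 * 224.0).reshape(224, 224)   # VGG16-sized input plane
kernel = np.ones((3, 3)) / 9.0                     # simple averaging filter

feat = conv3x3_same(img, kernel)
pooled = maxpool2x2(feat)
print(feat.shape, pooled.shape)
```

The convolution leaves the 224×224 size unchanged, and the pooling step halves it to 112×112, matching the size progression of the first VGG16 block.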
      <p>
        After the convolution and pooling layers, the network has three fully connected (FC) layers.
The first two are large, with 4096 neurons each, and combine the extracted features. The last
FC layer outputs 1000 values, each representing one of the classes in the ImageNet
classification task, and a SoftMax layer at the end converts these 1000 outputs into probabilities for
classification. Transfer learning [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] is applied to histopathological image datasets to predict breast
cancer: VGG16 is used as the pretrained model for its feature extraction capabilities and its
pretraining on large image datasets like ImageNet, and it works effectively for image classification.
The system architecture shown in Figure 3, which uses transfer learning with VGG16 to predict breast
cancer from histopathological image datasets, proceeds through the following steps.
      </p>
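The SoftMax conversion mentioned above is simple to state exactly. The sketch below uses three example logits in place of the 1000 ImageNet class scores:

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating,
    # then normalize so the outputs sum to 1 and read as probabilities.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Three example logits standing in for the 1000 ImageNet class scores.
probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs, probs.sum())
```

SoftMax preserves the ordering of the logits, so the class with the highest raw score also receives the highest probability.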
      <p>Step-1: Histopathological Image Breast Cancer Dataset:</p>
      <p>The histopathological image breast cancer dataset is a collection of images of breast tissue
samples captured under a microscope to reveal their cellular structures. These datasets are typically
used for diagnosing, classifying, and understanding breast cancer through machine learning and deep
learning techniques. The images are collected from biopsy samples or surgical specimens of breast
tissue; they are needed to identify and classify breast cancer in a person, i.e., whether a tumor is
benign, malignant, or of a specific grade. The image resolution ranges from low to high.</p>
      <p>Step-2: Data Preparation:</p>
      <p>
        Dataset collection: the histopathological image dataset is collected with classes benign and
malignant. Data preprocessing is then performed: the input images are resized to the 224x224 pixels
expected by VGG16, and normalization scales the pixel values into [0, 1] or [-1, 1]. In data
augmentation, techniques like flipping, rotation, zooming, and brightness adjustments are applied to
the images according to the requirements of the model.
      </p>
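The normalization and two of the augmentations above can be sketched in plain NumPy (a framework-agnostic illustration; the resize itself would normally be done by an image library, so we start from an already-resized 224×224 RGB image):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic uint8-range image standing in for a resized histopathology patch.
img = rng.integers(0, 256, size=(224, 224, 3)).astype(np.float32)

scaled_01 = img / 255.0                 # normalize pixel values into [0, 1]
scaled_pm1 = img / 127.5 - 1.0          # normalize pixel values into [-1, 1]

# Two simple augmentations: a horizontal flip and a 90-degree rotation.
flipped = scaled_01[:, ::-1, :]
rotated = np.rot90(scaled_01, k=1, axes=(0, 1))

print(scaled_01.min() >= 0.0, scaled_01.max() <= 1.0, rotated.shape)
```

Both augmentations preserve the image shape and value range, so the augmented samples can be fed to the same network input as the originals.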
      <p>Step-3: Transfer Learning:</p>
      <p>A model trained on one task is reused or adapted for a different task: rather than training a
model from scratch, the knowledge acquired by a pretrained model helps in solving the new problem.
Feature extraction uses a pretrained model to extract general features (edges, corners, textures),
while the later layers extract task-specific features. New layers are added on top of the pretrained
model to match the target task’s output, and only the newly added layers are trained while the
pretrained layers remain frozen. Fine-tuning unfreezes some layers of the pretrained model and
retrains them along with the new layers for the target task, so the model adapts its features to the
new dataset.</p>
      <p>Step-4: Loading Pretrained Model VGG16:</p>
      <p>The model is pretrained on the ImageNet dataset, which ensures the necessary weights are
available. ImageNet contains millions of labelled images for image classification tasks (e.g.,
animals, vehicles, household items). VGG16, pretrained on ImageNet, provides a starting point for
other image-related tasks through transfer learning: generic image features like edges, textures, and
shapes are extracted, which can then be fine-tuned for histopathological images, which often require
high-resolution processing. Fine-tuning trains the pretrained model on the breast cancer dataset while
freezing some layers or retraining fully; freezing prevents the weights of the frozen convolutional
layers from being updated during training.</p>
      <sec id="sec-3-1">
        <title>Step-5: Adding Custom Classification Layers:</title>
        <p>To adapt the output of the pretrained VGG16 model to breast cancer histopathological images,
note that the model was trained to classify images from the ImageNet dataset, so its output layers are
not directly usable for a custom dataset labelled with benign and malignant classes. The original
classification layers are therefore removed and replaced with custom layers tailored to the dataset.
Global Average Pooling (GAP) converts the high-dimensional feature maps output by the VGG16
convolutional layers into a smaller representation; the smaller number of parameters reduces the risk
of overfitting. Dense layers with ReLU activation then make decisions on the features extracted by the
VGG16 model and produce a meaningful output: the ReLU (Rectified Linear Unit) activation lets the
model learn complex relationships specific to histopathological images of benign and malignant tumors.
Finally, a SoftMax or sigmoid activation is applied in the output layer to perform the classification:
sigmoid activation is used for binary classification (benign or malignant) with a single value between
0 and 1, while SoftMax activation is used for multi-class classification and outputs a probability for
each class.</p>
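The custom head described in this step can be sketched numerically. The shapes below (7×7×512 feature maps) are assumed from the VGG16 architecture, and the head weights are random placeholders rather than trained values:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical feature maps as they might leave VGG16's last conv block:
# spatial size 7x7 with 512 channels.
feature_maps = rng.normal(size=(7, 7, 512))

# Global Average Pooling: average each channel over its 7x7 spatial grid,
# collapsing 7*7*512 = 25088 values down to a 512-dimensional vector.
gap = feature_maps.mean(axis=(0, 1))

# A small dense head with ReLU, then a single sigmoid unit for the binary
# benign/malignant decision. Weights here are random placeholders.
W1, b1 = rng.normal(size=(512, 64)) * 0.05, np.zeros(64)
w2, b2 = rng.normal(size=64) * 0.05, 0.0

hidden = np.maximum(gap @ W1 + b1, 0.0)                  # ReLU dense layer
p_malignant = 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))  # sigmoid output

print(gap.shape, float(p_malignant))
```

GAP shrinks the representation by a factor of 49 before any dense weights are applied, which is where the reduction in parameter count and overfitting risk comes from.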
        <p>Step-6: Train the Model:</p>
        <p>Training is performed on the dataset after it is split into training, validation, and test
sets. The model is trained on the augmented dataset using the custom classification layers. During
training the model adjusts its weights based on the patterns it finds in the training set, which
comprises about 70-80% of the total dataset. The validation set, about 10-15% of the total dataset,
checks the model’s performance during training to guard against overfitting. The test set, the
remaining 10-15% of the dataset, is used to assess the model’s overall performance after training is
done.</p>
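A 70/15/15 split of this kind can be sketched on placeholder indices (the sample count of 1000 here is hypothetical, chosen only for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                  # hypothetical number of images
idx = rng.permutation(n)                  # shuffle before splitting

n_train = int(0.70 * n)                   # 70% training
n_val = int(0.15 * n)                     # 15% validation

train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]          # remaining 15% for testing

print(len(train_idx), len(val_idx), len(test_idx))
```

Shuffling before slicing keeps the three subsets disjoint while spreading both classes across them; in practice a stratified split would additionally preserve the benign/malignant ratio in each subset.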
        <p>Step-7: Initial Evaluation of the model:</p>
        <p>For an initial evaluation, the model is applied to the validation or test dataset, which was
not seen during training. This initial evaluation determines whether fine-tuning of the model is
necessary.</p>
        <p>Step-8: Fine-Tune the Model:</p>
        <p>After the initial training, the upper layers of the pretrained VGG16 model are unfrozen.
These layers are fine-tuned on the histopathological dataset with a smaller learning rate, so that the
learned features adapt to the specific patterns that distinguish benign from malignant tumors; the
reduced learning rate avoids destroying the pretrained weights during fine-tuning. Transfer learning
thus lets the model build on the existing knowledge gathered from ImageNet features to learn features
specific to histopathological images. During the initial training the VGG16 layers’ weights are not
updated and only the newly added fully connected layers are trained; the top layers are then unfrozen
so that their weights update during fine-tuning. The lower layers of the VGG16 model extract basic
features like edges and textures, and once the top layers are unfrozen the model can learn specific
features related to breast cancer images.</p>
        <p>Step-9: Evaluate the Model:</p>
        <p>A model is evaluated after training, when its performance is measured on new data from the
test set. The evaluation considers various metrics such as accuracy, precision, and recall, which give
different insights into the model’s performance on the imbalanced datasets common in medical image
classification tasks (e.g., predicting breast cancer). Accuracy reviews the overall correctness of the
model, based on both true positives and true negatives out of the total number of predictions.
Precision measures the correctness of positive predictions, i.e., how many of the predicted positive
instances (e.g., malignant) are true positives; it is especially important when false positives are
costly, as when a benign case is wrongly classified as malignant and leads to unnecessary treatment.
Recall, also called Sensitivity or the True Positive Rate, measures the ability to detect all actual
positive cases, i.e., how many of the actual positive instances (e.g., malignant cases) were correctly
identified by the model.</p>
        <p>[Figure 3: System Architecture.]</p>
        <p>[Table 1 aspects: dataset, approach, data size, dependency, model complexity, feature
extraction, overfitting risk, training time, preprocessing, scalability.]</p>
        <p>Evaluating the model with these metrics shows not only how accurate the model is but also how
it handles false positive and false negative errors, which can be critical in breast cancer detection.
The F1-score is the harmonic mean of precision and recall, weighted towards the smaller of the two
values; it is a balanced metric that combines the model’s ability to avoid false positives (precision)
and false negatives (recall), and it is useful when working with imbalanced datasets.</p>
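The four metrics can be computed directly from the prediction counts. The labels below are a toy example (1 = malignant, 0 = benign), not results from the paper's model:

```python
import numpy as np

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 1])  # ground-truth labels
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 1])  # model predictions

tp = np.sum((y_pred == 1) & (y_true == 1))   # malignant correctly flagged
fp = np.sum((y_pred == 1) & (y_true == 0))   # benign wrongly flagged
fn = np.sum((y_pred == 0) & (y_true == 1))   # malignant missed
tn = np.sum((y_pred == 0) & (y_true == 0))   # benign correctly cleared

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(accuracy, precision, recall, f1)
```

Here tp=4, fp=1, fn=1, tn=4, so all four metrics come out to 0.8; on an imbalanced dataset precision, recall, and F1 would diverge from accuracy, which is why they are reported separately.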
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Comparative Analysis</title>
      <p>A comparative analysis of the present work with our earlier works [13] [14] [15] [16] on
predicting breast cancer, focusing on key aspects such as dataset type, methodology, computational
cost, performance, interpretability, and scalability, is given in Table 1.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Results</title>
      <p>The analysis is performed on the Breast Cancer Histopathology Image dataset, which consists of
277,524 image patches of size 50x50 (198,738 IDC negative and 78,786 IDC positive) in PNG format. The
VGG16 model’s predictions are evaluated on a held-out test dataset, which provides an unbiased
assessment of the model’s prediction performance using the metrics precision, recall, and F1-score,
while guarding against overfitting.</p>
      <p>The precision metric, which shows the correctly identified positive cases out of all predicted
positives, ranges from 0.633 to 0.574, as shown in Figure 4.</p>
      <p>The recall metric, which shows the correctly identified positive cases out of all actual
positives, is somewhat stable, ranging from 0.747 to 0.811, as shown in Figure 5.</p>
      <p>The F1-score metric, the harmonic mean of precision and recall weighted towards the smaller
value, is somewhat stable, ranging from 0.672 to 0.664, as shown in Figure 6. The graph in Figure 7
indicates the model’s performance: training accuracy is excellent, and as the epochs increase the test
accuracy makes the model reliable for real-world histopathological image classification tasks. The
graph in Figure 8 highlights that the model fits the training data very well while keeping the test
loss low and consistent.</p>
      <p>Figure 9 plots performance metrics versus epochs, visualizing the training and validation
performance of the model across the epochs consumed during training, along with the precision, recall,
and F1-score on the training dataset. The line graphs compare training and validation metrics,
verifying whether there is a large gap between them. Loss vs. epochs describes how the training and
validation loss change over time; accuracy vs. epochs shows the model’s accuracy together with
precision, recall, and F1-score during training and validation. Overall, the model’s accuracy is good.
The model is trained using VGG16 on the training data, and its performance is justified against the
validation and test datasets after each epoch, as shown in Figure 10.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>This study presents a robust framework leveraging transfer learning with the VGG16 architecture
for breast cancer detection using histopathological images, addressing critical challenges in early
diagnosis. The pretrained VGG16 model, combined with an ensemble strategy to enhance classification
accuracy, successfully differentiates between benign and malignant tissues. The proposed approach
achieves an impressive accuracy of 98.83%, demonstrating its potential as an effective tool for
automated breast cancer detection. By reusing knowledge from pre-trained models, the framework not
only reduces computational overhead but also ensures high performance in a critical application area.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Future work</title>
      <p>Future work could explore integrating additional architectures, fine-tuning hyperparameters, and
extending the approach to multiclass classification tasks or other histopathological datasets to validate
its generalizability and scalability from pre-trained models.</p>
    </sec>
    <sec id="sec-8">
      <title>8. Acknowledgments</title>
      <p>No funding sources or other support were involved in this work. The author thanks their guide
for assisting with the research and the preparation of this work.</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <sec id="sec-9-1">
        <title>Declaration</title>
        <p>The author(s) have not employed any Generative AI tools.</p>
        <p>[13] G. D. Zion, B. Tripathy, Pattern prediction on uncertain big datasets using combined LightGBM and
LSTM model, International Journal of Advances in Soft Computing &amp; Its Applications 15 (2023).
[14] G. D. Zion, B. Tripathy, Amalgamation of GAN and ResNet methods in accurate detection of breast
cancer with histopathological images, International Journal of Advances in Soft Computing &amp; Its
Applications 16 (2024).
[15] S. Jain, U. Singhania, B. Tripathy, E. A. Nasr, M. K. Aboudaif, A. K. Kamrani, Deep learning-based
transfer learning for classification of skin cancer, Sensors 21 (2021) 8142-8152.
[16] V. Bhattacharya, B. Tripathy, Advancing gender, age and ethnicity with YOLOv5 and transfer
learning, in: International Conference on Information Systems and Management Science, Springer,
2023, pp. 287-296.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Qu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Cui</surname>
          </string-name>
          ,
          <article-title>Application of deep learning in histopathology images of breast cancer: a review</article-title>
          ,
          <source>Micromachines</source>
          <volume>13</volume>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>30</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ye</surname>
          </string-name>
          ,
          <article-title>Deep learning applications in breast cancer histopathological imaging: diagnosis, treatment, and prognosis</article-title>
          ,
          <source>Breast Cancer Research</source>
          <volume>26</volume>
          (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Soliman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. V.</given-names>
            <surname>Parwani</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence's impact on breast cancer pathology: a literature review</article-title>
          ,
          <source>Diagnostic Pathology</source>
          <volume>19</volume>
          (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>K. B. L. A.</given-names>
            <surname>Johar</surname>
          </string-name>
          ,
          <string-name>
            <surname>A.</surname>
          </string-name>
          ,
          <article-title>Breast cancer using artificial intelligence</article-title>
          ,
          <source>International Journal for Research in Applied Science and Engineering Technology</source>
          <volume>12</volume>
          (
          <year>2024</year>
          )
          <fpage>878</fpage>
          -
          <lpage>882</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>K.</given-names>
            <surname>Djaroudib</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lorenz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. Belkacem</given-names>
            <surname>Bouzida</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Merzougui</surname>
          </string-name>
          ,
          <article-title>Skin cancer diagnosis using vgg16 and transfer learning: Analyzing the effects of data quality over quantity on model efficiency</article-title>
          ,
          <source>Applied Sciences</source>
          <volume>14</volume>
          (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Hameed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zahia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Garcia-Zapirain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Javier</given-names>
            <surname>Aguirre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Maria</given-names>
            <surname>Vanegas</surname>
          </string-name>
          ,
          <article-title>Breast cancer histopathology image classification using an ensemble of deep learning models</article-title>
          ,
          <source>Sensors</source>
          <volume>20</volume>
          (
          <year>2020</year>
          )
          <fpage>4373</fpage>
          -
          <lpage>4390</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>I.-N. A.</given-names>
            <surname>Nastase</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Moldovanu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. C.</given-names>
            <surname>Biswas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Moraru</surname>
          </string-name>
          ,
          <article-title>Role of inter-and extra-lesion tissue, transfer learning, and fine-tuning in the robust classification of breast lesions</article-title>
          ,
          <source>Scientific Reports</source>
          <volume>14</volume>
          (
          <year>2024</year>
          )
          <fpage>22754</fpage>
          -
          <lpage>22766</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Wakili</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. A.</given-names>
            <surname>Shehu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. H.</given-names>
            <surname>Sharif</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. H. U.</given-names>
            <surname>Sharif</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Umar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kusetogullari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. F.</given-names>
            <surname>Ince</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Uyaver</surname>
          </string-name>
          ,
          <article-title>Classification of breast cancer histopathological images using densenet and transfer learning</article-title>
          ,
          <source>Computational Intelligence and Neuroscience</source>
          <year>2022</year>
          (
          <year>2022</year>
          )
          <fpage>8904768</fpage>
          -
          <lpage>8904799</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Chaudhury</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Shabaz</surname>
          </string-name>
          ,
          <article-title>Deep transfer learning for idc breast cancer detection using fast ai technique and squeezenet architecture</article-title>
          ,
          <source>Math Biosci Eng</source>
          <volume>20</volume>
          (
          <year>2023</year>
          )
          <fpage>10404</fpage>
          -
          <lpage>10427</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bhattacharyya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Snasel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. E.</given-names>
            <surname>Hassanien</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Saha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Tripathy</surname>
          </string-name>
          ,
          <source>Deep Learning: Research and Applications</source>
          , volume
          <volume>7</volume>
          ,
          <publisher-name>Walter de Gruyter GmbH &amp; Co KG</publisher-name>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Adate</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Tripathy</surname>
          </string-name>
          ,
          <article-title>Understanding single image super-resolution techniques with generative adversarial networks</article-title>
          ,
          in:
          <source>Soft Computing for Problem Solving: SocProS 2017, Volume 1</source>
          , Springer,
          <year>2019</year>
          , pp.
          <fpage>833</fpage>
          -
          <lpage>840</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bhattacharyya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Bhaumik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mukherjee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>De</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. K.</given-names>
            <surname>Tripathy</surname>
          </string-name>
          ,
          <source>Machine Learning for Big Data Analysis</source>
          , volume
          <volume>1</volume>
          ,
          <publisher-name>Walter de Gruyter GmbH &amp; Co KG</publisher-name>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>G. D.</given-names>
            <surname>Zion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Tripathy</surname>
          </string-name>
          ,
          <article-title>Pattern prediction on uncertain big datasets using combined light gbm and lstm model</article-title>
          ,
          <source>International Journal of Advances in Soft Computing &amp; Its Applications</source>
          <volume>15</volume>
          (
          <year>2023</year>
          )
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>G. D.</given-names>
            <surname>Zion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Tripathy</surname>
          </string-name>
          ,
          <article-title>Amalgamation of gan and resnet methods in accurate detection of breast cancer with histopathological images</article-title>
          ,
          <source>International Journal of Advances in Soft Computing &amp; Its Applications</source>
          <volume>16</volume>
          (
          <year>2024</year>
          )
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Singhania</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Tripathy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. A.</given-names>
            <surname>Nasr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Aboudaif</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Kamrani</surname>
          </string-name>
          ,
          <article-title>Deep learning-based transfer learning for classification of skin cancer</article-title>
          ,
          <source>Sensors</source>
          <volume>21</volume>
          (
          <year>2021</year>
          )
          <fpage>8142</fpage>
          -
          <lpage>8152</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>V.</given-names>
            <surname>Bhattacharya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Tripathy</surname>
          </string-name>
          ,
          <article-title>Advancing gender, age and ethnicity with yolov5 and transfer learning</article-title>
          , in:
          <source>International Conference on Information Systems and Management Science</source>
          , Springer,
          <year>2023</year>
          , pp.
          <fpage>287</fpage>
          -
          <lpage>296</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>