<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Improving Breast Cancer Detection with Pre-trained Models: A CADe and CADx System</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ali Zakaria LEBANI</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Medjeded MERATI</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Said MAHMOUDI</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fatma CHAHBAR</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Science Department of IBN Khaldoun University</institution>
          ,
          <addr-line>Tiaret</addr-line>
          ,
          <country country="DZ">Algeria</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
<institution>Computer Science Department of Mons University</institution>
          ,
          <addr-line>Mons, Belgium</addr-line>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>LIM Research Laboratory of IBN Khaldoun University</institution>
          ,
          <addr-line>Tiaret</addr-line>
          ,
          <country country="DZ">Algeria</country>
        </aff>
      </contrib-group>
      <abstract>
<p>In this research study, we introduce a novel system comprising two primary modules: (1) Computer-Aided Detection (CADe) and (2) Computer-Aided Diagnosis (CADx). The CADe module is dedicated to the detection and segmentation of potentially anomalous regions within mammograms through the utilization of a pre-trained YOLOv5-seg model. This approach facilitates the early detection of breast cancer. Subsequently, the CADx module leverages the identified and segmented regions for further analysis, employing the VGG16 classification model to discern the benign or malignant nature of these regions. To gauge the efficacy of our proposed methodology, extensive experiments were conducted on a substantial dataset procured from the Digital Database for Screening Mammography (DDSM). The CADe system yielded robust results in terms of detection and segmentation assessments, with a mean average precision (mAP) of 88%, a precision rate of 93.93%, and a recall rate of 98.02%. Furthermore, the CADx system demonstrated an accuracy rate of 97%.</p>
      </abstract>
      <kwd-group>
<kwd>Breast cancer</kwd>
        <kwd>Computer-aided diagnosis</kwd>
        <kwd>Deep learning</kwd>
<kwd>CNN</kwd>
        <kwd>Mammography</kwd>
        <kwd>DDSM</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>Cancer, characterized by uncontrolled cell growth and invasive tendencies into neighboring tissues, constitutes a significant global public health challenge. It ranks as a leading cause of mortality in developed nations, contributing to a substantial number of annual deaths. Among the diverse spectrum of malignancies, breast cancer stands out as a notably prevalent and extensively researched ailment. While it encompasses both benign and malignant forms, it is the latter that poses the most severe threat due to its potential for metastasis and spread to distant organs. Age emerges as a prominent risk factor for breast cancer, with its influence growing as individuals age. Notably, in 2020, cancer claimed the lives of 32,802 individuals in Algeria alone [<xref ref-type="bibr" rid="ref1">1</xref>].</p>
      <p>When it comes to existing diagnostic methods, the early detection of breast cancer plays a crucial role in improving patient outcomes. The origins of breast cancer result from genetic mutations in only around 10% of cases, while the majority occur spontaneously [<xref ref-type="bibr" rid="ref2">2</xref>]. Common clinical signs include breast lumps, skin dimpling, and nipple retraction. Healthcare professionals employ various diagnostic tools, including mammography, magnetic resonance imaging (MRI), and biopsy, to diagnose and evaluate breast cancer [<xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>]. Mammography is widely recognized as the gold standard due to its precision and effectiveness.</p>
      <p>In addition to tumor detection, accurate diagnosis and appropriate treatment selection are essential aspects of breast cancer management. Computer-Aided Diagnosis (CAD) systems have become indispensable tools in medical imaging, assisting healthcare practitioners in making informed decisions quickly. CAD incorporates technologies like artificial intelligence (AI), computer vision, and medical image processing to analyze and interpret medical images, aiding in the identification and characterization of abnormalities. Within the CAD domain, two primary categories have emerged [<xref ref-type="bibr" rid="ref4">4</xref>]: CADe, which helps radiologists identify breast cancer indicators on mammograms, and CADx, which aids in distinguishing between different tumor types, encompassing the broader field of computer-aided diagnostics.</p>
      <p>Machine learning algorithms have made significant progress in automating the detection, classification, and characterization of breast anomalies in medical images. Traditional machine learning algorithms often require manual feature extraction for these tasks, which involves determining tumor attributes such as size and morphology. However, this approach necessitates domain expertise and prior knowledge. In contrast, deep learning techniques have revolutionized image analysis, pattern recognition, and computer vision [<xref ref-type="bibr" rid="ref5">5</xref>]. The Convolutional Neural Network (CNN), a well-established deep learning paradigm, has gained recognition for its exceptional performance in image recognition tasks. This research leverages the capabilities of CNNs to develop and evaluate an innovative model for predicting and identifying breast cancer.</p>
      <p>6th International Hybrid Conference On Informatics And Applied Mathematics, December 6-7, 2023, Guelma, Algeria. * Corresponding author. † These authors contributed equally. alizakaria.lebani@univ-tiaret.dz (A. Z. LEBANI); medjeded.merat@univ-tiaret.dz (M. MERATI); said.mahmoudi@umons.ac.be (S. MAHMOUDI); fatma.chahbar@univ-tiaret.dz (F. CHAHBAR). © 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>The structure of this paper is organized as follows. Section 2 reviews existing research and prior work pertaining to the detection and treatment of breast cancer. Section 3 addresses the theoretical and conceptual models employed within this study. Section 4 centers on the datasets utilized in our research. Section 5 is dedicated to data preprocessing methods, exploring the techniques implemented to prepare the data for model training. Section 6 introduces our proposed methodology, outlining the algorithmic approach utilized and emphasizing the innovative facets of our breast cancer prediction and identification model. Finally, Section 7 presents our findings and a comprehensive analysis thereof.</p>
    </sec>
    <sec id="sec-1-1">
      <title>2. Related Work</title>
      <p>In recent years, significant advancements have been made in the field of breast cancer detection and diagnosis, thanks to notable progress in deep learning techniques. Many studies have explored innovative approaches and frameworks, all aimed at improving the accuracy and efficiency of CAD systems for breast lesion analysis. This section provides an overview of the most notable contributions in this area.</p>
      <p>Yousefi kamal P. [<xref ref-type="bibr" rid="ref6">6</xref>] presents a novel biphasic algorithm for the classification and segmentation of mammographic images. In the classification phase, the author uses a CNN to automatically extract image features, achieving an accuracy of 78% and an AUC of 69%. For tumor segmentation, the author applies the level-set segmentation method utilizing spatial fuzzy clustering (LS-SFC), which accurately delineates the tumor region within mammographic images. The combination of level-set segmentation with spatial fuzzy clustering enhances the quality of segmentation outcomes, ultimately presenting preprocessed images that unveil the precise tumor region within the image.</p>
      <p>M. A. Al-antari et al. [<xref ref-type="bibr" rid="ref7">7</xref>] introduce a comprehensive computer-aided diagnosis (CAD) system for breast lesion analysis, employing integrated deep learning techniques. This system combines You-Only-Look-Once (YOLO) for lesion detection, a full resolution convolutional network (FrCN) for segmentation, and three distinct deep learning models for classification. Notably, YOLO-based lesion detection achieves an accuracy of 97.27%, a Matthews correlation coefficient (MCC) of 93.93%, and an F1-score of 98.02%. Additionally, FrCN-based segmentation achieves 92.69% accuracy, 85.36% MCC, a Dice coefficient (F1-score) of 92.36%, and a Jaccard similarity coefficient of 85.81%. For classification, CNN, ResNet-50, and InceptionResNet-V2 models exhibit average accuracies of 88.74%, 92.56%, and 95.32%, respectively.</p>
      <p>A. Bal et al. [<xref ref-type="bibr" rid="ref8">8</xref>] propose an innovative deep learning framework for automated breast cancer diagnosis. The framework leverages YOLOv3 as a Region Proposal Network (RPN) to identify significant regions within cytology images. These identified regions are then classified using three distinct CNN classifiers (VGG16, ResNet-50, Inception-v3), resulting in impressive diagnostic accuracy. Notably, Inception-v3 outperforms the other classifiers. The study suggests enhancing the YOLOv3 network and diversifying the dataset to improve region detection. Noteworthy results include the VGG16 model achieving 96.6% accuracy, 99.6% precision, 94.4% recall, and 99.6% specificity. The ResNet-50 model reaches 98.8% accuracy, 0.985 precision, 99.4% recall, and 97.9% specificity. Finally, the Inception-v3 model attains 98.9% accuracy, a precision of 1, 98.2% recall, and perfect specificity.</p>
      <p>M. Z. Hanane et al. introduced a CAD system for breast cancer employing CNN models [9]. Their approach encompasses two pivotal stages: detection (CADe) and identification (CADx). By meticulously fine-tuning the VGG19 CNN model on the DDSM dataset for CADe and the IDC dataset for CADx, they accomplished remarkable results, attaining a striking accuracy of 99% for CADe and 91% for CADx.</p>
      <p>G. H. Aly et al. [10] introduce a breast cancer detection framework utilizing YOLO-V3 and YOLO-V4 models. The framework identifies masses in mammograms and categorizes them as benign or malignant via transfer learning. Notably, YOLO-V4 demonstrates superior detection accuracy, achieving a mean average precision (mAP) of 82.43% in comparison to YOLO-V3's 74.99% after 2 trials. The subsequent classification of masses involves ResNet and Inception V3 classifiers, with Inception V3 yielding more favorable outcomes: an accuracy of 95.00% compared to ResNet's 90.00%.</p>
      <p>Zeiser et al. [11] propose a CAD system that leverages deep learning and data augmentation methods to perform mammogram segmentation. They employ a U-Net model and the DDSM dataset, achieving impressive results, including an accuracy of 85.95%, a sensitivity of 92.32%, and a specificity of 80.47%.</p>
      <p>In [12], the authors created a CAD system utilizing CNNs, which effectively classified mammography mass lesions as benign or malignant with high accuracy. They improved model performance by using techniques such as transfer learning, fine-tuning, data augmentation, regularization, and dropout. Results indicated that integrating well-engineered deep learning CNNs through transfer learning improved breast cancer classification accuracy compared to other methods. The fine-tuning approach, focusing on the last two convolutional layers, yielded superior outcomes. The Breast Cancer Screening Framework, developed using the Inception v3 model on merged data, achieved remarkable accuracy in classifying mammography mass lesions. This framework even outperformed human assessment, achieving an impressive area under the curve (AUC) of 0.99. The developed framework accurately diagnosed images from various datasets, including the Merged Dataset (MD) at 98.94%, the Digital Database for Screening Mammography (DDSM) at 97.35%, the Full-field Digital Mammographic Database (INbreast) at 95.50%, and A Breast Cancer Digital Repository (BCDR) at 96.67% [12].</p>
      <p>J. Shi [13] utilizes a YOLO-based computer-aided diagnosis (CAD) system to address the challenges associated with chest cancer detection. Three key issues are discussed and analyzed within the CAD system implementation: the utilization of handcrafted features, the prevalent high false positive rate in clinical settings, and the complexity of detecting irregular nodules in spiral CT scans.</p>
      <p>These related works demonstrate the growing interest in utilizing deep learning methods, particularly CNNs, for breast cancer detection and classification. They provide valuable insights into the advancements and potential solutions in this field, contributing to the development of accurate and efficient CAD systems for breast cancer diagnosis.</p>
    </sec>
    <sec id="sec-1-2">
      <title>3. Models</title>
      <p>In this research, we propose to use two distinct models, namely VGG16 and YOLOv5-seg, to address the challenges in breast cancer detection and classification.</p>
      <sec id="sec-1-2-1">
        <title>3.1. VGG16</title>
        <p>VGG16, a widely recognized deep CNN architecture introduced by Simonyan and Zisserman [14], is celebrated for its depth and efficacy in image classification tasks. It comprises 16 weight layers organized into blocks, each housing multiple 3x3-sized filter layers. This choice of filter size enables the model to discern intricate image features effectively. VGG16 maintains uniformity by consistently using 3x3 filters, and it simplifies complexity with 2x2 pooling layers, reducing activation map sizes while preserving vital features. After convolution and pooling, the model flattens the features for classification using softmax, making it adept at object recognition. VGG16's extensive depth facilitates the capture of complex patterns, enhancing its classification performance across various computer vision applications, including medical image analysis. A visual representation elucidating the architectural configuration of VGG16 is provided in Figure 1 [14].</p>
        <p>Figure 1: Elucidating the Architectural Configuration of VGG16.</p>
      </sec>
      <sec id="sec-1-2-2">
        <title>3.2. YOLOv5-seg</title>
        <p>YOLOv5-seg is a specialized variant of the YOLO (You Only Look Once) [15] algorithm designed for precise instance segmentation tasks. It directly predicts pixel-level masks for objects in images by combining YOLOv5's object detection with ProtoNet, a fully convolutional network utilizing 2D convolutions with SiLU activation functions to generate mask prototypes. YOLOv5-seg surpasses YOLOv5 in channel outputs, with 351 channels, due to its additional 32 mask outputs. This integration effectively enables instance segmentation by leveraging ProtoNet instance features and object detection information, making it suitable for real-time applications, including medical imaging. Various weight variants (YOLOv5s-seg, YOLOv5m-seg, YOLOv5l-seg, and YOLOv5x-seg) are available to cater to different complexity and accuracy requirements [16]. A depiction outlining the architectural arrangement of YOLOv5-seg is provided in Figure 2 [15, 16].</p>
        <p>Figure 2: Elucidation of the Comprehensive Structure of YOLOv5l-seg.</p>
      </sec>
    </sec>
    <sec id="sec-1-3">
      <title>4. Datasets</title>
      <p>In our work, we primarily utilized the Digital Database for Screening Mammography (DDSM) [17], a widely recognized public database extensively employed in breast cancer diagnostic and preventive assistance systems. The DDSM dataset includes 2620 cases, each containing four LJPEG-format images representing two views of each breast (Left_CC, Left_MLO, Right_CC, Right_MLO) from different angles. In total, it comprises 10,480 images depicting various breast conditions, including normal, cancerous, and benign states. For this study, 800 images were randomly selected from the DDSM dataset, with an equal distribution of 400 normal and 400 abnormal (cancerous and benign) cases.</p>
    </sec>
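<p>The balanced sampling described above (400 normal and 400 abnormal images drawn at random from DDSM) can be sketched in pure Python. This is an illustrative sketch only: the function name, the placeholder case-ID lists, and the fixed seed are our own assumptions, not part of the paper's pipeline.</p>

```python
import random

def sample_balanced_subset(normal_ids, abnormal_ids, n_per_class=400, seed=0):
    # Draw an equal number of normal and abnormal cases at random,
    # mirroring the balanced 400/400 DDSM selection described above.
    rng = random.Random(seed)
    if n_per_class > len(normal_ids) or n_per_class > len(abnormal_ids):
        raise ValueError("not enough cases in one of the classes")
    normal = rng.sample(normal_ids, n_per_class)
    abnormal = rng.sample(abnormal_ids, n_per_class)
    # Label 0 = normal, 1 = abnormal (cancerous or benign mass present).
    dataset = [(case_id, 0) for case_id in normal]
    dataset += [(case_id, 1) for case_id in abnormal]
    rng.shuffle(dataset)
    return dataset
```

<p>With real DDSM case identifiers in place of the placeholder lists, the same call reproduces an 800-image balanced subset.</p>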
    <sec id="sec-2">
      <title>5. Preprocessing</title>
      <p>The preprocessing of all mammograms is accomplished through the following sequential steps.</p>
      <sec id="sec-2-1">
        <title>5.1. Normalization</title>
        <p>To optimize the utilization of mammographic images in our CNN model, a normalization process is employed. In this process, all images are resized to a standardized dimension of 200×200 pixels [17, 18]. This resizing step ensures uniformity in the input image size, allowing for consistent processing and analysis by the CNN model.</p>
        <p>The choice of 200×200 pixels as the target dimension is based on considerations of both computational feasibility and the preservation of important image information. This dimension strikes a balance between capturing significant details within the breast tissue and maintaining a manageable computational workload.</p>
      </sec>
      <sec id="sec-2-2">
        <title>5.2. Data Partitioning</title>
        <p>Data splitting in deep learning involves partitioning the dataset into three subsets: training (70%), validation (10%), and test (20%). The training set is used for model training, the test set assesses performance, and the validation set fine-tunes hyperparameters and guides model optimization.</p>
      </sec>
    </sec>
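<p>The 70/10/20 partitioning described above can be sketched in pure Python. The function name and fixed seed are illustrative choices of ours, not details from the paper:</p>

```python
import random

def split_dataset(items, train_frac=0.70, val_frac=0.10, seed=0):
    # Shuffle, then partition into training / validation / test subsets
    # following the 70% / 10% / 20% protocol described above.
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n = len(items)
    n_train = int(round(n * train_frac))
    n_val = int(round(n * val_frac))
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]  # remaining ~20% held out for testing
    return train, val, test
```

<p>For the 800 selected images, this yields 560 training, 80 validation, and 160 test images.</p>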
    <sec id="sec-3">
      <title>6. Proposed Method</title>
      <p>We propose an innovative method for the classification and segmentation of medical images. This method incorporates state-of-the-art techniques, including the CNN-based YOLOv5-seg for precise region-of-interest segmentation and VGG16 for the final classification of results. This approach is designed to enhance the accuracy and efficiency of the Computer-Aided Diagnosis (CAD) system in the medical domain. In the following sections, we delve into each component of our method and explain how they are integrated to achieve an overall high-performing solution. Figure 3 illustrates a comprehensive depiction of our methodology.</p>
      <sec id="sec-3-1">
        <title>6.1. CADe (Computer-Aided Detection)</title>
        <p>To achieve an optimal configuration of the YOLOv5-seg model for our breast cancer detection task, we conducted a series of trials and tests with various versions of the model. After thorough evaluation, we selected the YOLOv5l-seg variant, which exhibited the best performance for our specific database, and we made some changes to it.</p>
        <sec id="sec-3-1-1">
          <title>6.1.1. Addition of convolutional layers</title>
          <p>Given the relatively modest size of the images (200×200 pixels), we introduced three additional convolutional layers into the detection head of YOLOv5-seg. This strategic augmentation was undertaken to extract finer-grained information. The specifications of these added layers are depicted in Table 1.</p>
          <table-wrap id="tbl2">
            <label>Table 2</label>
            <caption><p>Evolution of Convolutional Layer Characteristics through Filter Modifications.</p></caption>
            <table>
              <thead>
                <tr><th>Layers</th><th>CONV'1</th><th>CONV'2</th><th>CONV'3</th></tr>
              </thead>
              <tbody>
                <tr><td>Before</td><td>125</td><td>256</td><td>512</td></tr>
                <tr><td>After</td><td>256</td><td>512</td><td>1024</td></tr>
              </tbody>
            </table>
          </table-wrap>
          <p>These adaptations aim to broaden the receptive field of the filters and enhance the detection head's capability to capture salient features across different spatial scales. The complete modifications are depicted in Figure 4.</p>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>6.2. CADx (Computer-Aided Diagnosis)</title>
        <p>[…] facilitate the transfer of information extracted by the convolutional layers to the fully connected layers. Finally, the development of a new classifier represented the ultimate step. Situated above the existing convolutional layers, this classifier was composed of three specific layers. The first layer was a fully connected layer endowed with 256 neurons and activated by a ReLU function. The second layer was also fully connected and comprised 128 neurons activated by ReLU. The final output layer featured a solitary neuron, its activation governed by a sigmoid function for binary classification. Consequently, this classifier yields output probabilities through the sigmoid activation function: a value proximate to zero indicates a diminished probability of malignancy, while a value approaching one signifies a substantial probability of malignancy. The new architecture is presented in Figure 5.</p>
      </sec>
    </sec>
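<p>As a rough illustration of the classifier head described above (a 256-neuron ReLU fully connected layer, a 128-neuron ReLU layer, and a single sigmoid output neuron), the following NumPy sketch runs one forward pass. The weights are random placeholders and the feature dimension is an assumed value, so this is a shape-level sketch of the architecture, not the trained VGG16-based model:</p>

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cadx_head(features, params):
    # Three-layer classifier head described above:
    # Dense(256, ReLU) -> Dense(128, ReLU) -> Dense(1, sigmoid).
    w1, b1, w2, b2, w3, b3 = params
    h1 = relu(features @ w1 + b1)
    h2 = relu(h1 @ w2 + b2)
    # Output near 0: low probability of malignancy; near 1: high.
    return sigmoid(h2 @ w3 + b3)

def init_params(n_features, seed=0):
    # Random placeholder weights (NOT trained values).
    rng = np.random.default_rng(seed)
    shapes = [(n_features, 256), (256,), (256, 128), (128,), (128, 1), (1,)]
    return [rng.normal(0.0, 0.05, s) for s in shapes]
```

<p>Because the last activation is a sigmoid, every output lies strictly between 0 and 1 and can be read directly as a malignancy probability.</p>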
    <sec id="sec-4">
      <title>7. Findings and Analysis</title>
      <sec id="sec-4-1">
        <title>7.1. Assessment Criteria</title>
        <p>In this section, we discuss the evaluation metrics used to assess the performance of our proposed method, specifically for the YOLO model and the VGG16 model. We focus on key metrics such as mean Average Precision (mAP), precision, recall, and accuracy to provide a comprehensive evaluation of the models' performance.</p>
        <p>For the YOLO model, we evaluate performance using mAP, precision, and recall. These metrics assess the model's ability to accurately detect and classify objects in the given dataset. For the VGG16 model, we focus on accuracy as the primary evaluation metric. Accuracy provides a comprehensive measure of the model's correctness in classifying images into their respective categories.</p>
        <list list-type="bullet">
          <list-item><p>True Positives (TP): the number of positive instances correctly identified or classified as positive by a model or test.</p></list-item>
          <list-item><p>True Negatives (TN): the number of negative instances correctly identified or classified as negative by a model or test.</p></list-item>
          <list-item><p>False Positives (FP): the number of negative instances incorrectly identified or classified as positive by a model or test; that is, instances that are actually negative but were mistakenly classified as positive.</p></list-item>
          <list-item><p>False Negatives (FN): the number of positive instances incorrectly identified or classified as negative by a model or test; that is, instances that are actually positive but were mistakenly classified as negative.</p></list-item>
          <list-item><p>Accuracy: a common evaluation metric that measures the overall correctness of the predictions made by the model: Accuracy = (TP + TN) / (TP + TN + FP + FN). (1)</p></list-item>
          <list-item><p>Precision: measures the accuracy of positive predictions made by the model: Precision = TP / (TP + FP). (2)</p></list-item>
          <list-item><p>Recall: also known as sensitivity or true positive rate, measures the model's ability to correctly detect positive instances: Recall = TP / (TP + FN). (3)</p></list-item>
          <list-item><p>Mean Average Precision (mAP): the computer vision research community relies on mAP as a standard metric to assess the reliability of object detection models. Each average precision (AP) value is the area under the precision-recall curve over recall values ranging from 0 to 1, AP = ∫ p(r) dr, and mAP is the mean of the AP values over the N classes: mAP = (1/N) Σ AP_i. (4)</p></list-item>
        </list>
      </sec>
      <sec id="sec-4-2">
        <title>7.2. Outcome Summary</title>
        <p>In this section, we present the outcomes of our research study, which focuses on the combined use of CADe and CADx techniques. Our experimental setup involved utilizing a 10th generation Intel Core i7 processor coupled with an NVIDIA GeForce MX150 graphics card for training and evaluation purposes.</p>
        <sec id="sec-4-2-1">
          <title>7.2.1. CADe Results</title>
          <p>For CADe, we employed the YOLOv5-seg model, conducting extensive experiments to achieve improved performance. Over 100 iterations were conducted using various YOLOv5-seg models, each with different hyperparameters, aiming to enhance the results. The obtained results are presented in Figure 6 and Table 3.</p>
          <table-wrap id="tbl3">
            <label>Table 3</label>
            <caption><p>Performance Evaluation of YOLOv5l-seg Through Multiple Training Iterations: A Comparative Analysis. "Original" denotes the unmodified YOLOv5-seg; "modified" denotes our adapted YOLOv5-seg.</p></caption>
            <table>
              <thead>
                <tr><th>Epochs</th><th>Model</th><th>mAP</th></tr>
              </thead>
              <tbody>
                <tr><td>40</td><td>YOLOv5m-seg (original)</td><td>22%</td></tr>
                <tr><td>40</td><td>YOLOv5m-seg (modified)</td><td>26%</td></tr>
                <tr><td>100</td><td>YOLOv5l-seg (original)</td><td>49%</td></tr>
                <tr><td>100</td><td>YOLOv5l-seg (modified)</td><td>68%</td></tr>
                <tr><td>300</td><td>YOLOv5l-seg (original)</td><td>70%</td></tr>
                <tr><td>300</td><td>YOLOv5l-seg (modified)</td><td>88%</td></tr>
              </tbody>
            </table>
          </table-wrap>
        </sec>
        <sec id="sec-4-2-2">
          <title>7.2.2. CADx Results</title>
          <p>Moving on to CADx, we employed the VGG16 model and performed over 100 iterations for evaluation. After 100 epochs of training, we achieved an impressive precision of 97%. The obtained results are presented in Figure 7.</p>
          <p>Figure 7: Displaying Training and Validation Accuracies for VGG16.</p>
        </sec>
      </sec>
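<p>The evaluation metrics discussed above reduce to a few one-line formulas. The following is a generic sketch of those standard definitions, not the paper's evaluation script; the toy counts in the usage note are illustrative only:</p>

```python
def accuracy(tp, tn, fp, fn):
    # Eq. (1): overall correctness of the model's predictions.
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    # Eq. (2): correctness of the positive predictions.
    return tp / (tp + fp)

def recall(tp, fn):
    # Eq. (3): fraction of actual positives that were detected.
    return tp / (tp + fn)

def mean_average_precision(ap_per_class):
    # Eq. (4): mean of the per-class average-precision values.
    return sum(ap_per_class) / len(ap_per_class)
```

<p>For example, precision(9, 1) returns 0.9, recall(9, 3) returns 0.75, and mean_average_precision([0.5, 1.0]) returns 0.75.</p>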
    </sec>
    <sec id="sec-5">
      <title>8. Conclusion</title>
      <p>The principal objective of this research endeavor was to develop a Computer-Aided Diagnosis (CAD) system aimed at augmenting radiologists' diagnostic capabilities, thereby ensuring heightened accuracy in patient assessments. This CAD system was structured into two primary phases: Detection (CADe) and Identification (CADx). Recent years have seen a rapid ascent in the prominence of deep learning as a robust method for predictive analysis, particularly in the realm of image processing. Deep learning Convolutional Neural Network (CNN) models, renowned for their hierarchical architectures, have proven invaluable in extracting intricate features from input images. In the development of our CAD system, a meticulous selection process was employed to identify pre-trained CNN models that exhibited promising outcomes. Specifically, YOLOv5-seg was designated for CADe, while VGG16 emerged as the most suitable choice for CADx.</p>
      <p>While the strides made in developing an advanced Computer-Aided Diagnosis (CAD) system employing deep learning models such as YOLOv5-seg and VGG16 are noteworthy, it is imperative to acknowledge broader implications beyond performance metrics. One significant aspect deserving attention in deploying such systems at scale is the safeguarding of patient data security. Ensuring the integrity and confidentiality of medical information within the CAD system against potential breaches is crucial, aligning with stringent data protection regulations to uphold patient privacy.</p>
      <p>Furthermore, the real-world application of these systems necessitates consideration of latency issues, especially in time-sensitive scenarios like medical diagnosis. Minimizing latency between image input and diagnosis output is pivotal for improving clinical workflow efficiency and providing timely insights to healthcare professionals. Achieving low latency in real-time processing without compromising accuracy becomes essential to ensure the practical usability of these systems.</p>
      <p>Another critical aspect revolves around the management of extensive datasets in real-world scenarios. As the CAD system operates within clinical environments, seamless handling and storage of vast datasets become imperative. Scalability and efficient data management techniques are vital to accommodate the continuous influx of medical imaging data while preserving system performance and responsiveness.</p>
      <p>Addressing these challenges (data security, latency optimization, and effective management of extensive real-world datasets) becomes integral for the successful integration and sustainable utilization of CAD systems in clinical settings. While our research demonstrated impressive accuracy and precision, future advancements should focus not only on model performance but also on overcoming these practical hurdles to ensure the seamless and secure deployment of CAD systems in enhancing patient care and diagnosis.</p>
    </sec>
    <sec id="sec-6">
      <title>References</title>
      <p>[8] […] India, January 7–10, 2021, Proceedings 17, Springer, 2021, pp. 253–267.</p>
      <p>[9] M. Z. Hanane, M. Mejdeded, Utilization of pre-trained models of CNN in mammograms processing for the diagnosis of breast cancer, in: 2022 7th International Conference on Image and Signal Processing and their Applications (ISPA), IEEE, 2022, pp. 1–5.</p>
      <p>[10] G. H. Aly, M. A. E.-R. Marey, S. El-Sayed Amin, M. F. Tolba, YOLO v3 and YOLO v4 for masses detection in mammograms with ResNet and Inception for masses classification, in: Advanced Machine Learning Technologies and Applications: Proceedings of AMLTA 2021, Springer, 2021, pp. 145–153.</p>
      <p>[11] F. A. Zeiser, C. A. da Costa, T. Zonta, N. M. Marques, A. V. Roehe, M. Moreno, R. da Rosa Righi, Segmentation of masses on mammograms using data augmentation and deep learning, Journal of Digital Imaging 33 (2020) 858–868.</p>
      <p>[12] H. Chougrad, H. Zouaki, O. Alheyane, Deep convolutional neural networks for breast cancer screening, Computer Methods and Programs in Biomedicine 157 (2018) 19–30.</p>
      <p>[13] J. Shi, A technical comparison of YOLO-based chest cancer diagnosis methods, Highlights in Science, Engineering and Technology 41 (2023) 35–42.</p>
      <p>[14] K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556 (2014).</p>
      <p>[15] J. Redmon, S. Divvala, R. Girshick, A. Farhadi, You only look once: Unified, real-time object detection, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 779–788.</p>
      <p>[16] G. Jocher, A. Chaurasia, A. Stoken, J. Borovec, Y. Kwon, K. Michael, J. Fang, Z. Yifu, C. Wong, D. Montes, et al., ultralytics/yolov5: v7.0 - YOLOv5 SOTA realtime instance segmentation, Zenodo (2022).</p>
      <p>[17] G. V. Suganthi, J. Sutha, M. Parvathy, N. Muthamil Selvi, Genetic algorithm for feature selection in mammograms for breast masses classification, Computer Methods in Biomechanics and Biomedical Engineering: Imaging &amp; Visualization (2023) 1–12.</p>
      <p>[18] M. Heath, K. Bowyer, D. Kopans, P. Kegelmeyer Jr, R. Moore, K. Chang, S. Munishkumaran, Current status of the digital database for screening mammography, in: Digital Mammography: Nijmegen, 1998, Springer, 1998, pp. 457–460.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] Global Cancer Observatory,
          <article-title>Top 5 most frequent cancers excluding nonmelanoma skin cancer</article-title>
          ,
          <year>2020</year>
          . URL: https://gco.iarc.fr/today/data/factsheets/populations/12-algeria-fact-sheets.pdf .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <collab>American Cancer Society</collab>
          ,
          <source>About breast cancer</source>
          ,
          <source>The American Cancer Society medical and editorial content team</source>
          ,
          <year>2022</year>
          . URL: https://www.cancer.org/content/dam/CRC/PDF/Public/8577.00.pdf , accessed August 2022.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Karthik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. Srinivasa</given-names>
            <surname>Perumal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Chandra Mouli</surname>
          </string-name>
          ,
          <article-title>Breast cancer classification using deep neural networks</article-title>
          ,
          <source>Knowledge Computing and Its Applications: Knowledge Manipulation and Processing Techniques: Volume</source>
          <volume>1</volume>
          (
          <year>2018</year>
          )
          <fpage>227</fpage>
          -
          <lpage>241</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Nishikawa</surname>
          </string-name>
          ,
          <article-title>Current status and future directions of computer-aided diagnosis in mammography</article-title>
          ,
          <source>Computerized Medical Imaging and Graphics</source>
          <volume>31</volume>
          (
          <year>2007</year>
          )
          <fpage>224</fpage>
          -
          <lpage>235</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Bidgoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Iribarren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Molloi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Baldi</surname>
          </string-name>
          ,
          <article-title>Detecting cardiovascular disease from mammograms with deep learning</article-title>
          ,
          <source>IEEE Transactions on Medical Imaging</source>
          <volume>36</volume>
          (
          <year>2017</year>
          )
          <fpage>1172</fpage>
          -
          <lpage>1181</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>P.</given-names>
            <surname>Yousefikamal</surname>
          </string-name>
          ,
          <article-title>Breast tumor classification and segmentation using convolutional neural networks</article-title>
          , arXiv preprint arXiv:1905.04247 (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Al-Antari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Al-Masni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.-S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>Deep learning computer-aided diagnosis for breast lesion in digital mammogram</article-title>
          ,
          <source>Deep Learning in Medical Image Analysis: Challenges and Applications</source>
          (
          <year>2020</year>
          )
          <fpage>59</fpage>
          -
          <lpage>72</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Satapathy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <article-title>Automated diagnosis of breast cancer with ROI detection using YOLO and heuristics</article-title>
          ,
          <source>in: Distributed Computing and Internet Technology: 17th International Conference, ICDCIT</source>
          <year>2021</year>
          , Bhubaneswar,
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>