<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Detection of Atherosclerosis Using Deep Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Juliet Chebet Moso</string-name>
          <email>juliet-chebet.moso@univ-reims.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohamed Tahar Bennai</string-name>
          <email>m.bennai@univ-boumerdes.dz</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zahia Guessoum</string-name>
          <email>zahia.guessoum@univ-reims.fr</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stéphane Cormier</string-name>
          <email>stephane.cormier@univ-reims.fr</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jasminka Hasić Telalović</string-name>
          <email>jasminka.hasic@ssst.edu.ba</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, Dedan Kimathi University of Technology</institution>
          ,
          <addr-line>Private bag 10143, Nyeri</addr-line>
          ,
          <country country="KE">Kenya</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>UR 4474, Université de Reims Champagne-Ardenne</institution>
          ,
          <addr-line>51100, Reims</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University Sarajevo School of Science and Technology</institution>
          ,
          <addr-line>71000, Sarajevo</addr-line>
          ,
          <country country="BA">Bosnia and Herzegovina</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Atherosclerosis is one of the major causes of cardiovascular disease, requiring early detection and intervention. Deep learning techniques, particularly transfer learning, offer potential avenues for improving the diagnosis and management of atherosclerosis. In this paper, a transfer learning approach is proposed to enhance the detection of atherosclerotic plaque by adapting pre-trained Convolutional Neural Networks (CNNs). By fine-tuning these models on a dataset of medical images, the study aims to leverage learned representations to improve detection accuracy and efficiency. Additionally, data augmentation is used to enhance model robustness and address data scarcity and class imbalance issues. The findings from our experiments indicate that the ResNet-50 model outperformed the others in terms of Recall, at 1.0, followed by Inception-v3 at 0.941. In classification accuracy, ResNet-50 achieved 93%, followed by the Inception-v3 model at 86%. Similarly, in AUC-ROC performance, the ResNet-50 model attained the highest score of 0.99, with the Inception-v3 model following closely at 0.966. Our results demonstrate that transfer learning significantly improves the accuracy, sensitivity, and specificity of the detection of atherosclerotic plaque, showcasing its potential as a valuable tool for the detection of atherosclerosis.</p>
      </abstract>
      <kwd-group>
        <kwd>Deep learning</kwd>
        <kwd>Image classification</kwd>
        <kwd>Medical Imaging</kwd>
        <kwd>Transfer learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        Atherosclerosis is the most prevalent underlying cause of cardiovascular disease (CVD) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], a
worldwide health concern that affects the heart and circulatory system [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. It stands as the
leading cause of global mortality, responsible for 17.9 million deaths in 2019 alone, according to
the WHO [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. In European Society of Cardiology member countries, CVD prevails as the primary
cause of mortality, affecting more females than males, with ischemic heart disease constituting
a significant portion of fatalities [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. In cardiovascular medicine, coronary artery calcification
(CAC) serves as a crucial clinical marker of atherosclerosis progression, characterized
by the accumulation of calcium deposits in the coronary arteries.
      </p>
      <p>
        Detecting CAC is crucial for assessing coronary heart disease risk and guiding preventive
treatments. Tests like treadmill tests, radionuclide scans, CT scans, MRI scans, and coronary
angiography aid in CAC identification [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Early and precise detection enables timely
interventions, alleviating the burden of cardiovascular disease. Manual interpretation of medical imaging
for CAC diagnosis is labour-intensive and prone to inter-observer variability [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], necessitating
more efficient and reliable diagnostic methods. Over the past decade, medical research has
made noteworthy progress using Convolutional Neural Networks (CNNs) for image processing.
Several approaches have been proposed for detecting and classifying CAC from CT scans,
including paired CNN for CAC quantification and risk classification [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], Recurrent CNN for
plaque and stenosis classification [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], and 3D CNN architecture for atherosclerosis visualization
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        Deep learning holds promise for transforming CAC detection, automating procedures and
enhancing diagnostic accuracy. However, its integration into cardiac CT is an evolving process
due to the need for in-depth validation [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The scarcity of labelled medical data for training
these systems also poses a challenge, as models suffer from overfitting and increased complexity.
Optimizing hyperparameters such as filter size and learning rate adds further difficulty.
Generalization errors occur when processing images from varied modalities or dealing with unseen
pathological cases, underscoring the need for robust deep learning models in CAC detection
[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. A viable method for producing highly accurate classification models with little training
data is transfer learning [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. It leverages a network that has undergone comprehensive prior
training on a specific dataset and reconfigures its network components to meet the demands of
the new domain.
      </p>
      <p>This study utilizes CNNs to automatically classify CT scans for detecting the presence of
atherosclerosis. It highlights the effectiveness of transfer learning in leveraging existing
knowledge and resources to address the challenges associated with the detection of atherosclerotic
plaques in medical imaging. By harnessing the power of deep learning and transfer learning
techniques, we aim to contribute to the development of more efficient and accurate diagnostic
tools for cardiovascular diseases, ultimately leading to improved patient outcomes, better
detection performance, and a reduced cardiologist workload. Three pretrained models, ResNet-50,
Inception-v3, and VGG19, were implemented and compared against our proposed baseline
CNN for classifying images into diseased or normal categories. This technique simplifies
clinical processes, enhances early identification of at-risk patients, allows personalized risk
assessment, and facilitates prompt interventions.</p>
      <p>This paper is structured as follows: Section 2 presents related work on the classification
of coronary artery diseases. Section 3 presents the methodology, covering the data acquisition
and pre-processing steps and our proposed classification models leveraging transfer learning.
Section 4 is dedicated to the experimental results, and Section 5 gives the conclusion and
perspectives.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Related Work</title>
      <p>Computer vision research on coronary artery diseases is difficult because of resource constraints
brought on by the laws governing patient privacy. The difficulty is increased by the intricacy
and accuracy needed for these investigations. This section provides an overview of existing
research in our context, shedding light on relevant studies and works.</p>
      <p>
        Deep learning methods, including artificial neural networks and CNNs, have been used to
detect coronary artery diseases. These architectures eliminate the need for explicit feature
engineering [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], as they autonomously extract relevant information from training data, reducing
computational complexity. In [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], a paired CNN technique was applied to quantify coronary
artery disease (CAD) on CT angiography scans, achieving a sensitivity of 0.71 and an accuracy
of 83% compared to manual annotations by an expert human observer. A Recurrent CNN was
used for detection of coronary artery plaque and stenosis in CT scans, achieving 77% accuracy
for plaque analysis and 80% for stenosis analysis [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Candemir et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] developed a 3D CNN
architecture using visual cues to categorize vessels, provide insights into coronary artery volume,
identify pathological lesions, and automatically pinpoint atherosclerosis regions, achieving an
accuracy of 90.9%.
      </p>
      <p>
        Gupta et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] investigated transforming coronary CT images from 3D to 2D, focusing on
straightened-Multiplanar Reformatted (MPR) representations for arteriosclerosis prediction.
Using Inception-v3 with transfer learning and data augmentation, they achieved an AUC of 0.93.
Alothman et al. [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] proposed a method that employed feature extraction and a CNN model.
Their modified DenseNet-161 with transfer learning and leaky ReLU activation achieved 99.2%
prediction accuracy, an F1 score of 0.9895, and precision-recall values of 0.92 and 0.91, with less
memory usage and computation time compared to state-of-the-art CNNs.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], a CAD detection method using You Only Look Once (YOLO) V7 and UNet++
models is proposed. A fuzzy function is used to enhance images and extract key features. The
Aquila optimization algorithm is used to optimize hyperparameters in the UNet++ model. This
approach reduces computational costs and improves the model’s performance, achieving an
average accuracy of 99.40% and an AUROC of 0.97. In [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], non-contrast and contrast heart
CT scan images were utilized to predict stenosis. Transfer learning with five pretrained neural
network models was applied, with EfficientNetB0 achieving the best recall of 0.933. In [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], three
CNN models (Inception ResNet v2, VGG, and ResNet 50) were trained for coronary artery
calcification detection. ResNet 50 achieved the highest accuracy of 98.52% on cardiac cropped
images, obtained after dissecting the cardiac region using K-means clustering and mathematical
morphology.
      </p>
      <p>This section reviewed works showcasing efficient coronary artery disease detection using
deep learning. Deep learning methods excel at detecting detailed features and handle noise
to a certain extent during data processing. However, they face limitations such as insufficient
training data, imbalanced datasets leading to biased models, and challenges with overfitting
and underfitting, which can be mitigated through hyperparameter tuning. To address these
issues, we developed a transfer learning-based deep learning method to detect atherosclerosis
using minimal data.</p>
    </sec>
    <sec id="sec-4">
      <title>3. Methodology</title>
      <p>In this section, we present the procedures for data acquisition and pre-processing. Additionally,
we delve into our proposed classification models, which utilize transfer learning, along with the
evaluation metrics used to assess their performance.</p>
      <sec id="sec-4-1">
        <title>3.1. Data acquisition and Pre-processing</title>
        <p>
          This study uses publicly available Coronary Computed Tomography Angiography (CCTA)
images from 500 patients [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], which are de-identified and anonymized to protect privacy and
confidentiality. The images underwent a 2D Projection Reformatting procedure. The coronary
artery was projected from 18 different angles, generating a 2D projection for each angle at
10-degree intervals, to boost accuracy and maintain a realistic representation [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. A 2D
representation of a diseased coronary artery projected from 18 angles is shown in Figure 1.
        </p>
        <p>The dataset was partitioned into training, validation, and testing subsets on a per-patient
basis, following a ratio of 3:1:1 (300/100/100). This ensures a fair proportion of both healthy,
atherosclerosis-free cases and sick instances, each making up 50% of the total. To improve
modelling and attain dataset balance, data augmentation was applied to artery images obtained
from the 300 training cases, resulting in a six-fold increase in the dataset to 2,364 images.
Notably, the entire validation dataset, the testing dataset, and the normal component of the training
dataset (2,304 images) were excluded from augmentation. Table 1 provides a summary of the
pre-processed dataset:</p>
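        <p>The per-patient 3:1:1 split described above can be sketched as follows; the shuffle seed and the use of Python's random module are illustrative assumptions, not details from the study.</p>
        <preformat>
```python
import random

def split_patients(patient_ids, seed=0):
    """Split patient IDs 3:1:1 into train/validation/test subsets.

    Splitting per patient (rather than per image) keeps all projections
    of one patient inside a single subset, preventing data leakage.
    """
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle (assumed seed)
    n = len(ids)
    train = ids[: 3 * n // 5]            # 3 parts -> 300 of 500 patients
    val = ids[3 * n // 5 : 4 * n // 5]   # 1 part  -> 100
    test = ids[4 * n // 5 :]             # 1 part  -> 100
    return train, val, test
```
        </preformat>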
      </sec>
      <sec id="sec-4-2">
        <title>3.2. Transfer Learning Methodology</title>
        <p>Three pre-trained models, VGG19 [18], Inception-v3 [19], and ResNet50 [20], were selected
for classification using transfer learning. Inception-v3 excels in handling extensive
datasets and diverse image dimensions and resolutions, proving particularly beneficial in medical
imaging. ResNet50 introduces residual connections, enabling the network to learn residual
functions for mapping input to output. These connections tackle the vanishing gradient problem
by acting as gradient superhighways, ensuring uninterrupted gradient propagation in deeper
architectures.</p>
        <p>Transfer learning with the pre-trained models involved freezing all layers of the models and
removing their Fully Connected layers. Global Average Pooling 2D (GAP) was then applied to
the frozen layers to extract features for subsequent fine-tuning layers. GAP effectively reduces
spatial dimensions while preserving channel information, aiding in parameter reduction and
guarding against overfitting. By condensing each feature map into a single value, GAP retains
channel-wise information, enhancing model generalization and adaptability to new datasets. For the
binary classification task, the sigmoid activation function was chosen, compressing inputs to a
range of 0 to 1. Outputs near 1 indicate a high probability of the "Diseased" class, while those close
to 0 suggest the "Normal" class. Binary Cross-Entropy Loss, paired with sigmoid activation,
effectively measures the dissimilarity between predicted probabilities and true labels. The transfer
learning based workflow, applying a pre-trained deep learning model for feature extraction and
custom classification layers for the detection of atherosclerosis, is summarized in Figure 2.</p>
        <p>Proposed Baseline CNN: While deep transfer learning models may demonstrate state-of-the-art
performance on certain datasets and tasks, it is essential to compare their performance against
simpler baseline approaches to assess whether the additional complexity and computational
cost are justified. Our proposed baseline CNN serves as a fundamental benchmark against
which the efficacy of deep transfer learning models can be evaluated, and is designed to offer
insights into the effectiveness of these sophisticated approaches.</p>
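        <p>A minimal Keras sketch of the setup just described: a frozen ImageNet-pretrained backbone (ResNet-50 shown here), Global Average Pooling, and a sigmoid head trained with binary cross-entropy. The 224x224 input size and the choice of the Adam optimizer are assumptions for illustration.</p>
        <preformat>
```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_transfer_model(input_shape=(224, 224, 3), weights="imagenet"):
    # Pre-trained backbone with its Fully Connected top removed.
    backbone = tf.keras.applications.ResNet50(
        weights=weights, include_top=False, input_shape=input_shape)
    backbone.trainable = False  # freeze all pre-trained layers

    model = models.Sequential([
        backbone,
        layers.GlobalAveragePooling2D(),        # condense each feature map to one value
        layers.Dense(1, activation="sigmoid"),  # output near 1 -> "Diseased"
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```
        </preformat>
        <p>The same head can be attached to Inception-v3 or VGG19 by swapping the backbone constructor.</p>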
        <p>The architecture of our baseline CNN begins with pre-processed image data fed into a
Convolutional Layer (Conv2D) with 32 filters, facilitating feature detection. Rectified Linear
Unit (ReLU) activation introduces non-linearity, enabling complex pattern learning. A subsequent
Max Pooling layer downsamples the feature maps, reducing spatial dimensions and computational
load while retaining essential information. Another Conv2D layer with 64 filters and ReLU
activation is employed, followed by further Max Pooling. The model then flattens the 2D feature
maps into a 1D vector, feeding it into a Fully Connected Layer (Dense) comprising 128 neurons
with ReLU activation. Finally, a single neuron with sigmoid activation is utilized for binary
classification, employing binary cross-entropy loss and the Adam optimizer. Figure 3 gives a
visual representation of our proposed baseline CNN's architecture.</p>
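        <p>A sketch of this baseline architecture, with layer types, filter counts, and neuron counts taken from the description above; the 128x128 input size and the 3x3 kernel size are assumptions.</p>
        <preformat>
```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_baseline_cnn(input_shape=(128, 128, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),  # feature detection
        layers.MaxPooling2D((2, 2)),                   # downsample feature maps
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),                              # 2D maps -> 1D vector
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),         # binary output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```
        </preformat>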
      </sec>
      <sec id="sec-4-3">
        <title>3.3. Performance Indicators</title>
        <p>The effectiveness of the proposed model was evaluated using Recall, Precision, F1-score, area
under the receiver operating characteristic curve (AUC-ROC), and accuracy. These metrics are
derived from the following measures [21]:
• True positive (TP): Images with atherosclerosis that are correctly identified.
• False positive (FP): Normal images incorrectly classified as diseased.
• True negative (TN): Atherosclerosis-free (normal) images correctly classified.
• False negative (FN): Images with atherosclerosis incorrectly classified as normal.</p>
        <p>Recall or Sensitivity evaluates the model’s ability to correctly identify all relevant instances
within a dataset. Specificity measures the proportion of correctly predicted negative instances
out of all the actual negative instances. The positive prediction rate is represented by precision,
and the classification performance is evaluated in terms of both recall and precision using
the F-score. Accuracy measures the model’s total classification performance, accounting for
both True Positives and True Negatives. In classification tasks, the AUC-ROC curve is used as
a performance statistic across various threshold settings [22], with higher AUC-ROC values
indicating superior detection capabilities.</p>
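        <p>These definitions reduce to simple ratios of the four counts above; a short sketch, using as a check the ResNet-50 test counts reported in Section 4 (17 TP, 0 FN, 14 FP, and therefore 169 TN out of 200 images).</p>
        <preformat>
```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the evaluation metrics from confusion-matrix counts."""
    recall = tp / (tp + fn)                      # sensitivity
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"recall": recall, "specificity": specificity,
            "precision": precision, "f1": f1, "accuracy": accuracy}

# ResNet-50 test counts from Section 4: 17 TP, 14 FP, 169 TN, 0 FN.
print(classification_metrics(17, 14, 169, 0)["accuracy"])  # 0.93
```
        </preformat>
        <p>These counts reproduce the reported Recall of 1.0 and accuracy of 93%.</p>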
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. Implementation and Experiments</title>
      <p>In this section, we discuss the implementation of the networks using the transfer learning
process, alongside the results achieved in our experiments.</p>
      <sec id="sec-5-1">
        <title>4.1. Model Implementation</title>
        <p>The model scripts were developed using the Keras framework, utilizing Tensorflow as the
backend within a Python 3 Jupyter notebook environment. The experiments were conducted
on an NVIDIA P100 GPU equipped with 16 GB of high-bandwidth memory. Initialization of
the ResNet-50, Inception v3, and VGG19 networks involved using weights pre-trained on the
ImageNet dataset. During the training phase, batches of 32 images were utilized, evenly divided
between positive and negative cases of atherosclerosis. After 50 epochs of training, the best
model was saved for further analysis and evaluation on the test dataset.</p>
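        <p>The training loop with best-model checkpointing can be sketched as follows. The tiny stand-in model and random data replace the pre-trained networks and CCTA batches of the study, and the checkpoint file name and val_loss selection criterion are assumptions; the batch size of 32 and 50 epochs follow the text.</p>
        <preformat>
```python
import numpy as np
import tensorflow as tf

# Stand-in model and data for illustration only; in the study these were
# the pre-trained networks and batches of CCTA images.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

x = np.random.rand(64, 8).astype("float32")
y = np.random.randint(0, 2, size=(64,))

# Keep only the best model seen over the 50 epochs.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.keras",        # hypothetical file name
    monitor="val_loss",        # assumed selection criterion
    save_best_only=True)

history = model.fit(x, y, batch_size=32, epochs=50,
                    validation_split=0.25,
                    callbacks=[checkpoint], verbose=0)
```
        </preformat>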
      </sec>
      <sec id="sec-5-2">
        <title>4.2. Results</title>
        <p>After training the models, an evaluation was conducted using a distinct test dataset to gauge their
performance. Table 2 presents a summary of the results obtained from this test dataset, consisting
of a sample size of 200 images. Comprehensive evaluation of models in disease classification tasks
is crucial due to the potentially significant consequences of false positives and false negatives.
In the diagnosis of atherosclerosis, false negatives can have severe consequences; ensuring that
all positive cases are correctly identified is crucial. False negatives in medical diagnosis can
lead to missed diagnoses, delayed treatments, and potentially serious health consequences for
patients. Recall measures the ability of a model to capture all positive cases out of the total
number of actual positives. In scenarios where correctly identifying true negatives is essential,
Specificity is applied. Specificity indicates the model’s ability to correctly identify individuals
without a particular condition as being negative for that condition, minimizing false alarms or
misdiagnoses.</p>
        <p>
          In our experiments, Recall is given more weight since it aims to minimize false negatives [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ],
as it focuses on capturing as many true positive cases as possible. The confusion matrix plays a
pivotal role in this assessment by offering an in-depth analysis of the models’ predictions and
how well they match the ground truth. Figure 4 illustrates the results of the confusion matrix for
the four models. According to the findings, ResNet-50 successfully classified all diseased images
correctly. In contrast, Inception-v3 misclassified one instance in which disease was present as
absent (FN), with four FNs for both VGG19 and baseline CNN. Regarding false positives, where
the models incorrectly predicted the presence of disease when it was not present, ResNet-50
had the lowest number at 14, whereas VGG19 had the highest number at 66.
        </p>
        <p>
          Class imbalances are typical in medical image classification, where there are far fewer
positive instances (i.e., ill patients) than negative cases (i.e., healthy patients). That is the
case here, with 17 positive cases out of 200. AUC-ROC is robust
to class imbalance and provides an aggregated performance measure that is not influenced by
the class distribution. This makes it particularly suitable for evaluating our classifiers, since the
positive cases (diseased instances) are less prevalent. This metric offers a single, succinct value
that effectively quantifies the model's performance across different thresholds, considering both
sensitivity (true positive rate) and specificity (true negative rate). In medical image classification,
where the balance between sensitivity and specificity is crucial, AUC-ROC offers a holistic view
of the classifier's ability to distinguish between classes. As depicted in Figure 5, it is evident
that ResNet-50 exhibits the highest discrimination capacity with an AUC-ROC of 0.99, followed
by Inception-v3 at 0.966, the baseline CNN at 0.836, and VGG19 at 0.813. When compared to other
state-of-the-art models, UNet++ [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] and PCCN [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], ResNet-50 had the best performance in
both Recall and AUC-ROC.
        </p>
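        <p>AUC-ROC can equivalently be computed as the probability that a randomly chosen positive receives a higher score than a randomly chosen negative (the Mann-Whitney formulation); a small self-contained sketch with made-up scores.</p>
        <preformat>
```python
def auc_roc(scores, labels):
    """Probability that a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)  # ties count half
    return wins / (len(pos) * len(neg))

# Made-up scores: a perfect ranking of positives above negatives gives 1.0.
print(auc_roc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0
```
        </preformat>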
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusion</title>
      <p>In this study, we proposed a deep learning approach to analyse heart CT scans for detecting
atherosclerosis. We employed a transfer learning strategy, utilizing three pre-trained deep
neural networks: ResNet-50, VGG19, and Inception-v3. These models were compared against a
baseline shallow CNN model to assess their performance. Our proposed baseline CNN gives
results comparable to the more complex models, outperforming the VGG19 model. The
ResNet-50 network exhibited promising results, achieving a Recall of 1.0, an accuracy of 93%, and an
AUC-ROC of 0.99. This approach shows potential for effective classification of atherosclerosis
in CT scans.</p>
      <p>Future work will include an evaluation of the model's robustness across different imaging
modalities, acquisition settings, and patient demographics (such as different races, ethnicities,
genders, ages, and socioeconomic backgrounds) to ensure its generalizability and
applicability in diverse clinical environments. The interpretability and explainability of our deep learning
models require further attention in order to understand their decision-making process. Additionally,
clinical validation is crucial to ascertain the real-world applicability of our approach. While our
results are promising, further validation in clinical settings is necessary to assess the model's
performance in routine practice.</p>
    </sec>
    <sec id="sec-7">
      <title>6. Acknowledgement</title>
      <p>This work was made possible by funding from the European Union’s Horizon Europe program
for widening participation and spreading excellence under Grant Agreement number 101060145
for EDIRE project.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>While preparing this work, the authors used Grammarly to check grammar and spelling. After
using this tool, they reviewed and edited the content as needed and took full responsibility for
the publication's content.</p>
      <p>[17] CNN algorithm development to detect coronary atherosclerosis in coronary CT angiography,
Mendeley Data 1 (2019). URL: https://data.mendeley.com/datasets/fk6rys63h9/1. doi:10.17632/fk6rys63h9.1.
[18] K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image
recognition, arXiv preprint arXiv:1409.1556 (2014).
[19] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, Z. Wojna, Rethinking the inception architecture
for computer vision, in: 2016 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR), IEEE, Las Vegas, NV, USA, 2016, pp. 2818–2826. doi:10.1109/CVPR.2016.308.
[20] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in:
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
[21] G. Canbek, S. Sagiroglu, T. T. Temizel, N. Baykal, Binary classification performance
measures/metrics: A comprehensive visualized roadmap to gain new insights, in: 2017
International Conference on Computer Science and Engineering (UBMK), IEEE, 2017, pp. 821–826.
[22] G. O. Campos, A. Zimek, J. Sander, R. J. Campello, B. Micenková, E. Schubert, I. Assent,
M. E. Houle, On the evaluation of unsupervised outlier detection: measures, datasets, and
an empirical study, Data Mining and Knowledge Discovery 30 (2016) 891–927.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>N. D.</given-names>
            <surname>Wong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Budoff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ferdinand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. M.</given-names>
            <surname>Graham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. D.</given-names>
            <surname>Michos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Shapiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. P.</given-names>
            <surname>Toth</surname>
          </string-name>
          ,
          <article-title>Atherosclerotic cardiovascular disease risk assessment: an american society for preventive cardiology clinical practice statement</article-title>
          ,
          <source>American journal of preventive cardiology 10</source>
          (
          <year>2022</year>
          )
          <fpage>100335</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N. J.</given-names>
            <surname>Pagidipati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. A.</given-names>
            <surname>Gaziano</surname>
          </string-name>
          ,
          <article-title>Estimating deaths from cardiovascular disease: a review of global methodologies of mortality measurement</article-title>
          ,
          <source>Circulation</source>
          <volume>127</volume>
          (
          <year>2013</year>
          )
          <fpage>749</fpage>
          -
          <lpage>756</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>World Health Organization</surname>
          </string-name>
          ,
          <source>Cardiovascular diseases (cvds)</source>
          ,
          <year>2021</year>
          . URL: https://www.who.int/news-room/fact-sheets/detail/cardiovascular-diseases-(cvds).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Timmis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Vardas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Townsend</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Torbica</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Katus</surname>
          </string-name>
          , D. De Smedt,
          <string-name>
            <given-names>C. P.</given-names>
            <surname>Gale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. P.</given-names>
            <surname>Maggioni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. E.</given-names>
            <surname>Petersen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Huculeci</surname>
          </string-name>
          , et al.,
          <source>European society of cardiology: cardiovascular disease statistics 2021, European Heart Journal</source>
          <volume>43</volume>
          (
          <year>2022</year>
          )
          <fpage>716</fpage>
          -
          <lpage>799</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Rim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-S.</given-names>
            <surname>Jou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-W.</given-names>
            <surname>Gil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Jia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hong</surname>
          </string-name>
          ,
          <article-title>Deep-learning-based coronary artery calcium detection from ct image</article-title>
          ,
          <source>Sensors</source>
          <volume>21</volume>
          (
          <year>2021</year>
          )
          <fpage>7059</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>P.</given-names>
            <surname>Kora</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. P.</given-names>
            <surname>Ooi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Faust</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Raghavendra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gudigar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. Y.</given-names>
            <surname>Chan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Meenakshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Swaraja</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Plawiak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U. R.</given-names>
            <surname>Acharya</surname>
          </string-name>
          ,
          <article-title>Transfer learning techniques for medical image analysis: A review</article-title>
          ,
          <source>Biocybernetics and Biomedical Engineering</source>
          <volume>42</volume>
          (
          <year>2022</year>
          )
          <fpage>79</fpage>
          -
          <lpage>107</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Wolterink</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Leiner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. D.</given-names>
            <surname>de Vos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. W.</given-names>
            <surname>van Hamersvelt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Viergever</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Išgum</surname>
          </string-name>
          ,
          <article-title>Automatic coronary artery calcium scoring in cardiac ct angiography using paired convolutional neural networks</article-title>
          ,
          <source>Medical Image Analysis</source>
          <volume>34</volume>
          (
          <year>2016</year>
          )
          <fpage>123</fpage>
          -
          <lpage>136</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Zreik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. W.</given-names>
            <surname>van Hamersvelt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Wolterink</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Leiner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Viergever</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Išgum</surname>
          </string-name>
          ,
          <article-title>A recurrent cnn for automatic detection and classification of coronary artery plaque and stenosis in coronary ct angiography</article-title>
          ,
          <source>IEEE Transactions on Medical Imaging</source>
          <volume>38</volume>
          (
          <year>2018</year>
          )
          <fpage>1588</fpage>
          -
          <lpage>1598</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Candemir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. D.</given-names>
            <surname>White</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Demirer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Bigelow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Prevedello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. S.</given-names>
            <surname>Erdal</surname>
          </string-name>
          ,
          <article-title>Automated coronary artery atherosclerosis detection and weakly supervised localization on coronary ct angiography with a deep 3-dimensional convolutional neural network</article-title>
          ,
          <source>Computerized Medical Imaging and Graphics</source>
          <volume>83</volume>
          (
          <year>2020</year>
          )
          <fpage>101721</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>C. B.</given-names>
            <surname>Monti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Codari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>van Assen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. N.</given-names>
            <surname>De Cecco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Vliegenthart</surname>
          </string-name>
          ,
          <article-title>Machine learning and deep neural networks applications in computed tomography for coronary artery disease and myocardial perfusion</article-title>
          ,
          <source>Journal of Thoracic Imaging</source>
          <volume>35</volume>
          (
          <year>2020</year>
          )
          <fpage>S58</fpage>
          -
          <lpage>S65</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Ravì</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Deligianni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Berthelot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Andreu-Perez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.-Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <article-title>Deep learning for health informatics</article-title>
          ,
          <source>IEEE Journal of Biomedical and Health Informatics</source>
          <volume>21</volume>
          (
          <year>2016</year>
          )
          <fpage>4</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Zeiler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fergus</surname>
          </string-name>
          ,
          <article-title>Visualizing and understanding convolutional networks</article-title>
          , in:
          <string-name>
            <given-names>D.</given-names>
            <surname>Fleet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Pajdla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Schiele</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Tuytelaars</surname>
          </string-name>
          (Eds.),
          <source>Computer Vision - ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I</source>
          , Springer International Publishing, Cham,
          <year>2014</year>
          , pp.
          <fpage>818</fpage>
          -
          <lpage>833</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>V.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Demirer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bigelow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. J.</given-names>
            <surname>Little</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Candemir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Prevedello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. D.</given-names>
            <surname>White</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. P.</given-names>
            <surname>O'Donnell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wels</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. S.</given-names>
            <surname>Erdal</surname>
          </string-name>
          ,
          <article-title>Performance of a deep neural network algorithm based on a small medical image dataset: incremental impact of 3d-to-2d reformation combined with novel data augmentation, photometric conversion, or transfer learning</article-title>
          ,
          <source>Journal of Digital Imaging</source>
          <volume>33</volume>
          (
          <year>2020</year>
          )
          <fpage>431</fpage>
          -
          <lpage>438</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A. F.</given-names>
            <surname>AlOthman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R. W.</given-names>
            <surname>Sait</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. A.</given-names>
            <surname>Alhussain</surname>
          </string-name>
          ,
          <article-title>Detecting coronary artery disease from computed tomography images using a deep learning technique</article-title>
          ,
          <source>Diagnostics</source>
          <volume>12</volume>
          (
          <year>2022</year>
          )
          <fpage>2073</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A. R. W.</given-names>
            <surname>Sait</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Dutta</surname>
          </string-name>
          ,
          <article-title>Developing a deep-learning-based coronary artery disease detection technique using computer tomography images</article-title>
          ,
          <source>Diagnostics</source>
          <volume>13</volume>
          (
          <year>2023</year>
          )
          <fpage>1312</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Aono</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Asakawa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Shinoda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Shimizu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Komoda</surname>
          </string-name>
          ,
          <article-title>Predicting stenosis in coronary arteries based on deep neural network using non-contrast and contrast cardiac ct images</article-title>
          ,
          in:
          <source>Proceedings of the 2023 6th International Conference on Machine Vision and Applications</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>154</fpage>
          -
          <lpage>160</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M.</given-names>
            <surname>Demirer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bigelow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Erdal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Prevedello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>White</surname>
          </string-name>
          ,
          <article-title>Image dataset for a</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>