<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Cognitive Electronic Unit for Assisted Ultrasound: Preliminary Results and Future Perspectives</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Emanuele De Luca</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Emanuele Amato</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vincenzo Valente</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marianna La Rocca</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tommaso Maggipinto</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roberto Bellotti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesco Dell'Olio</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dipartimento Interateneo di Fisica, Università degli Studi di Bari Aldo Moro</institution>
          ,
          <addr-line>Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Bari</institution>
          ,
          <addr-line>Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Micro Nano Sensor Group, Politecnico di Bari</institution>
          ,
          <addr-line>Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Predict s.r.l.</institution>
          ,
          <addr-line>Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper reports on the preliminary results of an ongoing research activity aimed at developing a cognitive electronic unit for assisted echocardiography. The paper discusses the selection of suitable hardware and the design of a neural network model optimized for small datasets. A data capture unit has been implemented to facilitate the collection of ultrasound data in collaboration with clinical professionals. The chosen hardware supports real-time image processing and data transmission, while the neural network is intended for image classification. Initial results indicate the potential of the cognitive electronic unit we are developing to reduce inter-operator variability and enhance diagnostic precision in echocardiography. Further data collection and model refinement are ongoing.</p>
      </abstract>
      <kwd-group>
        <kwd>Assisted ultrasonography</kwd>
        <kwd>artificial intelligence</kwd>
        <kwd>embedded systems</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Ultrasound is one of the most widely used diagnostic techniques due to its numerous advantages,
including cost-effectiveness, safety (being radiation-free), and the ability to be performed in real time
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Additionally, ultrasound demonstrates significant versatility, applicable to almost any part of the
body except for bones, lungs, and sections of the intestine.
      </p>
      <p>However, performing an ultrasound scan correctly necessitates substantial training and years of
experience. This requirement creates a considerable workload for specialists and complicates the
repeatability of diagnostic examinations. Consequently, the expertise and skills of the sonographer
profoundly influence the performance of an examination, leading to substantial inter-operator variability.</p>
      <p>
        Recent advancements in artificial intelligence techniques have enabled the exploration of both robotic
and assisted ultrasound applications to address these critical issues [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In the robotic scenario, a
robotic arm performs ultrasound examinations under the guidance of artificial intelligence algorithms.
These algorithms manage various functions, including planning the scanning path, adjusting the arm’s
movement in space and the probe’s pressure on the scanned body area, and completing the scan. Robotic
ultrasound systems can be classified into three categories based on their level of autonomy: teleoperated
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], semi-autonomous [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], and fully autonomous systems [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Teleoperated systems involve robotic
arms piloted by an operator, aiming to reduce or prevent musculoskeletal disorders in physicians and
facilitate remote diagnosis. Semi-autonomous systems autonomously perform some tasks, such as
positioning the probe in the body region of interest, but still require operator intervention to conduct
the examination. Fully autonomous systems can independently plan and execute the ultrasound scan,
acquiring the necessary images without sonographer intervention, thereby allowing the physician to
review the examination subsequently. In the assisted ultrasound scenario, a sonographer performs
the examination while being guided by an algorithm in the movement of the probe. Although robotic
ultrasound remains primarily within the research domain, some applications of assisted ultrasound
have already reached the market [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Notable advancements in the echocardiographic domain employ
deep learning algorithms to guide non-expert operators. This technology includes real-time image
quality assessment and determination of the optimal probe movements, enabling operators to obtain
transthoracic echocardiographic images without the need for external tracking systems.
      </p>
      <p>The objective of the ongoing research activity reported in this paper is to develop a cognitive unit
that enhances ultrasound imaging by assisting operators with real-time guidance and assessment. This
unit aims to improve image quality and provide valuable feedback during the scanning process. The
integration of such a cognitive unit with commercial ultrasound scanners can potentially aid in reducing
inter-operator variability, benefiting both trainees and general practitioners.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Hardware setup</title>
      <p>The selection of appropriate hardware for executing deep learning algorithms was guided by a
thorough literature review focused on evaluating the performance of edge devices in artificial intelligence
applications. Given the parallel development of software and the absence of specific algorithms to test,
it was crucial to base hardware choices on established benchmarks and comparative analyses available
in the literature.</p>
      <p>
        The initial comparative analysis included devices commonly tested in image processing applications,
such as the ASUS Tinker Edge R, Raspberry Pi 4, Google Coral Dev Board, NVIDIA Jetson Nano, and
Arduino Nano 33 BLE. The evaluation considered inference speed and accuracy across various network
models. Findings indicated that the Google Coral Dev Board demonstrated superior performance in
continuous computational applications for models compatible with the TensorFlow Lite framework.
The NVIDIA Jetson Nano ranked closely behind, offering greater versatility and the capability to train
models due to its GPU presence [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        Further analysis narrowed the focus to GPU-equipped devices. Literature indicated that the NVIDIA
Jetson Nano exhibited better image processing performance compared to the Jetson TX2, GTX 1060,
and Tesla V100 in convolutional neural network applications [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Within the NVIDIA Jetson series, the
Jetson Orin Nano was identified as significantly outperforming both the Jetson Nano and Jetson AGX
Xavier for video processing tasks using convolutional neural network models developed in PyTorch
and optimized with NVIDIA’s Torch-TensorRT SDK [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>Based on these insights, the NVIDIA Jetson Orin Nano was selected for its robust performance.
Table 1 summarizes the main features of the NVIDIA Jetson Orin Nano.</p>
      <p>The neural network classifies the acquired frames into three classes: 'Apical projection
with 2-chamber view (2CH)', 'Apical projection with 4-chamber view (4CH)', and 'Unknown', with the
latter including all non-classifiable images. This network was designed to classify the observed cardiac
projection automatically and in real time.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Neural network model and data capture</title>
      <p>The neural network model, presented in figure 2, is designed to balance computational complexity and
generalization capacity, making it suitable for smaller datasets. Its architecture comprises several layers,
each performing specific operations to extract features and classify images. The original images were in
NIfTI format and of different sizes, so they were converted into JPG format and resized to a resolution of
64x64x3. The images were resized to keep the inference computational cost low.
Each pixel value is represented with 8 bits.</p>
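      <p>A minimal sketch of this preprocessing step is shown below. It assumes the nibabel and Pillow libraries for reading NIfTI files and writing JPG images; the file paths and helper name are illustrative, not the project's actual tooling.</p>
      <preformat>
# Illustrative preprocessing sketch: NIfTI frame to a 64x64 8-bit JPG image.
# Library choice (nibabel, Pillow) and file names are assumptions.
import numpy as np
import nibabel as nib
from PIL import Image

def nifti_to_jpg(nifti_path, out_path, target_size=(64, 64)):
    """Load a NIfTI image, normalize it to 8 bits, resize, and save as JPG."""
    volume = nib.load(nifti_path).get_fdata()
    # Take a single 2D slice if the file contains a volume (assumption).
    frame = volume if volume.ndim == 2 else volume[..., 0]
    # Shift and scale intensities to the 0-255 range (8 bits per pixel).
    frame = frame - frame.min()
    frame = (255.0 * frame / max(float(frame.max()), 1e-6)).astype(np.uint8)
    img = Image.fromarray(frame).convert("RGB").resize(target_size)
    img.save(out_path, format="JPEG")

nifti_to_jpg("example_frame.nii.gz", "example_frame.jpg")
      </preformat>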
      <p>
        The model begins with a convolutional layer that applies 8 filters of size 3x3 on each input image
(64x64x3). This convolutional layer uses a stride of (1,1) and ’same’ padding to maintain the output
dimensions equal to the input. Following the convolution, a Rectified Linear Unit (ReLU) activation
function [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] introduces non-linearity into the model, essential for learning complex data relationships.
To mitigate overfitting, a dropout layer with a 30 % dropout rate is applied, randomly deactivating
neurons during training.
      </p>
      <p>The second stage involves another convolutional layer, this time with 16 filters of size 3x3, maintaining
the same stride and padding settings. The ReLU activation function is used again to ensure non-linearity.
Subsequently, a max pooling layer with a pool size of (2,2) and a stride of (2,2) reduces the spatial
dimensions of the input while preserving the most significant features.</p>
      <p>In the third stage, the model includes a third convolutional layer with 32 filters of size 3x3, followed
by a max pooling layer with identical configurations to the previous one. This further reduces the
spatial dimensions of the input, ensuring the model focuses on the most prominent features.</p>
      <p>The output from the previous layers is then flattened into a one-dimensional vector, preparing it for
the fully connected layers. The first fully connected layer consists of 32 units, with a ReLU activation
function to introduce further non-linearity. The final fully connected layer, the output layer, comprises
3 units, corresponding to the number of classes in the classification problem. This layer uses a softmax
activation function to assign probabilities to each class.</p>
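      <p>The following sketch expresses the architecture described above using the Keras API. The framework, optimizer, and loss are assumptions made for illustration only; the paper does not specify the training setup.</p>
      <preformat>
# Illustrative Keras sketch of the CNN described in the text.
# Framework choice and training settings are assumptions, not the confirmed setup.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(64, 64, 3), num_classes=3):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Stage 1: 8 filters, 3x3, stride (1,1), 'same' padding, ReLU, 30% dropout.
        layers.Conv2D(8, (3, 3), strides=(1, 1), padding="same", activation="relu"),
        layers.Dropout(0.3),
        # Stage 2: 16 filters, 3x3, ReLU, then 2x2 max pooling with stride (2,2).
        layers.Conv2D(16, (3, 3), strides=(1, 1), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
        # Stage 3: 32 filters, 3x3, then another 2x2 max pooling.
        layers.Conv2D(32, (3, 3), strides=(1, 1), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
        # Classifier head: flatten, 32-unit dense layer with ReLU, 3-way softmax.
        layers.Flatten(),
        layers.Dense(32, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    return model

model = build_model()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
      </preformat>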
      <p>
        The choice of a simpler model, such as the one described, is motivated by the limited availability
of training data. More complex models like ResNet [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] or Vision Transformers (ViTs) [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] are more
prone to overfitting and require extensive computational resources, which are not ideal for scenarios
with smaller datasets. The modular structure of this simple CNN model allows for easy integration of
additional components, such as Long Short-Term Memory (LSTM) layers [<xref ref-type="bibr" rid="ref14">14</xref>], for further enhancements
if necessary. This flexibility and efficiency make the model well-suited for the targeted application. It is
also important to note that, until now, there has been no need to quantize the neural network model.
      </p>
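      <p>As an illustration of this modularity, the sketch below shows one possible way to append an LSTM layer for short frame sequences, reusing the convolutional trunk from the model sketch above. This extension is hypothetical and not part of the current model.</p>
      <preformat>
# Hypothetical extension only: wrap the CNN trunk in TimeDistributed and add an
# LSTM to aggregate features over short frame sequences. Reuses build_model()
# from the sketch above; sequence length and LSTM width are assumptions.
from tensorflow.keras import layers, models

def build_sequence_model(frames=8, input_shape=(64, 64, 3), num_classes=3):
    cnn = build_model(input_shape=input_shape, num_classes=num_classes)
    # Keep the convolutional trunk and the 32-unit dense layer, drop the softmax.
    trunk = models.Model(inputs=cnn.inputs, outputs=cnn.layers[-2].output)
    return models.Sequential([
        layers.Input(shape=(frames,) + input_shape),
        layers.TimeDistributed(trunk),   # per-frame feature extraction
        layers.LSTM(32),                 # temporal aggregation across frames
        layers.Dense(num_classes, activation="softmax"),
    ])
      </preformat>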
      <p>Data collection is a critical aspect in the development of the artificial intelligence model. The available
datasets are insufficient to meet the research objectives, necessitating the acquisition of new data in
collaboration with clinical professionals. To streamline this process, a comprehensive data capture
unit has been implemented. This unit is capable of recording ultrasound screens, capturing the spatial
position of the probe, and receiving anonymized data from the ultrasound scanner. It also facilitates
the secure transmission of these data for further analysis. The unit incorporates components that
automatically receive, process, and anonymize data from the ultrasound scanner. A dedicated graphical
user interface (GUI) has been developed to support the data collection process. Despite the automation
provided by this system, the labelling of data still requires significant back-office effort from clinical
staff members.</p>
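      <p>The sketch below illustrates only the screen-recording part of such a unit: frames are grabbed from a capture device and stored under anonymized file names. The device index, frame count, and naming scheme are assumptions and do not describe the actual implementation.</p>
      <preformat>
# Illustrative sketch of the screen-recording part of the data capture unit.
# Device index, frame count, and naming scheme are assumptions.
import time
import uuid
import cv2

def record_session(device_index=0, num_frames=250, period_s=0.04):
    """Grab frames from a capture device and save them under anonymized names."""
    cap = cv2.VideoCapture(device_index)
    session_id = uuid.uuid4().hex  # random session identifier, no patient data
    for frame_idx in range(num_frames):
        ok, frame = cap.read()
        if ok:
            # File names carry only the random session id and a frame counter.
            cv2.imwrite(f"{session_id}_{frame_idx:05d}.jpg", frame)
        time.sleep(period_s)
    cap.release()

record_session()
      </preformat>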
    </sec>
    <sec id="sec-4">
      <title>4. Preliminary experimental results</title>
      <p>In order to verify the performance of the neural network on the embedded hardware, we acquired the
ultrasound video stream during an echocardiographic examination performed on a volunteer.</p>
      <p>Using the neural network, we classified the cardiac projection contained in each frame in real time.
The output of the network was printed on a monitor, superimposed on the ultrasound image. The
spatial information collected by the inertial sensor was not used. The performance of the neural network is
shown in Table 2, while the confusion matrix on the test set is reported in figure 3.</p>
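      <p>A minimal sketch of this real-time classification loop is given below, assuming the Keras model from the earlier sketch and an OpenCV capture of the ultrasound video stream; the class-label order, capture device, and window handling are illustrative.</p>
      <preformat>
# Illustrative real-time loop: grab a frame, classify the cardiac projection,
# and superimpose the predicted label on the displayed ultrasound image.
# Assumes the model and preprocessing conventions from the sketches above.
import numpy as np
import cv2

CLASS_NAMES = ["2CH", "4CH", "Unknown"]  # illustrative label order

def classify_stream(model, device_index=0, num_frames=500):
    cap = cv2.VideoCapture(device_index)
    for _ in range(num_frames):
        ok, frame = cap.read()
        if not ok:
            break
        # Resize to the network input size and scale pixel values to [0, 1].
        small = cv2.resize(frame, (64, 64)).astype(np.float32) / 255.0
        probs = model.predict(small[np.newaxis, ...], verbose=0)[0]
        label = CLASS_NAMES[int(np.argmax(probs))]
        # Print the network output on the monitor, superimposed on the image.
        cv2.putText(frame, label, (20, 40), cv2.FONT_HERSHEY_SIMPLEX,
                    1.0, (0, 255, 0), 2)
        cv2.imshow("assisted ultrasound", frame)
        cv2.waitKey(1)
    cap.release()
    cv2.destroyAllWindows()
      </preformat>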
      <p>During the test, the CPU temperature remained stable at around 55.5 °C, and the RAM usage
was approximately 70 %. The average inference time for a single frame was measured at 13.74 ± 2.48 ms,
confirming the hardware’s capability for the intended application. It should be noted that, until now, it
has not been necessary to use the GPU to perform the tests.</p>
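      <p>For completeness, the following sketch shows one way such per-frame latency, RAM usage, and CPU temperature figures could be logged during a test, assuming standard Linux interfaces available on the Jetson (a thermal zone file and the psutil package). The thermal zone path is an assumption and may differ on the actual board.</p>
      <preformat>
# Illustrative monitoring sketch: per-frame inference latency, RAM usage, and
# CPU temperature read from standard Linux interfaces. Paths are assumptions.
import time
import numpy as np
import psutil

def read_cpu_temp_c(zone_path="/sys/class/thermal/thermal_zone0/temp"):
    """Read a thermal zone temperature in degrees Celsius (stored in millidegrees)."""
    with open(zone_path) as f:
        return int(f.read().strip()) / 1000.0

def benchmark(model, frame_batch, repeats=100):
    latencies_ms = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        model.predict(frame_batch, verbose=0)
        latencies_ms.append((time.perf_counter() - t0) * 1000.0)
    print(f"inference: {np.mean(latencies_ms):.2f} ± {np.std(latencies_ms):.2f} ms")
    print(f"RAM usage: {psutil.virtual_memory().percent:.1f} %")
    print(f"CPU temperature: {read_cpu_temp_c():.1f} °C")
      </preformat>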
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>The research is in its preliminary stages, but the initial findings establish a foundation for developing
a system capable of supporting more precise and reliable echocardiographic diagnostics. The next
essential step involves the comprehensive collection of a dataset and collaboration with sonographers
for accurate frame labelling. Strategies for incorporating accelerometer and gyroscope data will be
explored to enhance visual assistance for probe movements. Future efforts will focus on integrating
software advancements with the chosen hardware to improve overall system performance.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. E.</given-names>
            <surname>Salcudean</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Navab</surname>
          </string-name>
          ,
          <article-title>Robotic ultrasound imaging: State-of-the-art and future perspectives</article-title>
          ,
          <source>Medical Image Analysis</source>
          <volume>89</volume>
          (
          <year>2023</year>
          )
          <fpage>102878</fpage>
          . URL: https://linkinghub.elsevier.com/retrieve/pii/S136184152300138X. doi:10.1016/j.media.2023.102878.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Tenajas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Miraut</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. I.</given-names>
            <surname>Illana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Alonso-Gonzalez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Arias-Valcayo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Herraiz</surname>
          </string-name>
          ,
          <article-title>Recent Advances in Artificial Intelligence-Assisted Ultrasound Scanning</article-title>
          ,
          <source>Applied Sciences</source>
          <volume>13</volume>
          (
          <year>2023</year>
          )
          <fpage>3693</fpage>
          . URL: https://www.mdpi.com/2076-3417/13/6/3693. doi:10.3390/app13063693.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>K.</given-names>
            <surname>Mathiassen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. E.</given-names>
            <surname>Fjellin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Glette</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Hol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. J.</given-names>
            <surname>Elle</surname>
          </string-name>
          ,
          <article-title>An Ultrasound Robotic System Using the Commercial Robot UR5</article-title>
          ,
          <source>Frontiers in Robotics and AI</source>
          <volume>3</volume>
          (
          <year>2016</year>
          ). URL: http://journal.frontiersin.org/Article/10.3389/frobt.2016.00001/abstract. doi:10.3389/frobt.2016.00001.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Mathur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Topiwala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Schafer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Saeidi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Fleiter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Krieger</surname>
          </string-name>
          ,
          <article-title>A Semi-Autonomous Robotic System for Remote Trauma Assessment</article-title>
          ,
          <source>in: 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE)</source>
          , IEEE, Athens, Greece,
          <year>2019</year>
          , pp.
          <fpage>649</fpage>
          -
          <lpage>656</lpage>
          . URL: https://ieeexplore.ieee.org/document/8941790/. doi:10.1109/BIBE.2019.00122.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Robotic Arm Based Automatic Ultrasound Scanning for Three-Dimensional Imaging</article-title>
          ,
          <source>IEEE Transactions on Industrial Informatics</source>
          <volume>15</volume>
          (
          <year>2019</year>
          )
          <fpage>1173</fpage>
          -
          <lpage>1182</lpage>
          . URL: https://ieeexplore.ieee.org/document/8472788/. doi:10.1109/TII.2018.2871864.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Narang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bae</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hong</surname>
          </string-name>
          , Y. Thomas,
          <string-name>
            <given-names>S.</given-names>
            <surname>Surette</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cadieu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chaudhry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. P.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. M.</given-names>
            <surname>McCarthy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Rubenson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Goldstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. H.</given-names>
            <surname>Little</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Lang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. J.</given-names>
            <surname>Weissman</surname>
          </string-name>
          , J. D. Thomas,
          <article-title>Utility of a Deep-Learning Algorithm to Guide Novices to Acquire Echocardiograms for Limited Diagnostic Use</article-title>
          ,
          <source>JAMA Cardiology</source>
          <volume>6</volume>
          (
          <year>2021</year>
          )
          <fpage>624</fpage>
          . URL: https://jamanetwork.com/journals/jamacardiology/fullarticle/2776714. doi:10.1001/jamacardio.2021.0185.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S. P.</given-names>
            <surname>Baller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jindal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chadha</surname>
          </string-name>
          , M. Gerndt,
          <article-title>DeepEdgeBench: Benchmarking Deep Neural Networks on Edge Devices</article-title>
          ,
          <source>in: 2021 IEEE International Conference on Cloud Engineering (IC2E)</source>
          , IEEE, San Francisco, CA, USA,
          <year>2021</year>
          , pp.
          <fpage>20</fpage>
          -
          <lpage>30</lpage>
          . URL: https://ieeexplore.ieee.org/document/9610432/. doi:10.1109/IC2E52221.2021.00016.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Jo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jeong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <article-title>Benchmarking GPU-Accelerated Edge Devices</article-title>
          , in: 2020
          <source>IEEE International Conference on Big Data and Smart Computing (BigComp)</source>
          , IEEE, Busan, Korea (South),
          <year>2020</year>
          , pp.
          <fpage>117</fpage>
          -
          <lpage>120</lpage>
          . URL: https://ieeexplore.ieee.org/document/9070647/. doi:10.1109/BigComp48618.2020.00-89.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>H. V.</given-names>
            <surname>Pham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. G.</given-names>
            <surname>Tran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. D.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. B.</given-names>
            <surname>Vo</surname>
          </string-name>
          ,
          <article-title>Benchmarking Jetson Edge Devices with an End-to-End Video-Based Anomaly Detection System</article-title>
          , in: K. Arai (Ed.),
          <source>Advances in Information and Communication</source>
          , volume
          <volume>920</volume>
          , Springer Nature Switzerland, Cham,
          <year>2024</year>
          , pp.
          <fpage>358</fpage>
          -
          <lpage>374</lpage>
          . URL: https://link.springer.com/10.1007/978-3-031-53963-3_25. doi:10.1007/978-3-031-53963-3_25, series title: Lecture Notes in Networks and Systems.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Leclerc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Smistad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pedrosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ostvik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cervenansky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Espinosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Espeland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. A. R.</given-names>
            <surname>Berg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.-M.</given-names>
            <surname>Jodoin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Grenier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lartizien</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dhooge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lovstakken</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Bernard</surname>
          </string-name>
          ,
          <article-title>Deep Learning for Segmentation Using an Open Large-Scale Dataset in 2D Echocardiography</article-title>
          ,
          <source>IEEE Transactions on Medical Imaging</source>
          <volume>38</volume>
          (
          <year>2019</year>
          )
          <fpage>2198</fpage>
          -
          <lpage>2210</lpage>
          . URL: https://ieeexplore.ieee.org/document/8649738/. doi:10.1109/TMI.2019.2900516.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A. F.</given-names>
            <surname>Agarap</surname>
          </string-name>
          ,
          <article-title>Deep Learning using Rectified Linear Units (ReLU)</article-title>
          ,
          <year>2018</year>
          . URL: https://arxiv.org/abs/1803.08375. doi:10.48550/ARXIV.1803.08375, version number 2.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>K.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , S. Ren,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Deep Residual Learning for Image Recognition</article-title>
          ,
          <source>in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          , IEEE, Las Vegas, NV, USA,
          <year>2016</year>
          , pp.
          <fpage>770</fpage>
          -
          <lpage>778</lpage>
          . URL: http://ieeexplore.ieee.org/document/7780459/. doi:10.1109/CVPR.2016.90.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Dosovitskiy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Beyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kolesnikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Weissenborn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Unterthiner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dehghani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Minderer</surname>
          </string-name>
          , G. Heigold,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gelly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uszkoreit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Houlsby</surname>
          </string-name>
          ,
          <article-title>An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale</article-title>
          ,
          <year>2020</year>
          . URL: https://arxiv.org/abs/2010.11929. doi:10.48550/ARXIV.2010.11929, version number 2.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hochreiter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schmidhuber</surname>
          </string-name>
          ,
          <article-title>Long Short-Term Memory</article-title>
          ,
          <source>Neural Computation</source>
          <volume>9</volume>
          (
          <year>1997</year>
          )
          <fpage>1735</fpage>
          -
          <lpage>1780</lpage>
          . URL: https://direct.mit.edu/neco/article/9/8/1735-1780/6109. doi:10.1162/neco.1997.9.8.1735.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>