<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Classification of Fetal and Maternal Structures in Ultrasound Using a Deep Learning Approach</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Haifa Ghabri</string-name>
          <email>ghabrihaaifa@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohamed Hamroun</string-name>
          <email>hamrounmohammed@gmail.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hedia Bellali</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hedi Sakli</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Communications Systems Lab, École Nationale d'Ingénieurs de Tunis</institution>
          ,
          <country country="TN">Tunisia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Epidemiology and Statistics, Abderrahmen Mami Hospital</institution>
          ,
          <addr-line>Ariana</addr-line>
          ,
          <country country="TN">Tunisia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>XLIM-Lab, UMR CNRS 7252, University of Limoges</institution>
          ,
          <addr-line>Avenue Albert Thomas, Limoges, 87060</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
<p>This study investigates the application of deep learning techniques for the classification of ultrasound images in prenatal and maternal care. The objective was to develop a robust convolutional neural network (CNN) model capable of accurately distinguishing between various anatomical structures, including the fetal abdomen, brain, femur, thorax, and maternal cervix. A dataset comprising a diverse range of ultrasound images was used for training and evaluation purposes. The CNN model demonstrated exceptional performance in classification tasks, achieving average precision, recall, and F1 score metrics exceeding 98% across all classes. This indicates the model's capability to effectively identify and differentiate critical features within ultrasound images relevant to fetal and maternal health. The results highlight the potential of deep learning in enhancing diagnostic accuracy and efficiency in prenatal and maternal health monitoring.</p>
      </abstract>
      <kwd-group>
        <kwd>Ultrasound image classification</kwd>
        <kwd>multi-class classification</kwd>
        <kwd>Deep learning</kwd>
        <kwd>Clinical applications</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
Ultrasound imaging is a critical component of prenatal care, providing invaluable insights into the
health and development of the fetus. It is widely used due to its non-invasive nature, safety, and ability
to offer real-time visualization of the fetus [
        <xref ref-type="bibr" rid="ref1">1</xref>
]. Ultrasound scans enable healthcare professionals to
monitor fetal growth, assess anatomy, and detect potential anomalies early in the pregnancy. However,
the interpretation of ultrasound images requires a high level of expertise and experience, making it a
challenging task even for seasoned practitioners. The difficulty in analyzing ultrasound images arises
from several factors, including variability in image quality, complexity of fetal anatomy, subtlety of
anatomical features, and inter-observer variability. Ultrasound images can vary significantly in quality
due to differences in equipment, operator skill, and the physical condition of the patient, which can
obscure important anatomical details and make consistent interpretation challenging. The developing
fetus undergoes rapid changes, and distinguishing between different anatomical structures requires
detailed knowledge and precise image interpretation skills. Many fetal anomalies present subtle signs
that can be easily missed without careful and expert examination. Additionally, different clinicians may
interpret the same ultrasound images differently, leading to inconsistencies in diagnosis and treatment
planning.
      </p>
<p>
        Given these challenges, there is a growing interest in applying artificial intelligence (AI) to the
field of medical imaging. AI, particularly deep learning, has shown remarkable success in various
image analysis tasks, from object detection to classification [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
]. In the context of fetal
ultrasound imaging, AI offers several significant advantages, including consistency and objectivity,
efficiency, enhanced accuracy, and scalability. AI systems can provide consistent and objective analysis
of ultrasound images, reducing the variability associated with human interpretation. Automated
image analysis can significantly speed up the diagnostic process, allowing for quicker decision-making
and reducing the workload on healthcare professionals. Advanced AI models can learn to identify
subtle patterns and features in ultrasound images, potentially improving the accuracy of fetal anomaly
detection. Furthermore, AI systems can be deployed across various healthcare settings, including
those with limited access to experienced radiologists, thereby democratizing access to high-quality
prenatal care. The application of deep learning techniques to ultrasound image analysis has seen
significant advancements, particularly in the domain of prenatal and maternal care [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. This section
reviews relevant literature on the use of deep learning for classifying ultrasound images, focusing on
key anatomical structures. Several studies have leveraged deep learning techniques to advance the
classification of fetal ultrasound images, each contributing unique methodologies and performance
metrics.
      </p>
      <p>
        Zhang et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
] introduced a multitask learning-based system for automatic quality assessment
of fetal ultrasound images, achieving notable metrics such as Accuracy (0.9431) and AUC (0.9826).
Their approach focused on enhancing image quality through precise anatomical identification. Qu et
al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
] proposed a differential CNN to identify standard fetal brain planes, achieving high accuracy
(0.9311) and AUC (0.937). This method effectively discriminated between standard and non-standard
views, significantly improving classification accuracy. Montero et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
] utilized a GAN-enhanced
ResNet model to classify fetal brain images, demonstrating promising results with an accuracy of
0.815 and AUC of 0.867. Their study highlighted the efficacy of GANs in enhancing classification tasks
through synthetic image generation. Prieto et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] applied a CNN to classify a diverse dataset of
fetal ultrasound images, achieving an accuracy of 0.91.
      </p>
<p>In this research, we harness the capabilities of the InceptionResNetV2 model, a sophisticated
convolutional neural network, to tackle the challenges of fetal ultrasound image classification. This
study is distinguished by the integration of a meticulously curated dataset and advanced preprocessing
techniques to maximize the performance of the proposed AI model. The dataset, collected in 2020,
includes 12,400 2D ultrasound images from 896 pregnant women. These images are categorized into six
anatomical regions: maternal cervix, thorax, femur, abdomen, brain, and other.</p>
<p>Our innovative approach incorporates several key steps. We rigorously cleaned the dataset to remove
poor-quality images and irrelevant information, ensuring a high standard of input data. We precisely
cropped the images to focus on the relevant anatomical regions, enhancing the model's ability to learn
important features. We also applied the SMOTE technique to balance the dataset, ensuring an even
representation of all categories during training. The core of our methodology is the InceptionResNetV2
model, known for its hybrid architecture combining Inception modules and residual connections. This
design allows the model to efficiently handle high-dimensional data and capture intricate patterns in
ultrasound images.</p>
<p>The remainder of this paper is structured as follows. Section II details the proposed model and
technique, including model training and parameters. Section III presents the findings of the proposed
model. Finally, Section IV concludes with a discussion of our results and potential future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>The methodology proposed for fetal ultrasound image classification involved several key steps to
ensure robust model training and accurate results. We began by compiling a dataset consisting of
12,400 2D ultrasound images from 896 pregnant women, sourced from hospitals in Barcelona, Spain, in
2020. Each image was meticulously annotated by specialist fetal doctors to label anatomical regions
such as the maternal cervix, thorax, femur, abdomen, brain, and other structures. Data preprocessing
was crucial to enhancing dataset quality. We conducted rigorous cleaning to remove images with
artifacts or poor quality that could interfere with model training. Additionally, we employed cropping
techniques to focus the model's attention on relevant anatomical features while removing extraneous
elements from the images. We adopted the InceptionResNetV2 architecture for model development,
leveraging its pretrained weights from the ImageNet dataset. This allowed the model to extract complex
features from ultrasound images efficiently. Addressing class imbalance was essential, and was achieved
through SMOTE to ensure each anatomical category had sufficient representation during training. Evaluation of
model performance included standard metrics such as accuracy and F1-score, providing comprehensive
insights into its effectiveness. By integrating these methodologies, our study aims to contribute to
advancements in prenatal care by developing a reliable automated system for fetal health assessment
through ultrasound imaging. This approach enhances diagnostic capabilities and supports more effective
patient care in clinical settings. The proposed methodology is presented in Figure 1.</p>
      <sec id="sec-2-1">
        <title>2.1. Dataset</title>
        <p>
          The dataset used, collected in 2020 from various hospitals in Barcelona, Spain, is comprehensive and
diverse, encompassing 12,400 2D ultrasound images from 896 pregnant women [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. The images are
categorized into six distinct anatomical regions: maternal cervix, thorax, femur, abdomen, brain, and
other. Each image was meticulously annotated by a specialist fetal doctor, ensuring high-quality, reliable
labels for training our model. The distribution of the classes is presented in Figure 2.
        </p>
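        <p>For illustration, a minimal Python sketch of how such a six-class image collection could be indexed for training is given below; the directory layout, file naming, and root path are hypothetical assumptions, not the published structure of the dataset.</p>
        <preformat>
# Hypothetical sketch: index ultrasound images by anatomical class.
# Folder names and the root path are illustrative assumptions.
from pathlib import Path

CLASSES = ["maternal_cervix", "thorax", "femur", "abdomen", "brain", "other"]

def index_dataset(root="fetal_planes"):
    """Return (image_path, class_id) pairs based on per-class folders."""
    samples = []
    for class_id, name in enumerate(CLASSES):
        for img_path in sorted(Path(root, name).glob("*.png")):
            samples.append((img_path, class_id))
    return samples
        </preformat>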
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Data Preparation</title>
        <sec id="sec-2-2-1">
          <title>2.2.1. Data Cleaning</title>
<p>To prepare the dataset for effective model training, we implemented several key data preprocessing
steps.</p>
<p>In this study on fetal ultrasound image classification, data cleaning played a pivotal role in ensuring the
quality and reliability of our dataset. The primary objectives of our data cleaning process were twofold:
first, to enhance the clarity and relevance of the ultrasound images; second, to standardize the dataset
for consistent model training. Here is how we approached data cleaning:</p>
          <list list-type="bullet">
            <list-item>
              <p>Artifact Removal: Ultrasound images often contain artifacts such as shadows, speckles, and
machine-specific annotations. We applied image processing techniques, such as noise reduction
filters and artifact removal algorithms, to eliminate these distractions. By doing so, we ensured
that the model focused only on the anatomical structures relevant to fetal health assessment.</p>
            </list-item>
            <list-item>
              <p>Normalization and Standardization: To facilitate effective model training, we normalized the
intensity values of the images and standardized their dimensions. This preprocessing step ensured
uniformity across the dataset, enabling the model to learn consistent features irrespective of
variations in image acquisition parameters.</p>
            </list-item>
          </list>
<p>These preprocessing steps ensure that the model focuses on the most pertinent anatomical regions, thereby
improving the accuracy and efficiency of the classification process. The advantages of preprocessing
in our research are significant. It enhances the model's ability to concentrate on the essential
features of each anatomical region, such as the maternal cervix, thorax, femur, abdomen, and brain.
This focused learning improves the model's capability to differentiate between similar anatomical
structures and identify subtle anomalies, and the resulting consistency is crucial for achieving reliable
performance across different ultrasound machines and varying image acquisition conditions.</p>
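        <p>The cleaning and standardization steps above can be summarized in a short, hedged Python sketch; the median filter, crop box, and 299x299 target size are illustrative assumptions standing in for the unspecified noise-reduction filters, region-of-interest crop, and input dimensions.</p>
        <preformat>
# Illustrative preprocessing sketch (assumed parameters, not the authors' code).
import cv2
import numpy as np

def preprocess(path, crop_box=(60, 40, 520, 400), size=(299, 299)):
    img = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    img = cv2.medianBlur(img, 3)               # reduce speckle and noise
    x0, y0, x1, y1 = crop_box
    img = img[y0:y1, x0:x1]                    # crop to the anatomical region
    img = cv2.resize(img, size)                # standardize dimensions
    img = img.astype(np.float32) / 255.0       # normalize intensities to [0, 1]
    return np.stack([img] * 3, axis=-1)        # replicate to 3 channels for the CNN
        </preformat>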
        </sec>
        <sec id="sec-2-2-2">
          <title>2.2.2. Data Balancing</title>
          <p>
            In this research, we employed Synthetic Minority Over-sampling Technique (SMOTE) [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ] to address
the challenge of class imbalance in the fetal ultrasound image dataset. SMOTE is a powerful data
augmentation technique specifically designed to balance class distribution by generating synthetic
examples for minority classes. Unlike traditional oversampling methods that simply duplicate existing
minority class samples, SMOTE creates new synthetic instances by interpolating between existing
samples. This approach enhances the diversity of the training set and helps the model to learn more
generalized features.
          </p>
<p>The use of SMOTE in our dataset, which consists of 12,400 2D ultrasound images categorized into six
anatomical regions, is particularly advantageous. Fetal ultrasound images often exhibit significant class
imbalances, with some anatomical regions being underrepresented compared to others. This imbalance
can lead to biased model training, where the model becomes more accurate in predicting the majority
classes while underperforming on the minority ones. By applying SMOTE, we effectively mitigated this
issue, ensuring that each class is adequately represented in the training process.</p>
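          <p>Because SMOTE [13] interpolates in feature space, a common way to apply it to image data is to flatten each image into a vector, oversample, and reshape; the sketch below illustrates this pattern with imbalanced-learn and should be read as one plausible realization rather than the authors' exact pipeline.</p>
          <preformat>
# SMOTE balancing sketch using imbalanced-learn (one plausible realization).
import numpy as np
from imblearn.over_sampling import SMOTE

def balance(images, labels):
    n, h, w, c = images.shape
    X = images.reshape(n, -1)                 # flatten each image to a vector
    X_res, y_res = SMOTE(random_state=42).fit_resample(X, labels)
    return X_res.reshape(-1, h, w, c), np.asarray(y_res)
          </preformat>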
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Model Architecture: InceptionResNetV2</title>
        <p>
In this research, the choice of model played an essential role in achieving accurate and reliable results.
We opted for the InceptionResNetV2 architecture [
          <xref ref-type="bibr" rid="ref14">14</xref>
], a state-of-the-art convolutional neural network
(CNN), renowned for its effectiveness in handling complex image recognition tasks.
        </p>
        <p>In our implementation, we leveraged the pretrained weights of InceptionResNetV2 trained on the
ImageNet dataset. Transfer learning from ImageNet provides a significant advantage by initializing the
model with weights that have already learned general features from a vast and diverse set of natural
images. Fine-tuning the model on our specific fetal ultrasound dataset further tailored its parameters
to better recognize anatomical structures such as the maternal cervix, thorax, femur, abdomen, and
brain, regions relevant to fetal health assessment. After importing the pretrained InceptionResNetV2 without its
fully connected layers, we introduced dropout layers strategically placed after the convolutional base
and dense layers. These dropout layers mitigate overfitting by randomly deactivating neurons during
training, promoting better generalization of the model. Following the dropout layers, we incorporated
a flatten layer to reshape the output into a 1D vector, preparing it for input into subsequent dense
layers. These dense layers, equipped with ReLU activation functions, facilitate the learning of complex,
nonlinear relationships within the data, crucial for distinguishing between different structures in fetal
ultrasound images. To stabilize and accelerate training, batch normalization layers were added after
each dense layer, normalizing the activations and improving the model’s convergence speed and overall
performance. This tailored approach not only leverages the powerful feature extraction capabilities of
InceptionResNetV2 but also optimizes it for the nuanced requirements of medical image classification,
ultimately advancing automated diagnostic tools for prenatal care.</p>
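        <p>A minimal Keras sketch of this architecture follows: the pretrained InceptionResNetV2 base without its top, dropout after the convolutional base and after the dense layer, a flatten layer, a ReLU dense layer with batch normalization, and a softmax output over the six classes. The layer width and dropout rates are illustrative assumptions.</p>
        <preformat>
# Sketch of the described architecture (layer width and rates are assumptions).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

def build_model(num_classes=6, input_shape=(299, 299, 3)):
    base = InceptionResNetV2(weights="imagenet", include_top=False,
                             input_shape=input_shape)  # pretrained base, no top
    return models.Sequential([
        base,
        layers.Dropout(0.3),                   # dropout after the convolutional base
        layers.Flatten(),                      # reshape feature maps to a 1D vector
        layers.Dense(256, activation="relu"),  # learn nonlinear combinations
        layers.BatchNormalization(),           # stabilize and speed up training
        layers.Dropout(0.3),                   # dropout after the dense layer
        layers.Dense(num_classes, activation="softmax"),
    ])
        </preformat>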
        <p>In our study, we optimized various hyperparameters to enhance the performance of our model for
fetal ultrasound image classification. The Adam optimizer was selected due to its adaptive learning
rate capabilities, which help in efficiently handling the sparse gradients encountered in our dataset.
We set the learning rate at 0.001, striking a balance between training speed and convergence stability.
The categorical cross-entropy loss function was employed to handle the multiclass classification task
effectively. Our dataset was divided into training, validation, and test sets in a ratio of 64:16:20, ensuring
a robust evaluation of our model's generalizability. A batch size of 64 was chosen to make the training
process manageable while maintaining computational efficiency. Finally, the model was trained for
50 epochs, allowing sufficient iterations to learn complex patterns in the data without overfitting.
These carefully selected hyperparameters contributed significantly to the superior performance metrics
achieved in our study.</p>
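        <p>These reported hyperparameters translate directly into a short training sketch: Adam at a learning rate of 0.001, categorical cross-entropy, a 64:16:20 split, batch size 64, and 50 epochs. The stratified splitting shown is one common way to realize that ratio, and X, labels, and build_model are carried over from the earlier sketches.</p>
        <preformat>
# Training sketch using the reported hyperparameters (split code is one
# common realization of the 64:16:20 ratio, not necessarily the authors').
from sklearn.model_selection import train_test_split
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical

y = to_categorical(labels, num_classes=6)       # one-hot for categorical CE

X_trainval, X_test, y_trainval, y_test, l_trainval, _ = train_test_split(
    X, y, labels, test_size=0.20, stratify=labels, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.20,     # 20% of 80% = 16% overall
    stratify=l_trainval, random_state=42)

model = build_model()
model.compile(optimizer=Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          batch_size=64, epochs=50)
        </preformat>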
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results and discussion</title>
      <p>The proposed approach yielded promising results, demonstrating significant improvements in the
accuracy and robustness of fetal ultrasound image classification compared to existing methods. The
InceptionResNetV2 model achieved high classification accuracy across all six categories, with notable
performance in distinguishing between anatomically similar regions such as the thorax and abdomen.</p>
      <p>The training process spanned 50 epochs, during which both training and validation datasets were
utilized to optimize the model. Initially, the model achieved a validation loss of 0.8232 and an accuracy
of 95.23% in the first epoch. This initial validation performance provided a benchmark for subsequent
epochs.</p>
      <p>As training progressed, there was a consistent improvement in both training and validation metrics,
indicating that the model effectively learned to generalize to unseen data. The validation accuracy
regularly increased, reaching a peak of 99.60% by the 31st epoch. Notably, the validation loss continued
to decrease, suggesting that the model’s predictions became more precise and aligned with ground
truth labels. Despite the model’s impressive performance, there were instances where the training
accuracy exceeded the validation accuracy slightly, suggesting some degree of overfitting. However,
the consistent decrease in validation loss and increase in validation accuracy throughout most epochs
indicate that the model learned to generalize well to new data, mitigating overfitting to a considerable
extent.</p>
      <p>The model achieved a peak validation accuracy of 99.60%, indicating that it correctly classified the
majority of samples in the validation set. The validation loss decreased from 0.8232 in the first epoch
to as low as 0.0142 by the last epoch, highlighting the model’s improved predictive accuracy and
consistency.</p>
      <p>Figure 4 presents the evaluation of performance metrics throughout the training and validation
phases, focusing on loss and accuracy.</p>
      <p>Throughout the training, both precision and recall metrics consistently improved. Precision measures
the model’s ability to correctly identify positive instances among all predicted positive instances, while
recall measures the model’s ability to correctly identify positive instances among all actual positive
instances. By the end of training, precision and recall values were consistently above 99%, indicating
high confidence in the model’s predictions.</p>
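      <p>Per-class precision, recall, and F1 of this kind are conventionally computed on a held-out test set; a short sketch using scikit-learn's classification report is given below, reusing model, X_test, and y_test from the training sketch.</p>
      <preformat>
# Evaluation sketch: per-class precision/recall/F1 on the test set.
import numpy as np
from sklearn.metrics import classification_report

CLASS_NAMES = ["maternal cervix", "thorax", "femur", "abdomen", "brain", "other"]

y_pred = model.predict(X_test).argmax(axis=1)   # predicted class ids
y_true = np.asarray(y_test).argmax(axis=1)      # one-hot labels back to ids
print(classification_report(y_true, y_pred, target_names=CLASS_NAMES, digits=4))
      </preformat>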
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
<p>Overall, the model exhibits excellent performance across all evaluated metrics. It shows high precision
and recall for each class, typically above 98%, indicating accurate predictions and effective identification
of relevant instances. The F1 scores also reflect a balanced trade-off between precision and recall,
consistently above 98%. With an overall accuracy of 99.09%, the model correctly predicts class labels
for the majority of instances, demonstrating its effectiveness in distinguishing between different
classes. The support (number of instances) for each class is well distributed, which helps ensure
the model learns effectively without bias towards any specific class. In summary, the model's strong
performance across accuracy, precision, recall, and F1 scores indicates its robustness and reliability
in classifying instances into their respective classes. It is well suited for practical applications where
accurate classification is essential. For future work, several avenues could be explored to enhance
the model's performance and address broader considerations: experimenting with different neural
network architectures or more advanced models (e.g., deeper networks, attention mechanisms) could
capture more intricate patterns in the data and further boost performance, and the model's performance
should be thoroughly validated in real-world settings and monitored post-deployment to ensure
consistency and reliability in diverse conditions.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] M. C. Fiorentino, F. P. Villani, M. Di Cosmo, E. Frontoni, and S. Moccia, “A review on deep-learning algorithms for fetal ultrasound image analysis,” Med. Image Anal., vol. 83, p. 102629, Jan. 2023, doi: 10.1016/j.media.2022.102629.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] M. Sakli, C. Essid, B. Ben Salah, and H. Sakli, “Deep Learning-Based Multi-Stage Analysis for Accurate Skin Cancer Diagnosis using a Lightweight CNN Architecture,” in 2023 International Conference on Innovations in Intelligent Systems and Applications (INISTA), Sep. 2023, pp. 1-6, doi: 10.1109/INISTA59065.2023.10310615.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] S. Abdelbaki et al., “Improving diagnosis accuracy with an intelligent image retrieval system for lung pathologies detection: a features extractor approach,” Sci. Rep., vol. 13, Oct. 2023, doi: 10.1038/s41598-023-42366-w.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] M. Sakli, C. Essid, B. B. Salah, and H. Sakli, “Lightweight CNN Towards Skin Lesions Automated Diagnosis In Dermoscopic Images,” in 2023 International Conference on Innovations in Intelligent Systems and Applications (INISTA), Sep. 2023, pp. 1-6, doi: 10.1109/INISTA59065.2023.10310480.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] N. Benameur and R. Mahmoudi, “Deep Learning in Medical Imaging,” IntechOpen, 2023, doi: 10.5772/intechopen.111686.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] H. Ghabri, W. Fathallah, H. Sakli, and M. N. Abdelkarim, “Enhancing Maternofetal Ultrasound Images Toward Boosting Classification Performance on a Diverse and Comprehensive Data,” in 2023 International Conference on Innovations in Intelligent Systems and Applications (INISTA), Sep. 2023, pp. 1-6, doi: 10.1109/INISTA59065.2023.10310548.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] T. G. Day, B. Kainz, J. Hajnal, R. Razavi, and J. M. Simpson, “Artificial intelligence, fetal echocardiography and congenital heart disease,” Prenat. Diagn., vol. 41, no. 6, pp. 733-742, 2021, doi: 10.1002/pd.5892.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] B. Zhang, H. Liu, H. Luo, and K. Li, “Automatic quality assessment for 2D fetal sonographic standard plane based on multitask learning,” Medicine (Baltimore), vol. 100, no. 4, p. e24427, Jan. 2021, doi: 10.1097/MD.0000000000024427.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] R. Qu, G. Xu, C. Ding, W. Jia, and M. Sun, “Standard Plane Identification in Fetal Brain Ultrasound Scans Using a Differential Convolutional Neural Network,” IEEE Access, vol. 8, pp. 83821-83830, 2020, doi: 10.1109/ACCESS.2020.2991845.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] A. Montero, E. Bonet-Carne, and X. P. Burgos-Artizzu, “Generative Adversarial Networks to Improve Fetal Brain Fine-Grained Plane Classification,” Sensors, vol. 21, no. 23, Art. no. 23, Jan. 2021, doi: 10.3390/s21237975.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] J. C. Prieto et al., “An automated framework for image classification and segmentation of fetal ultrasound images for gestational age estimation,” in Medical Imaging 2021: Image Processing, B. A. Landman and I. Išgum, Eds., Online Only, United States: SPIE, Feb. 2021, p. 55, doi: 10.1117/12.2582243.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] X. P. Burgos-Artizzu et al., “Evaluation of deep convolutional neural networks for automatic classification of common maternal fetal ultrasound planes,” Sci. Rep., vol. 10, no. 1, p. 10200, Dec. 2020, doi: 10.1038/s41598-020-67076-5.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, “SMOTE: Synthetic Minority Over-sampling Technique,” J. Artif. Intell. Res., vol. 16, pp. 321-357, Jun. 2002, doi: 10.1613/jair.953.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] F. Baldassarre, D. G. Morín, and L. Rodés-Guirao, “Deep Koalarization: Image Colorization using CNNs and Inception-ResNet-v2,” arXiv.org. Accessed: Mar. 02, 2023. [Online]. Available: https://arxiv.org/abs/1712.03400v1</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>