<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Euro-Mediterranean Workshop on Artificial Intelligence and Smart Systems, October</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Regions in Ultrasound Images: An Innovative Approach Using Deep Learning Architectures</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Wyssem Fathallah</string-name>
          <email>fathallahwyssem@gmail.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Haifa Ghabri</string-name>
          <email>ghabrihaaifa@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohamed Hamroun</string-name>
          <email>hamrounmohammed@gmail.com</email>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hedia Bellali</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hedi Sakli</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Communications Systems Lab, École Nationale d'Ingénieurs de Tunis</institution>
          ,
          <country country="TN">Tunisia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Epidemiology and Statistics, Abderrahmen Mami Hospital</institution>
          ,
          <addr-line>Ariana</addr-line>
          ,
          <country country="TN">Tunisia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Research Lab LR-99-ES21, University El Manar</institution>
          ,
          <country country="TN">Tunisia</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Thyroid Ultrasound Image</institution>
          ,
          <addr-line>Segmentation, Deep Learning, UNet, Medical Imaging, Computer Aided Diagnosis</addr-line>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>XLIM-Lab, UMR CNRS 7252, University of Limoges</institution>
          ,
          <addr-line>Avenue Albert Thomas, Limoges, 87060</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>15</volume>
      <issue>2024</issue>
      <fpage>0000</fpage>
      <lpage>0003</lpage>
      <abstract>
        <p>This study introduces an approach to improve the segmentation accuracy of thyroid regions in ultrasound images using deep learning techniques. Addressing challenges such as textual artifacts and the lack of pre-annotated masks in the dataset, the methodology employs advanced data cleaning and cropping techniques. Evaluations are conducted on various deep learning architectures, including UNet-VGG-16, UNet-VGG-19, UNet-ResNet-18, and UNet-ResNet-34, utilizing the Intersection over Union (IoU) metric for accuracy assessment. Results highlight the efficacy of the proposed approach, with UNet-VGG-16 achieving the highest IoU value of 93.39%. This study contributes to the advancement of automated thyroid ultrasound image segmentation, offering a robust methodology for precise segmentation. The proposed approach holds promise for enhancing clinical diagnoses and streamlining the analysis of thyroid conditions using ultrasound imaging. Future research avenues include exploring additional architectures and refining segmentation processes for improved accuracy and efficiency.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In this article, we will explore recent advances in the diagnosis of thyroid disorders using ultrasound
scans using AI. We will discuss the methods and algorithms developed to analyze ultrasound images,
identify specific features of thyroid conditions, and provide diagnostic assistance to clinicians. We will
also discuss the challenges and prospects of this promising approach, focusing on the potential benefits
it can offer in terms of diagnostic accuracy and clinical management. By combining the technological
advances of AI with ultrasound imaging, we hope this revolutionary approach will contribute to earlier
and more accurate detection of thyroid problems, enabling more effective treatment and improved
quality of life for patients. Thyroid nodules are a common abnormality found in the thyroid gland and
can be benign or malignant. Recent advances in artificial intelligence have enabled the development
of automated systems for detecting and diagnosing thyroid nodules from ultrasound images. These
computer-aided diagnostic systems, which use deep learning models for thyroid nodule localization in
2D image frames from ultrasound videos, have shown great potential for accurately identifying thyroid
nodules in a screening application. However, it is essential to note that in a clinical setting, the final
diagnosis and determination of whether a thyroid nodule warrants biopsy or further evaluation still
depend on the judgment and expertise of a qualified medical professional. The widespread adoption of
imaging techniques, especially ultrasound, has notably enhanced the detection rates of thyroid nodules.
Nonetheless, ultrasound imaging remains the primary method for diagnosing both thyroid cancer and
thyroid nodules. It is expected that the integration of AI-based automated systems for thyroid
nodule detection will aid medical professionals in improving diagnostic accuracy and reducing false
positives. However, further research is required to validate their accuracy, sensitivity, and specificity in
large-scale clinical studies to ensure the high performance and reliability of such AI-based systems in
clinical practice.</p>
      <p>The main contributions of this work to AI-based thyroid nodule detection from
ultrasound images are:
• Novel Preprocessing Techniques: We have developed innovative preprocessing techniques
specifically tailored for thyroid ultrasound images. These techniques include advanced cropping
methods to remove extraneous text and artifacts from the images, ensuring that the model focuses
solely on the relevant features for accurate nodule detection. By effectively addressing this
preprocessing challenge, we have improved the input data quality and enhanced our AI model’s
performance.
• Dataset Augmentation and Enrichment: Recognizing the importance of data diversity and size in
training deep learning models, we have implemented extensive data augmentation techniques.
By applying different transformations such as rotations, flips, and brightness adjustments, we
have significantly increased the size and diversity of our dataset. This augmentation has not only
improved the robustness of our model but also allowed it to generalize better to unseen thyroid
ultrasound images.</p>
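      <p>The mask-aligned augmentation described above can be sketched as follows. This is an illustrative NumPy outline under assumed parameters (rotation steps, flip probability, brightness range), not our exact pipeline:</p>
      <preformat>
```python
import numpy as np

# Illustrative augmentation sketch (parameters are hypothetical):
# geometric transforms must be applied identically to the image and
# its mask so the annotation stays aligned; photometric transforms
# (brightness) apply to the image only.
def augment(image, mask, rng):
    k = int(rng.integers(0, 4))
    image, mask = np.rot90(image, k), np.rot90(mask, k)       # random rotation
    if rng.random() < 0.5:
        image, mask = np.fliplr(image), np.fliplr(mask)       # horizontal flip
    image = np.clip(image * rng.uniform(0.8, 1.2), 0.0, 1.0)  # brightness jitter
    return image, mask

rng = np.random.default_rng(0)
img, msk = np.ones((4, 4)), np.zeros((4, 4))
aug_img, aug_msk = augment(img, msk, rng)
print(aug_img.shape, aug_msk.shape)  # (4, 4) (4, 4)
```
      </preformat>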
      <p>By introducing these novel preprocessing techniques, augmenting the dataset, and developing a
customized deep-learning architecture, our work has advanced the field of AI-based thyroid nodule
detection. These contributions have not only improved the accuracy and reliability of the detection
process but also paved the way for further advancements in the field, ultimately benefiting healthcare
professionals and patients by enabling earlier and more accurate diagnoses of thyroid nodules.</p>
      <p>The remainder of this paper is structured as follows. Section 2 reviews related literature, while
Section 3 details the proposed model and technique, including model training and parameters. Section
4 presents the findings of the proposed model. Finally, Section 5 concludes with a discussion of our
results and potential future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related works</title>
      <p>
        Segmentation is important in medical image analysis because it helps to accurately identify and define
anatomical structures or diseased areas [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Various techniques have been proposed for segmentation in
medical images, addressing the challenges specific to different modalities and anatomical areas. In the
domain of medical imaging, segmentation has been extensively explored across a range of modalities,
including computed tomography (CT) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], magnetic resonance imaging (MRI) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and ultrasound
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]–[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Traditional approaches such as thresholding, region-growing, and active contour models
have been widely utilized. However, these methods often struggle with handling complex anatomical
structures and achieving accurate delineation. In the study of Li et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], the researchers concentrated
on segmenting thyroid nodules in ultrasound images and developed a segmentation network called
BTNet, which merges the strengths of convolutional neural networks and transformers. This network
features a boundary attention mechanism to enhance the accuracy of nodule margin segmentation.
Additionally, a deep supervision mechanism improves the segmentation effect by integrating outputs
from various levels. The BTNet model exhibited outstanding segmentation performance, with an
intersection-over-union of 0.810 and a Dice coefficient of 0.892.
      </p>
      <p>
        Pan et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] introduced a novel network architecture named Semantic Guided UNet (SGUNet) for the
automatic segmentation of thyroid nodules in ultrasound images. Unlike traditional UNet architectures,
SGUNet extracts a single channel pixel wise semantic map from high-dimensional features at each
decoding step. This semantic map provides high-level guidance to low-level features, resulting in
more precise nodule representation. Evaluations of SGUNet on the Thyroid Digital Image Database
confirmed its effectiveness, achieving a Dice coefficient of 72.9%. In the study by Zheng et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ],
the researchers aimed to automate the segmentation of thyroid nodules and glands in ultrasound
images to address the labor-intensive manual segmentation process. They utilized the UNet model and
proposed an enhanced method named deformable pyramid split attention residual UNet (DSRU-Net).
This method integrated various techniques such as the ResNeSt block, atrous spatial pyramid pooling,
and deformable convolution v3 to improve feature extraction and context information integration. The
DSRU-Net outperformed the UNet, achieving a mean Intersection over Union of 85.8%, a mean Dice
coefficient of 92.5%, and a nodule Dice coefficient of 94.1%. The experiment utilized a dataset of 5822
ultrasound images, with 4658 images for training and 1164 images for independent testing.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <sec id="sec-3-1">
        <title>3.1. Dataset</title>
        <p>
          The DDTI (Digital Database Thyroid Image) dataset [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] is a freely accessible database containing 347
thyroid ultrasound images from patients with diverse thyroid diseases. The dataset includes annotations
and classifications provided by two expert observers using the TIRADS (Thyroid Imaging Reporting and
Data System) system. Additionally, confirmation of cases was performed using the Bethesda system, with
200 cases confirmed. The dataset encompasses a range of conditions, including thyroiditis, spongiform
nodules, papillary and follicular cancer, and cases unsatisfactory for pathology study. Notably, the
dataset highlights the discrepancy between TIRADS scores and Bethesda system confirmation, with
male cases exhibiting a higher correlation with cancer. Detailed annotations encompass various features
such as nodule composition, calcifications, and boundaries, providing valuable insights for diagnostic
accuracy. The dataset also includes images used for training and comparison purposes between expert
and student annotations based on TIRADS requirements.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Preparing Data</title>
        <p>Our innovative approach aims to address the challenge of segmentation in thyroid ultrasound images.
Unlike traditional methods that require pre-annotated image masks, our method leverages the mask
information contained in the associated XML files. By cleaning and extracting this information, we were
able to prepare our dataset for segmentation learning. Using advanced machine learning techniques,
we developed a model capable of accurately segmenting thyroid regions in ultrasound images. Our
approach offers a significant advantage in terms of data preparation ease and eliminates the need for
additional resources for manual mask annotation. Preliminary results from our method have shown
promising performance, opening new possibilities for the analysis and diagnosis of thyroid conditions
based on ultrasound imaging. A sample from our dataset, comprising an ultrasound image of the
thyroid and its corresponding mask extracted from the XML file, is presented in Figure 1.</p>
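        <p>A rough sketch of this extraction step is given below: it parses annotated boundary points from an XML fragment and builds a binary mask. The element names (annotation, mark, point) are illustrative, not the exact DDTI schema, and the bounding-box fill is a simplification of full polygon rasterization:</p>
        <preformat>
```python
import numpy as np
import xml.etree.ElementTree as ET

# Hypothetical annotation snippet: boundary points stored as "x,y" pairs.
xml_doc = """
<annotation>
  <mark>
    <point>30,40</point>
    <point>90,40</point>
    <point>90,110</point>
    <point>30,110</point>
  </mark>
</annotation>
"""

def mask_from_xml(doc, shape):
    """Build a binary mask from annotated boundary points.

    For brevity the contour is approximated by its bounding box;
    a real pipeline would rasterize the full polygon."""
    root = ET.fromstring(doc)
    pts = [tuple(map(int, p.text.split(","))) for p in root.iter("point")]
    xs, ys = zip(*pts)
    mask = np.zeros(shape, dtype=np.uint8)
    mask[min(ys):max(ys) + 1, min(xs):max(xs) + 1] = 1
    return mask

mask = mask_from_xml(xml_doc, (160, 256))
print(mask.sum())  # 71 rows x 61 cols = 4331 foreground pixels
```
        </preformat>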
        <p>After generating the masks, the next crucial step in our data preprocessing pipeline is the cleaning
process. One challenge we encountered was the presence of textual information embedded within
the ultrasound images, originating from the ultrasound device interface. To address this, we applied a
cropping technique to isolate and remove the textual regions from the images. By carefully selecting the
appropriate cropping region, we ensured that the essential anatomical structures of the thyroid were
preserved while eliminating the unwanted text artifacts. This cropping process not only improved the
quality of the training data but also enabled our model to concentrate exclusively on the relevant features
for accurate segmentation. Figure 2 depicts the cropping technique used in our data preprocessing
pipeline.</p>
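        <p>A minimal version of this cropping step is shown below; the margin values are made up for illustration, whereas the real crop region was chosen to match the device interface layout:</p>
        <preformat>
```python
import numpy as np

# Remove a fixed band of device-interface text around the frame.
# The margins here are hypothetical placeholders.
def crop_ultrasound(img, top=60, bottom=20, left=100, right=100):
    h, w = img.shape[:2]
    return img[top:h - bottom, left:w - right]

frame = np.zeros((480, 640))
roi = crop_ultrasound(frame)
print(roi.shape)  # (400, 440)
```
        </preformat>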
        <p>After the data cleaning process, we proceed with the essential steps to train a deep learning model.
To begin, we apply a uniform resizing technique to ensure that all images have consistent dimensions.
This resizing step is crucial for compatibility with the model architecture and efficient computation
during training. Following image resizing, we apply data normalization to improve model convergence
and stability. This process involves transforming the pixel values of the images to a standardized scale.
This process generally entails subtracting the mean value of the dataset and dividing by the standard
deviation. Normalizing the data ensures that the input features have similar ranges and prevents any
single feature from dominating the learning process.</p>
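        <p>The normalization step amounts to the following standardization. This is a sketch using dataset-wide statistics; per-image or per-channel statistics are possible variants:</p>
        <preformat>
```python
import numpy as np

def normalize(images):
    # Standardize pixel values: subtract the dataset mean, divide by std.
    mean, std = images.mean(), images.std()
    return (images - mean) / std

batch = np.random.default_rng(0).uniform(0, 255, size=(8, 160, 256))
z = normalize(batch)
print(round(float(z.mean()), 6), round(float(z.std()), 6))  # ~0.0 and ~1.0
```
        </preformat>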
        <p>Once the dataset has been preprocessed and prepared, it is crucial to partition it into separate
subsets for different purposes. The training set is used to fit the model’s parameters and learn the underlying
patterns and features. The validation set is utilized to adjust the model’s hyperparameters and track
its performance throughout training. Subsequently, the testing set acts as an independent evaluation
set to gauge the model’s generalization and performance on unseen data. The division of the dataset
ensures that the model is assessed on data it hasn’t encountered during training. This partitioning aids
in preventing overfitting, wherein the model becomes overly attuned to the training data and struggles
with new samples.</p>
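        <p>The partitioning step can be sketched as follows. The 70/15/15 ratio is an illustrative assumption, as the exact split proportions are not stated here:</p>
        <preformat>
```python
import numpy as np

def split_dataset(n, train=0.7, val=0.15, seed=0):
    # Shuffle sample indices, then carve out train/val/test partitions.
    idx = np.random.default_rng(seed).permutation(n)
    n_tr, n_va = int(n * train), int(n * val)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

tr, va, te = split_dataset(347)   # 347 images in the DDTI dataset
print(len(tr), len(va), len(te))  # 242 52 53
```
        </preformat>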
        <p>By meticulously cleaning the data, we produced a refined dataset specifically designed to
support effective training and improve our model’s segmentation performance on thyroid ultrasound
scans.</p>
      </sec>
      <sec id="sec-3-2a">
        <title>3.3. Model</title>
        <p>Deep learning has delivered impressive results across various fields, including computer vision.
In recent years, researchers working on image denoising have taken a growing interest in deep learning;
among these architectures, UNet, initially renowned for its application in medical image segmentation,
has also become popular for its effectiveness in image denoising. UNet is a convolutional neural network
that stands out with its encoder-decoder structure. It employs a combination of four down-sampling and
four up-sampling stages while incorporating skip connections to preserve crucial image features across
the down-sampling and up-sampling paths, ensuring that the valuable characteristics of the
image remain intact. In this work, we use UNet with different backbones. Figure 3 illustrates the
models introduced in this study, where we integrated transfer learning models by incorporating new
layers.</p>
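        <p>To make the encoder-decoder geometry concrete, the sketch below traces how four down-sampling and four up-sampling stages transform the 160x256 input resolution used in this study. Real UNet stages also apply learned convolutions and concatenate (rather than add) skip features:</p>
        <preformat>
```python
import numpy as np

def downsample(x):
    # 2x2 max pooling: halves each spatial dimension.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample(x):
    # Nearest-neighbour upsampling: doubles each spatial dimension.
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.zeros((160, 256))  # input resolution used in this study
skips = []
for _ in range(4):        # encoder: four down-sampling stages
    skips.append(x)
    x = downsample(x)
bottleneck = x
print(bottleneck.shape)   # (10, 16)
for _ in range(4):        # decoder: four up-sampling stages
    # Addition stands in for the concatenation used by the real UNet.
    x = upsample(x) + skips.pop()
print(x.shape)            # (160, 256)
```
        </preformat>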
        <p>In our study, we trained the model for 100 epochs using input images scaled to 160x256 pixels. The
model was optimized using the Adam Optimizer, configured with a learning rate of 0.001 and an epsilon
value of 0.1. For the loss function, the Jaccard distance was employed, which is an effective measure for
evaluating the similarity between predicted and ground-truth segmentation masks. Accuracy, the Dice
coefficient, and IoU were used to assess the model’s performance.</p>
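        <p>The Jaccard distance loss can be written as one minus the (smoothed) IoU. The sketch below is a NumPy version; the smoothing constant is an assumed hyperparameter that keeps the loss defined for empty masks:</p>
        <preformat>
```python
import numpy as np

def jaccard_distance(y_true, y_pred, smooth=100.0):
    # 1 - IoU, smoothed so the loss stays finite when both masks are empty.
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - intersection
    return 1.0 - (intersection + smooth) / (union + smooth)

gt   = np.array([[1, 1, 0, 0]], dtype=float)
pred = np.array([[1, 0, 0, 0]], dtype=float)
print(jaccard_distance(gt, pred, smooth=0.0))  # 1 - 1/2 = 0.5
```
        </preformat>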
      </sec>
      <sec id="sec-3-3">
        <title>3.4. Evaluation Metrics</title>
        <sec id="sec-3-3-1">
          <title>3.4.1. Accuracy</title>
          <p>It refers to the proportion of thyroid pixels correctly identified as thyroid and of normal-tissue
pixels correctly classified as normal:</p>
          <p>Accuracy = (TP + TN) / (TP + TN + FP + FN) (1)</p>
        </sec>
        <sec id="sec-3-3-2">
          <title>3.4.2. Dice similarity coefficient (DSC)</title>
          <p>The Dice similarity coefficient (DSC) serves as a performance measure for segmentation, akin to the F1
score, which combines precision and recall. It gauges overlap by assessing the intersection of X and Y,
where X represents the segmented pixels and Y represents the ground truth:</p>
          <p>DSC = 2‖X ∩ Y‖ / (‖X‖ + ‖Y‖) (2)</p>
        </sec>
        <sec id="sec-3-3-3">
          <title>3.4.3. Intersection over Union (IOU)</title>
          <p>The Jaccard index is a statistical measure used to quantify the similarity and dissimilarity of sample
sets. It is commonly referred to as the Intersection over Union (IoU) or the Jaccard similarity coefficient. It
calculates the ratio of the intersection size to the union size of two finite sample sets to determine their
similarity:</p>
          <p>IoU = ‖X ∩ Y‖ / ‖X ∪ Y‖ = TP / (TP + FP + FN) (3)</p>
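          <p>All three metrics can be computed directly from binary masks; a minimal NumPy sketch:</p>
          <preformat>
```python
import numpy as np

def accuracy(pred, gt):
    return float((pred == gt).mean())            # Eq. (1)

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())   # Eq. (2)

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union                         # Eq. (3)

pred = np.array([1, 1, 1, 0, 0, 0], dtype=bool)
gt   = np.array([1, 1, 0, 0, 0, 1], dtype=bool)
print(accuracy(pred, gt), float(iou(pred, gt)))  # 0.6666666666666666 0.5
```
          </preformat>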
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results and discussion</title>
      <p>We conducted experiments using different backbone architectures for the UNet model, including
VGG16, VGG19, ResNet34, and ResNet18. Each backbone architecture offers its unique characteristics
and capabilities in feature extraction and representation. During our evaluation, we observed varying
results in terms of model performance and accuracy. The VGG16 and VGG19 backbones, known for their
deep architectures and strong representation capabilities, provided good overall performance. They were
able to capture intricate details in the ultrasound images and effectively learn the patterns associated
with thyroid nodules. On the other hand, the ResNet34 and ResNet18 backbones, with their residual
connections, demonstrated efficient feature propagation and helped alleviate the vanishing gradient
problem. These architectures showcased promising results, indicating their effectiveness in learning
and representing the distinctive features of thyroid nodules. By experimenting with different backbone
architectures, we aimed to explore the trade-off between model complexity and performance. Table II
presents the test results of different network structures for thyroid ultrasound image segmentation,
evaluated using the Intersection over Union (IoU) metric.</p>
      <p>Among the network structures evaluated, Unet-VGG-16 achieved the highest IoU value of 93.39%. This
model demonstrated excellent performance in accurately segmenting the thyroid regions in ultrasound
images. Following closely behind, Unet-ResNet-34 and Unet-ResNet-18 achieved IoU values of 93.32%
and 92.45%, respectively. These models also displayed strong segmentation capabilities, with minimal
variation from the highest-performing model. Unet-ResNext-50 achieved an IoU value of 92.62%,
indicating its efective performance in thyroid ultrasound image segmentation, although slightly lower
than the top-performing model. Unet-VGG-19 achieved an IoU value of 86.93%, demonstrating relatively
lower performance compared to the other models. These results highlight the effectiveness of different
network structures for thyroid ultrasound image segmentation. Models based on Unet architecture
combined with VGG-16, ResNet-18, ResNet-34, and ResNext-50 achieved impressive segmentation
performance, with IoU values exceeding 90%. These findings provide valuable insights for researchers
and practitioners in choosing appropriate network architectures for accurate and reliable thyroid
segmentation in ultrasound images.</p>
      <p>Based on the evaluation of multiple UNet backbones, the VGG16 architecture, which yielded the
best IoU (Intersection over Union) score, was selected over DenseNet169 for the
segmentation task. The F1-score was chosen as the metric for evaluating the segmentation performance,
which takes into account both precision and recall. After 100 epochs, the evaluation results for the
VGG16 model were as follows: During the training phase, the model achieved a loss of 0.0468, an IoU of
0.9532, and an accuracy of 0.9682. In the validation phase, it recorded a loss of 0.0550, an IoU of 0.9450,
an accuracy of 0.9539, and a Dice coefficient of 0.7042. The learning rate applied during training was
0.0010. These measurements showcase the model’s proficiency in accurately delineating desired objects
within the images. A high IoU score signifies the model’s adeptness in capturing the overlap between
predicted and ground truth segmentation masks, underscoring its efectiveness in this task. In Figure 4,
the training process is depicted, illustrating the metrics fluctuations over 10 epochs until reaching a
consistent state. These fluctuations arise as the model is still adapting and learning from the training
data. Initially, the model is sensitive to minor input variations, leading to metric fluctuations. However,
with continued training and exposure to more examples, the model becomes more resilient and capable
of generalizing to new data. Consequently, metric values stabilize in later epochs. Monitoring these
metrics throughout training is vital for ensuring efective learning and detecting potential issues like
overfitting or underfitting. By scrutinizing metric trends and stability, one can evaluate the model’s
performance and make informed decisions regarding necessary adjustments. Fig. 4.a illustrates the
increasing trend of IoU values during training, indicating improved overlap between predicted and
ground truth masks. Fig. 4.b demonstrates the progressive improvement in pixel-level classification
accuracy as training advances. Fig. 4.c depicts the decreasing trend of the loss metric, reflecting the
model’s learning process and convergence toward accurate predictions. Fig. 4.d depicts the increasing
trend of the Dice coefficient during training.</p>
      <p>Overall, the model’s performance on both the training and validation sets shows a consistent
improvement over the course of training. The significant increases in IOU and accuracy, along with
the decreases in loss, indicate the model’s ability to accurately detect and classify thyroid nodules
from ultrasound images. Data augmentation helps to mitigate overfitting and enhances the model’s
ability to generalize to unseen data. Additionally, through preprocessing operations such as resizing,
normalization, and cropping, we optimize the input data to ensure consistency, comparability, and
relevance. Normalization enhances the model’s ability to learn from diverse images by normalizing
pixel values. Furthermore, cropping focuses the model’s attention on the crucial regions of interest,
reducing the impact of noise and irrelevant information. By incorporating these techniques into our
study, we aim to improve the accuracy and reliability of our model’s predictions, ultimately leading to
better diagnostic outcomes and patient care in the field of thyroid nodule detection.</p>
      <p>
        In this comparison, the model proposed by Li et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], known as BTNet, achieved an IoU of 0.810
and a Dice coefficient of 0.892. Pan et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] introduced SGUNet, which reached a Dice coefficient
of 0.729, although the IoU value was not provided. Zheng et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] developed DSRU-Net, which
achieved an IoU of 0.858 and a Dice coefficient of 0.941. Lastly, our model, UNet-VGG16, achieved the
highest IoU of 0.9452 but had a lower Dice coefficient of 0.7113 compared to the other models. For the test
set, we intentionally did not apply cropping to evaluate the performance of our model in successfully
segmenting thyroid regions. As illustrated in Figure 5, our model demonstrates successful detection and
accurate segmentation of the major thyroid structures. Additionally, our model exhibits remarkable
efficiency, with an impressive prediction time of only 21 milliseconds. This rapid processing time not
only ensures timely results but also facilitates the integration of our model into clinical workflows,
enabling doctors to make swift and accurate predictions for thyroid segmentation.
      </p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In essence, our research marks a notable progression in AI-driven thyroid nodule detection from
ultrasound images. By introducing innovative approaches, curating a comprehensive dataset, and
conducting thorough assessments, we have significantly enhanced the precision, effectiveness, and
practicality of thyroid nodule detection. Our discoveries hold promise for healthcare practitioners,
furnishing them with a dependable and streamlined tool for precise diagnosis and treatment of thyroid
ailments. Looking ahead, numerous promising directions for further research and advancement exist
in the realm of AI-driven thyroid nodule detection from ultrasound images. One key direction is the
integration of multimodal data, which involves incorporating additional information such as patient
demographics, clinical history, and data from other imaging modalities like CT or MRI. By leveraging
these diverse sources of data, we can enhance the accuracy and reliability of thyroid nodule detection,
enabling more comprehensive and informed decision-making. Real-time and point-of-care applications
represent another exciting avenue for future work. Developing algorithms that can process ultrasound
images in real time and provide immediate feedback to clinicians during examinations can significantly
improve efficiency and facilitate faster diagnosis and treatment decisions. Integration with portable
ultrasound devices and mobile applications can further extend the reach of AI-based detection to
resource-constrained settings, making it more accessible and impactful.</p>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>V.</given-names>
            <surname>Uslar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Becker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Weyhe</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Tabriz</surname>
          </string-name>
          , “
          <article-title>Thyroid disease-specific quality of life questionnaires - A systematic review</article-title>
          ,
          <source>” Endocrinol. Diabetes Metab.</source>
          , vol.
          <volume>5</volume>
          , no.
          <issue>5</issue>
          , p.
          <fpage>e357</fpage>
          ,
          <year>2022</year>
          , doi: 10.1002/edm2.357.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>I. Girolami</surname>
          </string-name>
          et al.,
          <article-title>“Impact of image analysis and artificial intelligence in thyroid pathology, with particular reference to cytological aspects,” Cytopathology</article-title>
          , vol.
          <volume>31</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>432</fpage>
          -
          <lpage>444</lpage>
          ,
          <year>2020</year>
          , doi: 10.1111/cyt.12828.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>N.</given-names>
            <surname>Benameur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mahmoudi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Benameur</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Mahmoudi</surname>
          </string-name>
          , ”
          <article-title>Deep Learning in Medical Imaging”</article-title>
          .
          <source>IntechOpen</source>
          ,
          <year>2023</year>
          . doi: 10.5772/intechopen.111686.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Malik</surname>
          </string-name>
          , “
          <article-title>Evaluation of automated organ segmentation for total-body PET-CT</article-title>
          ,”
          <year>2023</year>
          . https://www.doria.fi/handle/10024/187173 (accessed Jun. 11, 2023).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yamauchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Yatagawa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ohtake</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Suzuki</surname>
          </string-name>
          , “
          <article-title>Bin-scanning: Segmentation of X-ray CT volume of binned parts using Morse skeleton graph of distance transform</article-title>
          ,”
          <source>Comput. Vis. Media</source>
          , vol.
          <volume>9</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>319</fpage>
          -
          <lpage>333</lpage>
          , Jun.
          <year>2023</year>
          , doi: 10.1007/s41095-022-0296-2.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Cong</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          , “
          <article-title>Brain tumor segmentation based on the fusion of deep semantics and edge information in multimodal MRI</article-title>
          ,”
          <source>Inf. Fusion</source>
          , vol.
          <volume>91</volume>
          , pp.
          <fpage>376</fpage>
          -
          <lpage>387</lpage>
          , Mar.
          <year>2023</year>
          , doi: 10.1016/j.inffus.2022.10.022.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Baccouche</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Garcia-Zapirain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Castillo Olea</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Elmaghraby</surname>
          </string-name>
          , “
          <article-title>Connected-UNets: a deep learning architecture for breast mass segmentation</article-title>
          ,”
          <source>Npj Breast Cancer</source>
          , vol.
          <volume>7</volume>
          , no.
          <issue>1</issue>
          , Art. no.
          <issue>1</issue>
          , Dec.
          <year>2021</year>
          , doi: 10.1038/s41523-021-00358-x.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A. E.</given-names>
            <surname>Ilesanmi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Chaumrattanakul</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Makhanov</surname>
          </string-name>
          , “
          <article-title>A method for segmentation of tumors in breast ultrasound images using the variant enhanced deep learning</article-title>
          ,”
          <source>Biocybern. Biomed. Eng.</source>
          , vol.
          <volume>41</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>802</fpage>
          -
          <lpage>818</lpage>
          , Apr.
          <year>2021</year>
          , doi: 10.1016/j.bbe.2021.05.007.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Haleem</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Pecchia</surname>
          </string-name>
          , “
          <article-title>A Deep Learning Based ECG Segmentation Tool for Detection of ECG Beat Parameters</article-title>
          ,” in
          <source>2022 IEEE Symposium on Computers and Communications (ISCC)</source>
          , Jun.
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          . doi: 10.1109/ISCC55528.2022.9912906.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Wang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>X.</given-names>
            <surname>Ding</surname>
          </string-name>
          , “
          <article-title>A Novel Model of Thyroid Nodule Segmentation for Ultrasound Images</article-title>
          ,”
          <source>Ultrasound Med. Biol.</source>
          , vol.
          <volume>49</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>489</fpage>
          -
          <lpage>496</lpage>
          , Feb.
          <year>2023</year>
          , doi: 10.1016/j.ultrasmedbio.2022.09.017.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>H.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zhou</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L. J.</given-names>
            <surname>Latecki</surname>
          </string-name>
          , “
          <article-title>SGUNET: Semantic Guided UNET For Thyroid Nodule Segmentation</article-title>
          ,” in
          <source>2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)</source>
          , Apr.
          <year>2021</year>
          , pp.
          <fpage>630</fpage>
          -
          <lpage>634</lpage>
          . doi: 10.1109/ISBI48211.2021.9434051.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>T.</given-names>
            <surname>Zheng</surname>
          </string-name>
          et al., “
          <article-title>Segmentation of thyroid glands and nodules in ultrasound images using the improved U-Net architecture</article-title>
          ,”
          <source>BMC Med. Imaging</source>
          , vol.
          <volume>23</volume>
          , no.
          <issue>1</issue>
          , p.
          <fpage>56</fpage>
          , Apr.
          <year>2023</year>
          , doi: 10.1186/s12880-023-01011-8.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>L.</given-names>
            <surname>Pedraza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Vargas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Narváez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Durán</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Muñoz</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Romero</surname>
          </string-name>
          , “
          <article-title>An open access thyroid ultrasound image database</article-title>
          ,” in
          <source>Tenth International Symposium on Medical Information Processing and Analysis</source>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Romero</surname>
          </string-name>
          and
          <string-name>
            <given-names>N.</given-names>
            <surname>Lepore</surname>
          </string-name>
          , Eds., Cartagena de Indias, Colombia, Jan.
          <year>2015</year>
          , p.
          <fpage>92870W</fpage>
          . doi: 10.1117/12.2073532.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>