<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Olena Chumachenko</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Kherson National Technical University</institution>
          ,
          <addr-line>24, Beryslavske Shose, Kherson, 73008</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>National Aviation University</institution>
          ,
          <addr-line>1, Liubomyra Huzara ave., Kyiv, 03058</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”</institution>
          ,
          <addr-line>37, Prospect Beresteiskyi (former Peremohy), Kyiv, 03056</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This work focuses on the intelligent processing of MRI brain images to detect malignant tumors, which, in comparison with tumors of other organs, have their own specificity that makes correct segmentation and classification difficult. Because labeling a training sample is time-consuming, a semi-supervised learning method based on 4D atlas priors was developed in this paper to solve the segmentation problem while making efficient use of unlabeled images. In the proposed method, a probabilistic 4D atlas is constructed from the coordinates and voxel intensities of the labeled segments, and this atlas is generalized using Gaussian mixture models with consideration of tumor contrast values. Three uses of this atlas are proposed, in the form of two loss functions and pseudomask validation. The performance of the method was tested on a real MRI dataset of brain tumors in T1 modality, axial view. The results showed an improvement in segmentation accuracy compared to existing methods.</p>
      </abstract>
      <kwd-group>
        <kwd>Semi-supervised learning</kwd>
        <kwd>brain tumor</kwd>
        <kwd>segmentation</kwd>
        <kwd>atlas prior</kwd>
        <kwd>loss function</kwd>
        <kwd>Gaussian mixture model</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        According to recent studies [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ], brain cancer remains a significant global health concern,
accounting for 1.9% of all cancer cases and 2.5% of cancer-related deaths worldwide. In 2019, 347,992
new cases were reported, with higher incidence rates in males (54%) compared to females (46%). The
highest age-standardized incidence rates were observed in Europe, while Africa reported the lowest.
Notably, Denmark had the highest incidence rate at 17.1 per 100,000 people. The mortality rate also
varied significantly, with Palestine reporting the highest at 7.2 per 100,000 people. Trends from 1990
to 2019 indicate a significant increase in incidence globally, highlighting the need for enhanced
research and preventive strategies.
      </p>
      <p>
        The main role of conventional/morphologic MRI in making the diagnosis is to determine the size
and anatomic location of the lesion in the brain for treatment or biopsy planning, to assess mass
effect and edema in the surrounding healthy brain tissue, to assess the relationship of the lesion to
the ventricular system and vascular structures of the brain, and finally, together with other
"functional" MRI sequences, to suggest a possible diagnosis [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>The primary task of MRI brain image processing is segmentation, which is efficiently
solved by deep neural networks. The main problem in training such a network, whether a
convolutional neural network or a vision transformer, is the availability of a sufficiently large labeled
training sample. Unfortunately, in real-world situations the samples are limited and contain few
labeled scans, since high-quality labeling demands a large amount of time from a qualified medical
radiologist, which is not always available.</p>
      <p>To address the problem of an insufficient sample and limited resources for labeling and training the
model, there are different approaches, the most popular of which are:</p>
      <p>transfer learning: transferring knowledge from a larger, more general dataset to a specific
limited dataset;</p>
      <p>active learning: methods for identifying the most relevant data for manual segmentation by a
specialist;</p>
      <p>semi-supervised learning (SSL): model training that uses unlabeled data alongside labeled data.</p>
      <p>Each of these approaches deserves individual attention, but this paper proposes the use and
improvement of semi-supervised learning, one of the most promising approaches that makes use of
the unlabeled sample.</p>
      <p>The use of semi-supervised learning in medical image segmentation has several advantages. First,
it significantly reduces the need for large amounts of labeled data, which is time consuming and
expensive to create. In the medical domain, where labeling requires the expertise of specialists, this
is particularly important.</p>
      <p>In addition, SSL can improve model quality by using information from unlabeled data without
requiring additional datasets or labeling costs. This helps the model to better generalize and more
accurately segment medical images, even with a limited amount of labeled data.</p>
      <p>Another aspect is the ability to utilize a variety of data. SSL methods can efficiently handle
heterogeneous data, which increases their flexibility and applicability in various medical
applications.</p>
      <p>Ultimately, such methods help accelerate the development and implementation of more accurate
and reliable segmentation models in medical systems, which can lead to improved patient diagnosis
and treatment.</p>
      <p>
        One type of SSL is knowledge priors (KP) based learning [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>Prior knowledge is information that the learner already has before learning new information, and
sometimes it helps to cope with new tasks. Compared with non-medical images, medical images have
many anatomical priors such as shape and position of organs, and incorporating anatomical prior
knowledge into deep learning can improve the performance of medical image segmentation.</p>
      <p>In this paper, a new SSL method has been developed that uses KP in the form of a 4D anomaly atlas
and its generalization via Gaussian mixture models (GMM) to capture potential anomalous
regions more broadly. The novelty lies in using not only organ position but also organ contrast, as well
as a probabilistic generalization, so as not to be limited to the anomalies captured in the labeled sample.
The approach was tested on a dataset of brain tumors.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>
        In recent years, there has been a significant amount of research in the field of medical segmentation
with deep neural networks [
        <xref ref-type="bibr" rid="ref5 ref6 ref7 ref8">5-8</xref>
        ] and semi-supervised medical image segmentation [
        <xref ref-type="bibr" rid="ref10 ref11 ref9">9-13</xref>
        ]. This
review focuses on the integration of shape priors, atlas priors, and semi-supervised learning
approaches in various methods, highlighting their strengths and weaknesses.
      </p>
      <p>
        In the paper [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] the authors propose a dual-task framework to improve segmentation
performance by leveraging both labeled and unlabeled data. The framework consists of two tasks: a
pixel-wise segmentation task and a geometry-aware level set representation task. The dual-task
consistency regularization ensures that the predictions from both tasks are consistent, thereby
enhancing the model's ability to utilize unlabeled data. This method's strength lies in its ability to
incorporate geometric constraints, which helps in achieving more accurate and robust segmentation
results. However, the increased model complexity and the need for careful balance between the two
tasks can pose challenges in implementation and training.
      </p>
      <p>
        The [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] paper introduces a novel approach using transformer networks that leverage shape
priors through the use of template-based deformable models. This method uses a pre-defined
template that captures the general shape of the target organ and deforms it to match the specific
instance in the input image. The strength of this approach is its ability to maintain global shape
consistency while allowing local variations, which is particularly useful for anatomical structures
with high variability. However, the reliance on large amounts of training data and the computational
demands of transformer networks can be limiting factors.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] the authors utilize an atlas prior within a GAN framework to enhance liver segmentation.
The atlas prior provides a strong anatomical reference, guiding the segmentation process and
ensuring anatomical consistency. This approach effectively combines labeled and unlabeled data,
improving segmentation accuracy even with limited labeled data. The primary strength of this
method is its ability to incorporate detailed anatomical knowledge, which is crucial for accurately
segmenting complex organs like the liver. However, the adversarial training process can be unstable
and requires careful tuning of hyperparameters.
      </p>
      <p>The paper [12] presents a method that integrates dense networks with deep anatomical priors
and region adaptation techniques. This approach is particularly effective for fine segmentation tasks,
such as renal artery segmentation, where precise anatomical details are critical. The use of dense
networks allows for efficient capture of both local and global features, while the deep anatomical
priors ensure anatomical plausibility. However, the model's complexity and the need for extensive
labeled data for initial training can be significant drawbacks.</p>
      <p>The work in [13] introduces a unique approach to incorporating shape priors through topological
constraints. By using persistent homology, the method enforces topological consistency in the
segmentation results, which is particularly beneficial for capturing complex anatomical structures.
The primary advantage of this approach is its ability to ensure topological correctness in the
segmentation output, reducing the likelihood of anatomical errors. However, the computational
complexity of calculating persistent homology and the reliance on high-quality topological priors
can be limiting factors.</p>
      <p>A common limitation across the reviewed papers is their focus on specific aspects of the
segmentation problem without simultaneously addressing the proposed generalized localization of
anomalies and their pixel values. While each method brings innovative solutions to one or two
aspects of the segmentation task, none of them comprehensively integrate all of the critical factors.
Moreover, overly strict atlas or shape regularization can sometimes degrade the generalizability of
the model on new data.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Dataset overview</title>
      <p>A private MRI dataset of brain tumors was used for experimentation and validation of
the proposed method (Fig. 1). The T1 modality and axial view were used. The dataset consists of 34
patients and 1144 images. The labeling was performed manually by medical professionals with more
than 10 years of experience. An example of binary labeling of tumors by a specialist is shown in Fig.
2.</p>
      <p>Figure 1: The brain MRI dataset used.</p>
      <p>Figure 2: Manually segmented tumors on MRI images (shown in orange).</p>
    </sec>
    <sec id="sec-4">
      <title>4. Proposed method</title>
      <p>For the labeled sample D_L : (x_1, y_1), …, (x_n, y_n), where x ∈ ℝ^(H×W×D) is the MRI scan tensor and y ∈
{0, 1}^(H×W×D) is a binary mask of the same size as the original tensor, in which element 0 denotes the absence
and 1 the presence of an anomaly at a given voxel of the scan, and the unlabeled sample
D_U : (x_(n+1), …, x_(n+m)), create a classifier f(x) that correctly predicts the binary mask y ∈ {0, 1}^(H×W×D)
of a new scan tensor x ∈ ℝ^(H×W×D), utilizing both samples D_L and D_U.</p>
      <sec id="sec-4-1">
        <title>Proposed solution</title>
        <p>
          Based on the works [
          <xref ref-type="bibr" rid="ref10 ref11 ref9">9-13</xref>
          ] about atlas and shape priors, we propose an improvement of this approach
in the form of generalizing pixel distributions and adding more dimensions to the atlas.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>4D atlas priors</title>
        <p>To account for anatomical structures and the natural location of anomalies, as well as their color, we
propose to create 4D atlas priors based on labeled data segments. The first three dimensions of the
atlas are the coordinates of the anomaly location in some section of the scan in 3D. The fourth
dimension is the observed color or contrast of anomaly pixels.</p>
        <p>To generalize the atlas and remove the limitation of the input data, we propose modeling the pixel
distributions of a given atlas through GMMs that would completely cover the atlas with centers at
the most frequent locations (Figs. 3b and 5). The use of modeling through GMM will help the models
generalize better to new data without restricting them to the existing tumor atlas, as GMM assigns
the probability of tumor presence to a wider range of pixels, unlike a conventional atlas.</p>
        <p>Let us consider a method of representing segment voxels as GMMs. For this purpose, let us define the
set of quadruples of all segment voxels from the labeled dataset:</p>
        <p>V = { (r_p^(j), c_p^(j), s_p^(j), I^(j)(r_p, c_p, s_p)) | j = 1, 2, …, N; p = 1, 2, …, H × W × D }, (1)</p>
        <p>where r_p^(j) is the row number of the pth segment voxel on 3D scan j; c_p^(j) is the column number of
the pth segment voxel on 3D scan j; s_p^(j) is the number of the 2D image on 3D scan j; I^(j)(r_p, c_p, s_p) is the
voxel intensity value located in the r_p^(j)th row, c_p^(j)th column and s_p^(j)th 2D image of the jth 3D scan;
N is the number of 3D scans in the labeled sample; H, W, D are the 3D scan dimensions.</p>
        <p>An example of segments is shown in Fig. 2. By aggregating segments from all images, their 2D
histogram can be constructed. An example of 2D histogram of the location of segments (tumors in
the brain) by image space is presented in Fig. 3a. It is worth noting that the pixel intensities were not
used for visualization, but only their 2D locations.</p>
        <p>This distribution of the location and intensity of all segment voxels in the dataset is modeled via a
GMM as follows:</p>
        <p>v ~ G(v) = Σ_(i=1)^K w_i · N(v | μ_i, Σ_i), with Σ_(i=1)^K w_i = 1, (2)</p>
        <p>N(v | μ_i, Σ_i) = (2π)^(−d/2) |Σ_i|^(−1/2) exp( −(1/2) (v − μ_i)^T Σ_i^(−1) (v − μ_i) ),</p>
        <p>where v is a vector-quadruple from the set V; μ_i is the mean vector of the ith normal
distribution in the GMM; Σ_i is the covariance matrix of the ith normal distribution of the model; w_i
is the weight of the ith distribution in the model; K is the total number of Gaussian distributions in
the model; d = 4 is the dimensionality of v.</p>
        <p>The parameters of GMM models can be found using the Expectation-Maximization algorithm
[14].</p>
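        <p>As an illustration of collecting the quadruples of Eq. 1 and fitting the GMM of Eq. 2 by Expectation-Maximization, the sketch below uses scikit-learn's GaussianMixture (which runs EM inside fit()). The function name, the toy data, and the component count are illustrative assumptions, not the paper's exact configuration.</p>

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_atlas_gmm(scans, masks, n_components=16, seed=0):
    """Fit a 4D GMM atlas to (row, col, slice, intensity) quadruples
    of all labeled segment voxels (Eqs. 1-2); EM runs inside fit()."""
    quads = []
    for x, y in zip(scans, masks):          # x: HxWxD floats, y: HxWxD {0,1}
        r, c, s = np.nonzero(y)             # coordinates of segment voxels
        quads.append(np.stack([r, c, s, x[r, c, s]], axis=1))
    quads = np.concatenate(quads, axis=0).astype(float)
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=seed)
    gmm.fit(quads)                          # Expectation-Maximization [14]
    return gmm

# toy example: two 8x8x4 scans with a small bright "tumor" block
rng = np.random.default_rng(0)
scans = [rng.random((8, 8, 4)) for _ in range(2)]
masks = [np.zeros((8, 8, 4), dtype=int) for _ in range(2)]
for x, y in zip(scans, masks):
    y[2:5, 2:5, 1:3] = 1
    x[2:5, 2:5, 1:3] += 2.0                 # tumor voxels have higher contrast
gmm = fit_atlas_gmm(scans, masks, n_components=2)
print(gmm.weights_.sum())                   # mixture weights sum to 1
```

        <p>Each fitted component's mean vector mixes spatial coordinates with intensity, which is what lets the atlas capture both where tumors occur and how bright they are.</p>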
        <p>An example of the GMM representation of the location of the aggregated 2D segments from Fig. 3a is
shown in Fig. 3b. In 3D, the tumor atlas may look as in Fig. 4, with its modeling via GMM shown in
Fig. 5 (no voxel intensity, only location).</p>
        <p>An additional atlas for voxel intensity estimation is the averaged 3D atlas of the intensities of all
segmented regions:</p>
        <p>A = (1/N) Σ_(j=1)^N X_j ⨀ Y_j, (3)</p>
        <p>where ⨀ is the element-by-element multiplication operation; X_j is the 3D tensor of the jth MRI image
scan of size H × W × D (height, width, and depth, respectively); Y_j is the 3D tensor of the binary
mask of size H × W × D with elements taking values 0 or 1; N is the number of 3D scans in the
labeled sample.</p>
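        <p>Eq. 3 is simply the mean over the labeled sample of each scan masked by its segmentation; a minimal sketch (the function name is illustrative):</p>

```python
import numpy as np

def intensity_atlas(scans, masks):
    """Averaged 3D intensity atlas (Eq. 3): mean over N scans of X ⊙ Y."""
    acc = np.zeros_like(scans[0], dtype=float)
    for x, y in zip(scans, masks):
        acc += x * y            # element-wise product keeps only segment voxels
    return acc / len(scans)

# two tiny scans sharing one labeled voxel at (0, 0, 0)
x1 = np.ones((2, 2, 2)); y1 = np.zeros((2, 2, 2)); y1[0, 0, 0] = 1
x2 = 3 * np.ones((2, 2, 2)); y2 = np.zeros((2, 2, 2)); y2[0, 0, 0] = 1
atlas = intensity_atlas([x1, x2], [y1, y2])
print(atlas[0, 0, 0])   # (1 + 3) / 2 = 2.0
```

        <p>Voxels never covered by a mask stay at zero, so the atlas is informative only inside regions where anomalies were actually observed.</p>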
        <p>Three SSL training options based on the proposed 4D atlas are proposed:</p>
        <p>1. A new loss function that maximizes the likelihood of the locations and voxel intensities of
unlabeled-image pseudomasks under the atlas values in the region of the resulting pseudomask.</p>
        <p>2. A loss function based on option 1, but the likelihood is maximized by location only, while an
MSE based on the intensity atlas is used for intensity.</p>
        <p>3. Validation of pseudomasks for atlas-based sample expansion.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4D GMM atlas loss</title>
        <p>We propose the negative log-likelihood (NLL) of the data under the GMM model (Eq. 2) as a loss
function based on the 4D GMM atlas. The NLL for a single voxel quadruple is:</p>
        <p>ℓ(v_p) = −log G(v_p), (4)</p>
        <p>where v_p is a quadruple taken from the pseudo segment S = X ⨀ Ŷ (Eq. 3), generated from a
pseudomask Ŷ ∈ {0, 1}^(H×W×D) predicted for an unlabeled scan X ∈ ℝ^(H×W×D).</p>
        <p>For a pseudomask with P such voxels, the total atlas loss is the sum of the NLL over all quadruples:</p>
        <p>L_atlas = Σ_(p=1)^P ℓ(v_p). (5)</p>
        <p>To calculate the first option of the loss function, the following steps are performed: pseudomask
calculation Ŷ ∈ {0, 1}^(H×W×D); creation of quadruples by Eq. 1; calculation of ℓ for each obtained
quadruple (Eq. 4); summation of ℓ over all quadruples (Eq. 5).</p>
        <p>Then the full proposed loss function looks like Eq. 6, provided that the combined Dice +
binary cross-entropy (BCE) loss is used for the labeled data:</p>
        <p>L = α·L_Dice + β·L_BCE + γ·L_atlas, (6)</p>
        <p>where α, β, γ are the corresponding weights of the loss components, which can be specified in
advance.</p>
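        <p>The NLL term of Eqs. 4-5 can be sketched by scoring pseudomask voxel quadruples under a fitted GMM; scikit-learn's score_samples returns per-sample log-density, so the loss is just its negated sum. The toy atlas and the function name are illustrative assumptions:</p>

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_nll_loss(gmm, scan, pseudo_mask):
    """Atlas loss (Eqs. 4-5): sum of -log G(v) over all voxels of the
    pseudo segment defined by pseudo_mask."""
    r, c, s = np.nonzero(pseudo_mask)
    if r.size == 0:
        return 0.0
    quads = np.stack([r, c, s, scan[r, c, s]], axis=1).astype(float)
    return float(-gmm.score_samples(quads).sum())  # score_samples = log G(v)

# fit a tiny 4D atlas concentrated near voxel (4, 4, 2) with intensity ~2.5
rng = np.random.default_rng(1)
train = rng.normal(loc=[4, 4, 2, 2.5], scale=0.5, size=(200, 4))
gmm = GaussianMixture(n_components=1, random_state=0).fit(train)

scan = np.full((8, 8, 4), 2.5)
good = np.zeros((8, 8, 4), dtype=int); good[3:5, 3:5, 1:3] = 1  # near atlas mode
bad = np.zeros((8, 8, 4), dtype=int);  bad[0:2, 0:2, 0:1] = 1   # far from it
assert gmm_nll_loss(gmm, scan, good) < gmm_nll_loss(gmm, scan, bad)
```

        <p>Pseudomasks that place tumors in anatomically implausible locations or with atypical intensities receive a larger loss, which is exactly the pressure Eq. 6 adds to the unlabeled branch of training.</p>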
      </sec>
      <sec id="sec-4-4">
        <title>3D GMM atlas loss and atlas voxel intensity loss</title>
        <p>In the second option, the likelihood is maximized by location only: triplets (r_p, c_p, s_p) are created
according to Eq. 1, but without pixel intensity, and a 3D GMM is used in Eqs. 4-5. For the voxel
intensities, an MSE against the intensity atlas is used as the loss:</p>
        <p>L_MSE = (1/P) Σ_(p=1)^P ( S_p − A_p^(Ŷ) )², (7)</p>
        <p>where S = X ⨀ Ŷ ∈ ℝ^(H×W×D) is the pseudo segment (Eq. 3) generated from the pseudomask
Ŷ ∈ {0, 1}^(H×W×D); A ∈ ℝ^(H×W×D) is the tensor of the voxel intensity atlas; A^(Ŷ) are the voxel
intensity atlas values only in voxels where the pseudomask values are 1; ⨀ is the element-by-element
multiplication operation.</p>
        <p>To calculate the second option of the loss function, the following steps are performed: pseudomask
calculation Ŷ ∈ {0, 1}^(H×W×D); creation of triplets according to Eq. 1, but without pixel intensity;
calculation of ℓ for each resulting triplet (Eq. 4); summation of ℓ over all triplets (Eq. 5); pseudomask
segment selection (Eq. 3); MSE calculation for intensities (Eq. 7).</p>
        <p>Then the full proposed loss function will look like this:</p>
        <p>L = α·L_Dice + β·L_BCE + γ·L_atlas + δ·L_MSE, (8)</p>
        <p>where α, β, γ, δ are the corresponding weights of each component of the loss function included in the
formula; they can be specified in advance.</p>
        <p>Pseudomask validation based on the 4D atlas involves adding to the training sample all masks
whose voxels have been selected by a threshold rule combining G(v_p), the voxel's probability density
from the GMM atlas (Eq. 2), and the probability of an anomaly in the given voxel calculated by the
model segmenter, against a threshold T for selecting voxels into the pseudomask, followed by
iterative training of the model.</p>
        <p>Training in this case uses only the Dice loss and BCE loss.</p>
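        <p>The validation option above can be sketched as a voxel-wise filter that keeps only voxels whose atlas density and predicted anomaly probability are both high. The conjunction rule, the density normalization, and the default threshold here are illustrative assumptions, since the source specifies the rule only schematically:</p>

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def validate_pseudomask(gmm, scan, prob_map, t=0.5):
    """Keep a voxel only if its normalized atlas density G(v) (Eq. 2)
    AND the segmenter's anomaly probability both exceed t.
    The conjunction used here is an illustrative guess."""
    H, W, D = scan.shape
    r, c, s = np.indices((H, W, D)).reshape(3, -1)
    quads = np.stack([r, c, s, scan.ravel()], axis=1).astype(float)
    density = np.exp(gmm.score_samples(quads)).reshape(H, W, D)
    density = density / density.max()        # scale to [0, 1] for thresholding
    return np.logical_and(density > t, prob_map > t).astype(int)

# atlas concentrated around voxel (4, 4, 2) with intensity ~2.5
rng = np.random.default_rng(0)
gmm = GaussianMixture(n_components=1, random_state=0).fit(
    rng.normal(loc=[4, 4, 2, 2.5], scale=0.5, size=(200, 4)))
scan = np.full((8, 8, 4), 2.5)
accepted = validate_pseudomask(gmm, scan, prob_map=np.ones((8, 8, 4)))
rejected = validate_pseudomask(gmm, scan, prob_map=np.zeros((8, 8, 4)))
assert accepted.sum() >= 1 and rejected.sum() == 0
```

        <p>Validated masks are then added to the labeled pool, and the model is retrained with Dice + BCE only, so the atlas acts purely as a gatekeeper rather than as a loss term.</p>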
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Experiments and results</title>
      <p>The basic segmenter neural network DeepLabV3+ was used for training. Training was
performed with different configurations of the proposed loss functions. The dataset was divided by
patient into training, validation, and test samples in proportions of 70/10/20 percent respectively,
i.e., 23/4/7 patients. The Adam optimizer and the proposed loss functions were used.</p>
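      <p>A patient-wise split like the one above can be sketched by partitioning patient IDs before collecting their images, so that no patient contributes slices to two subsets; the function name and the shuffling seed are illustrative assumptions:</p>

```python
import random

def split_by_patient(patient_ids, fracs=(0.70, 0.10, 0.20), seed=42):
    """Split patients (not images) into train/val/test so that all
    images of one patient land in exactly one subset."""
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = round(fracs[0] * n)
    n_val = round(fracs[1] * n)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_by_patient(range(34))
print(len(train), len(val), len(test))   # 24 3 7 (rounding differs slightly
                                         # from the paper's 23/4/7)
```

      <p>Splitting by patient rather than by image prevents leakage of nearly identical adjacent slices between the training and test sets, which would otherwise inflate the reported IoU.</p>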
      <p>Training the network on the full dataset with Dice loss gave a baseline IoU of 0.93 on the test
sample. The results of semi-supervised learning of the network with the proposed methods on the
test sample are given below.</p>
      <p>Table 1: Resulting IoU of the proposed approach on the test sample, by method (4D GMM atlas
loss; 3D GMM loss + voxel intensity loss; pseudomask validation; method [<xref ref-type="bibr" rid="ref10">10</xref>]; method [13]) and by
percentage of labeled data used, starting from 5%.</p>
      <p>The results showed that the proposed atlas-based SSL methods achieved IoU values close to that of
training on the full dataset. The best-performing method was 3D GMM loss + voxel intensity loss,
which achieved an IoU of 0.9 with only 30% of the data used.</p>
      <p>
        Comparison with existing methods [
        <xref ref-type="bibr" rid="ref10">10, 13</xref>
        ] showed superior segmentation accuracy of the
proposed methods and an overall gain of 0.1 IoU, which confirms the validity of the developed
method.
      </p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>In this paper, we have introduced an advanced semi-supervised learning framework for brain tumor
segmentation that leverages 4D atlas priors to utilize both labeled and unlabeled data effectively. Our
approach constructs a probabilistic 4D atlas based on labeled segments, generalizes this atlas using
GMM, and incorporates additional dimensions to account for contrast variations within the
anomalies.</p>
      <p>The proposed method addresses the common limitations found in existing research, which often
focus on either shape priors, atlas priors, or semi-supervised learning in isolation. By integrating
generalized localization of anomalies, their contrast, and precise boundary delineation into a single
framework, we achieve a more comprehensive and robust segmentation solution.</p>
      <p>Our experiments on a real MRI dataset of brain tumors demonstrate that the proposed method
significantly improves segmentation accuracy compared to existing methods. The inclusion of 4D
atlas priors enhances the model's ability to generalize across different types of anomalies, ensuring
both anatomical plausibility and precise boundary detection.</p>
      <p>Future work will explore further enhancements to the atlas generation process and the
integration of additional modules into the neural network to extend the applicability of the proposed
method to other medical imaging scenarios. This research contributes to the field of medical image
analysis by providing a more effective and generalized approach to semi-supervised segmentation,
paving the way for improved diagnostic and treatment planning tools in clinical practice.</p>
    </sec>
    <sec id="sec-7">
      <title>References</title>
      <p>[12] He, Y., Yang, G., Yang, J., Chen, Y., Kong, Y., Wu, J., Tang, L., Zhu, X., Dillenseger, J.-L.,
Shao, P., Zhang, S., Shu, H., Coatrieux, J.-L., &amp; Li, S. (2020). Dense Biased Networks with Deep
Priori Anatomy and Hard Region Adaptation: Semi-supervised Learning for Fine Renal Artery
Segmentation. Medical Image Analysis, 63. doi:10.1016/j.media.2020.101722.</p>
      <p>[13] Clough, J.R., Byrne, N., Oksuz, I., Zimmer, V.A., Schnabel, J.A., &amp; King, A.P. (2019). A Topological
Loss Function for Deep-Learning Based Image Segmentation Using Persistent Homology. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 44, 8766-8778.</p>
      <p>[14] Reynolds, D. (2009). Gaussian Mixture Models. In: Li, S.Z., Jain, A. (eds) Encyclopedia of
Biometrics. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-73003-5_196.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Ilic, I., Ilic, M. International patterns and trends in the brain cancer incidence and mortality: An observational study based on the global burden of disease. Heliyon. 2023 Jul 13;9(7):e18222. doi: 10.1016/j.heliyon.2023.e18222. PMID: 37519769; PMCID: PMC10372320.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Miller, K.D., Ostrom, Q.T., Kruchko, C., Patil, N., Tihan, T., Cioffi, G., Fuchs, H.E., Waite, K.A., Jemal, A., Siegel, R.L., Barnholtz-Sloan, J.S. Brain and other central nervous system tumor statistics, 2021. CA Cancer J Clin. 2021. https://doi.org/10.3322/caac.21693</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Villanueva-Meyer, J.E., Mabray, M.C., Cha, S. Current Clinical Brain Tumor Imaging. Neurosurgery. 2017 Sep 1;81(3):397-415. doi: 10.1093/neuros/nyx103. PMID: 28486641; PMCID: PMC5581219.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Jiao, R., Zhang, Y., Ding, L., Xue, B., Zhang, J., Cai, R., &amp; Jin, C. (2024). Learning with limited annotations: A survey on deep semi-supervised learning for medical image segmentation. Comput. Biol. Med. 169 (Feb 2024). https://doi.org/10.1016/j.compbiomed.2023.107840</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Sineglazov, V., Riazanovskiy, K., Klanovets, A., Chumachenko, E., &amp; Linnik, N. Intelligent tuberculosis activity assessment system based on an ensemble of neural networks. Comput. Biol. Med., vol. 147, Aug. 2022, Art. no. 105800.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] Rayed, E., Islam, S.M., Niha, S., Jim, J., Kabir, M., &amp; Mridha, M.F. (2024). Deep learning for medical image segmentation: State-of-the-art advancements and challenges. Informatics in Medicine Unlocked, 47, 101504. doi: 10.1016/j.imu.2024.101504.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] Rizwan-i-Haque, I., &amp; Neubert, J. (2020). Deep learning approaches to biomedical image segmentation. Informatics in Medicine Unlocked, 18, 100297. doi: 10.1016/j.imu.2020.100297.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] Zgurovsky, M., Sineglazov, V., &amp; Chumachenko, E. (2021). Formation of Hybrid Artificial Neural Networks Topologies. In: Artificial Intelligence Systems Based on Hybrid Neural Networks. Studies in Computational Intelligence, vol 904. Springer, Cham. https://doi.org/10.1007/978-3-030-48453-8_3</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] Luo, X., Chen, J., Song, T., &amp; Wang, G. (2021). Semi-supervised Medical Image Segmentation through Dual-task Consistency. Proceedings of the AAAI Conference on Artificial Intelligence, 35, 8801-8809. doi: 10.1609/aaai.v35i10.17066.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] Lee, M.C.H., Petersen, K., Pawlowski, N., Glocker, B., &amp; Schaap, M. "TeTrIS: Template Transformer Networks for Image Segmentation With Shape Priors," in IEEE Transactions on Medical Imaging, vol. 38, no. 11, pp. 2596-2606, Nov. 2019, doi: 10.1109/TMI.2019.2905990.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] Zheng, H., et al. (2019). Semi-supervised Segmentation of Liver Using Adversarial Learning with Deep Atlas Prior. In: Shen, D., et al. Medical Image Computing and Computer Assisted Intervention - MICCAI 2019. Lecture Notes in Computer Science, vol 11769. Springer, Cham. https://doi.org/10.1007/978-3-030-32226-7_17</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>