=Paper=
{{Paper
|id=Vol-3861/paper11
|storemode=property
|title=Semi-supervised learning for medical images segmentation using 4D atlas priors
|pdfUrl=https://ceur-ws.org/Vol-3861/paper11.pdf
|volume=Vol-3861
|authors=Volodymyr Lytvynenko,Victor Sineglazov,Kirill Riazanovskiy,Olena Chumachenko
|dblpUrl=https://dblp.org/rec/conf/ciaw/LytvynenkoSRC24
}}
==Semi-supervised learning for medical images segmentation using 4D atlas priors==
Volodymyr Lytvynenko1,†, Victor Sineglazov2,∗,†, Kirill Riazanovskiy3,† and Olena Chumachenko3,†
1 Kherson National Technical University, 24, Beryslavske Shose, Kherson, 73008, Ukraine
2 National Aviation University, 1, Liubomyra Huzara ave., Kyiv, 03058, Ukraine
3 National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, 37, Prospect Beresteiskyi (former Peremohy), Kyiv, 03056, Ukraine
Abstract
This work is focused on the intelligent processing of MRI brain images to detect malignant tumors, which
in comparison with tumors of other organs have their own specificity, making their correct segmentation
and classification difficult. Due to the time-consuming nature of labeling the training sample, a semi-
supervised learning method based on 4D atlas priors was developed in this paper to solve the segmentation
problem for the efficient use of unlabeled images. In the proposed method, a probabilistic 4D atlas is
constructed based on the coordinates and voxel intensities of the labeled segments, and a generalization of
this atlas was made based on Gaussian mixture models and consideration of tumor contrast values. Three
uses of this atlas were proposed in the form of two loss functions and pseudomask validation. The
performance of the method was tested on a real MRI dataset of brain tumors of T1 modality axial view. The
results showed a positive increase in segmentation accuracy compared to existing methods.
Keywords
Semi-supervised learning, brain tumor, segmentation, atlas prior, loss function, Gaussian mixture model
1. Introduction
According to recent studies [1, 2], brain cancer remains a significant global health concern,
accounting for 1.9% of all cancer cases and 2.5% of cancer-related deaths worldwide. In 2019, 347,992
new cases were reported, with higher incidence rates in males (54%) compared to females (46%). The
highest age-standardized incidence rates were observed in Europe, while Africa reported the lowest.
Notably, Denmark had the highest incidence rate at 17.1 per 100,000 people. The mortality rate also
varied significantly, with Palestine reporting the highest at 7.2 per 100,000 people. Trends from 1990
to 2019 indicate a significant increase in incidence globally, highlighting the need for enhanced
research and preventive strategies.
Glioblastoma, the most prevalent malignant brain tumor, comprises approximately 49% of all
malignant cases. Despite therapeutic advancements, the prognosis for glioblastoma remains poor,
with a five-year relative survival rate increasing only slightly from 4% in the mid-1970s to 7% in
recent years [2].
In the era of precision medicine, early diagnosis and accurate follow-up are essential for better
patient care. In this case, magnetic resonance imaging (MRI) contributes significantly to diagnosis
and plays a key role in therapy planning as well as in the assessment of response to treatment and/or
relapse.
CIAW-2024: Computational Intelligence Application Workshop, October 10-12, 2024, Lviv, Ukraine
∗ Corresponding author.
† These authors contributed equally.
lytvynenko.volodymyr@kntu.net.ua (V. Lytvynenko); svm@nau.edu.ua (V. Sineglazov); k.riazanovskyi@kpi.ua (K. Riazanovskiy); eliranvik@gmail.com (O. Chumachenko)
0000-0002-1536-5542 (V. Lytvynenko); 0000-0002-3297-9060 (V. Sineglazov); 0000-0002-8771-8060 (K. Riazanovskiy); 0000-0003-3006-7460 (O. Chumachenko)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
The main role of conventional/morphologic MRI in making the diagnosis is to determine the size
and anatomic location of the lesion in the brain for treatment or biopsy planning, to assess mass
effect and edema in surrounding healthy brain tissue, to assess the relationship to the ventricular
system of the brain and vascular structures, and finally, along with other "functional" MRI sequences,
to suggest a possible diagnosis [3].
The primary task of MRI brain image processing is the segmentation task, which is efficiently
solved by deep neural networks. The main problem in training such a network, whether it is a
convolutional neural network or a vision transformer, is the availability of a sufficiently labeled
training sample. Unfortunately, in real-world situations, samples are limited, with only a small
number of labeled scans, since high-quality labeling requires a qualified medical radiologist and a
large amount of their time, which is not always available.
To address the problem of an insufficient sample and limited resources for labeling and training the
model, there are different approaches; the most popular ones are:
• transfer learning: transferring knowledge from a more general, known dataset to a specific limited dataset;
• active learning: methods for identifying the most relevant data for manual labeling by a specialist;
• semi-supervised learning: model training that uses unlabeled data alongside labeled data.
Each of the approaches deserves individual attention, but this paper proposes the use and
improvement of semi-supervised learning as one of the most promising approaches using unlabeled
sampling.
The use of semi-supervised learning in medical image segmentation has several advantages. First,
it significantly reduces the need for large amounts of labeled data, which is time consuming and
expensive to create. In the medical domain, where labeling requires the expertise of specialists, this
is particularly important.
In addition, SSL can improve model quality by using information from unlabeled data without
requiring additional datasets or labeling costs. This helps the model to better generalize and more
accurately segment medical images, even with a limited amount of labeled data.
Another aspect is the ability to utilize a variety of data. SSL methods can efficiently handle
heterogeneous data, which increases their flexibility and applicability in various medical
applications.
Ultimately, such methods help accelerate the development and implementation of more accurate
and reliable segmentation models in medical systems, which can lead to improved patient diagnosis
and treatment.
One type of SSL is knowledge priors (KP) based learning [4].
Prior knowledge is information that the learner already has before learning new information, and
sometimes it helps to cope with new tasks. Compared with non-medical images, medical images have
many anatomical priors such as shape and position of organs, and incorporating anatomical prior
knowledge into deep learning can improve the performance of medical image segmentation.
In this paper, a new SSL method has been developed that uses KP in the form of a 4D anomaly atlas
and its generalization via Gaussian mixture models (GMM) to capture potential anomalous regions
more broadly. The novelty lies in using not only the organ position but also the organ contrast, as
well as a probabilistic generalization that avoids being limited to the anomalies captured in the
labeled sample.
Testing of the approach was performed on a dataset of brain tumors.
2. Related work
In recent years, there has been a significant amount of research in the field of medical segmentation
with deep neural networks [5-8] and semi-supervised medical image segmentation [9-13]. This
review focuses on the integration of shape priors, atlas priors, and semi-supervised learning
approaches in various methods, highlighting their strengths and weaknesses.
In the paper [9] the authors propose a dual-task framework to improve segmentation
performance by leveraging both labeled and unlabeled data. The framework consists of two tasks: a
pixel-wise segmentation task and a geometry-aware level set representation task. The dual-task
consistency regularization ensures that the predictions from both tasks are consistent, thereby
enhancing the model's ability to utilize unlabeled data. This method's strength lies in its ability to
incorporate geometric constraints, which helps in achieving more accurate and robust segmentation
results. However, the increased model complexity and the need for careful balance between the two
tasks can pose challenges in implementation and training.
The [10] paper introduces a novel approach using transformer networks that leverage shape
priors through the use of template-based deformable models. This method uses a pre-defined
template that captures the general shape of the target organ and deforms it to match the specific
instance in the input image. The strength of this approach is its ability to maintain global shape
consistency while allowing local variations, which is particularly useful for anatomical structures
with high variability. However, the reliance on large amounts of training data and the computational
demands of transformer networks can be limiting factors.
In [11] the authors utilize an atlas prior within a GAN framework to enhance liver segmentation.
The atlas prior provides a strong anatomical reference, guiding the segmentation process and
ensuring anatomical consistency. This approach effectively combines labeled and unlabeled data,
improving segmentation accuracy even with limited labeled data. The primary strength of this
method is its ability to incorporate detailed anatomical knowledge, which is crucial for accurately
segmenting complex organs like the liver. However, the adversarial training process can be unstable
and requires careful tuning of hyperparameters.
The paper [12] presents a method that integrates dense networks with deep anatomical priors
and region adaptation techniques. This approach is particularly effective for fine segmentation tasks,
such as renal artery segmentation, where precise anatomical details are critical. The use of dense
networks allows for efficient capture of both local and global features, while the deep anatomical
priors ensure anatomical plausibility. However, the model's complexity and the need for extensive
labeled data for initial training can be significant drawbacks.
The work on [13] introduces a unique approach to incorporating shape priors through topological
constraints. By using persistent homology, the method enforces topological consistency in the
segmentation results, which is particularly beneficial for capturing complex anatomical structures.
The primary advantage of this approach is its ability to ensure topological correctness in the
segmentation output, reducing the likelihood of anatomical errors. However, the computational
complexity of calculating persistent homology and the reliance on high-quality topological priors
can be limiting factors.
A common limitation across the reviewed papers is their focus on specific aspects of the
segmentation problem without simultaneously addressing the proposed generalized localization of
anomalies and their pixel values. While each method brings innovative solutions to one or two
aspects of the segmentation task, none of them comprehensively integrate all of the critical factors.
Moreover, overly strict atlas or shape regularization can sometimes degrade the generalizability of
the model on new data.
3. Dataset overview
A private MRI dataset of brain tumors was used for experimentation and validation of the proposed
method (Fig. 1). The T1 modality and axial view were used. The dataset consists of 34
patients and 1144 images. The labeling was performed manually by medical professionals with more
than 10 years of experience. An example of binary labeling of tumors by a specialist is shown in Fig.
2.
Figure 1: The brain MRI dataset used
Figure 2: Manually segmented tumors on MRI images (represented in orange)
4. Proposed method
4.1 Formal problem definition
For the labeled sample $S_L: (X_1, Y_1), \dots, (X_{N_L}, Y_{N_L})$, where $X_l \in \mathbb{R}^{H \times W \times D}$ is the MRI scan tensor and $Y_l \in \{0, 1\}^{H \times W \times D}$ is a binary mask of the same size as the original tensor (element 0 means the absence, and 1 the presence, of an anomaly in the given voxel of the scan), and the unlabeled sample $S_U: (X_{N_L+1}, \dots, X_{N_L+N_U})$, create a classifier $g(X)$ that correctly predicts the binary mask $Y \in \{0, 1\}^{H \times W \times D}$ of a new scan tensor $X \in \mathbb{R}^{H \times W \times D}$ utilizing both samples $S_L$ and $S_U$.
4.2 Proposed solution
Based on the works [9-13] about atlas and shape priors, we propose an improvement of this approach
in the form of generalizing pixel distributions and adding more dimensions to the atlas.
4.2.1 4D atlas priors
To account for anatomical structures and the natural location of anomalies, as well as their color, we
propose to create 4D atlas priors based on labeled data segments. The first three dimensions of the
atlas are the coordinates of the anomaly location in some section of the scan in 3D. The fourth
dimension is the observed color or contrast of anomaly pixels.
To generalize the atlas and remove the limitation of the input data, we propose modeling the pixel
distributions of a given atlas through GMMs that would completely cover the atlas with centers at
the most frequent locations (Figs. 3b and 5). The use of modeling through GMM will help the models
generalize better to new data without restricting them to the existing tumor atlas, as GMM assigns
the probability of tumor presence to a wider range of pixels, unlike a conventional atlas.
Let us consider a method of representing segment voxels as GMMs. For this purpose, let's set the
set of quadruples of all segment voxels from the dataset 𝑆 :
$$G = \left\{ \left( i_p^{(l)},\, j_p^{(l)},\, k_p^{(l)},\, x^{(l)}_{i_p^{(l)} j_p^{(l)} k_p^{(l)}} \right) \,\middle|\, l = 1, 2, \dots, N_L \text{ and } p = 1, 2, \dots, H \times W \times D \right\}, \quad (1)$$

where $i_p^{(l)}$ is the row number of the $p$th segment voxel on 3D scan $l$; $j_p^{(l)}$ is the column number of the $p$th segment voxel on 3D scan $l$; $k_p^{(l)}$ is the number of the 2D image on 3D scan $l$; $x^{(l)}_{i_p^{(l)} j_p^{(l)} k_p^{(l)}}$ is the voxel intensity value located in the $i_p^{(l)}$th row, $j_p^{(l)}$th column and $k_p^{(l)}$th 2D image of the $l$th 3D scan; $N_L$ is the number of 3D scans in the labeled sample $S_L$; $H, W, D$ are the 3D scan dimensions.
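A minimal sketch of assembling the quadruple set $G$ (Eq. 1) from labeled scans; the array names and shapes are illustrative, not from the paper:

```python
import numpy as np

def build_quadruples(scans, masks):
    """Collect (i, j, k, intensity) quadruples for every anomaly voxel
    across all labeled 3D scans (Eq. 1)."""
    quads = []
    for x, y in zip(scans, masks):        # x: H x W x D intensities, y: H x W x D binary mask
        i, j, k = np.nonzero(y)           # coordinates of segment voxels
        quads.append(np.stack([i, j, k, x[i, j, k]], axis=1))
    return np.concatenate(quads, axis=0)  # shape: (num_segment_voxels, 4)

# Toy example: one 4 x 4 x 2 scan with a two-voxel "tumor"
x = np.random.rand(4, 4, 2)
y = np.zeros((4, 4, 2), dtype=int)
y[1, 2, 0] = 1
y[2, 2, 1] = 1
G = build_quadruples([x], [y])  # G.shape == (2, 4)
```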
An example of segments is shown in Fig. 2. By aggregating segments from all images, their 2D
histogram can be constructed. An example of 2D histogram of the location of segments (tumors in
the brain) by image space is presented in Fig. 3a. It is worth noting that the pixel intensities were not
used for visualization, but only their 2D locations.
This distribution of location and intensity of all voxels in the dataset is modeled as follows via
GMM:
$$G \sim \mathcal{G}(G) \triangleq P_{\mathcal{G}}(\mathbf{g}) = \sum_{i=1}^{N_G} w_i\, \mathcal{N}(\mathbf{g} \mid \boldsymbol{\mu}_i, \Sigma_i), \quad (2)$$

$$\mathcal{N}(\mathbf{g} \mid \boldsymbol{\mu}_i, \Sigma_i) = \frac{1}{(2\pi)^{2} |\Sigma_i|^{1/2}} \exp\left( -\frac{1}{2} (\mathbf{g} - \boldsymbol{\mu}_i)^{\top} \Sigma_i^{-1} (\mathbf{g} - \boldsymbol{\mu}_i) \right),$$

$$\sum_{i=1}^{N_G} w_i = 1,$$

where $\mathbf{g}$ is a vector-quadruple from the dataset $G$; $\boldsymbol{\mu}_i$ is the mean vector of the $i$th normal distribution in the GMM; $\Sigma_i$ is the covariance matrix of the $i$th normal distribution of the model; $w_i$ is the weight of the $i$th distribution in the model; $N_G$ is the total number of Gaussian distributions in the model.
The parameters of GMM models can be found using the Expectation-Maximization algorithm
[14].
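In practice, the EM fitting of the mixture in Eq. 2 can be delegated to scikit-learn's GaussianMixture; a minimal sketch on synthetic quadruples (the number of mixture components is a hyperparameter, chosen here arbitrarily):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for the quadruple set G: rows of (i, j, k, intensity)
G = np.column_stack([
    rng.normal(64, 5, 500),    # row coordinate
    rng.normal(80, 5, 500),    # column coordinate
    rng.normal(10, 2, 500),    # slice index
    rng.normal(0.7, 0.1, 500)  # voxel intensity
])

# Fit the GMM with EM (full covariance, as in Eq. 2)
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(G)

# score_samples returns log P_G(g); its negation is the NLL used later in Eq. 4
nll = -gmm.score_samples(G)
```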
An example of GMM representation of the location of aggregated 2D segments from Fig. 3a is
shown in Fig. 3b. In 3D, the tumor atlas may look as in Fig. 4, with its modeling via GMM shown in
Fig. 5 (no voxel intensity, only location).
Figure 3: Example of (a) 2D atlas and (b) corresponding GMM model
Figure 4: Example of a 3D atlas for brain tumors
Figure 5: GMM 3D atlas example
An additional atlas for voxel estimation will be an averaged 3D atlas of the intensities of all regions.
Fig. 6 shows the 2D analog. Its construction requires extraction of segments and their intensities and
further averaging:
$$S_l = X_l \odot Y_l, \qquad \bar{S} = \frac{\sum_{l=1}^{N_L} S_l}{N_L}, \quad (3)$$

where $\odot$ is the element-wise multiplication operation; $X_l$ is the 3D tensor of the MRI image scan of size $H \times W \times D$ (height, width, and depth, respectively); $Y_l$ is the 3D tensor of the binary mask of size $H \times W \times D$ with elements taking values 0 or 1; $N_L$ is the number of 3D scans in the sample.
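The averaged intensity atlas of Eq. 3 can be computed directly; a minimal sketch with illustrative shapes:

```python
import numpy as np

def intensity_atlas(scans, masks):
    """Average the masked intensities over all labeled scans (Eq. 3)."""
    scans = np.asarray(scans, dtype=float)  # (N, H, W, D)
    masks = np.asarray(masks, dtype=float)  # (N, H, W, D), values in {0, 1}
    segments = scans * masks                # element-wise product X ⊙ Y per scan
    return segments.mean(axis=0)            # averaged atlas, shape (H, W, D)

# Toy example: three identical scans with a one-voxel segment each
scans = np.ones((3, 4, 4, 2))
masks = np.zeros((3, 4, 4, 2))
masks[:, 1, 1, 0] = 1.0
atlas = intensity_atlas(scans, masks)
print(atlas[1, 1, 0])  # 1.0 (all scans contribute intensity 1 at this voxel)
```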
Figure 6: Averaged 2D values of tumor pixel intensities (2D pixel intensity atlas)
Three SSL training options based on the proposed 4D atlas are proposed:
1. A new loss function that maximizes the likelihood of the location and voxel intensities of unlabeled-image pseudomasks against the atlas values in the region of the resulting pseudomask.
2. A loss function based on option 1, but with the likelihood maximized over location only, while MSE against the intensity atlas $\bar{S}$ is used for the intensities.
3. Validation of pseudomasks for atlas-based expansion of the training sample.
4.2.2 4D GMM atlas loss
We propose the negative log-likelihood (NLL) of the data under the GMM model as a loss function
based on the 4D GMM atlas (Eq. 2). The NLL for a single quadruple of a voxel is:
$$NLL(\mathbf{g}) = -\log \sum_{i=1}^{N_G} w_i\, \mathcal{N}(\mathbf{g} \mid \boldsymbol{\mu}_i, \Sigma_i). \quad (4)$$

For a dataset with $N$ samples, the total loss is the sum of the NLL over all data points (voxels):

$$L_{atlas} = \sum_{n=1}^{N} NLL(\mathbf{g}_n). \quad (5)$$
Then, to calculate the first option of the loss function, the following steps are performed:
1. Pseudomask calculation $\hat{Y} \in \{0, 1\}^{H \times W \times D}$.
2. Creation of quadruples by Eq. 1.
3. Calculation of the $NLL$ for each obtained quadruple (Eq. 4).
4. Summation of the $NLL$ of all quadruples (Eq. 5).
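The steps above can be sketched end-to-end; `gmm` stands for a mixture fitted as in Eq. 2, and the pseudomask is assumed to come from thresholding the segmenter's output (all names are illustrative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def atlas_nll_loss(x, y_pseudo, gmm):
    """Sum of GMM negative log-likelihoods over pseudomask voxels (Eqs. 4-5).
    Step 1 (pseudomask calculation) is assumed done by the segmenter."""
    i, j, k = np.nonzero(y_pseudo)                 # step 2: quadruples from the pseudomask
    if i.size == 0:
        return 0.0
    quads = np.stack([i, j, k, x[i, j, k]], axis=1)
    return float(-gmm.score_samples(quads).sum())  # steps 3-4: per-voxel NLL, summed

# Toy fit and evaluation on synthetic data
rng = np.random.default_rng(1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(rng.random((100, 4)))
x = rng.random((4, 4, 2))
y = np.zeros((4, 4, 2), dtype=int)
y[0, 0, 0] = 1
loss = atlas_nll_loss(x, y, gmm)  # scalar atlas loss for this scan
```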
Then the full proposed loss function takes the form of Eq. 6, provided that the combined Dice + binary cross-entropy (BCE) loss is used for labeled data:

$$L = \begin{cases} L_{Dice} + L_{BCE}, & \text{for labeled data} \\ L_{atlas}, & \text{for unlabeled data} \end{cases} \quad (6)$$
4.2.3 3D GMM atlas loss and atlas voxel intensity loss
As the loss function of the second option, MSE is used for the voxel intensities:

$$\bar{S}^{(\hat{Y})} = \bar{S} \odot \hat{Y}, \qquad L_{int}(\hat{Y}) = MSE\left( S_{pseudo} - \bar{S}^{(\hat{Y})} \right), \quad (7)$$

where $\odot$ is the element-wise multiplication operation; $S_{pseudo} \in \mathbb{R}^{H \times W \times D}$ is a generated pseudo-segment (Eq. 3) from a pseudomask $\hat{Y} \in \{0, 1\}^{H \times W \times D}$; $\bar{S} \in \mathbb{R}^{H \times W \times D}$ is the tensor of the voxel intensity atlas; $\bar{S}^{(\hat{Y})} \in \mathbb{R}^{H \times W \times D}$ are the voxel intensity atlas values only in the voxels where the pseudomask $\hat{Y}$ equals 1.
To calculate the second option of the loss function, the following steps are performed:
1. Pseudomask calculation $\hat{Y} \in \{0, 1\}^{H \times W \times D}$.
2. Creation of triplets according to Eq. 1, but without the pixel intensity.
3. Calculation of the $NLL$ for each resulting triplet (Eq. 4).
4. Summation of the $NLL$ of all triplets (Eq. 5).
5. Pseudomask segment selection (Eq. 3).
6. MSE calculation for the intensities (Eq. 7).
Then the full proposed loss function takes the form:

$$L = \begin{cases} \alpha L_{Dice} + \beta L_{BCE}, & \text{for labeled data} \\ \gamma L_{atlas} + \sigma L_{int}, & \text{for unlabeled data} \end{cases}$$

where $\alpha, \beta, \gamma, \sigma$ are the weights of the corresponding components of the loss function; they can be specified in advance.
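The piecewise weighted loss above can be expressed as a small dispatch over labeled and unlabeled batches; the weight values and dictionary keys here are illustrative placeholders, not from the paper:

```python
def combined_loss(batch, alpha=1.0, beta=1.0, gamma=0.1, sigma=0.1):
    """Weighted SSL loss: supervised Dice + BCE on labeled data,
    3D GMM atlas NLL + intensity MSE on unlabeled data."""
    if batch["labeled"]:
        return alpha * batch["dice_loss"] + beta * batch["bce_loss"]
    return gamma * batch["atlas_nll"] + sigma * batch["intensity_mse"]

# Illustrative per-batch loss values
labeled = {"labeled": True, "dice_loss": 0.2, "bce_loss": 0.4}
unlabeled = {"labeled": False, "atlas_nll": 3.0, "intensity_mse": 0.05}
supervised_term = combined_loss(labeled)      # alpha*0.2 + beta*0.4
unsupervised_term = combined_loss(unlabeled)  # gamma*3.0 + sigma*0.05
```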
Pseudomask validation based on the 4D atlas involves adding to the training sample all pseudomasks whose voxels are preprocessed by Eq. 8, followed by iterative retraining of the model:

$$\hat{Y}_i = \begin{cases} 1, & P_{\mathcal{G}}(\mathbf{g}_i) \cdot P_{model,i} \ge T \\ 0, & P_{\mathcal{G}}(\mathbf{g}_i) \cdot P_{model,i} < T \end{cases} \quad (8)$$

where $\hat{Y}_i$ is a future voxel of the validated pseudomask $\hat{Y}$; $\mathbf{g}_i$ is the quadruple of voxel $i$; $P_{\mathcal{G}}(\mathbf{g}_i)$ is its probability density from the GMM atlas (Eq. 2); $P_{model,i}$ is the probability of an anomaly in the given voxel, computed by the segmenter model; $T$ is the threshold for selecting voxels into the pseudomask.
In this case, training uses only the Dice and BCE losses.
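The voxel-wise validation of Eq. 8 multiplies the atlas density by the segmenter's per-voxel probability and thresholds the product; a sketch under the assumption of a fitted GMM as in Eq. 2, with an illustrative threshold value:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def validate_pseudomask(x, p_model, gmm, threshold=1e-3):
    """Keep a voxel in the pseudomask only if GMM density * model
    probability reaches the threshold (Eq. 8)."""
    h, w, d = x.shape
    i, j, k = np.indices((h, w, d)).reshape(3, -1)       # all voxel coordinates
    quads = np.stack([i, j, k, x.ravel()], axis=1)       # quadruples g_i
    density = np.exp(gmm.score_samples(quads)).reshape(h, w, d)  # P_G(g_i)
    return (density * p_model >= threshold).astype(np.uint8)

# Toy example with synthetic data
rng = np.random.default_rng(2)
gmm = GaussianMixture(n_components=2, random_state=0).fit(rng.random((200, 4)))
x = rng.random((4, 4, 2))
p_model = rng.random((4, 4, 2))  # segmenter's per-voxel anomaly probability
y_valid = validate_pseudomask(x, p_model, gmm)
```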
5. Experiments and results
The DeepLabV3+ neural network was used as the base segmenter. Training was performed with
different configurations of the proposed loss functions. The dataset was split by patient into
training, validation, and test samples at 70/10/20 percent, respectively (23/4/7 patients). The Adam
optimizer and the proposed loss functions were used.
Training the network on the full dataset by Dice Loss gave a baseline IoU metric of 0.93 for the
test sample.
The results of semi-supervised learning of the network by the proposed methods on the test
sample are presented in Table 1.
Table 1
Resulting IoU of the proposed approach on the test sample

Method / Percentage of data used        5%     10%    30%    50%
4D GMM atlas loss                       0.66   0.76   0.88   0.90
3D GMM loss + voxel intensity loss      0.68   0.79   0.90   0.92
Pseudomask validation                   0.62   0.75   0.87   0.89
Method [10]                             0.55   0.68   0.80   0.82
Method [13]                             0.54   0.66   0.79   0.81
The results showed that the proposed atlas-based SSL methods achieved IoU values close to that of
training on the full dataset (0.93). The best-performing method was the 3D GMM loss + voxel
intensity loss, which reached an IoU of 0.90 with only 30% of the data.
Comparison with the existing methods [10, 13] showed the superior segmentation accuracy of the
proposed methods, with an overall gain of about 0.1 IoU, which confirms the validity of the
developed method.
6. Conclusion
In this paper, we have introduced an advanced semi-supervised learning framework for brain tumor
segmentation that leverages 4D atlas priors to utilize both labeled and unlabeled data effectively. Our
approach constructs a probabilistic 4D atlas based on labeled segments, generalizes this atlas using
GMM, and incorporates additional dimensions to account for contrast variations within the
anomalies.
The proposed method addresses the common limitations found in existing research, which often
focus on either shape priors, atlas priors, or semi-supervised learning in isolation. By integrating
generalized localization of anomalies, their contrast, and precise boundary delineation into a single
framework, we achieve a more comprehensive and robust segmentation solution.
Our experiments on a real MRI dataset of brain tumors demonstrate that the proposed method
significantly improves segmentation accuracy compared to existing methods. The inclusion of 4D
atlas priors enhances the model's ability to generalize across different types of anomalies, ensuring
both anatomical plausibility and precise boundary detection.
Future work will explore further enhancements to the atlas generation process and the
integration of additional modules into the neural network to extend the applicability of the proposed
method to other medical imaging scenarios. This research contributes to the field of medical image
analysis by providing a more effective and generalized approach to semi-supervised segmentation,
paving the way for improved diagnostic and treatment planning tools in clinical practice.
References
[1] Ilic I, Ilic M. International patterns and trends in the brain cancer incidence and mortality: An
observational study based on the global burden of disease. Heliyon. 2023 Jul 13;9(7):e18222. doi:
10.1016/j.heliyon.2023.e18222. PMID: 37519769; PMCID: PMC10372320.
[2] Miller KD, Ostrom QT, Kruchko C, Patil N, Tihan T, Cioffi G, Fuchs HE, Waite KA, Jemal A,
Siegel RL, Barnholtz-Sloan JS. Brain and other central nervous system tumor statistics, 2021. CA
Cancer J Clin. 2021. https://doi.org/10.3322/caac.21693
[3] Villanueva-Meyer JE, Mabray MC, Cha S. Current Clinical Brain Tumor Imaging. Neurosurgery.
2017 Sep 1;81(3):397-415. doi: 10.1093/neuros/nyx103. PMID: 28486641; PMCID: PMC5581219.
[4] Rushi Jiao, Yichi Zhang, Le Ding, Bingsen Xue, Jicong Zhang, Rong Cai, and Cheng Jin. 2024.
Learning with limited annotations: A survey on deep semi-supervised learning for medical
image segmentation. Comput. Biol. Med. 169, C (Feb 2024).
https://doi.org/10.1016/j.compbiomed.2023.107840
[5] V. Sineglazov, K. Riazanovskiy, A. Klanovets, E. Chumachenko, and N. Linnik, ‘‘Intelligent
tuberculosis activity assessment system based on an ensemble of neural networks,’’ Comput.
Biol. Med., vol. 147, Aug. 2022, Art. no. 105800.
[6] Rayed, Eshmam & Islam, S M. & Niha, Sadia & Jim, Jamin & Kabir, Md & Mridha, M.F.. (2024).
Deep learning for medical image segmentation: State-of-the-art advancements and challenges.
Informatics in Medicine Unlocked. 47. 101504. 10.1016/j.imu.2024.101504.
[7] Rizwan-i-Haque, Intisar & Neubert, Jeremiah. (2020). Deep learning approaches to biomedical
image segmentation. Informatics in Medicine Unlocked. 18. 100297. 10.1016/j.imu.2020.100297.
[8] Zgurovsky, M., Sineglazov, V., Chumachenko, E. (2021). Formation of Hybrid Artificial Neural
Networks Topologies. In: Artificial Intelligence Systems Based on Hybrid Neural Networks.
Studies in Computational Intelligence, vol 904. Springer, Cham. https://doi.org/10.1007/978-3-
030-48453-8_3
[9] Luo, Xiangde & Chen, Jieneng & Song, Tao & Wang, Guotai. (2021). Semi-supervised Medical
Image Segmentation through Dual-task Consistency. Proceedings of the AAAI Conference on
Artificial Intelligence. 35. 8801-8809. 10.1609/aaai.v35i10.17066.
[10] M. C. H. Lee, K. Petersen, N. Pawlowski, B. Glocker and M. Schaap, "TeTrIS: Template
Transformer Networks for Image Segmentation With Shape Priors," in IEEE Transactions on
Medical Imaging, vol. 38, no. 11, pp. 2596-2606, Nov. 2019, doi: 10.1109/TMI.2019.2905990.
[11] Zheng, H. et al. (2019). Semi-supervised Segmentation of Liver Using Adversarial Learning with
Deep Atlas Prior. In: Shen, D., et al. Medical Image Computing and Computer Assisted
Intervention – MICCAI 2019. MICCAI 2019. Lecture Notes in Computer Science(), vol 11769.
Springer, Cham. https://doi.org/10.1007/978-3-030-32226-7_17
[12] He, Yuting & Yang, Guanyu & Yang, Jian & Chen, Yang & Kong, Youyong & Wu, Jiasong &
Tang, Lijun & Zhu, Xiaomei & Dillenseger, Jean-Louis & Shao, Pengfei & Zhang, Shaobo & Shu,
Huazhong & Coatrieux, Jean-Louis & Li, Shuo. (2020). Dense Biased Networks with Deep Priori
Anatomy and Hard Region Adaptation: Semi-supervised Learning for Fine Renal Artery
Segmentation. Medical image analysis. 63. 10.1016/j.media.2020.101722.
[13] Clough, J.R., Byrne, N., Oksuz, I., Zimmer, V.A., Schnabel, J.A., & King, A.P. (2019). A Topological
Loss Function for Deep-Learning Based Image Segmentation Using Persistent Homology. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 44, 8766-8778.
[14] Reynolds, D. (2009). Gaussian Mixture Models. In: Li, S.Z., Jain, A. (eds) Encyclopedia of
Biometrics. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-73003-5_196.