<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main"></title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Volodymyr</forename><surname>Lytvynenko</surname></persName>
							<email>lytvynenko.volodymyr@kntu.net.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kherson National Technical University</orgName>
								<address>
									<addrLine>24, Beryslavske Shose</addrLine>
									<postCode>73008</postCode>
									<settlement>Kherson</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Victor</forename><surname>Sineglazov</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">National Aviation University</orgName>
								<address>
									<addrLine>1, Liubomyra Huzara ave</addrLine>
									<postCode>03058</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Kirill</forename><surname>Riazanovskiy</surname></persName>
							<email>k.riazanovskyi@kpi.ua</email>
							<affiliation key="aff2">
								<orgName type="institution">National Technical University of Ukraine &quot;Igor Sikorsky Kyiv Polytechnic Institute&quot;</orgName>
								<address>
									<addrLine>37, Prospect Beresteiskyi (former Peremohy)</addrLine>
									<postCode>03056</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Olena</forename><surname>Chumachenko</surname></persName>
							<affiliation key="aff2">
								<orgName type="institution">National Technical University of Ukraine &quot;Igor Sikorsky Kyiv Polytechnic Institute&quot;</orgName>
								<address>
									<addrLine>37, Prospect Beresteiskyi (former Peremohy)</addrLine>
									<postCode>03056</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">FB1EACAF155D9A75364FE8CEB1B9D1E6</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:54+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Semi-supervised learning</term>
					<term>brain tumor</term>
					<term>segmentation</term>
					<term>atlas prior</term>
					<term>loss function</term>
					<term>Gaussian mixture model</term>
					<term>0000-0002-1536-5542 (V. Lytvynenko)</term>
					<term>0000-0002-3297-9060 (V. Sineglazov)</term>
					<term>0000-0002-8771-8060 (K. Riazanovskiy)</term>
					<term>0000-0003-3006-7460 (O. Chumachenko)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This work focuses on the intelligent processing of MRI brain images to detect malignant tumors, which, in comparison with tumors of other organs, have their own specificity that makes correct segmentation and classification difficult. Because labeling a training sample is time-consuming, a semi-supervised learning method based on 4D atlas priors was developed in this paper to solve the segmentation problem while making efficient use of unlabeled images. In the proposed method, a probabilistic 4D atlas is constructed from the coordinates and voxel intensities of the labeled segments, and this atlas is generalized using Gaussian mixture models that also account for tumor contrast values. Three uses of this atlas were proposed, in the form of two loss functions and pseudomask validation. The performance of the method was tested on a real MRI dataset of brain tumors (T1 modality, axial view). The results showed an increase in segmentation accuracy compared to existing methods.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>According to recent studies <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>, brain cancer remains a significant global health concern, accounting for 1.9% of all cancer cases and 2.5% of cancer-related deaths worldwide. In 2019, 347,992 new cases were reported, with higher incidence rates in males (54%) compared to females (46%). The highest age-standardized incidence rates were observed in Europe, while Africa reported the lowest. Notably, Denmark had the highest incidence rate at 17.1 per 100,000 people. The mortality rate also varied significantly, with Palestine reporting the highest at 7.2 per 100,000 people. Trends from 1990 to 2019 indicate a significant increase in incidence globally, highlighting the need for enhanced research and preventive strategies.</p><p>Glioblastoma, the most prevalent malignant brain tumor, comprises approximately 49% of all malignant cases. Despite therapeutic advancements, the prognosis for glioblastoma remains poor, with a five-year relative survival rate increasing only slightly from 4% in the mid-1970s to 7% in recent years <ref type="bibr" target="#b1">[2]</ref>.</p><p>In the era of precision medicine, early diagnosis and accurate follow-up are essential for better patient care. 
In this case, magnetic resonance imaging (MRI) contributes significantly to diagnosis and plays a key role in therapy planning as well as in the assessment of response to treatment and/or relapse.</p><p>The main role of conventional/morphologic MRI in making the diagnosis is to determine the size and anatomic location of the lesion in the brain for treatment or biopsy planning, to assess mass effect and edema in surrounding healthy brain tissue, to assess the relationship to the ventricular system of the brain and to vascular structures, and finally, along with other "functional" MRI sequences, to suggest a possible diagnosis <ref type="bibr" target="#b2">[3]</ref>.</p><p>The primary task of MRI brain image processing is segmentation, which is efficiently solved by deep neural networks. The main problem in training such a network, whether a convolutional neural network or a vision transformer, is the availability of a sufficiently large labeled training sample. Unfortunately, in real-world situations samples are limited and contain few labeled scans, since high-quality labeling requires a qualified medical radiologist and a large amount of their time, which is not always available.</p><p>To address the problem of insufficient data and limited resources for labeling and training the model, there are different approaches; the most popular are:</p><p>• transfer learning: transferring knowledge from a more general, well-known dataset to a specific limited dataset;</p><p>• active learning: methods for identifying the most relevant data for manual labeling by a specialist;</p><p>• semi-supervised learning: training the model using unlabeled data alongside labeled data.</p><p>Each of these approaches deserves individual attention, but this paper proposes the use and improvement of semi-supervised learning as one of the most promising approaches that exploits unlabeled data.</p><p>The use of semi-supervised learning in medical image segmentation has several 
advantages. First, it significantly reduces the need for large amounts of labeled data, which are time-consuming and expensive to create. In the medical domain, where labeling requires the expertise of specialists, this is particularly important.</p><p>In addition, SSL can improve model quality by using information from unlabeled data without requiring additional datasets or labeling costs. This helps the model generalize better and segment medical images more accurately, even with a limited amount of labeled data.</p><p>Another aspect is the ability to utilize a variety of data. SSL methods can efficiently handle heterogeneous data, which increases their flexibility and applicability in various medical applications.</p><p>Ultimately, such methods help accelerate the development and deployment of more accurate and reliable segmentation models in medical systems, which can lead to improved patient diagnosis and treatment.</p><p>One type of SSL is learning based on knowledge priors (KP) <ref type="bibr" target="#b3">[4]</ref>.</p><p>Prior knowledge is information that the learner already has before learning new information, and it can sometimes help to cope with new tasks. Compared with non-medical images, medical images have many anatomical priors, such as the shape and position of organs, and incorporating anatomical prior knowledge into deep learning can improve the performance of medical image segmentation.</p><p>In this paper, a new SSL method has been developed that uses KP in the form of a 4D anomaly atlas and its generalization as Gaussian mixture models (GMM) to capture potential anomalous regions more broadly. The novelty lies in using not only organ position but also organ contrast, together with a probabilistic generalization that avoids being limited to the anomalies captured in the labeled sample. The approach was tested on a dataset of brain tumors.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related work</head><p>In recent years, there has been a significant amount of research in the field of medical segmentation with deep neural networks <ref type="bibr" target="#b4">[5]</ref><ref type="bibr" target="#b5">[6]</ref><ref type="bibr" target="#b6">[7]</ref><ref type="bibr" target="#b7">[8]</ref> and semi-supervised medical image segmentation <ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref><ref type="bibr" target="#b11">[12]</ref><ref type="bibr" target="#b12">[13]</ref>. This review focuses on the integration of shape priors, atlas priors, and semi-supervised learning approaches in various methods, highlighting their strengths and weaknesses.</p><p>In the paper <ref type="bibr" target="#b8">[9]</ref> the authors propose a dual-task framework to improve segmentation performance by leveraging both labeled and unlabeled data. The framework consists of two tasks: a pixel-wise segmentation task and a geometry-aware level set representation task. The dual-task consistency regularization ensures that the predictions from both tasks are consistent, thereby enhancing the model's ability to utilize unlabeled data. This method's strength lies in its ability to incorporate geometric constraints, which helps in achieving more accurate and robust segmentation results. However, the increased model complexity and the need for careful balance between the two tasks can pose challenges in implementation and training.</p><p>The <ref type="bibr" target="#b9">[10]</ref> paper introduces a novel approach using transformer networks that leverage shape priors through the use of template-based deformable models. This method uses a pre-defined template that captures the general shape of the target organ and deforms it to match the specific instance in the input image. 
The strength of this approach is its ability to maintain global shape consistency while allowing local variations, which is particularly useful for anatomical structures with high variability. However, the reliance on large amounts of training data and the computational demands of transformer networks can be limiting factors.</p><p>In <ref type="bibr" target="#b10">[11]</ref> the authors utilize an atlas prior within a GAN framework to enhance liver segmentation. The atlas prior provides a strong anatomical reference, guiding the segmentation process and ensuring anatomical consistency. This approach effectively combines labeled and unlabeled data, improving segmentation accuracy even with limited labeled data. The primary strength of this method is its ability to incorporate detailed anatomical knowledge, which is crucial for accurately segmenting complex organs like the liver. However, the adversarial training process can be unstable and requires careful tuning of hyperparameters.</p><p>The paper <ref type="bibr" target="#b11">[12]</ref> presents a method that integrates dense networks with deep anatomical priors and region adaptation techniques. This approach is particularly effective for fine segmentation tasks, such as renal artery segmentation, where precise anatomical details are critical. The use of dense networks allows for efficient capture of both local and global features, while the deep anatomical priors ensure anatomical plausibility. However, the model's complexity and the need for extensive labeled data for initial training can be significant drawbacks.</p><p>The work in <ref type="bibr" target="#b12">[13]</ref> introduces a unique approach to incorporating shape priors through topological constraints. By using persistent homology, the method enforces topological consistency in the segmentation results, which is particularly beneficial for capturing complex anatomical structures. 
The primary advantage of this approach is its ability to ensure topological correctness in the segmentation output, reducing the likelihood of anatomical errors. However, the computational complexity of calculating persistent homology and the reliance on high-quality topological priors can be limiting factors.</p><p>A common limitation across the reviewed papers is their focus on specific aspects of the segmentation problem without simultaneously addressing generalized localization of anomalies together with their pixel values. While each method brings innovative solutions to one or two aspects of the segmentation task, none of them comprehensively integrates all of the critical factors. Moreover, overly strict atlas or shape regularization can sometimes degrade the generalizability of the model on new data.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Dataset overview</head><p>A private MRI dataset of brain tumors was used for experimentation and validation of the proposed method (Fig. <ref type="figure" target="#fig_0">1</ref>). The T1 modality and axial view were used. The dataset consists of 34 patients and 1144 images. Labeling was performed manually by medical professionals with more than 10 years of experience. An example of binary labeling of tumors by a specialist is shown in Fig. <ref type="figure" target="#fig_1">2</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Proposed method 4.1 Formal problem definition</head><p>For the labeled sample 𝑆_𝐿 = {(𝑋_1, 𝑌_1), …, (𝑋_{𝑁_𝐿}, 𝑌_{𝑁_𝐿})}, where 𝑋_𝑙 ∈ ℝ^{𝐻×𝑊×𝐷} is the MRI scan tensor and 𝑌_𝑙 ∈ {0, 1}^{𝐻×𝑊×𝐷} is a binary mask of the same size as the original tensor, in which 0 means the absence and 1 the presence of an anomaly at a given voxel of the scan, and the unlabeled sample 𝑆_𝑈 = {𝑋_1, …, 𝑋_{𝑁_𝑈}}, create a classifier 𝑔(𝑋) that correctly predicts the binary mask 𝑌 ∈ {0, 1}^{𝐻×𝑊×𝐷} of a new scan tensor 𝑋 ∈ ℝ^{𝐻×𝑊×𝐷}, utilizing both samples 𝑆_𝐿 and 𝑆_𝑈.</p></div>
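The problem setup above can be sketched in NumPy. All shapes, sample sizes, and the random data here are illustrative placeholders, not the paper's actual dataset:

```python
import numpy as np

# Illustrative shapes; real scans are larger (the paper's dataset has 1144 images).
H, W, D = 64, 64, 8
N_L, N_U = 4, 10

rng = np.random.default_rng(0)

# Labeled sample S_L: pairs (X, Y) of scan tensor and binary anomaly mask.
S_L = [(rng.random((H, W, D)),
        (rng.random((H, W, D)) > 0.99).astype(np.uint8)) for _ in range(N_L)]

# Unlabeled sample S_U: scan tensors only.
S_U = [rng.random((H, W, D)) for _ in range(N_U)]
```

The goal of the classifier g(X) is then to map any such scan tensor to a binary mask of the same shape, using both S_L and S_U during training.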
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Proposed solution</head><p>Building on the works [9-13] on atlas and shape priors, we propose to improve this approach by generalizing the pixel distributions and adding more dimensions to the atlas.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.1">4D atlas priors</head><p>To account for anatomical structures and the natural location of anomalies, as well as their contrast, we propose to create 4D atlas priors based on the labeled data segments. The first three dimensions of the atlas are the coordinates of the anomaly location within the 3D scan. The fourth dimension is the observed intensity (contrast) of the anomaly voxels.</p><p>To generalize the atlas and remove the limitation to the input data, we propose modeling the voxel distributions of the atlas through GMMs that completely cover the atlas with centers at the most frequent locations (Figs. <ref type="figure" target="#fig_5">3b and 5</ref>). Modeling through a GMM helps models generalize better to new data without restricting them to the existing tumor atlas, since the GMM assigns a probability of tumor presence to a wider range of voxels, unlike a conventional atlas.</p><p>Let us consider the method of representing segment voxels as GMMs. For this purpose, we define the set of quadruples 𝒈 = (𝑖^{(𝑙)}, 𝑗^{(𝑙)}, 𝑘^{(𝑙)}, 𝑥^{(𝑙)}) over all segment voxels from the dataset 𝑆_𝐿 (Eq. 1), where 𝑁_𝐿 is the number of 3D scans in the labeled sample 𝑆_𝐿 and 𝐻, 𝑊, 𝐷 are the 3D scan dimensions. An example of segments is shown in Fig. <ref type="figure" target="#fig_1">2</ref>. By aggregating segments from all images, their 2D histogram can be constructed. An example of a 2D histogram of the location of segments (tumors in the brain) over the image space is presented in Fig. <ref type="figure" target="#fig_3">3a</ref>. 
It is worth noting that pixel intensities were not used for this visualization, only the 2D locations.</p><p>This distribution of the location and intensity of all segment voxels in the dataset is modeled via a GMM as follows:</p><formula xml:id="formula_0">𝐺 ~ ℊ(𝐺) ≜ 𝑃_𝐺(𝒈) = ∑_{𝑖=1}^{𝑁_𝑔} 𝑤_𝑖 𝒩(𝒈|𝝁_𝒊, 𝛴_𝑖),<label>(2)</label></formula><formula xml:id="formula_1">𝒩(𝒈|𝝁_𝒊, 𝛴_𝑖) = (2𝜋)^{−𝑑/2} |𝛴_𝑖|^{−1/2} exp(−(1/2)(𝒈 − 𝝁_𝒊)^𝑇 𝛴_𝑖^{−1} (𝒈 − 𝝁_𝒊)), ∑_{𝑖=1}^{𝑁_𝑔} 𝑤_𝑖 = 1</formula><p>where 𝒈 is a vector-quadruple from the dataset 𝐺; 𝝁_𝒊 is the mean vector of the i-th normal distribution in the GMM; 𝛴_𝑖 is the covariance matrix of the i-th normal distribution of the model; 𝑤_𝑖 is the weight of the i-th distribution in the model; 𝑁_𝑔 is the total number of Gaussian distributions in the model.</p><p>The parameters of the GMM can be found using the Expectation-Maximization algorithm <ref type="bibr" target="#b13">[14]</ref>.</p><p>An example of the GMM representation of the location of the aggregated 2D segments from Fig. <ref type="figure" target="#fig_3">3a</ref> is shown in Fig. <ref type="figure" target="#fig_3">3b</ref>. In 3D, the tumor atlas may look as in Fig. <ref type="figure" target="#fig_4">4</ref>, with its modeling via GMM shown in Fig. <ref type="figure" target="#fig_5">5</ref> (no voxel intensity, only location).</p><formula xml:id="formula_2">𝑆_𝑙 = 𝑋_𝑙 ⨀ 𝑌_𝑙,<label>(3)</label></formula><formula xml:id="formula_3">𝑆̄ = (1/𝑁_𝐿) ∑_{𝑙=1}^{𝑁_𝐿} 𝑆_𝑙,</formula><p>where ⨀ is the element-by-element multiplication operation; 𝑋_𝑙 is the 3D tensor of the MRI image scan of size 𝐻 × 𝑊 × 𝐷 (height, width, and depth, respectively); 𝑌_𝑙 is the 3D tensor of the binary mask of size 𝐻 × 𝑊 × 𝐷 with elements taking values 0 or 1; 𝑁_𝐿 is the number of 3D scans in the sample. </p></div>
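The atlas construction and the mixture of Eq. 2 can be sketched in NumPy. This is a minimal illustration, assuming a plain EM fit with full covariances; the component count, iteration count, and regularization constant are illustrative choices, not values from the paper:

```python
import numpy as np

def segment_quadruples(scans, masks):
    """Collect quadruples g = (i, j, k, x_ijk) for every voxel where the mask is 1."""
    quads = []
    for X, Y in zip(scans, masks):
        i, j, k = np.nonzero(Y)
        quads.append(np.column_stack([i, j, k, X[i, j, k]]))
    return np.vstack(quads).astype(float)

def log_gauss(G, mu, cov):
    """Log density of a multivariate normal at each row of G."""
    d = G.shape[1]
    diff = G - mu
    L = np.linalg.cholesky(cov)
    sol = np.linalg.solve(L, diff.T).T            # whitened residuals
    logdet = 2.0 * np.log(np.diag(L)).sum()
    return -0.5 * (d * np.log(2 * np.pi) + logdet + (sol ** 2).sum(axis=1))

def fit_gmm(G, n_comp=3, n_iter=60, seed=0):
    """Plain EM for a full-covariance GMM over the quadruples (cf. Eq. 2)."""
    rng = np.random.default_rng(seed)
    N, d = G.shape
    mu = G[rng.choice(N, n_comp, replace=False)].copy()
    cov = np.stack([np.cov(G.T) + 1e-6 * np.eye(d)] * n_comp)
    w = np.full(n_comp, 1.0 / n_comp)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each voxel
        logp = np.stack([np.log(w[c]) + log_gauss(G, mu[c], cov[c])
                         for c in range(n_comp)], axis=1)
        logp = logp - logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r = r / r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, covariances (with mild regularization)
        nk = r.sum(axis=0)
        w = nk / N
        mu = (r.T @ G) / nk[:, None]
        for c in range(n_comp):
            diff = G - mu[c]
            cov[c] = (r[:, c, None] * diff).T @ diff / nk[c] + 1e-6 * np.eye(d)
    return w, mu, cov
```

In practice a library EM implementation would typically be used instead of this hand-rolled loop; the sketch only makes the E- and M-steps explicit.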
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.2">4D GMM atlas loss</head><p>We propose the negative log-likelihood (NLL) of the data under the GMM model (Eq. 2) as a loss function based on the 4D GMM atlas. The NLL for a single voxel quadruple is:</p><formula xml:id="formula_4">𝑁𝐿𝐿(𝒈) = −log ∑_{𝑖=1}^{𝑁_𝑔} 𝑤_𝑖 𝒩(𝒈|𝝁_𝒊, 𝛴_𝑖)<label>(4)</label></formula><p>For a dataset with 𝑁 samples, the total loss is the sum of the NLL over all data points (voxels):</p><formula xml:id="formula_5">𝐿_𝑁𝐿𝐿 = ∑_{𝑗=1}^{𝑁} 𝑁𝐿𝐿(𝒈_𝑗)<label>(5)</label></formula><p>To calculate the first option of the loss function, the following steps are performed:</p><p>• Pseudomask calculation 𝑌_𝑝 ∈ {0, 1}^{𝐻×𝑊×𝐷}.</p><p>• Creation of quadruples by Eq. 1.</p><p>• Calculation of 𝑁𝐿𝐿 for each obtained quadruple (Eq. 4).</p><p>• Summation of 𝑁𝐿𝐿 over all quadruples (Eq. 5).</p><p>Then the full proposed loss function looks like Eq. 6, provided that the combined Dice + binary cross-entropy (BCE) loss is used for labeled data: 𝐿 = 𝐿_𝐷𝑖𝑐𝑒 + 𝐿_𝐵𝐶𝐸 for labeled data; 𝐿 = 𝐿_𝑁𝐿𝐿 for unlabeled data. (6)</p></div>
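Eqs. 4 and 5 can be sketched as follows. The function assumes GMM parameters in a (weights, means, covariances) form as produced by any EM fit, and uses the log-sum-exp trick for numerical stability; argument names are illustrative:

```python
import numpy as np

def gmm_nll(G, w, mu, cov):
    """Summed negative log-likelihood (Eqs. 4-5) of quadruples G (N x 4) under a GMM."""
    N, d = G.shape
    comp = np.empty((N, len(w)))
    for c in range(len(w)):
        diff = G - mu[c]
        _, logdet = np.linalg.slogdet(cov[c])
        maha = np.einsum("nd,nd->n", diff, np.linalg.solve(cov[c], diff.T).T)
        comp[:, c] = np.log(w[c]) - 0.5 * (d * np.log(2 * np.pi) + logdet + maha)
    m = comp.max(axis=1, keepdims=True)           # log-sum-exp trick
    log_p = (m + np.log(np.exp(comp - m).sum(axis=1, keepdims=True)))[:, 0]
    return -log_p.sum()
```

For a single standard-normal component, the NLL of the origin reduces to (d/2)·log(2π), which is a convenient sanity check for the implementation.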
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.3">3D GMM atlas loss and atlas voxel intensity loss</head><p>As the loss function of the second option, MSE is used for voxel intensity:</p><formula xml:id="formula_7">𝑆̄^{(𝑝)} = 𝑆̄ ⨀ 𝑌_𝑝, 𝐿_𝑀𝑆𝐸(𝑌_𝑝) = 𝑀𝑆𝐸(𝑆_𝑝 − 𝑆̄^{(𝑝)})<label>(7)</label></formula><p>where ⨀ is the element-by-element multiplication operation; 𝑆_𝑝 ∈ ℝ^{𝐻×𝑊×𝐷} is a generated pseudo segment (Eq. 3) from a pseudo mask 𝑌_𝑝 ∈ {0, 1}^{𝐻×𝑊×𝐷}; 𝑆̄ ∈ ℝ^{𝐻×𝑊×𝐷} is the tensor of the voxel intensity atlas; 𝑆̄^{(𝑝)} contains the voxel intensity atlas values only in voxels where the pseudomask 𝑌_𝑝 equals 1.</p><p>In the pseudomask validation rule (Eq. 8), 𝑌_𝑖 is a voxel of the validated pseudomask 𝑌_𝑣; 𝒈_𝒊 is the quadruple of voxel i; 𝑃_𝐺(𝒈_𝒊) is its probability density from the GMM atlas (Eq. 2); 𝑃_𝑠 is the probability of an anomaly in the given voxel, calculated by the segmenter model; 𝑇 is the threshold for selecting voxels into the pseudomask.</p><p>Training in this case uses only the Dice loss and BCE loss.</p></div>
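The intensity loss of Eq. 7 and the pseudomask validation rule (Eq. 8, the thresholded product of the atlas density and the segmenter probability) can be sketched as follows; the function and argument names are illustrative, not from the paper:

```python
import numpy as np

def intensity_loss(S_pseudo, atlas, Y_pseudo):
    """Eq. 7 sketch: MSE between a pseudo segment and the intensity atlas under the mask."""
    atlas_p = atlas * Y_pseudo                    # atlas values only under the pseudomask
    return np.mean((S_pseudo - atlas_p) ** 2)

def validate_pseudomask(p_gmm, p_seg, threshold):
    """Eq. 8 sketch: keep a voxel only when atlas density times segmenter score passes T."""
    return (p_gmm * p_seg >= threshold).astype(np.uint8)
```

Here `p_gmm` would hold the GMM atlas density evaluated at each voxel quadruple and `p_seg` the segmenter's per-voxel anomaly probability; both enter the rule multiplicatively, so a voxel survives only if the atlas and the segmenter agree.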
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Experiments and results</head><p>Training the network on the full dataset with Dice loss gave a baseline IoU of 0.93 on the test sample.</p><p>The results of semi-supervised training of the network with the proposed methods on the test sample are presented in Table <ref type="table" target="#tab_1">1</ref>. The proposed atlas-based SSL methods achieved IoU nearly matching that of training on the full dataset. The best-performing method was the 3D GMM loss + voxel intensity loss, which achieved an IoU of 0.9 with only 30% of the data used.</p><p>Comparison with existing methods <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b12">13]</ref> showed superior segmentation accuracy of the proposed methods, with an overall gain of 0.1 IoU, which confirms the validity of the developed method.</p></div>
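The patient-level 70/10/20 split described above can be sketched as follows. `patient_split` is an illustrative helper; the exact 23/4/7 partition reported in the paper was fixed by the authors rather than produced by this function:

```python
import numpy as np

def patient_split(patient_ids, fracs=(0.7, 0.1, 0.2), seed=0):
    """Shuffle patients and split them (not individual images) into train/val/test."""
    ids = np.array(sorted(set(patient_ids)))
    rng = np.random.default_rng(seed)
    rng.shuffle(ids)
    n = len(ids)
    n_train = int(round(fracs[0] * n))
    n_val = int(round(fracs[1] * n))
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```

Splitting by patient rather than by image prevents slices from the same scan from leaking between the training and test samples, which would otherwise inflate the reported IoU.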
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>In this paper, we have introduced an advanced semi-supervised learning framework for brain tumor segmentation that leverages 4D atlas priors to utilize both labeled and unlabeled data effectively. Our approach constructs a probabilistic 4D atlas based on labeled segments, generalizes this atlas using GMM, and incorporates additional dimensions to account for contrast variations within the anomalies.</p><p>The proposed method addresses the common limitations found in existing research, which often focus on either shape priors, atlas priors, or semi-supervised learning in isolation. By integrating generalized localization of anomalies, their contrast, and precise boundary delineation into a single framework, we achieve a more comprehensive and robust segmentation solution.</p><p>Our experiments on a real MRI dataset of brain tumors demonstrate that the proposed method significantly improves segmentation accuracy compared to existing methods. The inclusion of 4D atlas priors enhances the model's ability to generalize across different types of anomalies, ensuring both anatomical plausibility and precise boundary detection.</p><p>Future work will explore further enhancements to the atlas generation process and the integration of additional modules into the neural network to extend the applicability of the proposed method to other medical imaging scenarios. This research contributes to the field of medical image analysis by providing a more effective and generalized approach to semi-supervised segmentation, paving the way for improved diagnostic and treatment planning tools in clinical practice.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The brain MRI dataset used</figDesc><graphic coords="4,76.56,62.40,450.96,677.88" type="vector_box" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Manually segmented tumors on MRI images (represented in orange)</figDesc><graphic coords="5,76.56,62.40,450.96,673.68" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>the row number of the pth segment voxel on 3D scan 𝑙; 𝑗 ( ) is the column number of the pth segment voxel on 3D scan 𝑙; 𝑘 ( ) is the number of the 2D image on 3D scan 𝑙; 𝑥 ( ) ( ) ( ) ( ) is the voxel intensity value located in the 𝑖 ( ) th row, 𝑗 ( ) th column and 𝑘 ( ) th 2D image on the 𝑙th 3D scan;</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Example of (a) 2D atlas and (b) corresponding GMM model</figDesc><graphic coords="7,79.92,174.60,440.76,228.12" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Example of a 3D atlas for brain tumors</figDesc><graphic coords="7,111.12,438.24,377.76,321.84" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: GMM 3D atlas example</figDesc><graphic coords="8,144.48,62.16,311.52,241.44" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Averaged 2D values of tumor pixel intensities (2D pixel intensity atlas)</figDesc><graphic coords="8,129.24,498.72,342.24,256.56" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head></head><label></label><figDesc>The basic segmenter neural network DeepLabV3+ was used for training. Training was performed on different configurations of the proposed loss functions. The dataset was divided by patient into training, validation, and test samples in percentages of 70/10/20 respectively, i.e., 23/4/7 patients. The Adam optimizer and the proposed loss functions were used.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>𝛼, 𝛽, 𝛾, 𝜎 are the corresponding weights of each component of the loss function; they can be specified in advance. Pseudomask validation based on the 4D atlas involves adding all pseudomask voxels that pass Eq. 8 to the training sample and iteratively retraining the model.</figDesc><table><row><cell>Then the full proposed loss function will look like this: 𝐿 = 𝛼𝐿_𝐷𝑖𝑐𝑒 + 𝛽𝐿_𝐵𝐶𝐸 for labeled data; 𝐿 = 𝛾𝐿_𝑁𝐿𝐿 + 𝜎𝐿_𝑀𝑆𝐸 for unlabeled data.</cell></row><row><cell>Pseudomask validation rule (Eq. 8): 𝑌_𝑖 = 1 if 𝑃_𝐺(𝒈_𝒊) ⋅ 𝑃_𝑠 ≥ 𝑇, and 𝑌_𝑖 = 0 otherwise.</cell></row><row><cell>𝑆̄ ∈ ℝ^{𝐻×𝑊×𝐷} is the tensor of the voxel intensity atlas; 𝑆̄^{(𝑝)} ∈ ℝ^{𝐻×𝑊×𝐷} are the voxel intensity atlas values only in voxels where the pseudomask 𝑌_𝑝 values are 1.</cell></row><row><cell>To calculate the second option of the loss function, the following steps are performed:</cell></row><row><cell>• Pseudomask calculation 𝑌_𝑝 ∈ {0, 1}^{𝐻×𝑊×𝐷}.</cell></row><row><cell>• Creating triplets according to Eq. 1, but without pixel intensity.</cell></row><row><cell>• Calculation of 𝑁𝐿𝐿 for each resulting triplet (Eq. 4).</cell></row><row><cell>• Summation of 𝑁𝐿𝐿 over all triplets (Eq. 5).</cell></row><row><cell>• Pseudomask segment selection (Eq. 3).</cell></row><row><cell>• MSE calculation for intensities (Eq. 7).</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1</head><label>1</label><figDesc>Resulting IOU of the proposed approach on the test sample</figDesc><table><row><cell>Method/Percentage of data used</cell><cell>5%</cell><cell>10%</cell><cell>30%</cell><cell>50%</cell></row><row><cell>4D GMM atlas loss</cell><cell>0.66</cell><cell>0.76</cell><cell>0.88</cell><cell>0.9</cell></row><row><cell>3D GMM loss + voxel intensity loss</cell><cell>0.68</cell><cell>0.79</cell><cell>0.9</cell><cell>0.92</cell></row><row><cell>Pseudomasks validation</cell><cell>0.62</cell><cell>0.75</cell><cell>0.87</cell><cell>0.89</cell></row><row><cell>Method [10]</cell><cell>0.55</cell><cell>0.68</cell><cell>0.8</cell><cell>0.82</cell></row><row><cell>Method [13]</cell><cell>0.54</cell><cell>0.66</cell><cell>0.79</cell><cell>0.81</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">International patterns and trends in the brain cancer incidence and mortality: An observational study based on the global burden of disease</title>
		<author>
			<persName><forename type="first">I</forename><surname>Ilic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ilic</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.heliyon.2023.e18222</idno>
		<idno type="PMID">37519769</idno>
		<idno type="PMCID">PMC10372320</idno>
	</analytic>
	<monogr>
		<title level="j">Heliyon</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page">e18222</biblScope>
			<date type="published" when="2023-07-13">2023 Jul 13</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Brain and other central nervous system tumor statistics</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">D</forename><surname>Miller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">T</forename><surname>Ostrom</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Kruchko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Patil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Tihan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Cioffi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">E</forename><surname>Fuchs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">A</forename><surname>Waite</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jemal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">L</forename><surname>Siegel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Barnholtz-Sloan</surname></persName>
		</author>
		<idno type="DOI">10.3322/caac.21693</idno>
		<ptr target="https://doi.org/10.3322/caac.21693" />
	</analytic>
	<monogr>
		<title level="j">CA Cancer J Clin</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Current Clinical Brain Tumor Imaging</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Villanueva-Meyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Mabray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Cha</surname></persName>
		</author>
		<idno type="DOI">10.1093/neuros/nyx103</idno>
		<idno type="PMID">28486641</idno>
		<idno type="PMCID">PMC5581219</idno>
	</analytic>
	<monogr>
		<title level="j">Neurosurgery</title>
		<imprint>
			<biblScope unit="volume">81</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="397" to="415" />
			<date type="published" when="2017-09-01">2017 Sep 1</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Learning with limited annotations: A survey on deep semi-supervised learning for medical image segmentation</title>
		<author>
			<persName><forename type="first">Rushi</forename><surname>Jiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yichi</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Le</forename><surname>Ding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bingsen</forename><surname>Xue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jicong</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rong</forename><surname>Cai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Cheng</forename><surname>Jin</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.compbiomed.2023.107840</idno>
		<ptr target="https://doi.org/10.1016/j.compbiomed.2023.107840" />
	</analytic>
	<monogr>
		<title level="j">Comput. Biol. Med</title>
		<imprint>
			<biblScope unit="volume">169</biblScope>
			<date type="published" when="2024-02">Feb. 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Intelligent tuberculosis activity assessment system based on an ensemble of neural networks</title>
		<author>
			<persName><forename type="first">V</forename><surname>Sineglazov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Riazanovskiy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Klanovets</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Chumachenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Linnik</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Comput. Biol. Med</title>
		<imprint>
			<biblScope unit="volume">147</biblScope>
			<biblScope unit="page">105800</biblScope>
			<date type="published" when="2022-08">Aug. 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Deep learning for medical image segmentation: State-of-the-art advancements and challenges</title>
		<author>
			<persName><forename type="first">Eshmam</forename><surname>Rayed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Islam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sadia</forename><surname>Niha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jamin</forename><surname>Jim</surname></persName>
		</author>
		<author>
			<persName><surname>Kabir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">F</forename><surname>Mridha</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.imu.2024.101504</idno>
	</analytic>
	<monogr>
		<title level="j">Informatics in Medicine Unlocked</title>
		<imprint>
			<biblScope unit="volume">47</biblScope>
			<biblScope unit="page">101504</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Deep learning approaches to biomedical image segmentation</title>
		<author>
			<persName><forename type="first">Intisar</forename><surname>Rizwan I Haque</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jeremiah</forename><surname>Neubert</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.imu.2020.100297</idno>
	</analytic>
	<monogr>
		<title level="j">Informatics in Medicine Unlocked</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="page">100297</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Formation of Hybrid Artificial Neural Networks Topologies</title>
		<author>
			<persName><forename type="first">M</forename><surname>Zgurovsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Sineglazov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Chumachenko</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-48453-8_3</idno>
		<ptr target="https://doi.org/10.1007/978-3-030-48453-8_3" />
	</analytic>
	<monogr>
		<title level="m">Artificial Intelligence Systems Based on Hybrid Neural Networks. Studies in Computational Intelligence</title>
				<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">904</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Semi-supervised Medical Image Segmentation through Dual-task Consistency</title>
		<author>
			<persName><forename type="first">Xiangde</forename><surname>Luo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jieneng</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tao</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Guotai</forename><surname>Wang</surname></persName>
		</author>
		<idno type="DOI">10.1609/aaai.v35i10.17066</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AAAI Conference on Artificial Intelligence</title>
				<meeting>the AAAI Conference on Artificial Intelligence</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="8801" to="8809" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">TeTrIS: Template Transformer Networks for Image Segmentation With Shape Priors</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C H</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Petersen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Pawlowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Glocker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schaap</surname></persName>
		</author>
		<idno type="DOI">10.1109/TMI.2019.2905990</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Medical Imaging</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page" from="2596" to="2606" />
			<date type="published" when="2019-11">Nov. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Semi-supervised Segmentation of Liver Using Adversarial Learning with Deep Atlas Prior</title>
		<author>
			<persName><forename type="first">H</forename><surname>Zheng</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-32226-7_17</idno>
		<ptr target="https://doi.org/10.1007/978-3-030-32226-7_17" />
	</analytic>
	<monogr>
		<title level="m">Medical Image Computing and Computer Assisted Intervention -MICCAI 2019</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">D</forename><surname>Shen</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">11769</biblScope>
		</imprint>
	</monogr>
	<note>MICCAI 2019</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Dense Biased Networks with Deep Priori Anatomy and Hard Region Adaptation: Semi-supervised Learning for Fine Renal Artery Segmentation</title>
		<author>
			<persName><forename type="first">Yuting</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Guanyu</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jian</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yang</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Youyong</forename><surname>Kong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jiasong</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lijun</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiaomei</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jean-Louis</forename><surname>Dillenseger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pengfei</forename><surname>Shao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shaobo</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Huazhong</forename><surname>Shu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jean-Louis</forename><surname>Coatrieux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shuo</forename><surname>Li</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.media.2020.101722</idno>
	</analytic>
	<monogr>
		<title level="j">Medical image analysis</title>
		<imprint>
			<biblScope unit="volume">63</biblScope>
			<biblScope unit="page">101722</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">A Topological Loss Function for Deep-Learning Based Image Segmentation Using Persistent Homology</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Clough</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Byrne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Oksuz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">A</forename><surname>Zimmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Schnabel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P</forename><surname>King</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="page" from="8766" to="8778" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Gaussian Mixture Models</title>
		<author>
			<persName><forename type="first">D</forename><surname>Reynolds</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-0-387-73003-5_196</idno>
		<ptr target="https://doi.org/10.1007/978-0-387-73003-5_196" />
	</analytic>
	<monogr>
		<title level="m">Encyclopedia of Biometrics</title>
				<editor>
			<persName><forename type="first">S</forename><forename type="middle">Z</forename><surname>Li</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Jain</surname></persName>
		</editor>
		<meeting><address><addrLine>Boston, MA</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
