<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Improving unsupervised graph-based skull stripping: enhancements and comparative analysis with state-of-the-art methods</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Maria</forename><surname>Popa</surname></persName>
							<email>maria.popa@ubbcluj.ro</email>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Faculty of Mathematics and Computer Science</orgName>
								<orgName type="department" key="dep2">Department of Computer Science</orgName>
								<orgName type="institution">Babeș-Bolyai University</orgName>
								<address>
									<addrLine>Mihail Kogălniceanu 1</addrLine>
									<postCode>400084</postCode>
									<settlement>Cluj-Napoca</settlement>
									<country key="RO">Romania</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Anca</forename><surname>Andreica</surname></persName>
							<email>anca.andreica@ubbcluj.ro</email>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Faculty of Mathematics and Computer Science</orgName>
								<orgName type="department" key="dep2">Department of Computer Science</orgName>
								<orgName type="institution">Babeș-Bolyai University</orgName>
								<address>
									<addrLine>Mihail Kogălniceanu 1</addrLine>
									<postCode>400084</postCode>
									<settlement>Cluj-Napoca</settlement>
									<country key="RO">Romania</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Improving unsupervised graph-based skull stripping: enhancements and comparative analysis with state-of-the-art methods</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">046D0E9C2F0412607FF17B4CC3B65A8C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:10+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Skull Stripping</term>
					<term>Brain Extraction</term>
					<term>Graph-Based Segmentation</term>
					<term>Unsupervised Segmentation</term>
					<term>BET</term>
					<term>BSE</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Brain disorders are increasingly prevalent today, making accurate brain segmentation essential for effective treatment and recovery. This paper introduces an enhanced unsupervised graph-based brain segmentation method that employs an ellipsoid to select the nodes forming the graph. The method was rigorously evaluated on T1 and T2 modalities using four diverse datasets: the complete NFBS dataset, 48 MRIs from the IXI dataset, 16 images featuring infant data from the QIN dataset, and 36 images from the FSM dataset. Comparative analysis with two widely used state-of-the-art approaches, BET2 and BSE, revealed that the proposed method significantly improved segmentation results. On the infant dataset, the method achieved a 21% increase in sensitivity compared to BSE, along with a 14% improvement in precision and a 13% increase in the Jaccard index compared to BET2. On the NFBS dataset, it demonstrated a 10% improvement in precision over BET2. However, on the T2-weighted dataset, only slight improvements were observed compared to both BSE and BET2. This advancement in segmentation techniques holds promise for better diagnosis and treatment of various brain disorders, potentially leading to improved patient outcomes and more efficient clinical workflows.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>According to the World Health Organization (WHO), approximately 38 million people are affected by Alzheimer's disease, the most prevalent form of dementia. Epilepsy, a chronic noncommunicable brain disorder, can affect individuals of all ages, with an estimated 50 million people worldwide living with this condition. Accurate segmentation is a crucial step in the early detection and regular examination of brain disorders, as it is essential for identifying suitable treatments and ultimately promoting healing. MRI, a painless and rapid diagnostic tool, is used extensively in screening. The need for precise computer-assisted systems arises because manual segmentation is time-consuming and imposes an additional workload on the medical staff.</p><p>Brain segmentation, also known as skull stripping, is the process of separating the skull from the brain. While various supervised and unsupervised methods have been proposed in the literature for this purpose, no universally perfect method exists, owing to the diverse range of imaging systems and the multitude of brain-related conditions.</p><p>The Brain Extraction Tool (BET) <ref type="bibr" target="#b0">[1]</ref> is an unsupervised method used for skull stripping. Its widespread adoption is due to its speed and robustness. The algorithm is based on surface tessellation, approximating the brain with a sphere and extracting it over 1000 iterations. 
One drawback of the method is its inability to effectively segment the bottom and top of the brain, leading to the inclusion of non-brain tissue in the segmentation. Although BET* <ref type="bibr" target="#b1">[2]</ref> attempts to address these issues by reducing the number of iterations to 50 and approximating the brain with an ellipsoid, it still incorporates non-brain tissue into the segmentation <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref>.</p><p>GUBS <ref type="bibr" target="#b4">[5]</ref> is an unsupervised graph-based segmentation method that represents MRI volumes as a weighted graph, with nodes corresponding to voxels and edges capturing relations between them. The weight of each edge is determined by calculating the absolute difference in intensity between the two nodes. The algorithm then classifies voxels into three categories: nodes inside the brain, nodes in the non-brain tissue (skull), and nodes from the background. Subsequently, a minimum spanning tree is constructed by collapsing the entire graph onto the selected nodes. The node selection process depends on the dataset and on user interaction. Analyzing the dataset is crucial for determining the threshold above which nodes are selected, as well as for establishing the boundary of the skull <ref type="bibr" target="#b2">[3]</ref>. Some recent studies, <ref type="bibr" target="#b3">[4]</ref> and <ref type="bibr" target="#b2">[3]</ref>, aim to overcome these problems by eliminating user interaction and the dependency on per-dataset parameters. These methods reduce the number of node categories to just two: nodes inside the brain and nodes in the background. The node selection approach eliminates user interaction by approximating the brain with either a sphere <ref type="bibr" target="#b2">[3]</ref> or an ellipsoid <ref type="bibr" target="#b3">[4]</ref>. In both approaches, the center of the geometric body is set at the center of mass of the image. 
These methods show improved results compared to the GUBS approach and halve the time needed to process one MRI. However, the method was tested only on the NFBS <ref type="bibr" target="#b5">[6]</ref> dataset, and it still includes non-brain tissue in the segmentation.</p><p>This paper introduces an enhanced 3D unsupervised graph-based method for brain segmentation. By addressing the limitations of Ellipsoid-GUBS <ref type="bibr" target="#b3">[4]</ref> while maintaining zero user interaction, it achieves higher segmentation accuracy. The method is tested across various datasets and compared to two state-of-the-art methods, demonstrating improved results. The key contributions of our work are as follows:</p><p>1. Geometric Centering: The novelty of the proposed method lies in the shift from center-of-mass placement of the ellipsoid to a fixed geometric center within the image. This adjustment reduces sensitivity to asymmetrical or irregular mass distributions, leading to more stable segmentation outcomes across varying datasets. 2. Comprehensive Validation Across Datasets: Another key contribution is the comprehensive evaluation of the method on four diverse datasets, showcasing its robustness and adaptability compared to the previous version, which was tested on a single dataset. 3. Benchmarking Against State-of-the-Art: Including comparisons with two state-of-the-art methods is crucial for benchmarking and demonstrates that our approach has notable advantages. Specifically, it shows superior robustness (working effectively with the T2 modality and infant data) and offers accurate segmentation.</p><p>The remaining sections of the paper are organized as follows: Section 2 provides an overview of related work, Section 3 introduces the new approach, Section 4 outlines the experiments and their results, Section 5 presents a discussion, and Section 6 concludes with remarks on future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related work</head><p>Graph-based applications have gained prominence in modern methodologies due to their robust capability to depict complex relationships within data. These applications find versatile utility across a range of fields, including social network analysis and biological systems modeling, where the interconnected nature of entities can be effectively represented and analyzed. The adaptability of graph structures positions them as invaluable tools for tackling intricate problems that require an understanding of complex connections among different data points. Consequently, the prevalence of graph-based approaches has grown in modern data-driven applications, underscoring their significance and effectiveness in capturing and interpreting intricate data relationships.</p><p>Graph-CUTS <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b2">3]</ref> stands out as a widely adopted skull stripping method employing morphological operations for brain segmentation. In the segmentation phase, region growing is employed to estimate the white matter volume. The subsequent step involves transforming the resulting MRI into a graph and applying graph-cuts to eliminate narrow connections. One drawback of this method lies in its dependency on region growing, which can be time-consuming.</p><p>GUBS <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b2">3]</ref>, along with the methodologies introduced in <ref type="bibr" target="#b2">[3]</ref> and <ref type="bibr" target="#b3">[4]</ref>, employs a Minimum Spanning Tree (MST) for brain segmentation. The MRI is initially translated into a graph, and subsequently, an MST is constructed by collapsing the nodes. In <ref type="bibr" target="#b2">[3]</ref>, the drawbacks of GUBS are acknowledged, and a user-friendly interaction approach is introduced, resulting in enhanced outcomes compared to GUBS. 
While the results are not flawless, the method demonstrates improvement by sampling nodes within an ellipsoid, as outlined in <ref type="bibr" target="#b3">[4]</ref>.</p><p>BET2 <ref type="bibr" target="#b7">[8]</ref>, an enhanced version of the BET algorithm, is part of the FSL Tool suite <ref type="bibr" target="#b8">[9]</ref>. BET2 is optimized for high-resolution T1 and T2-weighted images and ideally requires paired T1 and T2-weighted scans with a resolution of approximately 2 mm. Initially, the brain surface is identified in the T1 image using the original BET algorithm, after which the T2 image is registered to the T1 scan <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref>. Compared to BET, BET2 achieves more accurate segmentation results <ref type="bibr" target="#b10">[11]</ref>.</p><p>The Brain Surface Extraction (BSE) method utilizes anisotropic diffusion to enhance brain boundaries <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref>. Edge detection is performed with a 2D Marr-Hildreth operator, combining low-pass filtering using a Gaussian kernel and locating zero crossings in the Laplacian of the filtered image. BSE disconnects the brain from surrounding tissues via morphological erosion. Once the brain is identified through a connected component operation, a corresponding dilation is applied to reverse the effects of erosion. Finally, BSE uses a morphological closing operation to fill small pits and holes on the brain surface. The method relies on fixed parameters, including diffusion iterations, diffusion constant, edge constant, and erosion size. However, since BSE is edge-based, it can struggle with images that have poor contrast <ref type="bibr" target="#b11">[12]</ref>.</p><p>Supervised approaches also leverage graphs. In <ref type="bibr" target="#b12">[13]</ref>, a supervised graph-based neural network (GNN) is employed for brain tumor segmentation. 
The 3D MRI undergoes division into supervoxels using the Simple Linear Iterative Clustering (SLIC) algorithm <ref type="bibr" target="#b13">[14]</ref> to prevent the graph from becoming overly complex. SLIC is executed with 15,000 clusters. To mitigate graph and network complexity, the graph is constructed solely with the supervoxels generated by SLIC. Despite showcasing promising results, this method demands several hours for training and relies on labeled data.</p><p>SynthStrip <ref type="bibr" target="#b14">[15]</ref> is an innovative supervised deep learning method for skull stripping, utilizing a U-Net architecture. Trained on datasets with diverse resolutions and dimensions, it demonstrates superior performance compared to existing methods. However, despite its advancements, there is room for improvement as the method, in certain cases, includes non-brain tissue in the segmentation.</p><p>Deep learning methods are increasingly used in various segmentation tasks, showing promising results. The U-Net architecture, in particular, is widely employed for medical image segmentation. In <ref type="bibr" target="#b15">[16]</ref>, a modified U-Net model is applied to segment newborn brain images by training on 243 adult data and only 5 newborn data. This approach demonstrates good results and is compared to SynthStrip. However, as mentioned in <ref type="bibr" target="#b15">[16]</ref>, manually labeling a single brain volume takes approximately 8 hours, which is time-consuming for medical staff. For the skull-stripping task, where brain structures are relatively consistent, unsupervised methods, such as the proposed approach, could be a promising alternative.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Outlined method</head><p>The method described here is founded on the concept presented in <ref type="bibr" target="#b3">[4]</ref>. It involves transforming each MRI into a weighted graph, where the voxels in the MRI serve as nodes, and adjacent nodes are connected by edges. The weight of each edge is determined by calculating the absolute difference in intensity between the two connected nodes. Similar to <ref type="bibr" target="#b3">[4]</ref>, <ref type="bibr" target="#b2">[3]</ref>, and <ref type="bibr" target="#b4">[5]</ref>, the segmentation relies on a Minimum Spanning Tree (MST). In contrast to the approach presented in <ref type="bibr" target="#b4">[5]</ref> and akin to the methods in <ref type="bibr" target="#b3">[4]</ref> and <ref type="bibr" target="#b2">[3]</ref>, nodes are chosen from two categories-specifically, nodes within the brain and nodes from the background. Nodes within the brain are selected within an ellipsoid, similar to the approach in <ref type="bibr" target="#b3">[4]</ref>. The key distinction from <ref type="bibr" target="#b3">[4]</ref> lies in the fact that the node selection employs an ellipsoid centered at the center of the image, rather than at the center of mass as proposed in <ref type="bibr" target="#b3">[4]</ref>. The method is divided into the following steps, which are also illustrated in Figure <ref type="figure" target="#fig_0">1</ref>:</p><formula xml:id="formula_0">• Image processing • Nodes sampling • MST construction &amp; Brain extraction</formula></div>
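The edge-weighting scheme described above can be sketched in a few lines. This is an illustrative implementation, not the authors' code: it assumes 6-connectivity between voxels, and the function name `build_edges` is our own.

```python
import numpy as np

def build_edges(volume):
    """Map each pair of 6-adjacent voxels to an edge weight equal to the
    absolute intensity difference between the two voxels."""
    edges = {}
    for axis in range(volume.ndim):
        # Pair every voxel with its successor along this axis.
        a = volume.take(range(volume.shape[axis] - 1), axis=axis)
        b = volume.take(range(1, volume.shape[axis]), axis=axis)
        diff = np.abs(a - b)
        for idx in np.ndindex(diff.shape):
            u = idx
            v = tuple(idx[i] + (1 if i == axis else 0) for i in range(volume.ndim))
            edges[(u, v)] = float(diff[idx])
    return edges

# A 2x2x2 toy volume: 12 edges connect its 8 voxels.
toy = np.arange(8, dtype=float).reshape(2, 2, 2)
edges = build_edges(toy)
```

In a real MRI volume this graph has one node per voxel, which is why the node-collapsing step described later is needed to keep it tractable.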
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Image processing</head><p>During the processing phase, a single operation takes place: binary closing is applied to fill gaps, which supports the subsequent sampling of nodes in the background. To preserve resolution and detail, the images are used at their original dimensions, preventing any loss of information.</p></div>
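The gap-filling step can be illustrated with `scipy.ndimage.binary_closing`; the structuring-element size below is an assumption for illustration, not a parameter reported here.

```python
import numpy as np
from scipy.ndimage import binary_closing

# A solid 5x5x5 block with a one-voxel hole in the middle; closing
# fills the hole while leaving the outer shape unchanged.
mask = np.zeros((7, 7, 7), dtype=bool)
mask[1:6, 1:6, 1:6] = True
mask[3, 3, 3] = False
closed = binary_closing(mask, structure=np.ones((3, 3, 3)))
```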
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Nodes sampling</head><p>Constructing a graph involves the creation of nodes and edges. In this scenario, voxels represent the nodes, and an edge is established between every two adjacent nodes. Nodes are selected from two distinct categories: nodes within the brain and nodes in the background. The background nodes follow the methodology outlined in <ref type="bibr" target="#b4">[5]</ref>. Initially, a binary image is computed using Otsu thresholding. Subsequently, as described in the processing phase, binary closing is applied. After obtaining the transformed binary image, 20,000 voxels are randomly selected from the six faces of the volume.</p><p>Sampling nodes from the brain involves constructing an ellipsoid. The inspiration for using an ellipsoid is drawn from <ref type="bibr" target="#b1">[2]</ref>, where the brain was approximated using this geometric shape. In previous methods, the center of the geometric body was positioned at the center of mass of the image, a concept derived from the BET method. However, in certain images, the center of mass may be located in a corner, causing crucial parts of the brain to be missed during node sampling. To address this issue, the proposed approach sets the center of the ellipsoid at the image's center, calculated for each axis. For an MRI with dimensions (𝑑𝑖𝑚_𝑥, 𝑑𝑖𝑚_𝑦, 𝑑𝑖𝑚_𝑧), the center of the image is located at</p><formula xml:id="formula_1">(𝑑𝑖𝑚_𝑥/2, 𝑑𝑖𝑚_𝑦/2, 𝑑𝑖𝑚_𝑧/2).</formula><p>For the x, y, and z axes, any nodes that satisfy condition (1), the ellipsoid equation, are identified as nodes within the brain. The radius 𝑟 is determined from the volume of voxels that exceed the multi-Otsu threshold. Increasing the dimensions of the ellipsoid axes increases the number of selected nodes, which yields a more complex graph. 
Conversely, reducing the dimensions yields a graph that is too small.</p><formula xml:id="formula_2">9⋅𝑥²/𝑟² + 9⋅𝑦²/𝑟² + (9/16)⋅𝑧²/𝑟² ≤ 1<label>(1)</label></formula></div>
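The ellipsoid condition (1), with the ellipsoid centred at the geometric centre of the image, can be sketched as follows. The function name `sample_brain_nodes` and the example radius are illustrative assumptions; the derivation of 𝑟 from the multi-Otsu threshold volume is omitted.

```python
import numpy as np

def sample_brain_nodes(shape, r):
    """Return voxel indices satisfying the ellipsoid condition (1),
    with the ellipsoid centred at the geometric centre of the image."""
    cx, cy, cz = shape[0] / 2, shape[1] / 2, shape[2] / 2
    xs, ys, zs = np.indices(shape)
    x, y, z = xs - cx, ys - cy, zs - cz
    inside = 9 * x**2 / r**2 + 9 * y**2 / r**2 + (9 / 16) * z**2 / r**2 <= 1
    return np.argwhere(inside)

# With r = 30 the semi-axes are r/3 = 10 along x and y and 4r/3 = 40
# along z, so the sampled region is elongated along the z axis.
nodes = sample_brain_nodes((64, 64, 64), r=30.0)
```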
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">MST construction &amp; Brain segmentation</head><p>MST construction follows an approach similar to those described in <ref type="bibr" target="#b4">[5]</ref> and <ref type="bibr" target="#b3">[4]</ref>. Constructing the MST involves transforming the initial graph, which represents the MRI, into a smaller graph that can be processed more easily. Nodes from each of the two categories are combined into a single representative node according to the following rules: if both nodes connected by an edge belong to the same sampled category, the edge is discarded and the nodes are merged into that category's single representative node. Subsequently, for the remaining edges with exactly one endpoint in a sampled category, that endpoint is replaced by the representative node, and all other nodes from the sampled category are removed, leaving only the representative one. The segmentation process concludes with a step that divides the image into two regions, the brain and the background, accomplished by removing the edge with the highest weight from the path in the Minimum Spanning Tree (MST) <ref type="bibr" target="#b3">[4]</ref>.</p></div>
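The collapse-and-cut procedure above can be sketched with networkx on a toy graph; this is an illustrative sketch, not the authors' implementation, and the representative labels `"BRAIN"` and `"BACKGROUND"` are our own naming. When several edges collapse onto the same pair of nodes, the lightest one is kept.

```python
import networkx as nx

def collapse(graph, sampled, rep):
    """Merge every sampled node into one representative node `rep`,
    discarding edges inside the sampled set and keeping the lightest
    edge where several collapse onto the same pair."""
    g = nx.Graph()
    label = lambda n: rep if n in sampled else n
    for u, v, d in graph.edges(data=True):
        a, b = label(u), label(v)
        if a == b:
            continue  # edge between two sampled nodes is discarded
        w = d["weight"]
        if g.has_edge(a, b):
            w = min(w, g[a][b]["weight"])
        g.add_edge(a, b, weight=w)
    return g

def segment(graph, brain_nodes, background_nodes):
    g = collapse(graph, set(brain_nodes), "BRAIN")
    g = collapse(g, set(background_nodes), "BACKGROUND")
    mst = nx.minimum_spanning_tree(g)
    path = nx.shortest_path(mst, "BRAIN", "BACKGROUND")
    # Cut the heaviest edge on the path between the two representatives.
    u, v = max(zip(path, path[1:]), key=lambda e: mst[e[0]][e[1]]["weight"])
    mst.remove_edge(u, v)
    return nx.node_connected_component(mst, "BRAIN")

# Toy chain 1-2-3-4 with a heavy edge between 2 and 3: the cut
# separates {BRAIN, 2} from {3, BACKGROUND}.
toy = nx.Graph()
toy.add_edge(1, 2, weight=1.0)
toy.add_edge(2, 3, weight=10.0)
toy.add_edge(3, 4, weight=1.0)
brain_side = segment(toy, brain_nodes=[1], background_nodes=[4])
```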
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Results</head><p>To assess the effectiveness of the proposed method, multiple datasets were utilized. The Neurofeedback Skull-stripped repository (NFBS) comprises 125 T1w MRI images from subjects aged between 21 and 45, representing a diverse range of clinical and subclinical psychiatric conditions. Additionally, a dataset used for testing and validation in <ref type="bibr" target="#b14">[15]</ref> consists of 625 images sourced from seven public datasets, each offering distinct modalities and resolutions.</p><p>The method was specifically tested on a subset of the <ref type="bibr" target="#b14">[15]</ref> dataset, focusing on 48 T1w MRI images from the IXI dataset<ref type="foot" target="#foot_0">1</ref>, 36 T2w MRI images from the FSM dataset <ref type="bibr" target="#b16">[17]</ref>, and 16 infant T1w MRIs from <ref type="bibr" target="#b17">[18]</ref>. The choice of datasets and subsets of images adheres to the methodology outlined in <ref type="bibr" target="#b14">[15]</ref>.</p><p>We compared our results to two widely used state-of-the-art methods: BET2 from the FSL Tool <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b18">19,</ref><ref type="bibr" target="#b19">20]</ref> and BSE from the BrainSuite Tool <ref type="bibr" target="#b20">[21,</ref><ref type="bibr" target="#b21">22]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Evaluation metrics</head><p>To evaluate the effectiveness of the proposed method, six metrics were employed: accuracy, precision, sensitivity, specificity, Jaccard index, and Dice coefficient. These metrics were computed by comparing the predicted MRI images with the ground truth. In this context, TP denotes voxels correctly identified as brain tissue, TN represents voxels correctly identified as non-brain tissue, FP indicates voxels mistakenly identified as brain tissue, and FN refers to voxels within the brain region inaccurately identified as non-brain tissue <ref type="bibr" target="#b2">[3]</ref>.</p><p>Voxel accuracy <ref type="bibr" target="#b22">[23,</ref><ref type="bibr" target="#b2">3]</ref> is defined in (2) and denotes the proportion of accurately classified voxels.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>𝐴𝑐𝑐𝑢𝑟𝑎𝑐𝑦 = (𝑇𝑃 + 𝑇𝑁) / (𝑇𝑃 + 𝑇𝑁 + 𝐹𝑃 + 𝐹𝑁)</head><p>(2)</p><p>Precision <ref type="bibr" target="#b23">[24,</ref><ref type="bibr" target="#b2">3]</ref> is computed with formula (3) and denotes the percentage of voxels classified as brain tissue that are truly brain tissue.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>𝑃𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛 = 𝑇𝑃 / (𝑇𝑃 + 𝐹𝑃)</head><p>(3)</p><p>Sensitivity <ref type="bibr" target="#b23">[24,</ref><ref type="bibr" target="#b2">3]</ref>, calculated as (4), measures the percentage of brain tissue voxels in the ground truth that are accurately detected as brain tissue in the prediction.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>𝑆𝑒𝑛𝑠𝑖𝑡𝑖𝑣𝑖𝑡𝑦 = 𝑇𝑃 / (𝑇𝑃 + 𝐹𝑁)</head><p>(4)</p><p>Specificity <ref type="bibr" target="#b23">[24,</ref><ref type="bibr" target="#b2">3]</ref>, determined with (5), represents the ratio of non-brain tissue voxels in the ground truth that are correctly identified as non-brain in the prediction.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>𝑆𝑝𝑒𝑐𝑖𝑓𝑖𝑐𝑖𝑡𝑦 = 𝑇𝑁 / (𝑇𝑁 + 𝐹𝑃)</head><p>(5)</p><p>The Jaccard index <ref type="bibr" target="#b23">[24,</ref><ref type="bibr" target="#b2">3]</ref>, defined as (6), measures the overlap between the ground truth and the segmentation result, divided by their union.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>𝐽𝑎𝑐𝑐𝑎𝑟𝑑 𝑖𝑛𝑑𝑒𝑥 = 𝑇𝑃 / (𝑇𝑃 + 𝐹𝑃 + 𝐹𝑁) (6)</head><p>The Dice coefficient <ref type="bibr" target="#b23">[24,</ref><ref type="bibr" target="#b2">3]</ref>, given by formula (7), quantifies the resemblance between the two sets of labels.</p><p>𝐷𝑖𝑐𝑒 𝑐𝑜𝑒𝑓𝑓𝑖𝑐𝑖𝑒𝑛𝑡 = 2𝑇𝑃 / (2𝑇𝑃 + 𝐹𝑃 + 𝐹𝑁) (7)</p></div>
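The six metrics (2)-(7) follow directly from the four voxel counts; a small illustrative helper (the function name is our own, with specificity in its standard TN/(TN+FP) form):

```python
def segmentation_metrics(tp, tn, fp, fn):
    """Compute the six evaluation metrics, Eqs. (2)-(7), from voxel counts."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }

m = segmentation_metrics(tp=80, tn=100, fp=20, fn=20)
```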
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Numerical results</head><p>Table <ref type="table" target="#tab_0">1</ref> provides a comparative analysis of Ellipsoid-GUBS, the newly proposed approach, BET2, and BSE on the NFBS <ref type="bibr" target="#b5">[6]</ref> dataset. The newly proposed approach improved sensitivity slightly, by 4%, compared to Ellipsoid-GUBS. For the NFBS dataset, the proposed approach shows slight improvements in almost all metrics compared to BET2, with the exception of sensitivity, meaning that BET2 identifies true positives more effectively. However, the new method demonstrates a notable 10% improvement in precision over BET2, indicating that it identifies positive cases more accurately. Overall, BSE demonstrated the best performance on this dataset. Table <ref type="table" target="#tab_1">2</ref> summarizes the results for infant T1w images from the QIN <ref type="bibr" target="#b24">[25]</ref> dataset. The novel approach shows slight improvements in all metrics compared to Ellipsoid-GUBS, including a 3% increase in both the Dice coefficient and the Jaccard index. Compared to BET2, the proposed method achieves a 14% increase in precision and a 13% increase in the Jaccard index. Furthermore, it improves sensitivity by 21% and the Dice coefficient by 6% compared to the BSE method.</p><p>Table <ref type="table">3</ref> reports the results of experiments on 48 MRIs from the IXI dataset. All methods yield comparable results, though BET2 achieves slightly better scores overall. Table <ref type="table" target="#tab_2">4</ref> presents the outcomes for the FSM T2w dataset. The new approach performs similarly to Ellipsoid-GUBS but shows improvements in all metrics compared to both BET2 and BSE.</p><p>We conducted a Repeated Measures ANOVA test on the four datasets to evaluate the performance differences among the four methods. 
The results indicated a significant difference (p &lt; 0.05) in most cases, with one exception (the Infant dataset). Additionally, a Wilcoxon test revealed that the proposed approach was significantly different from the BSE method on the FSM dataset. However, for the other datasets, the Wilcoxon tests did not show a significant difference for the proposed method, while some differences were significant for the other methods. This lack of significance in the Wilcoxon tests is likely due to the small sample size.</p></div>
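A paired Wilcoxon signed-rank test on per-image scores can be run with `scipy.stats.wilcoxon`; the Dice values below are synthetic, for illustration only, and do not reproduce the reported experiments.

```python
import numpy as np
from scipy.stats import wilcoxon

# Synthetic per-image Dice scores for two methods on the same 36 MRIs;
# the paired signed-rank test asks whether the difference is significant.
rng = np.random.default_rng(0)
dice_a = rng.uniform(0.70, 0.95, size=36)
dice_b = np.clip(dice_a + rng.normal(0.05, 0.02, size=36), 0.0, 1.0)
stat, p_value = wilcoxon(dice_a, dice_b)
```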
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Visual results</head><p>Figure <ref type="figure" target="#fig_1">2</ref> displays the visual results for each dataset. Although the numerical data may not fully highlight the differences, the visual comparison reveals the improvements achieved by the proposed method. For the NFBS dataset, the novel approach is closer to the ground truth than the Ellipsoid-GUBS method, which removed parts of the brain. In the case of the IXI dataset, a considerable part of the skull was removed, resulting in an overall better segmentation compared to the Ellipsoid-GUBS method. The Infant dataset also shows enhanced segmentation with the new approach, which leaves only a small part of the skull. Conversely, for the FSM dataset, the performance of the methods is similar.</p><p>The implementation was written in the Python programming language, and the experiments were run on an Intel Core i7-8750H CPU @ 2.20 GHz. In terms of timing, the algorithm takes almost 45 seconds to perform the segmentation using images at their original size.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Discussion</head><p>The proposed method introduces several key contributions, including a shift to geometric centering for improved stability across datasets and comprehensive validation on four diverse datasets, demonstrating its robustness. Additionally, benchmarking against two state-of-the-art techniques, BET2 and BSE, highlights its superior performance, particularly with T2-weighted and infant data.</p><p>The presented method underwent testing on four diverse datasets with varying resolutions and dimensions. Employing a minimum spanning tree, the novel approach extracted brain structures by constructing a graph from nodes within the brain and the background. Specifically, voxels situated within an ellipsoid centered at the image's geometric center were taken as nodes representing the brain. The method was compared to two state-of-the-art techniques, yielding comparable results and even achieving better segmentation on certain datasets.</p><p>While acknowledging the advancements achieved by the presented method and its independence from the type of MRI, there remains potential for further enhancement. The current results indicate a positive trajectory, yet ongoing refinement is crucial to push the boundaries of segmentation accuracy. Notably, the method stands out for its efficiency and speed, a considerable advantage in medical imaging, where swift processing is often imperative.</p><p>Although the numerical results of the new approach and the two state-of-the-art methods are comparable, the segmentation performance varies across datasets: for the FSM T2w dataset, BSE failed to segment the brain, whereas the new approach and BET2 performed better, with the new approach achieving the most accurate segmentation. However, BET2 still included some non-brain parts. 
For the Infant dataset, BET2 was unable to remove the skull, while the new approach and BSE both performed similarly, close to the ground truth. In the IXI dataset, the new approach retained some non-brain parts, while the other methods successfully removed the skull. On the NFBS dataset, BSE provided the best segmentation, followed by the new approach, whereas BET2 included some skull remnants. Overall, improvements of the new approach over Ellipsoid-GUBS are evident in all datasets except the FSM dataset, where the results are similar.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusions and Future work</head><p>This paper introduced an enhanced unsupervised graph-based segmentation method that exhibits improved results across the T1w datasets subjected to testing. When applied to the T2w dataset, the method shows outcomes comparable to the methods it was compared against. The segmentation process utilizes a Minimum Spanning Tree (MST), and node selection is performed using an ellipsoid centered within the image.</p><p>The proposed method was evaluated against two state-of-the-art methods, BET2 and BSE, and demonstrated superior performance. On the infant dataset, it achieved a 21% increase in sensitivity compared to the BSE method, along with a 14% improvement in precision and a 13% increase in the Jaccard index compared to BET2. On the NFBS dataset, the method showed a 10% improvement in precision over BET2. However, on the T2w dataset, the method provided only slight improvements compared to both BSE and BET2.</p><p>Future work includes expanding the method's evaluation to additional datasets and conducting comparisons with other state-of-the-art and deep learning methods. Additionally, there are plans to collaborate with a hospital to acquire real-world data.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Declaration on Generative AI</head><p>During the preparation of this work, the authors used ChatGPT and Grammarly in order to: Grammar and spelling check, Paraphrase and reword. After using this tools, the authors reviewed and edited the content as needed and take full responsibility for the publication's content.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Overview of the main steps of the proposed approach</figDesc><graphic coords="4,72.00,65.60,451.28,253.84" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Visual comparison on an axial slice for each dataset used in the evaluation: the first column shows the original slice, the second the ground truth, the third Ellipsoid-GUBS, the fourth the segmentation obtained with the proposed approach, the fifth the segmentation obtained with BET2, and the last the segmentation obtained with BSE</figDesc><graphic coords="8,72.00,65.61,451.27,538.61" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Results on the NFBS Dataset</figDesc><table><row><cell></cell><cell>Ellipsoid-GUBS</cell><cell>New approach</cell><cell>BET2</cell><cell>BSE</cell></row><row><cell>Accuracy</cell><cell>0.9520 ± 0.0237</cell><cell>0.9484 ± 0.0263</cell><cell>0.9322 ± 0.0434</cell><cell>0.9905 ± 0.0195</cell></row><row><cell>Precision</cell><cell>0.8218 ± 0.1359</cell><cell>0.7795 ± 0.1388</cell><cell>0.6783 ± 0.1870</cell><cell>0.9483 ± 0.0889</cell></row><row><cell>Jaccard</cell><cell>0.6105 ± 0.1529</cell><cell>0.6138 ± 0.0912</cell><cell>0.5979 ± 0.1489</cell><cell>0.9202 ± 0.0872</cell></row><row><cell>Dice</cell><cell>0.7437 ± 0.1534</cell><cell>0.7565 ± 0.0748</cell><cell>0.7371 ± 0.1215</cell><cell>0.9545 ± 0.0871</cell></row><row><cell>Specificity</cell><cell>0.9781 ± 0.0267</cell><cell>0.9700 ± 0.0309</cell><cell>0.9419 ± 0.0489</cell><cell>0.9937 ± 0.0133</cell></row><row><cell>Sensitivity</cell><cell>0.7195 ± 0.1806</cell><cell>0.7557 ± 0.0803</cell><cell>0.8472 ± 0.0541</cell><cell>0.9615 ± 0.0892</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Results on the Infant T1w dataset</figDesc><table><row><cell></cell><cell>Ellipsoid-GUBS</cell><cell>New approach</cell><cell>BET2</cell><cell>BSE</cell></row><row><cell>Accuracy</cell><cell>0.9553 ± 0.0625</cell><cell>0.9583 ± 0.0635</cell><cell>0.9438 ± 0.0428</cell><cell>0.9414 ± 0.0942</cell></row><row><cell>Precision</cell><cell>0.6038 ± 0.1982</cell><cell>0.6408 ± 0.1849</cell><cell>0.5085 ± 0.1957</cell><cell>0.6566 ± 0.3646</cell></row><row><cell>Jaccard</cell><cell>0.6006 ± 0.1946</cell><cell>0.6374 ± 0.1816</cell><cell>0.5075 ± 0.1949</cell><cell>0.6385 ± 0.3540</cell></row><row><cell>Dice</cell><cell>0.7306 ± 0.1646</cell><cell>0.7615 ± 0.1548</cell><cell>0.6496 ± 0.1853</cell><cell>0.7031 ± 0.3588</cell></row><row><cell>Specificity</cell><cell>0.9530 ± 0.0673</cell><cell>0.9561 ± 0.0683</cell><cell>0.9411 ± 0.0443</cell><cell>0.9504 ± 0.0775</cell></row><row><cell>Sensitivity</cell><cell>0.9951 ± 0.0081</cell><cell>0.9948 ± 0.0081</cell><cell>0.9981 ± 0.0036</cell><cell>0.7874 ± 0.3795</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1a"><head>Table 3</head><label>3</label><figDesc>Results on the IXI T1w dataset</figDesc><table><row><cell></cell><cell>Ellipsoid-GUBS</cell><cell>New approach</cell><cell>BET2</cell><cell>BSE</cell></row><row><cell>Accuracy</cell><cell>0.9294 ± 0.0382</cell><cell>0.9275 ± 0.0379</cell><cell>0.9606 ± 0.0307</cell><cell>0.9672 ± 0.0460</cell></row><row><cell>Precision</cell><cell>0.7222 ± 0.1307</cell><cell>0.7128 ± 0.0129</cell><cell>0.8377 ± 0.1179</cell><cell>0.9283 ± 0.1643</cell></row><row><cell>Jaccard</cell><cell>0.6807 ± 0.1118</cell><cell>0.6739 ± 0.1105</cell><cell>0.7945 ± 0.1109</cell><cell>0.8426 ± 0.1361</cell></row><row><cell>Dice</cell><cell>0.8047 ± 0.0796</cell><cell>0.7999 ± 0.0801</cell><cell>0.8808 ± 0.0760</cell><cell>0.9074 ± 0.0974</cell></row><row><cell>Specificity</cell><cell>0.9290 ± 0.0484</cell><cell>0.9263 ± 0.0475</cell><cell>0.9643 ± 0.0351</cell><cell>0.9770 ± 0.0577</cell></row><row><cell>Sensitivity</cell><cell>0.9309 ± 0.0455</cell><cell>0.9338 ± 0.4330</cell><cell>0.9393 ± 0.0193</cell><cell>0.9099 ± 0.0294</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 4</head><label>4</label><figDesc>Results on the FMS T2w dataset for the graph-based approach, BET2 and BSE</figDesc><table><row><cell></cell><cell>Ellipsoid-GUBS</cell><cell>New approach</cell><cell>BET2</cell><cell>BSE</cell></row><row><cell>Accuracy</cell><cell>0.9899 ± 0.0017</cell><cell>0.9883 ± 0.0026</cell><cell>0.9836 ± 0.0142</cell><cell>0.8331 ± 0.0503</cell></row><row><cell>Precision</cell><cell>0.9734 ± 0.0155</cell><cell>0.9556 ± 0.0251</cell><cell>0.8668 ± 0.0908</cell><cell>0.2019 ± 0.2646</cell></row><row><cell>Jaccard</cell><cell>0.8940 ± 0.0139</cell><cell>0.8806 ± 0.0208</cell><cell>0.8564 ± 0.0892</cell><cell>0.1633 ± 0.2038</cell></row><row><cell>Dice</cell><cell>0.9439 ± 0.0078</cell><cell>0.9363 ± 0.0119</cell><cell>0.9200 ± 0.0600</cell><cell>0.2337 ± 0.2724</cell></row><row><cell>Specificity</cell><cell>0.9973 ± 0.0016</cell><cell>0.9955 ± 0.0028</cell><cell>0.9834 ± 0.0155</cell><cell>0.8841 ± 0.0424</cell></row><row><cell>Sensitivity</cell><cell>0.9164 ± 0.0086</cell><cell>0.9179 ± 0.0080</cell><cell>0.9863 ± 0.0060</cell><cell>0.3258 ± 0.3760</cell></row></table></figure>
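The metrics reported in Tables 1-4 (accuracy, precision, Jaccard, Dice, specificity, sensitivity) follow their standard definitions over binary segmentation masks. A minimal, illustrative sketch of those definitions (not the authors' evaluation code) over flattened 0/1 masks:

```python
def confusion_counts(pred, truth):
    """TP/FP/FN/TN counts over two equal-length binary (0/1) masks."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    return tp, fp, fn, tn

def segmentation_metrics(pred, truth):
    """Standard overlap metrics; zero denominators fall back to 0.0."""
    tp, fp, fn, tn = confusion_counts(pred, truth)
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "precision":   tp / (tp + fp) if tp + fp else 0.0,
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "jaccard":     tp / (tp + fp + fn) if tp + fp + fn else 0.0,
        "dice":        2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0,
    }
```

Dice and Jaccard are monotonically related (Dice = 2J / (1 + J)), which is why the two rows rank the methods identically in every table.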
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">ℎ𝑡𝑡𝑝 ∶ //𝑏𝑟𝑎𝑖𝑛 − 𝑑𝑒𝑣𝑒𝑙𝑜𝑝𝑚𝑒𝑛𝑡.𝑜𝑟𝑔/𝑖𝑥𝑖 − 𝑑𝑎𝑡𝑎𝑠𝑒𝑡/</note>
		</body>
		<back>

			<div type="availability">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Data availability</head><p>The datasets utilized in this study can be accessed from their original websites: NFBS: http://preprocessed-connectomes-project.org/NFB_skullstripped/ SYNTHSTRIP (which contains the images for the IXI dataset, Infant T1w dataset, FMS T2w dataset): https://surfer.nmr.mgh.harvard.edu/docs/synthstrip/#dataset</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Fast robust automated brain extraction</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Smith</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Hum Brain Mapp</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page" from="143" to="155" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">An improved bet method for brain segmentation</title>
		<author>
			<persName><forename type="first">L</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Zwiggelaar</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICPR.2014.555</idno>
	</analytic>
	<monogr>
		<title level="m">2014 22nd International Conference on Pattern Recognition</title>
				<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="3221" to="3226" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">An 3d mri unsupervised graph-based skull stripping algorithm</title>
		<author>
			<persName><forename type="first">M</forename><surname>Popa</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.procs.2023.10.157</idno>
		<ptr target="https://doi.org/10.1016/j.procs.2023.10.157" />
	</analytic>
	<monogr>
		<title level="m">27th International Conference on Knowledge Based and Intelligent Information and Engineering Sytems</title>
				<meeting><address><addrLine>KES</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023. 2023</date>
			<biblScope unit="volume">225</biblScope>
			<biblScope unit="page" from="1682" to="1690" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Towards an improved unsupervised graph-based mri brain segmentation method</title>
		<author>
			<persName><forename type="first">M</forename><surname>Popa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Andreica</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Cooperative Information Systems</title>
				<editor>
			<persName><forename type="first">M</forename><surname>Sellami</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M.-E</forename><surname>Vidal</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Van Dongen</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">W</forename><surname>Gaaloul</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Panetto</surname></persName>
		</editor>
		<meeting><address><addrLine>Nature Switzerland, Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="480" to="487" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Gubs: Graph-based unsupervised brain segmentation in mri images</title>
		<author>
			<persName><forename type="first">S</forename><surname>Mayala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Herdlevaer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">B</forename><surname>Haugsøen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Anandan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Blaser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gavasso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Brun</surname></persName>
		</author>
		<idno type="DOI">10.3390/jimaging8100262</idno>
		<ptr target="https://www.mdpi.com/2313-433X/8/10/262.doi:10.3390/jimaging8100262" />
	</analytic>
	<monogr>
		<title level="j">Journal of Imaging</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">The preprocessed connectomes project repository of manually corrected skull-stripped T1-weighted anatomical MRI data</title>
		<author>
			<persName><forename type="first">B</forename><surname>Puccio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Pooley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Pellman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">C</forename><surname>Taverna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Craddock</surname></persName>
		</author>
		<idno type="DOI">10.1186/s13742-016-0150-5</idno>
		<ptr target="016-0150-5" />
	</analytic>
	<monogr>
		<title level="j">GigaScience</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page">13742</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Skull stripping using graph cuts</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Sadananthan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">W</forename><surname>Chee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Zagorodnov</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neuroimage.2009.08.050</idno>
		<ptr target="https://doi.org/10.1016/j.neuroimage.2009.08.050" />
	</analytic>
	<monogr>
		<title level="j">NeuroImage</title>
		<imprint>
			<biblScope unit="volume">49</biblScope>
			<biblScope unit="page" from="225" to="239" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Bet2: Mr-based estimation of brain, skull and scalp surfaces</title>
		<author>
			<persName><forename type="first">M</forename><surname>Jenkinson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Eleventh Annual Meeting of the Organization for Human Brain Mapping</title>
				<imprint>
			<date type="published" when="2005">2005. 2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">FSL</title>
		<author>
			<persName><forename type="first">M</forename><surname>Jenkinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">F</forename><surname>Beckmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">E</forename><surname>Behrens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">W</forename><surname>Woolrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Smith</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neuroimage.2011.09.015</idno>
		<ptr target="https://doi.org/10.1016/j.neuroimage.2011.09.015,20YEARSOFfMRI" />
	</analytic>
	<monogr>
		<title level="j">NeuroImage</title>
		<imprint>
			<biblScope unit="volume">62</biblScope>
			<biblScope unit="page" from="782" to="790" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
	<note>Fsl</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Automatic brain extraction from mri of human head scans using helmholtz free energy principle and morphological operations</title>
		<author>
			<persName><forename type="first">K</forename><surname>Ezhilarasan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Praveenkumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Somasundaram</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kalaiselvi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Magesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kiruthika</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jeevarekha</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.bspc.2020.102270</idno>
		<ptr target="https://doi.org/10.1016/j.bspc.2020.102270" />
	</analytic>
	<monogr>
		<title level="j">Biomedical Signal Processing and Control</title>
		<imprint>
			<biblScope unit="volume">64</biblScope>
			<biblScope unit="page">102270</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A general skull stripping of multiparametric brain mris using 3d convolutional neural network</title>
		<author>
			<persName><forename type="first">L</forename><surname>Pei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">H M</forename><surname>Tahon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zenkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Alkarawi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kamal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yilmaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Er</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ak</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Scientific Reports</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page">10826</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Methods on skull stripping of mri head scan images-a review</title>
		<author>
			<persName><forename type="first">K</forename><surname>Palanisamy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Prasath</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10278-015-9847-8</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Digital Imaging</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Exploring graph-based neural networks for automatic brain tumor segmentation</title>
		<author>
			<persName><forename type="first">C</forename><surname>Saueressig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Berkley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Kang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Munbodh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Singh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">From Data to Models and Back</title>
				<editor>
			<persName><forename type="first">J</forename><surname>Bowles</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Broccia</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Nanni</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="18" to="37" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Slic superpixels compared to stateof-the-art superpixel methods</title>
		<author>
			<persName><forename type="first">R</forename><surname>Achanta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shaji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lucchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Fua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Süsstrunk</surname></persName>
		</author>
		<idno type="DOI">10.1109/TPAMI.2012.120</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="page" from="2274" to="2282" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Synthstrip: skull-stripping for any brain image</title>
		<author>
			<persName><forename type="first">A</forename><surname>Hoopes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Mora</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">V</forename><surname>Dalca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Fischl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hoffmann</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neuroimage.2022.119474</idno>
		<ptr target="https://doi.org/10.1016/j.neuroimage.2022.119474" />
	</analytic>
	<monogr>
		<title level="j">NeuroImage</title>
		<imprint>
			<biblScope unit="volume">260</biblScope>
			<biblScope unit="page">119474</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Unsupervised domain adaptation of mri skull-stripping trained on adult data to newborns</title>
		<author>
			<persName><forename type="first">A</forename><surname>Omidi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mohammadshahi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Gianchandani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>King</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Leijser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Souza</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)</title>
				<meeting>the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)</meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="7718" to="7727" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">A deep learning toolbox for automatic segmentation of subcortical limbic structures from mri images</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">N</forename><surname>Greve</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Billot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Cordero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hoopes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hoffmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">V</forename><surname>Dalca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Fischl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Iglesias</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Augustinack</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neuroimage.2021.118610</idno>
		<ptr target="https://doi.org/10.1016/j.neuroimage.2021.118610" />
	</analytic>
	<monogr>
		<title level="j">NeuroImage</title>
		<imprint>
			<biblScope unit="volume">244</biblScope>
			<biblScope unit="page">118610</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">A freesurfer-compliant consistent manual segmentation of infant brains spanning the 0-2 year age range</title>
		<author>
			<persName><forename type="first">K</forename><surname>De Macedo Rodrigues</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ben-Avi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">D</forename><surname>Sliva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>-S. Choe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Drottar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Fischl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">E</forename><surname>Grant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zöllei</surname></persName>
		</author>
		<idno type="DOI">10.3389/fnhum.2015.00021</idno>
		<ptr target="https://www.frontiersin.org/articles/10.3389/fnhum.2015.00021.doi:10.3389/fnhum.2015.00021" />
	</analytic>
	<monogr>
		<title level="j">Frontiers in Human Neuroscience</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Advances in functional and structural mr image analysis and implementation as fsl</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Jenkinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">W</forename><surname>Woolrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">F</forename><surname>Beckmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">E</forename><surname>Behrens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Johansen-Berg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">R</forename><surname>Bannister</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>De Luca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Drobnjak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">E</forename><surname>Flitney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">K</forename><surname>Niazy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Saunders</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Vickers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">De</forename><surname>Stefano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Brady</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">M</forename><surname>Matthews</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neuroimage.2004.07.051</idno>
		<ptr target="mathematicsinBrainImaging" />
	</analytic>
	<monogr>
		<title level="j">NeuroImage</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="S208" to="S219" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Bayesian analysis of neuroimaging data in fsl</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">W</forename><surname>Woolrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Jbabdi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Patenaude</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chappell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Makni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Behrens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Beckmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Jenkinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Smith</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neuroimage.2008.10.055</idno>
		<ptr target="mathematicsinBrainImaging" />
	</analytic>
	<monogr>
		<title level="j">NeuroImage</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="page" from="S173" to="S186" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Surface-based labeling of cortical anatomy using a deformable atlas</title>
		<author>
			<persName><forename type="first">S</forename><surname>Sandor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Leahy</surname></persName>
		</author>
		<idno type="DOI">10.1109/42.552054</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Medical Imaging</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page" from="41" to="54" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Brainsuite: An automated cortical surface identification tool</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">W</forename><surname>Shattuck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">M</forename><surname>Leahy</surname></persName>
		</author>
		<idno type="DOI">10.1016/S1361-8415(02)00054-3</idno>
		<ptr target="https://doi.org/10.1016/S1361-8415(02)00054-3" />
	</analytic>
	<monogr>
		<title level="j">Medical Image Analysis</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="129" to="142" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Image segmentation evaluation: A survey of unsupervised methods</title>
		<author>
			<persName><forename type="first">H</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Fritts</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Goldman</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.cviu.2007.08.003</idno>
		<ptr target="https://doi.org/10.1016/j.cviu.2007.08.003" />
	</analytic>
	<monogr>
		<title level="j">Computer Vision and Image Understanding</title>
		<imprint>
			<biblScope unit="volume">110</biblScope>
			<biblScope unit="page" from="260" to="280" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Taha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hanbury</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">BMC Medical Imaging</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">The Cancer Imaging Archive (TCIA): Maintaining and operating a public information repository</title>
		<author>
			<persName><forename type="first">K</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Vendt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Freymann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kirby</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Koppel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Moore</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Phillips</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Maffitt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pringle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Tarbox</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Prior</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10278-013-9622-7</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Digital Imaging</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
