<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>NeuroImage</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.3389/fnhum.2015.00021</article-id>
      <title-group>
        <article-title>Improving unsupervised graph-based skull stripping: enhancements and comparative analysis with state-of-the-art methods</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Maria Popa</string-name>
          <email>maria.popa@ubbcluj.ro</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anca Andreica</string-name>
          <email>anca.andreica@ubbcluj.ro</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          ,
          <addr-line>400084, Cluj-Napoca</addr-line>
          ,
          <country country="RO">Romania</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Skull Stripping</institution>
          ,
          <addr-line>Brain Extraction, Graph-Based Segmentation, Unsupervised Segmentation, BET, BSE</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <volume>260</volume>
      <issue>2022</issue>
      <fpage>7718</fpage>
      <lpage>7727</lpage>
      <abstract>
        <p>Brain disorders are increasingly prevalent today, making accurate brain segmentation essential for effective treatment and recovery. This paper introduces an enhanced unsupervised graph-based brain segmentation method that employs an ellipsoid to select the nodes forming the graph. The method was rigorously evaluated on T1 and T2 modalities using four diverse datasets: the complete NFBS dataset, 48 MRIs from the IXI dataset, 16 images featuring infant data from the QIN dataset, and 36 images from the FMS dataset. Comparative analysis with two widely used state-of-the-art approaches, BET2 and BSE, revealed that the proposed method significantly improved segmentation results. On the infant dataset, the method achieved a 21% increase in sensitivity compared to BSE, along with a 14% improvement in precision and a 13% increase in the Jaccard index compared to BET2. On the NFBS dataset, it demonstrated a 10% improvement in precision over BET2. However, on the T2-weighted dataset, only slight improvements were observed compared to both BSE and BET2. This advancement in segmentation techniques holds promise for better diagnosis and treatment of various brain disorders, potentially leading to improved patient outcomes and more efficient clinical workflows.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>According to the World Health Organization (WHO), approximately 38 million people are affected by
Alzheimer’s disease, the most prevalent form of dementia. Epilepsy, a chronic noncommunicable brain
disorder, can impact individuals of all ages, with an estimated 50 million people worldwide experiencing
this condition. Accurate segmentation is a crucial step in early detection and regular examinations of
brain disorders, as it is essential for identifying suitable treatments and ultimately promoting healing.
The extensive use of MRI, a painless and rapid diagnostic tool, is prevalent in screening. The necessity
for precise computer-assisted systems arises because manual segmentation is time-consuming and
imposes additional workload on the medical staff.</p>
      <p>Brain segmentation, also known as skull stripping, involves the process of separating the skull
from the brain. While various methods have been proposed in the literature, both supervised and
unsupervised, for this purpose, the absence of a universally perfect method persists due to the diverse
range of systems and the multitude of brain-related issues.</p>
      <p>
        The Brain Extraction Tool (BET) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] is an unsupervised method used for skull stripping. Its widespread
adoption is due to its speed and robustness. The algorithm is based on surface tessellation, approximating
the brain with a sphere, and extracting it through 1000 iterations. One drawback of the method is its
inability to effectively segment the bottom and top of the brain, leading to the inclusion of non-brain
tissue in the segmentation. Although BET* [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] attempts to address these issues by reducing the number
of iterations to 50 and approximating the brain with an ellipsoid, it still incorporates non-brain tissue
into the segmentation [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ].
      </p>
      <p>
        GUBS [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] is an unsupervised graph-based segmentation method that represents MRI volumes as a
weighted graph, with nodes corresponding to voxels and edges capturing relations between them. The
weight of each edge is determined by calculating the absolute difference in intensity between the two
nodes. The algorithm then classifies voxels into three categories: nodes inside the brain, nodes
in the non-brain tissue (skull), and nodes from the background. Subsequently, a minimum spanning tree
is constructed by collapsing the entire graph onto the selected nodes. The node selection process depends
on the dataset and on user interaction. Analyzing the dataset is crucial for determining the threshold
above which nodes are selected, as well as for establishing the boundary of the skull [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
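The graph construction described above can be sketched in a few lines: every voxel becomes a node, face-adjacent voxels are joined by an edge, and each edge is weighted by the absolute intensity difference. This is a minimal illustration under those stated rules, not the authors' implementation; the function name `volume_to_edges` is hypothetical.

```python
import numpy as np

def volume_to_edges(vol):
    """Build the weighted edge list for a 3D volume: each voxel is a node
    (indexed by its flattened position) and each pair of face-adjacent
    voxels is joined by an edge weighted by their absolute intensity
    difference."""
    edges = []
    dims = vol.shape
    idx = np.arange(vol.size).reshape(dims)
    for axis in range(3):
        # Pair every voxel with its immediate neighbour along this axis.
        a = vol.take(range(dims[axis] - 1), axis=axis)
        b = vol.take(range(1, dims[axis]), axis=axis)
        w = np.abs(a.astype(np.int64) - b.astype(np.int64))
        ia = idx.take(range(dims[axis] - 1), axis=axis)
        ib = idx.take(range(1, dims[axis]), axis=axis)
        edges.extend(zip(ia.ravel(), ib.ravel(), w.ravel()))
    return edges

vol = np.array([[[0, 10], [20, 20]], [[0, 10], [25, 20]]])
edges = volume_to_edges(vol)
# A 2x2x2 volume has 12 face-adjacent voxel pairs.
print(len(edges))
```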
      <p>
        Some recent studies [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] aim to overcome these problems by eliminating user interaction and
dependency on parameters for each dataset. These methods reduce the number of node categories to just
two: nodes inside the brain and nodes in the background. The node selection approach eliminates user
interaction by approximating the brain with either a sphere [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] or an ellipsoid [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. In both approaches,
the center of the geometric bodies is set at the center of the mass of the image. These methods show
improved results compared to the GUBS approach and halve the time needed to process one MRI.
However, the method was only tested on the NFBS [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] dataset, and it still includes non-brain tissue in the
segmentation.
      </p>
      <p>
        The paper introduces an enhanced 3D unsupervised graph-based method for brain segmentation.
By addressing the limitations of Ellipsoid-GUBS [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and maintaining zero user interaction, it achieves
heightened segmentation accuracy. The method is tested across various datasets and compared to two
state-of-the-art methods, demonstrating improved results. The following are the key contributions of
our work:
1. Geometric Centering: The novelty of the proposed method lies in the shift from center-of-mass
placement of the ellipsoid to a fixed geometric center within the image. This adjustment reduces
sensitivity to asymmetrical or irregular mass distributions, leading to more stable segmentation
outcomes across varying datasets.
2. Comprehensive Validation Across Datasets: Another key contribution is the comprehensive
evaluation of the method on four diverse datasets, showcasing its robustness and adaptability in
comparison to the previous version, which was tested on a single dataset.
3. Benchmarking Against State-of-the-Art: Including comparisons with two state-of-the-art
methods is crucial for benchmarking and demonstrates that our approach has notable advantages.
Specifically, it shows superior robustness (working effectively with the T2 modality and infant data)
and offers accurate segmentation.
      </p>
      <p>The remaining sections of the paper are organized as follows: Section 2 provides an overview of
related work, Section 3 introduces a new approach, Section 4 outlines the experiments and their results,
Section 5 delves into a discussion, and Section 6 concludes with remarks on future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>Graph-based applications have gained prominence in modern methodologies due to their robust
capability to depict complex relationships within data. These applications find versatile utility across a range
of fields, including social network analysis and biological systems modeling, where the interconnected
nature of entities can be effectively represented and analyzed. The adaptability of graph structures
positions them as invaluable tools for tackling intricate problems that require an understanding and
utilization of complex connections among different data points. Consequently, the prevalence of
graph-based approaches has grown in modern data-driven applications, underscoring their significance and
effectiveness in capturing and interpreting intricate data relationships.</p>
      <p>
        Graph-CUTS [
        <xref ref-type="bibr" rid="ref3 ref7">7, 3</xref>
        ] stands out as a widely adopted skull stripping method employing morphological
operations for brain segmentation. In the segmentation phase, region growing is employed to estimate
the white matter volume. The subsequent step involves transforming the resulting MRI into a graph
and applying graph cuts to eliminate narrow connections. One drawback of this method is its
dependency on region growing, which can be time-consuming.
      </p>
      <p>
        GUBS [
        <xref ref-type="bibr" rid="ref3 ref5">5, 3</xref>
        ], along with the methodologies introduced in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], employs a Minimum Spanning
Tree (MST) for brain segmentation. The MRI is initially translated into a graph, and subsequently, an
MST is constructed by collapsing the nodes. In [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], the drawbacks of GUBS are acknowledged, and a
user-friendly interaction approach is introduced, resulting in enhanced outcomes compared to GUBS.
While the results are not flawless, the method demonstrates improvement by sampling nodes within an
ellipsoid, as outlined in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        BET2 [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], an enhanced version of the BET algorithm, is part of the FSL Tool suite [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. BET2 is optimized
for high-resolution T1 and T2-weighted images and ideally requires paired T1 and T2-weighted scans
with a resolution of approximately 2 mm. Initially, the brain surface is identified in the T1 image using
the original BET algorithm, after which the T2 image is registered to the T1 scan [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ]. Compared to
BET, BET2 achieves more accurate segmentation results [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
        The Brain Surface Extraction (BSE) method utilizes anisotropic diffusion to enhance brain boundaries
[
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ]. Edge detection is performed with a 2D Marr-Hildreth operator, combining low-pass filtering
using a Gaussian kernel and locating zero crossings in the Laplacian of the filtered image. BSE
disconnects the brain from surrounding tissues via morphological erosion. Once the brain is identified
through a connected-component operation, a corresponding dilation is applied to reverse the effects of
erosion. Finally, BSE uses a morphological closing operation to fill small pits and holes on the brain
surface. The method relies on fixed parameters, including diffusion iterations, diffusion constant, edge
constant, and erosion size. However, since BSE is edge-based, it can struggle with images that have
poor contrast [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
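The Marr-Hildreth step described above (Gaussian low-pass filtering followed by zero crossings of the Laplacian) can be sketched with SciPy's combined Laplacian-of-Gaussian filter. This illustrates the operator in general, under an assumed sigma, not BSE's exact parameters; the function name is hypothetical.

```python
import numpy as np
from scipy import ndimage

def marr_hildreth_edges(img, sigma=2.0):
    """Marr-Hildreth edge detection: low-pass filter with a Gaussian, take
    the Laplacian, and mark the zero crossings of the result."""
    log = ndimage.gaussian_laplace(img.astype(float), sigma=sigma)
    # A zero crossing is where the filtered image changes sign within a
    # 3x3 neighbourhood (local min negative, local max positive).
    mn = ndimage.minimum_filter(log, size=3)
    mx = ndimage.maximum_filter(log, size=3)
    return (mn < 0) & (mx > 0)

# Synthetic image: a bright square on a dark background yields a closed edge.
img = np.zeros((64, 64))
img[16:48, 16:48] = 100.0
edges = marr_hildreth_edges(img)
print(edges.any())
```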
      <p>
        Supervised approaches also leverage graphs. In [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], a supervised graph-based neural network (GNN)
is employed for brain tumor segmentation. The 3D MRI undergoes division into supervoxels using the
Simple Linear Iterative Clustering (SLIC) algorithm [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] to prevent the graph from becoming overly
complex. SLIC is executed with 15,000 clusters. To mitigate graph and network complexity, the graph
is constructed solely with the supervoxels generated by SLIC. Despite showcasing promising results,
this method demands several hours for training and relies on labeled data.
      </p>
      <p>
        SynthStrip [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] is an innovative supervised deep learning method for skull stripping, utilizing a U-Net
architecture. Trained on datasets with diverse resolutions and dimensions, it demonstrates superior
performance compared to existing methods. However, despite its advancements, there is room for
improvement as the method, in certain cases, includes non-brain tissue in the segmentation.
      </p>
      <p>Deep learning methods are increasingly used in various segmentation tasks, showing promising
results. The U-Net architecture, in particular, is widely employed for medical image segmentation. In
[16], a modified U-Net model is applied to segment newborn brain images by training on 243 adult scans
and only 5 newborn scans. This approach demonstrates good results and is compared to SynthStrip.
However, as mentioned in [16], manually labeling a single brain volume takes approximately 8 hours,
which is time-consuming for medical staff. For the skull-stripping task, where brain structures are
relatively consistent, unsupervised methods, such as the proposed approach, could be a promising
alternative.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Outlined method</title>
      <p>
        The method described here is founded on the concept presented in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. It involves transforming each MRI
into a weighted graph, where the voxels in the MRI serve as nodes, and adjacent nodes are connected by
edges. The weight of each edge is determined by calculating the absolute difference in intensity between
the two connected nodes. Similar to [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], the segmentation involves the utilization of a
Minimum Spanning Tree (MST). In contrast to the approach presented in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and akin to the methods
in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], nodes are chosen from two categories—specifically, nodes within the brain and nodes
from the background. Nodes within the brain are selected within an ellipsoid, similar to the approach
in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The key distinction from [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] lies in the fact that the node selection employs an ellipsoid centered
at the center of the image, rather than at the center of mass as proposed in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The method is divided into
the following steps, which are also illustrated in Figure 1:
• Image processing
• Nodes sampling
• MST construction &amp; Brain extraction
      </p>
      <sec id="sec-3-1">
        <title>3.1. Image processing</title>
        <p>During the processing phase, a single operation takes place: binary closing is applied to fill gaps,
a step needed later when sampling nodes in the background. To preserve resolution and detail,
the images are used at their original dimensions.</p>
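The processing step above can be sketched with SciPy's morphology routines; the structuring element and the synthetic volume here are illustrative, not the paper's exact settings.

```python
import numpy as np
from scipy import ndimage

# Binary closing fills small gaps in a binary mask while the volume keeps
# its original dimensions throughout.
vol = np.zeros((20, 20, 20))
vol[5:15, 5:15, 5:15] = 1.0
vol[10, 10, 10] = 0.0            # a small hole inside the foreground
mask = vol > 0.5
closed = ndimage.binary_closing(mask, structure=np.ones((3, 3, 3)))
print(bool(closed[10, 10, 10]))  # the gap is filled
```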
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Nodes sampling</title>
        <p>
          Constructing a graph involves the creation of nodes and edges. In this scenario, voxels represent
the nodes, and an edge is established between every two adjacent nodes. Nodes are selected from
two distinct categories: nodes within the brain and nodes in the background. The background nodes
follow the methodology outlined in [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. Initially, a binary image is computed using Otsu thresholding.
Subsequently, as described in the processing phase, binary closing is applied. After obtaining the
transformed binary image, 20,000 voxels are randomly selected from the six faces.
        </p>
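The background-node sampling just described can be sketched as follows. This is a minimal illustration, not the authors' code: a caller-supplied threshold stands in for the Otsu threshold the paper uses, and the function name and seed are hypothetical.

```python
import numpy as np
from scipy import ndimage

def sample_background_nodes(vol, threshold, n=20000, seed=0):
    """Binarise the volume, apply binary closing, then draw random voxels
    from the six faces of the volume that fall outside the closed
    foreground mask."""
    rng = np.random.default_rng(seed)
    fg = ndimage.binary_closing(vol > threshold)
    face = np.zeros(vol.shape, dtype=bool)
    for axis in range(3):
        face[(slice(None),) * axis + (0,)] = True    # first slab on this axis
        face[(slice(None),) * axis + (-1,)] = True   # last slab on this axis
    candidates = np.flatnonzero(face & ~fg)          # face voxels outside the head
    return rng.choice(candidates, size=min(n, candidates.size), replace=False)

vol = np.zeros((16, 16, 16))
vol[4:12, 4:12, 4:12] = 100.0   # a bright "head" in the middle
nodes = sample_background_nodes(vol, threshold=50.0, n=100)
print(len(nodes))
```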
        <p>
          Sampling nodes from the brain involves constructing an ellipsoid. The inspiration for using an
ellipsoid is drawn from [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], where the brain was approximated using this geometric shape. In previous
methods, the center of these geometrical bodies was positioned at the center of mass in the image, a
concept derived from the BET method. However, in certain images, the center of mass might be located
in a corner, leading to the oversight of crucial parts of the brain during node sampling. To address
this issue, the proposed approach sets the center of the ellipsoid at the image’s center, calculated for
each axis. For an MRI with dimensions (X, Y, Z), the center of the image is located at
(X/2, Y/2, Z/2).
        </p>
        <p>Any node along the x, y, and z axes that satisfies condition (1), the ellipsoid equation, is
identified as a node within the brain. The radii are determined by considering the volume of voxels
that surpass the multi-Otsu threshold. Increasing the dimensions of the ellipsoid axes increases the
number of selected nodes, which yields a more complex graph. On the other hand, reducing the
dimensions results in a graph that is too small.</p>
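The brain-node selection can be sketched as a boolean mask over the volume: voxels satisfying the ellipsoid condition, with the ellipsoid centred at the geometric centre of the image. In this sketch the radii are passed in directly rather than derived from the multi-Otsu foreground volume, and the function name is hypothetical.

```python
import numpy as np

def brain_nodes_in_ellipsoid(shape, radii):
    """Return a boolean mask of voxels inside an ellipsoid centred at the
    geometric centre of the image (not the centre of mass)."""
    center = [s / 2 for s in shape]
    grids = np.ogrid[tuple(slice(0, s) for s in shape)]
    # Ellipsoid condition: sum of squared normalised offsets <= 1.
    d = sum(((g - c) / r) ** 2 for g, c, r in zip(grids, center, radii))
    return d <= 1.0

mask = brain_nodes_in_ellipsoid((32, 32, 32), radii=(10, 12, 8))
print(mask[16, 16, 16], mask[0, 0, 0])
```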
      </sec>
      <sec id="sec-3-3">
        <title>3.3. MST construction &amp; Brain segmentation</title>
        <p>
          MST construction follows a similar approach as described in [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] and [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. Constructing the MST involves
transforming the initial graph, representing the MRI into a smaller graph that can be processed more
easily. Nodes from the two categories are combined into a single representative node. The graph
transformation involves collapsing nodes based on the following rules: if two nodes connected by an
edge are part of the multitude of sampling nodes, the edge is discarded, and both nodes are replaced by
a single representative node for each category. Subsequently, for the remaining edges containing one
node from the sampled category, that node is replaced by the single representative node, and all other
nodes from the sampled category are removed except for the single representative one.
        </p>
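The collapse rule above can be sketched on a toy graph, assuming the networkx library is available: nodes in a sampled set are replaced by one representative node, edges internal to the set are discarded, and edges leaving the set are re-attached to the representative. Node and category names here are illustrative.

```python
import networkx as nx

def collapse(G, sampled, rep):
    """Collapse all nodes in `sampled` into the single representative `rep`,
    discarding edges internal to the sampled set and keeping the lightest
    edge when several collapse onto the same pair."""
    H = nx.Graph()
    relabel = {n: (rep if n in sampled else n) for n in G.nodes}
    for u, v, w in G.edges(data="weight"):
        ru, rv = relabel[u], relabel[v]
        if ru == rv:
            continue  # edge internal to the sampled set: discard
        if not H.has_edge(ru, rv) or w < H[ru][rv]["weight"]:
            H.add_edge(ru, rv, weight=w)
    return H

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1), ("b", "c", 5), ("c", "d", 2),
                           ("a", "c", 4), ("d", "e", 1)])
H = collapse(G, sampled={"a", "b"}, rep="BRAIN")
T = nx.minimum_spanning_tree(H)   # MST over the collapsed graph
print(sorted(T.nodes))
```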
        <p>
          The segmentation process concludes with a step that divides the image into two regions: the brain
and the background. This is accomplished by removing the edge with the highest weight from the
Minimum Spanning Tree (MST) path [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].
        </p>
        <p>The ellipsoid condition (1) used for node sampling and the voxel accuracy (2) used in the
evaluation (Section 4.1) are:
(x − X/2)² / a² + (y − Y/2)² / b² + (z − Z/2)² / c² ≤ 1 (1)
Accuracy = (TP + TN) / (TP + TN + FP + FN) (2)</p>
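The final cut can be sketched on a toy MST, again assuming networkx: take the unique path between the brain representative and the background representative and remove its heaviest edge, splitting the tree into a brain component and a background component. Node names are illustrative.

```python
import networkx as nx

# A toy MST joining the two representative nodes through intermediate voxels.
T = nx.Graph()
T.add_weighted_edges_from([("BRAIN", "x", 1), ("x", "y", 9),
                           ("y", "BACKGROUND", 2), ("x", "z", 1)])
path = nx.shortest_path(T, "BRAIN", "BACKGROUND")   # unique path in a tree
# Remove the heaviest edge on that path to split brain from background.
u, v = max(zip(path, path[1:]), key=lambda e: T[e[0]][e[1]]["weight"])
T.remove_edge(u, v)
brain = nx.node_connected_component(T, "BRAIN")
print(sorted(brain))
```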
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>
        To assess the effectiveness of the proposed method, multiple datasets were utilized. The Neurofeedback
Skull-stripped repository (NFBS) comprises 125 T1w MRI images from subjects aged between 21 and 45,
representing a diverse range of clinical and subclinical psychiatric conditions. Additionally, a dataset
used for testing and validation in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] consists of 625 images sourced from seven public datasets, each
offering distinct modalities and resolutions.
      </p>
      <p>
        The method was specifically tested on a subset of the [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] dataset, focusing on 48 T1w MRI images
from the IXI dataset, 36 T2w MRI images from the FSM dataset [17], and 16 infant T1w MRIs from [18].
The choice of datasets and subsets of images adheres to the methodology outlined in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
      <p>
        We compared our results to two broadly used state-of-the-art methods: BET2 from the FSL
Tool [
        <xref ref-type="bibr" rid="ref9">9, 19, 20</xref>
        ] and BSE from the BrainSuite Tool [21, 22].
      </p>
      <sec id="sec-4-1">
        <title>4.1. Evaluation metrics</title>
        <p>
          To evaluate the effectiveness of the proposed method, six metrics were employed: accuracy, precision,
sensitivity, specificity, Jaccard index, and Dice coefficient. These metrics were computed by comparing
the predicted MRI images with the ground truth. In this context, TP denotes voxels correctly identified
as brain tissue, TN represents voxels correctly identified as non-brain tissue, FP indicates non-brain
voxels mistakenly identified as brain tissue, and FN refers to voxels within the brain region inaccurately
identified as non-brain tissue [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
        <p>
          Voxel accuracy [
          <xref ref-type="bibr" rid="ref3">23, 3</xref>
          ], defined in (2), denotes the proportion of accurately classified voxels.
        </p>
        <p>
          Precision [
          <xref ref-type="bibr" rid="ref3">24, 3</xref>
          ] is computed with formula (3) and denotes the percentage of accurately classified
voxels among those predicted as brain tissue.
        </p>
        <p>Precision = TP / (TP + FP) (3)</p>
        <p>
          Sensitivity [
          <xref ref-type="bibr" rid="ref3">24, 3</xref>
          ], calculated with formula (4), measures the percentage of brain tissue voxels in the ground truth
that are accurately detected as brain tissue in the prediction.
        </p>
        <p>
          Specificity [
          <xref ref-type="bibr" rid="ref3">24, 3</xref>
          ], determined with formula (5), represents the ratio of non-brain tissue voxels in the ground
truth that are correctly identified as non-brain in the prediction.
        </p>
        <p>
          Jaccard Index [
          <xref ref-type="bibr" rid="ref3">24, 3</xref>
          ], defined in (6), represents the overlap between the ground truth and the segmentation
result divided by their union.
        </p>
        <p>Sensitivity = TP / (TP + FN) (4)</p>
        <p>Specificity = TN / (TN + FP) (5)</p>
        <p>Jaccard = TP / (TP + FP + FN) (6)</p>
        <p>Dice = 2·TP / (2·TP + FP + FN) (7)</p>
        <p>
          Dice coefficient [
          <xref ref-type="bibr" rid="ref3">24, 3</xref>
          ], having formula (7), quantifies the resemblance between the two sets of
labels.
        </p>
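The six metrics are straightforward to compute from binary prediction and ground-truth masks; a minimal sketch (function name hypothetical, toy masks for illustration):

```python
import numpy as np

def metrics(pred, gt):
    """Compute accuracy, precision, sensitivity, specificity, Jaccard, and
    Dice from binary prediction and ground-truth masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)        # brain voxels correctly detected
    tn = np.sum(~pred & ~gt)      # non-brain voxels correctly rejected
    fp = np.sum(pred & ~gt)       # non-brain voxels labelled as brain
    fn = np.sum(~pred & gt)       # brain voxels missed
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }

pred = np.array([1, 1, 0, 0, 1, 0], dtype=bool)
gt   = np.array([1, 0, 0, 0, 1, 1], dtype=bool)
m = metrics(pred, gt)
print(round(m["dice"], 3))
```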
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Numerical results</title>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Visual results</title>
        <p>Figure 2 displays the visual results for each dataset. Although the numerical data may not fully highlight
the differences, the visual comparison reveals the improvements achieved by the proposed method. For
the NFBS dataset, the novel approach is closer to the ground truth than the Ellipsoid-GUBS method,
which removed parts of the brain. Also, in the case of the IXI dataset, a considerable part of the skull
was removed, which results in an overall better segmentation in comparison to the Ellipsoid-GUBS
method. The Infant dataset also shows enhanced segmentation with the new approach, leaving only a
small part of the skull. Conversely, for the FMS dataset, the performance of the methods is similar.</p>
        <p>The implementation was done in the Python programming language, and the experiments were run on
an Intel Core i7-8750H CPU @ 2.20GHz. In terms of experimental timing, the algorithm takes almost 45
seconds to perform the segmentation using images at their original size.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>The proposed method introduces several key contributions, including a shift to geometric centering for
improved stability across datasets and comprehensive validation on four diverse datasets, demonstrating
its robustness. Additionally, the method’s benchmarking against two state-of-the-art techniques, BET2
and BSE, highlights its superior performance, particularly with T2-weighted and infant data.</p>
      <p>The method presented underwent testing on four diverse datasets with varying resolutions and
dimensions. Employing a minimum spanning tree, the novel approach extracted brain structures by
constructing a graph from nodes within the brain and the background. Specifically, voxels situated
within an ellipsoid centered at the image’s geometric center were considered nodes representing the
brain. The method was compared to two state-of-the-art techniques, yielding comparable results and
even achieving better segmentation on certain datasets.</p>
      <p>While acknowledging the advancements achieved by the presented method and its independence
from the type of MRI, it is noteworthy that there remains potential for further enhancement. The
current results indicate a positive trajectory, yet ongoing refinement is crucial to push the boundaries of
segmentation accuracy. Notably, the method stands out for its efficiency and speed, an important advantage
in the context of medical imaging where swift processing is often imperative.</p>
      <p>Although the numerical results between the new approach and the two state-of-the-art methods are
comparable, the segmentation performance varies across different datasets. For the FMS T2w dataset,
BSE failed to segment the brain, whereas the new approach and BET2 performed better, with the new
approach achieving the most accurate segmentation. However, BET2 still included some non-brain
parts. For the Infant dataset, BET2 was unable to remove the skull, while the new approach and BSE
both performed similarly, close to the ground truth. In the IXI dataset, the new approach retained some
non-brain parts, while the others successfully removed the skull. On the NFBS dataset, BSE provided
the best segmentation, followed by the new approach, whereas BET2 included some skull remnants.
Overall, improvements between the Ellipsoid-GUBS and the new approach are evident in all datasets
except the FMS dataset, where the results are similar.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions and Future work</title>
      <p>The paper introduced an enhanced unsupervised graph-based segmentation method that exhibits
improved results across the T1w datasets subjected to testing. When applied to the T2w dataset, the
method shows comparable outcomes to the method it was compared against. The segmentation process
involves utilizing a Minimum Spanning Tree (MST), and the node selection is performed using an
ellipsoid centered within the image.</p>
      <p>The proposed method was evaluated against two state-of-the-art methods, BET2 and BSE, and
demonstrated superior performance. On the infant dataset, it achieved a 21% increase in sensitivity
compared to the BSE method, along with a 14% improvement in precision and a 13% increase in the
Jaccard index compared to BET2. On the NFBS dataset, the method showed a 10% improvement in
precision over BET2. However, on the T2w dataset, the method provided only slight improvements
compared to both BSE and BET2.</p>
      <p>Future endeavors encompass expanding the method’s evaluation to additional datasets, conducting
comparisons with other state-of-the-art and deep learning methods. Additionally, there are plans for
collaboration with a hospital to acquire real-world data.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT and Grammarly to check grammar
and spelling and to paraphrase and reword text. After using these tools, the authors reviewed and edited the
content as needed and take full responsibility for the publication’s content.</p>
    </sec>
    <sec id="sec-8">
      <title>Data availability</title>
      <p>The datasets utilized in this study can be accessed from their original websites:</p>
      <p>NFBS: http://preprocessed-connectomes-project.org/NFB_skullstripped/</p>
      <p>SYNTHSTRIP (which contains the images for the IXI dataset, Infant T1w dataset, FMS T2w dataset):
https://surfer.nmr.mgh.harvard.edu/docs/synthstrip/#dataset</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <article-title>Fast robust automated brain extraction</article-title>
          ,
          <source>Hum Brain Mapp</source>
          <volume>17</volume>
          (
          <year>2002</year>
          )
          <fpage>143</fpage>
          -
          <lpage>155</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zeng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Zwiggelaar</surname>
          </string-name>
          ,
          <article-title>An improved bet method for brain segmentation</article-title>
          ,
          <source>in: 2014 22nd International Conference on Pattern Recognition</source>
          ,
          <year>2014</year>
          , pp.
          <fpage>3221</fpage>
          -
          <lpage>3226</lpage>
          . doi:10.1109/ICPR.2014.555.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Popa</surname>
          </string-name>
          ,
          <article-title>An 3D MRI unsupervised graph-based skull stripping algorithm</article-title>
          ,
          <source>Procedia Computer Science</source>
          <volume>225</volume>
          (
          <year>2023</year>
          )
          <fpage>1682</fpage>
          -
          <lpage>1690</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S1877050923013157. doi:10.1016/j.procs.2023.10.157,
          <source>27th International Conference on Knowledge Based and Intelligent Information and Engineering Systems (KES 2023)</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Popa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Andreica</surname>
          </string-name>
          ,
          <article-title>Towards an improved unsupervised graph-based mri brain segmentation method</article-title>
          , in:
          <string-name>
            <given-names>M.</given-names>
            <surname>Sellami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.-E.</given-names>
            <surname>Vidal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>van Dongen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Gaaloul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Panetto</surname>
          </string-name>
          (Eds.),
          <source>Cooperative Information Systems</source>
          , Springer Nature Switzerland, Cham,
          <year>2024</year>
          , pp.
          <fpage>480</fpage>
          -
          <lpage>487</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mayala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Herdlevaer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. B.</given-names>
            <surname>Haugsøen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Anandan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Blaser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gavasso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Brun</surname>
          </string-name>
          ,
          <article-title>GUBS: Graph-based unsupervised brain segmentation in MRI images</article-title>
          ,
          <source>Journal of Imaging</source>
          <volume>8</volume>
          (
          <year>2022</year>
          ). URL: https://www.mdpi.com/2313-433X/8/10/262. doi:10.3390/jimaging8100262.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B.</given-names>
            <surname>Puccio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Pooley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Pellman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. C.</given-names>
            <surname>Taverna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. C.</given-names>
            <surname>Craddock</surname>
          </string-name>
          ,
          <article-title>The preprocessed connectomes project repository of manually corrected skull-stripped T1-weighted anatomical MRI data</article-title>
          ,
          <source>GigaScience</source>
          <volume>5</volume>
          (
          <year>2016</year>
          ). URL: https://doi.org/10.1186/s13742-016-0150-5. doi:10.1186/s13742-016-0150-5.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Sadananthan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. W.</given-names>
            <surname>Chee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Zagorodnov</surname>
          </string-name>
          ,
          <article-title>Skull stripping using graph cuts</article-title>
          ,
          <source>NeuroImage</source>
          <volume>49</volume>
          (
          <year>2010</year>
          )
          <fpage>225</fpage>
          -
          <lpage>239</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S1053811909009604. doi:10.1016/j.neuroimage.2009.08.050.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Jenkinson</surname>
          </string-name>
          ,
          <article-title>BET2: MR-based estimation of brain, skull and scalp surfaces</article-title>
          , in:
          <source>Eleventh Annual Meeting of the Organization for Human Brain Mapping</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Jenkinson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. F.</given-names>
            <surname>Beckmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. E.</given-names>
            <surname>Behrens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. W.</given-names>
            <surname>Woolrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <article-title>FSL</article-title>
          ,
          <source>NeuroImage</source>
          <volume>62</volume>
          (
          <year>2012</year>
          )
          <fpage>782</fpage>
          -
          <lpage>790</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S1053811911010603. doi:10.1016/j.neuroimage.2011.09.015, 20 YEARS OF fMRI.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>K.</given-names>
            <surname>Ezhilarasan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Praveenkumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Somasundaram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kalaiselvi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Magesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kiruthika</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jeevarekha</surname>
          </string-name>
          ,
          <article-title>Automatic brain extraction from MRI of human head scans using Helmholtz free energy principle and morphological operations</article-title>
          ,
          <source>Biomedical Signal Processing and Control</source>
          <volume>64</volume>
          (
          <year>2021</year>
          )
          <elocation-id>102270</elocation-id>
          . URL: https://www.sciencedirect.com/science/article/pii/S1746809420303955. doi:10.1016/j.bspc.2020.102270.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Pei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. H. M.</given-names>
            <surname>Tahon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zenkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Alkarawi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kamal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yilmaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Er</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ak</surname>
          </string-name>
          , et al.,
          <article-title>A general skull stripping of multiparametric brain MRIs using 3D convolutional neural network</article-title>
          ,
          <source>Scientific Reports</source>
          <volume>12</volume>
          (
          <year>2022</year>
          )
          <fpage>10826</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>K.</given-names>
            <surname>Palanisamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Prasath</surname>
          </string-name>
          ,
          <article-title>Methods on skull stripping of MRI head scan images - a review</article-title>
          ,
          <source>Journal of Digital Imaging</source>
          <volume>29</volume>
          (
          <year>2015</year>
          ). doi:10.1007/s10278-015-9847-8.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>C.</given-names>
            <surname>Saueressig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Berkley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Munbodh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Exploring graph-based neural networks for automatic brain tumor segmentation</article-title>
          , in:
          <string-name>
            <given-names>J.</given-names>
            <surname>Bowles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Broccia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nanni</surname>
          </string-name>
          (Eds.),
          <source>From Data to Models and Back</source>
          , Springer International Publishing, Cham,
          <year>2021</year>
          , pp.
          <fpage>18</fpage>
          -
          <lpage>37</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>R.</given-names>
            <surname>Achanta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shaji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lucchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Fua</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Süsstrunk</surname>
          </string-name>
          ,
          <article-title>SLIC superpixels compared to state-of-the-art superpixel methods</article-title>
          ,
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          <volume>34</volume>
          (
          <year>2012</year>
          )
          <fpage>2274</fpage>
          -
          <lpage>2282</lpage>
          . doi:10.1109/TPAMI.2012.120.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Hoopes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Mora</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. V.</given-names>
            <surname>Dalca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Fischl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hoffmann</surname>
          </string-name>
          ,
          <article-title>SynthStrip: skull-stripping for any brain image</article-title>
          ,
          <source>NeuroImage</source>
          <volume>260</volume>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>