<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>VISCERAL | VISual Concept Extraction challenge in RAdioLogy: ISBI 2014 Challenge Organization</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oscar Alfonso Jimenez del Toro</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Orcun Goksel</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bjoern Menze</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Henning Muller</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Georg Langs</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andre Weber</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ivan Eggel</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Katharina Gruenberg</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Markus Holzer</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andras Jakab</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Georgios Kontokotsios</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Markus Krenn</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tomas Salas Fernandez</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roger Schaer</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Abdel Aziz Taha</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marianne Winterstein</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Allan Hanbury</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Catalan Agency for Health Information</institution>
          ,
          <addr-line>Assessment and Quality</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Medical University of Vienna</institution>
          ,
          <country country="AT">Austria</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Swiss Federal Institute of Technology (ETH) Zurich</institution>
          ,
          <country country="CH">Switzerland</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Applied Sciences Western Switzerland</institution>
          ,
          <country country="CH">Switzerland</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>University of Heidelberg</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff5">
          <label>5</label>
          <institution>Vienna University of Technology</institution>
          ,
          <country country="AT">Austria</country>
        </aff>
      </contrib-group>
      <fpage>6</fpage>
      <lpage>15</lpage>
      <abstract>
        <p>The VISual Concept Extraction challenge in RAdioLogy (VISCERAL) project has been developed as a cloud-based infrastructure for the evaluation of medical image data in large data sets. As part of this project, the ISBI 2014 (International Symposium on Biomedical Imaging) challenge was organized using the VISCERAL data set and shared cloud framework. Two tasks were selected to exploit and compare multiple state-of-the-art solutions designed for big data medical image analysis. Segmentation and landmark localization results from the submitted algorithms were compared to manually annotated ground truth in the VISCERAL data set. This paper presents an overview of the challenge setup and data set used, as well as the evaluation metrics for the various results submitted to the challenge. The participants presented their algorithms during an organized session at ISBI 2014. There were lively discussions in which the importance of comparing approaches on tasks sharing a common data set was highlighted.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>Copyright © by the paper's authors. Copying permitted only for private and academic purposes. In: O. Goksel (ed.): Proceedings of the VISCERAL Organ Segmentation and Landmark Detection Benchmark at the 2014 IEEE International Symposium on Biomedical Imaging (ISBI), Beijing, China, May 1st, 2014. Published at http://ceur-ws.org</p>
    </sec>
    <sec id="sec-2">
      <title>Introduction</title>
      <p>Computational approaches that can be scaled to large amounts of medical data are needed to tackle the ever-growing data resources obtained daily in hospitals [Doi05]. Handling this enormous amount of medical data during clinical routine has complexity and scaling limitations for health professionals. It is also very time-consuming, and hence requires unsupervised and automatic methods to perform the necessary data analysis and processing for data interpretation. Many algorithms and techniques for big data analysis already exist; however, most research groups do not have access to large-scale annotated medical data to develop such approaches for medical images. Distributing these big data sets (on the order of terabytes) requires efficient and scalable storage and computing capabilities. Evaluation campaigns and benchmarks can objectively compare multiple state-of-the-art algorithms to determine the optimal solution for a certain clinical task [HMLM14, GSdHKCDF+13].</p>
      <p>The Visual Concept Extraction Challenge in Radiology (VISCERAL) project was developed as a cloud-based infrastructure for the evaluation of medical image analysis techniques on large data sets [LMMH13]. The shared cloud environment in which the VISCERAL project takes place allows access to and processing of these data without having to duplicate them or move them to the participants' side. Since the data are stored centrally and not distributed outside the cloud environment, the legal and ethical requirements of such data sets can also be satisfied; even confidential data sets can be benchmarked in this way, as participants can only access a small training data set [EILI+10]. The cloud infrastructure is provided and funded by the VISCERAL project. The participants are provided with computationally powerful virtual machines that can be accessed remotely in the shared cloud infrastructure while working on the training data and tuning their algorithms. Participant access is withdrawn during the evaluation phase, when only the organizers access the machines. The algorithms are brought to the data to perform automated processing and data mining. The evaluation of these methods can therefore be done with real clinical imaging data, and the outcomes can be reused to improve the methods.</p>
      <p>The whole-body 3D medical imaging data provided by VISCERAL includes a small subset with ground truth annotated by experienced radiologists. Through evaluation campaigns, challenges, benchmarks and competitions, tasks of general interest can be selected to compare the algorithms on a large scale. This manually annotated gold corpus can be used to identify high-quality methods, which can then be combined to create a much larger "reasonably annotated" data set, satisfactory but perhaps not as reliable as manual annotation. Using fusion techniques, this silver corpus will be created from the agreement between the segmentations produced by the algorithms on a large-scale data set. This maximizes the gain of manual annotation and also identifies strong differences between participating systems on the annotated organs.</p>
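      <p>The agreement-based fusion described above can be illustrated with a simple majority vote over the algorithms' binary masks. This is a hypothetical sketch for illustration only; the paper does not specify the exact fusion rule used for the silver corpus:</p>

```python
import numpy as np

def majority_vote(masks, threshold=0.5):
    """Fuse binary segmentation masks from several algorithms by voting.

    masks: list of equally shaped numpy arrays with values 0 or 1.
    A voxel is kept in the fused (silver) label when at least the given
    fraction of algorithms agree on it.
    """
    stacked = np.stack(masks).astype(float)   # shape: (n_algorithms, ...)
    agreement = stacked.mean(axis=0)          # per-voxel fraction of votes
    return (agreement >= threshold).astype(np.uint8)

# Three toy 1D "segmentations" of the same scan
a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 0, 1])
c = np.array([1, 1, 1, 0])
fused = majority_vote([a, b, c])  # voxel kept where 2 of 3 algorithms agree
```

      <p>Voxels where most algorithms agree enter the silver-corpus label, so the fused annotation is only as reliable as the consensus of the participating systems.</p>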
    </sec>
    <sec id="sec-3">
      <title>ISBI Challenge Framework</title>
      <p>The registration procedure for the ISBI challenge was that of the VISCERAL benchmark series, which includes several campaigns. The participants filled in their details and uploaded a signed participation agreement form, which corresponds to the ethics requirements for usage of the data. Since the VISCERAL data set is stored on the Azure cloud, each participant then received access to an Azure virtual cloud-computing instance. There were five operating systems available to choose from: Windows 2012, Windows 2008, Ubuntu Server 14.04 LTS, openSUSE 13.1 and CentOS 6.5. All cloud-computing instances have an 8-core CPU with 16 GB RAM to provide the same computing capabilities to the different proposed solutions. Participants get administrator rights on their virtual machine (VM) and can access it remotely to deploy their algorithms and add any supporting libraries/applications. The VISCERAL training data set can then be accessed and downloaded securely within the VMs through secured URLs.</p>
      <sec id="sec-3-1">
        <title>Data Set</title>
        <p>The medical images contained in the VISCERAL data set were acquired during daily clinical routine work. Data sets of children (&lt;18 years) were not included, following the recommendations of the ethics committee. In the provided data sets, multiple organs are visible and depicted at a resolution sufficient to reliably detect an organ and delineate its borders. This ensures that a large number of organs and structures can be segmented in one data set. The data set consists of computed tomography (CT) scans and magnetic resonance (MR) imaging with and without contrast enhancement to evaluate the participants' algorithms on several modalities, contrasts and MR sequence directions, making sure that algorithms are not optimized for one specific machine or protocol.</p>
        <p>The available training set from the VISCERAL Anatomy2 benchmark was used by the participants of the ISBI VISCERAL challenge. The contents of this data set are elaborated below.</p>
      </sec>
      <sec id="sec-3-2">
        <title>CT Scans</title>
        <p>There are 15 unenhanced whole-body CT volumes acquired from patients with bone marrow neoplasms, such as multiple myeloma, to detect osteolysis. The field-of-view spans from and including the head to the knee (see Fig. 2, A). The in-plane resolution ranges from 0.977 × 0.977 to 1.405 × 1.405 mm, and the between-plane resolution is 3 mm or higher.</p>
        <p>15 contrast-enhanced CT scans of the trunk, acquired in patients with malignant lymphoma, are also included. They have a large field-of-view from the corpus mandibulae to the lower part of the pelvis (see Fig. 2, B). They have an in-plane resolution of between 0.604 × 0.604 and 0.793 × 0.793 mm, and a between-plane resolution of 3 mm or higher.</p>
      </sec>
      <sec id="sec-3-3">
        <title>MR Scans</title>
        <p>15 whole-body MR scans in two sequences (30 in total) are also part of the training set. They were acquired in patients with multiple myeloma to detect focal and/or diffuse bone marrow infiltration. Both a coronal T1-weighted and a fat-suppressed T2-weighted or STIR (short tau inversion recovery) sequence of the whole body are available for each of the 15 patients. The field-of-view starts at and includes the head and ends at the feet (see Fig. 2, C). The in-plane resolution is 1.250 × 1.250 mm, and the between-plane resolution is 5 mm.</p>
        <p>To improve the segmentation of smaller organs (such as the adrenal glands), 15 T1 contrast-enhanced fat-saturated MR scans of the abdomen are also included. They were acquired in oncological patients with likely metastases within the abdomen. The field-of-view starts at the top of the diaphragm and extends to the lower part of the pelvis (see Fig. 2, D). They have an in-plane resolution of between 0.840 × 0.804 and 1.302 × 1.302 mm, and a between-plane resolution of 3 mm.</p>
      </sec>
      <sec id="sec-3-4">
        <title>Annotated Structures and Landmarks</title>
        <p>There are in total 60 manually annotated volumes in this ISBI challenge training set. The available data contains segmentations and landmarks of several different anatomical structures in different imaging modalities, e.g. CT and MRI.</p>
        <p>The two categories of annotations and results are:</p>
        <p>Region segmentations: These regions correspond to anatomical structures (e.g. right lung), or sub-parts in volume data. The 20 anatomical structures that make up the training set are: trachea, left/right lungs, sternum, vertebra L1, left/right kidneys, left/right adrenal glands, left/right psoas major muscles, left/right rectus abdominis, thyroid gland, liver, spleen, gallbladder, pancreas, urinary bladder and aorta. Not all structures are visible or within the field-of-view in the images, leading to varying numbers of annotations per structure (see Fig. 1 for a detailed break-down).</p>
        <p>Landmarks: Anatomical landmarks are the locations of selected anatomical structures that should be identifiable in the different image sequences available in the data set. There can be up to 53 anatomical landmarks (see Fig. 1) located in the data set volumes: left/right clavicles, left/right crista iliaca, symphysis, left/right trochanter major, left/right trochanter minor, aortic arch, trachea bifurcation, aorta bifurcation, vertebrae C2-C7, Th1-Th12, L1-L5, xyphoideus, aortic valve, left/right sternoclavicular, VCI bifurcation, left/right tuberculums, left/right renal pelvises, left/right bronchus, left/right eyes, left/right ventricles, left/right ischiadicum and coronaria.</p>
        <p>In total, the training set comprises 60 volumes containing 890 manually segmented anatomical structures and 2420 manually located anatomical landmarks. Some of the anatomical structures in the volumes were not segmented when the annotators considered there was insufficient tissue contrast to perform the segmentation or to locate the landmark. Other structures are missing or not included in the training set because of anatomical variations (e.g. a missing kidney) or radiologic pathological signs (e.g. an aortic aneurysm). Landmarks are easy and quick to annotate, whereas precise organ segmentation is time-consuming even when using automatic tools.</p>
      </sec>
      <sec id="sec-3-5">
        <title>Test Set</title>
        <p>The test set contains 20 manually annotated volumes. Each modality (whole-body CT, thorax and abdomen contrast-enhanced CT, whole-body MR and abdomen contrast-enhanced MR) is represented by 5 volumes. The anatomical structures and landmarks contained in the selected volumes were used to evaluate the participants' algorithms.</p>
      </sec>
      <sec id="sec-3-6">
        <title>ISBI VISCERAL Challenge Submission</title>
        <p>The participants can select the structures and modalities in which they choose to participate. The outputs are therefore evaluated per structure and per modality. The evaluation of the ISBI challenge was organized differently from the general VISCERAL evaluation framework to allow the evaluation to complete within the relatively short time-frame. For this challenge, the test set volumes were made available in the cloud some weeks ahead of the challenge. The participants themselves computed the annotations (segmentations and/or landmark locations) in their VMs and stored them on their VM storage. The files could then be submitted from within the VM through an uploading script provided to the participants. The script stored the output files in a cloud container created for the challenge, individual to each participant. A list of the available ground truth segmentations of the test set was used to filter out duplicates and output files with incorrect file names. It also ensured all files were coherent with the participant ID list from the organizers.</p>
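        <p>A submission filter of this kind can be sketched as follows. The file-naming convention, function name and example IDs are hypothetical, used only to illustrate the duplicate and file-name checks described above; the actual VISCERAL upload script is not specified in the paper:</p>

```python
import re

# Hypothetical naming scheme for illustration: <volume-id>_<structure-id>.nii.gz
RESULT_NAME = re.compile(r"^\d+_\d+\.nii\.gz$")

def validate_submission(filenames, expected_ground_truth):
    """Keep only well-formed, non-duplicate output files for which a
    ground-truth annotation of the test set exists."""
    seen = set()
    accepted = []
    for name in filenames:
        if name in seen:
            continue                      # drop duplicates
        if RESULT_NAME.match(name) is None:
            continue                      # drop incorrectly named files
        if name not in expected_ground_truth:
            continue                      # no ground truth to score against
        seen.add(name)
        accepted.append(name)
    return accepted

files = ["10000132_1302.nii.gz", "10000132_1302.nii.gz", "notes.txt"]
ok = validate_submission(files, {"10000132_1302.nii.gz"})
```

        <p>Filtering at upload time keeps the evaluation queue free of files that could not be scored anyway.</p>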
      </sec>
      <sec id="sec-3-7">
        <title>Evaluation Software</title>
        <p>To evaluate the output segmentations and landmark locations against the ground truth, the VISCERAL evaluation tool was used. This software was also included in the VM assigned to each participant. The evaluation tool implements different evaluation metrics, such as (1) distance-based metrics, (2) spatial overlap metrics and (3) probabilistic and information-theoretic metrics. The most suitable subset of the metrics was used in the analysis of the results, and all metrics were made available to the participants. For the output segmentations of the ISBI challenge, the following evaluation metrics were selected: the DICE coefficient [ZWB+04], the Adjusted Rand Index [VPYM11], Interclass Correlation [GJC01] and Average distance [KCAB09].</p>
        <p>Only one label is considered per image. The voxel value can be either zero (background) or one for the voxels containing the segmentation. In case the output label is a fuzzy membership or a probability map, a threshold is set at 0.5 to create binary images.</p>
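        <p>As a minimal sketch (not the VISCERAL evaluation tool itself), the 0.5 thresholding and the DICE overlap computation described above can be written as:</p>

```python
import numpy as np

def dice_coefficient(output, ground_truth, threshold=0.5):
    """Dice overlap between a (possibly fuzzy) output map and a binary
    ground-truth mask. Fuzzy memberships / probability maps are first
    thresholded at 0.5, as done for the challenge evaluation."""
    seg = np.asarray(output) >= threshold
    gt = np.asarray(ground_truth).astype(bool)
    intersection = np.logical_and(seg, gt).sum()
    denom = seg.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both empty: perfect agreement by convention
    return 2.0 * intersection / denom

prob_map = np.array([0.9, 0.6, 0.4, 0.1])  # fuzzy output for 4 voxels
gt = np.array([1, 1, 1, 0])
score = dice_coefficient(prob_map, gt)     # seg becomes [1,1,0,0], Dice = 0.8
```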
        <p>For the landmark localization evaluation, the same VISCERAL tool measures the landmark-specific average error (Euclidean distance) between all the results and the manually located landmarks. The percentage of detected landmarks per volume (i.e. landmarks detected / landmarks in the volume) is also computed.</p>
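        <p>The landmark evaluation described above (per-landmark Euclidean error plus detection rate) can be sketched as follows; the coordinate convention and dictionary layout are assumptions for illustration:</p>

```python
import numpy as np

def landmark_errors(detected, reference):
    """Per-landmark Euclidean distance between detected and reference
    positions, plus the fraction of reference landmarks that were detected.

    detected, reference: dicts mapping landmark name -> (x, y, z) in mm.
    """
    errors = {}
    for name, ref_pos in reference.items():
        if name in detected:
            diff = np.asarray(detected[name]) - np.asarray(ref_pos)
            errors[name] = float(np.linalg.norm(diff))
    detection_rate = len(errors) / len(reference)
    return errors, detection_rate

ref = {"aortic_arch": (10.0, 20.0, 30.0), "symphysis": (0.0, 0.0, 0.0)}
det = {"aortic_arch": (13.0, 24.0, 30.0)}  # one landmark found, one missed
errors, rate = landmark_errors(det, ref)   # error 5.0 mm, detection rate 0.5
```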
      </sec>
      <sec id="sec-3-8">
        <title>Participation</title>
        <p>The ISBI training and test set volumes were made available through the Azure cloud framework to all registered participants of the VISCERAL Anatomy2 benchmark. In total, 18 groups got access to the challenge training set and the 60 training volumes of the data set. The research groups that submitted working virtual machines had a chance to present their methods and results at the "VISCERAL Organ Segmentation and Landmark Detection Challenge" at the 2014 IEEE International Symposium on Biomedical Imaging (ISBI).</p>
        <p>A single-blind review process was applied to the initial abstract submissions. The authors of accepted abstracts were then invited to submit a short paper presenting their methods and results in the challenge. There were 5 high-quality submissions accepted and included in these proceedings.</p>
        <p>Spanier et al. [SJ14] submitted segmentations for five organs in contrast-enhanced CT volumes. Their multi-step algorithm combines thresholding and region-growing techniques to segment each organ individually. It starts with the location of a region of interest and identification of the largest axial cross-section slices of the selected structure. It then improves the initial segmentation with morphological operators, and a final step performs 3D region growing.</p>
        <p>Huang et al. [HLJ14] proposed a coarse-to-fine liver segmentation using prior models for the shape, profile appearance and contextual information of the liver. An AdaBoost voxel-based classifier creates a liver probability map that is refined in the last step with free-form deformation using a gradient appearance model.</p>
        <p>Wang et al. [WS14] segmented 10 anatomical structures in contrast-enhanced and unenhanced CT scans. Their multi-organ segmentation pipeline proceeds in a top-down manner with a model-based level-set segmentation of the ventral cavity. After dividing the cavity into thoracic and abdominopelvic cavities, the major structures are segmented and their location information is passed on to the lower-level structures.</p>
        <p>Jimenez del Toro et al. [JdTM14] segmented structures in CT and contrast-enhanced CT scans with a hierarchical multi-atlas approach. Based on the spatial anatomical correlations between the organs, the bigger and high-contrast organs are segmented first. These then define the initial volume transformations for the smaller structures with less defined boundaries.</p>
        <p>Goksel et al. [GGS14] submitted segmentations for both CT and MR anatomical structure segmentation. They also submitted results for the landmark localization task. For the segmentations, they use a multi-atlas based technique that implements Markov Random Fields to guide the registrations. A multi-atlas template-based approach fuses the different estimations to detect the landmarks.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Results</title>
      <p>There were approximately 500 structure segmentations and 211 landmark locations submitted to the VISCERAL ISBI challenge. Four participants submitted results for the segmentation tasks on multiple organs using whole-body CT or contrast-enhanced scans, with results presented in Table 1 and Fig. 3. One participant contributed segmentations on both the whole-body MR scans and the contrast-enhanced MR abdomen volumes, with results presented in Table 3. Only one participant submitted landmark localization results, with Table 4 showing their evaluation results.</p>
    </sec>
    <sec id="sec-5">
      <title>Conclusions and Future Work</title>
      <p>The VISCERAL project has the evaluation of algorithms on large data sets as its main objective. The proposed VISCERAL infrastructure allows evaluations with private or restricted data, such as electronic health records, without participants accessing the test data, by using a fully cloud-based approach. This infrastructure also avoids moving data, which is potentially hard for very large data sets. The algorithms are brought to the data and not the data to the algorithms.</p>
      <p>Both the gold corpus and the silver corpus will be made available as resources to the community. The ISBI test set volumes and annotations are now available and are part of the VISCERAL Anatomy2 benchmark training set.</p>
      <p>So far, both past VISCERAL anatomy benchmarks have addressed organ segmentation and landmark localization tasks. Two more benchmarks are under development in the VISCERAL project: a retrieval benchmark and a detection benchmark. The retrieval benchmark will address the retrieval of similar cases based on both visual information and radiology reports. The detection benchmark will focus on the detection of lesions in MR and CT images.</p>
      <p>In the future, the automation of the evaluation process is intended to reduce the need for intervention from the organizers to a minimum and to provide faster evaluation feedback to the participants. The participants will then be able to submit their algorithms through the cloud virtual machines and obtain the calculated metrics directly from the system. Such a system could then store the results from all the submitted algorithms and perform an objective comparison with state-of-the-art algorithms. Through the involvement of the research community, the VISCERAL framework could produce novel tools for the clinical workflow with substantial impact on diagnosis quality and treatment success. Having all tools and algorithms in the same cloud environment can also help combine tools and approaches with very little additional effort, which is expected to yield better results.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 318068 (VISCERAL). We would also like to thank Microsoft Research for their financial and information support in using the Azure cloud for the benchmark.</p>
    </sec>
    <sec id="sec-7">
      <title>References</title>
      <p>[Doi05] K. Doi. Current status and future potential of computer-aided diagnosis in medical imaging. British Journal of Radiology, 78:3-19, 2005.</p>
      <p>[EILI+10] Bernice Elger, Jimison Iavindrasana, Luigi Lo Iacono, Henning Muller, Nicolas Roduit, Paul Summers, and Jessica Wright. Strategies for health data exchange for secondary, cross-institutional clinical research. Computer Methods and Programs in Biomedicine, 99(3):230-251, September 2010.</p>
      <p>[GGS14] Orcun Goksel, Tobias Gass, and Gabor Szekely. Segmentation and landmark localization based on multiple atlases. In Orcun Goksel, editor, Proceedings of the VISCERAL Challenge at ISBI, CEUR Workshop Proceedings, pages 37-43, Beijing, China, May 2014.</p>
      <p>[GJC01] Guido Gerig, Matthieu Jomier, and Miranda Chakos. A new validation tool for assessing and improving 3D object segmentation. In Wiro J. Niessen and Max A. Viergever, editors, Medical Image Computing and Computer-Assisted Intervention - MICCAI 2001, volume 2208 of Lecture Notes in Computer Science, pages 516-523. Springer Berlin Heidelberg, 2001.</p>
      <p>[GSdHKCDF+13] Alba García Seco de Herrera, Jayashree Kalpathy-Cramer, Dina Demner Fushman, Sameer Antani, and Henning Muller. Overview of the ImageCLEF 2013 medical tasks. In Working Notes of CLEF 2013 (Cross Language Evaluation Forum), September 2013.</p>
      <p>[HLJ14] Cheng Huang, Xuhui Li, and Fucang Jia. Automatic liver segmentation using multiple prior knowledge models and free-form deformation. In Orcun Goksel, editor, Proceedings of the VISCERAL Challenge at ISBI, CEUR Workshop Proceedings, pages 22-24, Beijing, China, May 2014.</p>
      <p>[HMLM14] Allan Hanbury, Henning Muller, Georg Langs, and Bjoern H. Menze. Cloud-based evaluation framework for big data. In Alex Galis and Anastasius Gavras, editors, Future Internet Assembly (FIA) Book 2013, Springer LNCS, pages 104-114. Springer Berlin Heidelberg, 2014.</p>
      <p>[JdTM14] Oscar Alfonso Jimenez del Toro and Henning Muller. Hierarchical multi-structure segmentation guided by anatomical correlations. In Orcun Goksel, editor, Proceedings of the VISCERAL Challenge at ISBI, CEUR Workshop Proceedings, pages 32-36, Beijing, China, May 2014.</p>
      <p>[KCAB09] Hassan Khotanlou, Olivier Colliot, Jamal Atif, and Isabelle Bloch. 3D brain tumor segmentation in MRI using fuzzy classification, symmetry analysis and spatially constrained deformable models. Fuzzy Sets and Systems, 160(10):1457-1473, 2009. Special Issue: Fuzzy Sets in Interdisciplinary Perception and Intelligence.</p>
      <p>[LMMH13] Georg Langs, Henning Muller, Bjoern H. Menze, and Allan Hanbury. VISCERAL: Towards large data in medical imaging - challenges and directions. Lecture Notes in Computer Science, 7723:92-98, 2013.</p>
      <p>[SJ14] Assaf B. Spanier and Leo Joskowicz. Rule-based ventral cavity multi-organ automatic segmentation in CT scans. In Orcun Goksel, editor, Proceedings of the VISCERAL Challenge at ISBI, CEUR Workshop Proceedings, pages 16-21, Beijing, China, May 2014.</p>
      <p>[VPYM11] Nagesh Vadaparthi, Suresh Varma Penumatsa, Srinivas Yarramalle, and P. S. R. Murthy. Segmentation of brain MR images based on finite skew Gaussian mixture model with fuzzy C-means clustering and EM algorithm. International Journal of Computer Applications, 28:18-26, 2011.</p>
      <p>[WS14] Chunliang Wang and Örjan Smedby. Automatic multi-organ segmentation using fast model-based level set method and hierarchical shape priors. In Orcun Goksel, editor, Proceedings of the VISCERAL Challenge at ISBI, CEUR Workshop Proceedings, pages 25-31, Beijing, China, May 2014.</p>
      <p>[ZWB+04] Kelly H. Zou, Simon K. Warfield, Aditya Bharatha, Clare M.C. Tempany, Michael R. Kaus, Steven J. Haker, William M. Wells III, Ferenc A. Jolesz, and Ron Kikinis. Statistical validation of image segmentation quality based on a spatial overlap index: scientific reports. Academic Radiology, 11(2):178-189, 2004.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>