<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Fully Automatic Multi-organ Segmentation based on Multi-boost Learning and Statistical Shape Model Search</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Baochun He</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cheng Huang</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fucang Jia</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences</institution>
          ,
          <addr-line>1068 Xueyuan Avenue, Xili University Town, Shenzhen, 518055</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2015</year>
      </pub-date>
      <abstract>
        <p>In this paper, an automatic multi-organ segmentation method based on multi-boost learning and statistical shape model search was proposed. First, a simple but robust Multi-Boost classifier was trained to hierarchically locate and pre-segment multiple organs. To ensure the generalization ability of the classifier, relative location information between organs, and between each organ and the whole body, is exploited. The left and right lungs are first localized and pre-segmented; the liver and spleen are then detected based on their locations in the whole body and their locations relative to the lungs; the kidneys are finally detected using features of their locations relative to the liver and the left lung. Second, shape and appearance models are constructed for model fitting. The final refinement delineation is performed by best-point searching guided by an appearance profile classifier and is constrained by multi-boost classification probabilities, intensity and gradient features. The method was tested on 30 unseen CT and 30 unseen enhanced CT (CTce) datasets from the ISBI 2015 VISCERAL challenge. The results demonstrate that multi-boost learning can locate multiple organs robustly and segment the lungs and kidneys accurately. The liver and spleen segmentation based on statistical shape searching also showed good performance.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Abdominal organ segmentation is an essential step in multi-organ visualization, clinical
diagnosis and therapy. Up to now, several methods [Okada12, Wang14] have been proposed, and all
of them showed that information about the spatial relationship among organs is very beneficial to
automatic 3D multi-organ localization. Previous studies also indicated that segmentation in a
hierarchical way is more robust [Wang14, Selver14]. In our previous work [Li14], we used Adaboost
and statistical shape model (SSM) prior knowledge to segment the liver successfully. Here we extend
this framework to multi-organ segmentation as shown in Figure 1. The differences are two-fold.
Firstly, Multi-Boost [Ben12] is employed to classify two organs at a time in a top-down order. The
segmentation result of the previous organs is used to classify the organs at the next level. Secondly, to acquire
a customized, patient-specific shape result, free searching is directed by a K-Nearest Neighbor (KNN) classifier and is
constrained by voxel-based information such as probability, intensity and gradient features.</p>
    </sec>
    <sec id="sec-2">
      <title>Method</title>
      <sec id="sec-2-1">
        <title>Model Construction</title>
        <p>The SSM was constructed from 20 CT and 20 CTce training binary segmentations. First,
a reasonable region of interest (ROI) of each training binary image is extracted and aligned by generalized
Procrustes analysis. Then a smooth reference mesh is obtained using the marching cubes
method. Finally, a set of corresponding shapes is created by elastic registration of the reference
shape to the aligned binary images. The SSM is constructed with the Statismo toolkit [Luthi12] and
represented as a simplex mesh. The local appearance model of each organ is established by a KNN
classifier trained on intensity and gradient profile information inside, outside and at the true
organ boundary, as suggested in [Heimann07].</p>
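        <p>The SSM is a PCA model over the corresponding landmark sets. The following is a minimal NumPy sketch of this step, not the Statismo API; it assumes the point-to-point correspondences have already been established by the elastic registration described above, and all function names are illustrative.</p>
        <preformat><![CDATA[
import numpy as np

def build_ssm(shapes):
    """Build a PCA shape model from corresponding landmark sets.

    shapes: array of shape (n_samples, n_landmarks * 3) holding the
            Procrustes-aligned, point-to-point corresponding shapes.
    Returns the mean shape, the principal modes of variation and the
    per-mode variances (eigenvalues).
    """
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    # SVD of the centered data matrix yields the principal components.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    eigenvalues = (s ** 2) / (shapes.shape[0] - 1)
    return mean_shape, vt, eigenvalues

def synthesize(mean_shape, modes, eigenvalues, b):
    """Generate a plausible shape from mode coefficients b."""
    k = len(b)
    return mean_shape + (b * np.sqrt(eigenvalues[:k])) @ modes[:k]
]]></preformat>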
      </sec>
      <sec id="sec-2-2">
        <title>Multi-organ Localization</title>
        <p>Image features such as intensity, location and contextual information are used to train a
multi-boost classifier. To ensure the generalization ability of the classifier, relative location information
between organs, and between each organ and the whole body, is exploited. Template matching is employed to extract
the organ ROI as shown in Figure 2(a). Localization and segmentation are performed in a top-down
order: first the left and right lungs, then the liver and spleen, and finally the left and right kidneys, as seen in Figure
2(b). Thresholding is applied to the probability image of the boosting-classified ROI to
obtain the pre-segmentation mask. Owing to the good boosting classification precision for the lungs and kidneys,
the pre-segmentation mask is used as their final segmentation.</p>
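        <p>A minimal sketch of this voxel-wise classification and thresholding step is given below. It uses scikit-learn's AdaBoostClassifier as a stand-in for the Multi-Boost package, and the feature dimensions and thresholds are illustrative assumptions rather than the values used in the paper.</p>
        <preformat><![CDATA[
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Hypothetical per-voxel training features: intensity, normalized x/y/z
# position in the body, and offsets relative to already-located organs.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 7))
y_train = (X_train[:, 0] > 0).astype(int)   # placeholder organ labels

clf = AdaBoostClassifier(n_estimators=100)
clf.fit(X_train, y_train)

# Classify the voxels of an ROI extracted by template matching and
# threshold the probability map to obtain the pre-segmentation mask.
X_roi = rng.normal(size=(2000, 7))
organ_prob = clf.predict_proba(X_roi)[:, 1]
pre_seg_mask = organ_prob > 0.5
]]></preformat>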
        <p>(a)
(b)
(c)
Similarity and shape transform parameters are initialized rst by registration of SSM shape to the
distance map of the pre-segmentation image. Appearance model is utilized for accurate parameters
searching [Cootes95]. Previous trained KNN-classi er shifts each landmark to its optimal
displacement position, similarity and shape parameters are then calculated through matrix operations.
This process is performed iteratively until the parameters converge.
2.4</p>
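        <p>The following is a simplified sketch of this iterative parameter search in the style of [Cootes95]; the similarity transform update is omitted for brevity, and the callable that queries the KNN boundary classifier is an assumed placeholder.</p>
        <preformat><![CDATA[
import numpy as np

def fit_shape_model(mean_shape, modes, eigenvalues, find_target_points,
                    n_modes=10, n_iters=50, tol=1e-4):
    """Iteratively update the shape coefficients b until convergence.

    find_target_points: callable that, given the current landmark
    positions, returns the target positions proposed by the boundary
    profile (KNN) classifier.
    """
    b = np.zeros(n_modes)
    shape = mean_shape.copy()
    for _ in range(n_iters):
        targets = find_target_points(shape)
        # Project the landmark displacements onto the shape modes.
        b_new = modes[:n_modes] @ (targets - mean_shape)
        # Constrain every coefficient to +/- 3 standard deviations.
        limit = 3.0 * np.sqrt(eigenvalues[:n_modes])
        b_new = np.clip(b_new, -limit, limit)
        if np.linalg.norm(b_new - b) < tol:
            break
        b = b_new
        shape = mean_shape + b @ modes[:n_modes]
    return shape, b
]]></preformat>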
      </sec>
      <sec id="sec-2-3">
        <title>Appearance Profile Classifier Directed Boundary Searching</title>
        <p>In this step, the goal is to find the optimal confidence position for each mesh vertex. Owing to the high
accuracy of KNN, it is still used as the boundary profile classification method. However, in the model fitting step,
the best positions calculated by KNN may overshoot or fail to reach the true boundary, as illustrated
in Figure 2(c). For convenience, the target position found by the KNN search is referred to as the KNN position.
The points around the KNN position are selected as candidate points. Each candidate
point is assigned the Adaboost probability obtained in the localization step together with intensity
and gradient terms, each scaled to [-1,1]. The point with the maximum voting value is taken as the optimal
confidence position. To preserve the smoothness of the shape, a point can only move towards the
computed best position by a constrained step. This process stops after a user-specified
number of iterations.</p>
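        <p>A minimal sketch of the per-vertex candidate search is shown below; the scoring callable stands in for the combined Adaboost probability, intensity and gradient terms, and the search range, step limit and candidate count are illustrative assumptions.</p>
        <preformat><![CDATA[
import numpy as np

def refine_vertex(knn_pos, normal, score_at,
                  search_range=3.0, max_step=1.0, n_candidates=13):
    """Pick the best-confidence position around the KNN position.

    score_at: callable returning the combined confidence of a point,
              e.g. boosting probability plus intensity and gradient
              terms, each scaled to [-1, 1].
    """
    offsets = np.linspace(-search_range, search_range, n_candidates)
    candidates = knn_pos + offsets[:, None] * normal
    scores = np.array([score_at(p) for p in candidates])
    best = candidates[np.argmax(scores)]
    step = best - knn_pos
    dist = np.linalg.norm(step)
    if dist > max_step:
        # Only move a constrained step, which keeps the mesh smooth.
        step *= max_step / dist
    return knn_pos + step
]]></preformat>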
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Results</title>
      <p>Twenty non-contrast CT and twenty contrast-enhanced CT (CTce) training volumes were used
to train each multi-boost classifier and KNN boundary classifier. The SSM was built on all thirty
datasets. There are 2562 landmarks in the mean liver shape model and 1520 in the mean
spleen shape model. The experiment was run on 30 unseen CT and 30 unseen CTce datasets and evaluated
by the Dice coefficient and the average Hausdorff distance (AvgD). The evaluation results are shown in
Table 1.</p>
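      <p>For reference, the Dice coefficient used for evaluation can be computed as in the sketch below; the average Hausdorff distance additionally requires surface distance computations and is omitted here.</p>
      <preformat><![CDATA[
import numpy as np

def dice_coefficient(seg, ref):
    """Dice overlap between two binary segmentation masks."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(seg, ref).sum()
    return 2.0 * intersection / (seg.sum() + ref.sum())
]]></preformat>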
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>In this paper, a robust and automatic multi-organ segmentation method was proposed. The method
exploits and combines different kinds of prior knowledge, such as organ interrelations, intensity, boundary
profiles and shape variation information, for robust model localization, model fitting and free
searching. The method has been validated in the ISBI 2015 VISCERAL challenge and showed good
performance. Future work will extend the framework to the segmentation of more abdominal organs.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This work was supported by the following grants: NSFC-Guangdong Union Foundation (Grant
No. U1401254) and Guangdong Science and Technology Projects (Grant Nos. 2012A080203013 and
2012A030400013).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [Okada12]
          <string-name>
            <given-names>T.</given-names>
            <surname>Okada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. G.</given-names>
            <surname>Linguraru</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hori</surname>
          </string-name>
          , et al.
          <article-title>Multi-organ segmentation in abdominal CT images</article-title>
          .
          <source>2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)</source>
          , pp.
          <fpage>3986</fpage>
          -
          <lpage>3989</lpage>
          ,
          <year>2012</year>
          . DOI: 10.1109/EMBC.2012.6346840.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [Selver14]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Selver</surname>
          </string-name>
          .
          <article-title>Segmentation of abdominal organs from CT using a multi-level, hierarchical neural network strategy</article-title>
          .
          <source>Computer Methods and Programs in Biomedicine</source>
          ,
          <volume>113</volume>
          (
          <issue>3</issue>
          ):
          <fpage>830</fpage>
          -
          <lpage>852</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [Wang14]
          <string-name>
            <given-names>C.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Smedby</surname>
          </string-name>
          .
          <article-title>Automatic multi-organ segmentation in nonenhanced CT datasets using hierarchical shape priors</article-title>
          .
          <source>The 22nd International Conference on Pattern Recognition (ICPR)</source>
          , pp.
          <fpage>3327</fpage>
          -
          <lpage>3332</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [Li14]
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Jia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Fan</surname>
          </string-name>
          .
          <article-title>Automatic liver segmentation using statistical prior models and free-form deformation</article-title>
          . In: Menze,
          <string-name>
            <surname>B.</surname>
          </string-name>
          , et al. (eds.)
          <article-title>MCV 2014</article-title>
          .
          <article-title>LNCS</article-title>
          , vol.
          <volume>8848</volume>
          , pp.
          <fpage>181</fpage>
          -
          <lpage>188</lpage>
          . Springer, Heidelberg (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [Ben12]
          <string-name>
            <given-names>D.</given-names>
            <surname>Benbouzid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Busa-Fekete</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Casagrande</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.-D.</given-names>
            <surname>Collin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kégl</surname>
          </string-name>
          .
          <article-title>MULTIBOOST: a multi-purpose boosting package</article-title>
          .
          <source>The Journal of Machine Learning Research</source>
          ,
          <volume>13</volume>
          , pp.
          <fpage>549</fpage>
          -
          <lpage>553</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [Heimann07]
          <string-name>
            <given-names>T.</given-names>
            <surname>Heimann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. P.</given-names>
            <surname>Meinzer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Wolf</surname>
          </string-name>
          .
          <article-title>A statistical deformable model for the segmentation of liver CT volumes</article-title>
          . MICCAI Workshop:
          <article-title>3D Segmentation in the clinic: A grand challenge</article-title>
          ,
          <fpage>161</fpage>
          -
          <lpage>166</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [Luthi12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Luthi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Blanc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Albrecht</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gass</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Goksel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Buchler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kistler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Bousleiman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Reyes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cattin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Vetter</surname>
          </string-name>
          .
          <article-title>Statismo - A framework for PCA based statistical models</article-title>
          .
          <source>The Insight Journal</source>
          ,
          <volume>1</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>18</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [Cootes95]
          <string-name>
            <given-names>T.</given-names>
            <surname>Cootes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Taylor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. H.</given-names>
            <surname>Cooper</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Graham</surname>
          </string-name>
          .
          <article-title>Active shape models - their training and application</article-title>
          .
          <source>Computer Vision and Image Understanding</source>
          ,
          <volume>61</volume>
          (
          <issue>1</issue>
          ):
          <fpage>38</fpage>
          -
          <lpage>59</lpage>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>