<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>ImageCLEF 2017: Supervoxels and Co-occurrence for Tuberculosis CT Image Classification</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vitali Liauchuk</string-name>
          <email>vitali.liauchuk@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vassili Kovalev</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>United Institute of Informatics Problems</institution>
          ,
          <addr-line>Minsk</addr-line>
          ,
          <country country="BY">Belarus</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The paper presents the image description and classification methods used by the United Institute of Informatics Problems (UIIP) group for the tuberculosis image classification task. A method based on co-occurrence of adjacent supervoxels in 3D computed tomography (CT) images was used for subtask #1, which was dedicated to image-based recognition of multi-drug resistant tuberculosis. For subtask #2, which is dedicated to automated categorization of tuberculosis patients into one of five types of tuberculosis, extended multi-dimensional multi-sort co-occurrence matrices were used for describing the CT scans. Both submitted runs were ranked 7th in their respective subtasks.</p>
      </abstract>
      <kwd-group>
        <kwd>Supervoxels</kwd>
        <kwd>Dictionary</kwd>
        <kwd>Co-occurrence</kwd>
        <kwd>Image Classification</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The tuberculosis task [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] of ImageCLEF 2017 Challenge [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] considers two
subtasks, both dealing with 3D CT images. Subtask #1 is dedicated to the
problem of distinguishing, from a single image, between multi-drug resistant
tuberculosis (MDR TB) cases and drug-sensitive (DS) ones. The task is very
challenging: so far, no techniques have been reported that allow robust and
accurate prediction of tuberculosis drug resistance based solely on lung CT
images. Several studies have reported statistically significant links between the
presence of visually detected radiological findings and drug resistance status [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. In addition,
research has been carried out to detect statistically significant links between
drug resistance and structural features of radiological lung images [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], and
some trials were made to assess the achievable prediction accuracy [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. However,
the datasets used in those studies were not large enough and contained relapse
tuberculosis cases, which have a much higher probability of being drug-resistant.
In contrast, the dataset collected for ImageCLEF 2017 tuberculosis subtask #1
contained only DS and MDR cases, without the transitional single-drug resistant and
poly-drug resistant ones, and with no relapses, in order to make the dataset as
unbiased as possible. The training set in this subtask included 230 CT images: 134 drug
sensitive and 96 drug resistant cases. The test set consisted of 214 CT images and
was slightly biased towards MDR tuberculosis: 101 drug sensitive and 113 drug
resistant cases.
      </p>
      <p>The subtask #2 of the ImageCLEF 2017 tuberculosis task is aimed at
automatic categorization of CT images into one of five types of tuberculosis:
Infiltrative, Focal, Tuberculoma, Miliary and Fibro-cavernous. With 500 CT
scans in the training set and 300 images in the test set, the subtask provides a
valuable benchmark for computerized methods of CT image content description and
classification.</p>
    </sec>
    <sec id="sec-2">
      <title>Data Preparation</title>
      <sec id="sec-2-1">
        <title>Segmentation of lung regions</title>
        <p>
          For extraction of lung regions in both subtasks, an in-house implementation of
a conventional segmentation approach based on non-rigid image registration was
used instead of the one proposed by the organizers. The method
utilized 130 reference CT scans with manually segmented lungs. These 130 CT
scans represent a completely separate dataset and do not overlap with any of the
ImageCLEF tuberculosis datasets. For each target CT scan, a similarity measure
was calculated between the target image and the reference images, and the top-5
most similar reference images were selected. The selected images along with the
corresponding lung masks were registered using the 'elastix' software tool [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], and the
final segmentation mask was obtained by averaging the registered masks. The implemented
method demonstrated high robustness to the presence of large lesions in the lungs
(see Fig. 1).
For subtask #1 we used an image description method which operates with
so-called image supervoxels, which are basically a 3D version of conventional 2D
image superpixels [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. The method categorizes image supervoxels
into classes according to a precalculated supervoxel dictionary, similarly to the
well-known bag-of-words approach [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. For composing a meaningful supervoxel
dictionary, an auxiliary image dataset was assembled. The dataset included 229
small 3D CT image regions of size 128 x 128 x 128 voxels extracted from CT
scans. Before extraction of the regions, the original CT images were re-sampled using
nearest-neighbor interpolation in order to equalize the voxel size along all
three axes, i.e. to make the voxels cubic-shaped (see examples in Fig. 2).
        </p>
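The registration-based segmentation scheme described above (select the top-5 most similar reference scans, register their lung masks to the target, then average) can be sketched roughly as follows. The similarity measure and the mask-fusion threshold are illustrative assumptions, and the non-rigid registration step itself (performed with elastix in the paper) is abstracted away:

```python
import numpy as np

def select_top_references(target, references, k=5):
    """Rank reference scans by a simple similarity measure (here the
    negative mean squared difference) and return indices of the top-k.
    NOTE: the similarity measure used in the paper is not specified;
    MSE on intensity volumes is an illustrative stand-in."""
    scores = []
    for ref in references:
        scores.append(-np.mean((target - ref) ** 2))
    order = np.argsort(scores)[::-1]  # descending similarity
    return list(order[:k])

def fuse_masks(registered_masks, threshold=0.5):
    """Average the k registered binary lung masks and threshold the
    result to obtain the final segmentation (majority-style fusion)."""
    avg = np.mean(np.stack(registered_masks, axis=0), axis=0)
    return avg >= threshold

# toy demonstration on random 8x8x8 volumes
rng = np.random.default_rng(0)
target = rng.random((8, 8, 8))
refs = [rng.random((8, 8, 8)) for _ in range(10)]
top = select_top_references(target, refs, k=5)
masks = [rng.random((8, 8, 8)) > 0.4 for _ in top]  # stand-ins for registered masks
final_mask = fuse_masks(masks)
```

In practice each selected mask would first be warped by the deformation field obtained from registering its reference scan to the target; only the fusion step is shown here.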
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Subtask #1: Drug Resistance prediction</title>
      <p>
        For subtask #1, a method for quantitative description of biomedical images
based on supervoxel representation and the co-occurrence concept was
used. To the best of our knowledge, the potential of superpixel/supervoxel-based image
description has not been extensively researched yet [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <sec id="sec-3-1">
        <title>General scheme of image description method</title>
        <p>The proposed image description method includes two major stages: (a)
generating a supervoxel dictionary and (b) describing the images using the obtained
dictionary. In this study, supervoxel dictionaries were represented by the sets
of features of the most typical supervoxels occurring in images of a given
type. The dictionary generation stage included the following steps:
- selection of a certain number of representative images of a given type;
- extraction of supervoxels from the selected images;
- extraction of supervoxel features;
- splitting the supervoxel feature space into N clusters;
- calculating cluster (class) centroids;
- composing the supervoxel dictionary (the set of centroids).</p>
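The dictionary-generation steps above can be sketched with a plain k-means loop over the extracted supervoxel feature vectors; the feature values, cluster count and iteration budget in this sketch are illustrative:

```python
import numpy as np

def build_dictionary(features, n_clusters=8, n_iter=20, seed=0):
    """Cluster supervoxel feature vectors with a basic k-means loop and
    return the cluster centroids, which form the supervoxel dictionary.
    `features` is an (n_supervoxels, n_features) array of descriptors
    (e.g. mean intensity, std, entropy, gradient, sphericity, cubeness)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(features), size=n_clusters, replace=False)
    centroids = features[idx].copy()
    for _ in range(n_iter):
        # assign each supervoxel to its nearest centroid
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its members
        for c in range(n_clusters):
            members = features[labels == c]
            if len(members) > 0:
                centroids[c] = members.mean(axis=0)
    return centroids

rng = np.random.default_rng(1)
feats = rng.random((500, 6))  # toy stand-in for extracted supervoxel features
dictionary = build_dictionary(feats, n_clusters=8)
```

A library implementation (e.g. scikit-learn's KMeans) would serve equally well; the loop is written out only to keep the sketch self-contained.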
        <p>
          The image description stage was based on calculation of histograms and
co-occurrence matrices of image supervoxels categorized into N classes according
to the previously obtained dictionary. This included the following steps:
- extraction of supervoxels from the target image;
- extraction of supervoxel features;
- categorization of each supervoxel into one of the N classes according to the
precalculated dictionary;
- calculation of a co-occurrence matrix [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] of the categorized supervoxels.
        </p>
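A minimal sketch of this description stage, assuming supervoxel features, a precomputed dictionary and an adjacency list of supervoxel pairs are available (all names here are hypothetical):

```python
import numpy as np

def categorize(features, centroids):
    """Assign each supervoxel the index of its nearest dictionary centroid."""
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def class_cooccurrence(labels, adjacency_pairs, n_classes):
    """Count co-occurrences of class labels over pairs of adjacent
    supervoxels; the flattened symmetric matrix is the image descriptor."""
    m = np.zeros((n_classes, n_classes), dtype=np.int64)
    for i, j in adjacency_pairs:
        m[labels[i], labels[j]] += 1
        m[labels[j], labels[i]] += 1
    return m

# toy example: 6 supervoxels, a 2-class dictionary, chain adjacency
cents = np.array([[0.0], [1.0]])
feats = np.array([[0.1], [0.9], [0.2], [0.8], [0.1], [0.95]])
labels = categorize(feats, cents)
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
mat = class_cooccurrence(labels, pairs, n_classes=2)
feature_vector = mat.flatten()
```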
      </sec>
      <sec id="sec-3-2">
        <title>Composing supervoxel dictionary</title>
        <p>
          In [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] the authors used superpixel dictionaries for semantic segmentation of
street images. A set of 1708 superpixel features including color, texture, shape
and location features was used for the task of scene description. In our study, we
used a set of 6 major supervoxel features which describe the texture and
shape of a single supervoxel:
- mean intensity of internal voxels;
- standard deviation of intensity;
- entropy of intensity;
- mean gradient magnitude;
- sphericity;
- "cubeness".
        </p>
        <p>Sphericity was calculated using the formula:

Sphericity = pi^(1/3) * (6V)^(2/3) / A,   (1)

where V is the total number of voxels in the supervoxel, and A is the number of
border voxels in the supervoxel. The sphericity value is close to 1 if the supervoxel's
shape is similar to a sphere. The "cubeness" feature expressed the extent to which
the supervoxel shape is similar to a cube and was calculated as:</p>
        <p>Cubeness = V / Vbb,   (2)

where Vbb is the number of voxels in the bounding box of the supervoxel
considered. The maximum value of the "cubeness" feature is 1 in the case of an ideally
cubic-shaped supervoxel. The meaning of this feature is dictated by the supervoxel
generation algorithm: at the initialization step, the shape of all supervoxels is
cubic, and if the underlying image region is sufficiently homogeneous the
resultant supervoxels remain cubic-shaped. A 3D version of the superpixel generation
algorithm [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] used in this study has two control parameters: the supervoxel size
Sz and a regularization parameter Reg. Examples of generated supervoxels
are shown in Fig. 3.
        </p>
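The two shape features can be computed from a supervoxel's binary mask roughly as follows, using the voxel-count definitions of V, A and Vbb given above:

```python
import numpy as np

def sphericity_and_cubeness(mask):
    """Compute the two shape features of a supervoxel from its binary mask.
    V is the voxel count, A the number of border voxels (voxels with at
    least one 6-neighbour outside the supervoxel), Vbb the bounding-box volume."""
    V = int(mask.sum())
    padded = np.pad(mask, 1)  # zero padding so edge voxels count as border
    border = np.zeros(mask.shape, dtype=bool)
    for axis in range(3):
        for shift in (-1, 1):
            neighbour = np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
            outside = np.logical_and(mask, np.logical_not(neighbour))
            border = np.logical_or(border, outside)
    A = int(border.sum())
    # Sphericity = pi^(1/3) * (6V)^(2/3) / A, close to 1 for a sphere
    sphericity = np.pi ** (1.0 / 3.0) * (6.0 * V) ** (2.0 / 3.0) / A
    coords = np.argwhere(mask)
    bbox = coords.max(axis=0) - coords.min(axis=0) + 1
    cubeness = V / np.prod(bbox)  # equals 1 for an ideally cubic supervoxel
    return sphericity, cubeness

cube = np.ones((5, 5, 5), dtype=bool)  # an ideally cubic supervoxel
sph, cub = sphericity_and_cubeness(cube)
```

Note that with A defined as a border-voxel count rather than a true surface area, the sphericity of a digital shape is a discrete approximation and need not stay below 1.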
        <p>Supervoxel dictionaries were generated for several combinations of the
parameters Sz and Reg. Supervoxel clustering was performed using the k-means algorithm,
with the number of clusters set to N = 8, 16, 32 and 64.
The proposed method was finally applied to the training set images of
ImageCLEF tuberculosis subtask #1. The following combinations of control
parameters of the supervoxel extraction algorithm were used: Sz = 32, Reg = 0.1; Sz = 32,
Reg = 0.3; Sz = 32, Reg = 1; Sz = 48, Reg = 0.1; Sz = 48, Reg = 0.3;
Sz = 48, Reg = 1. The supervoxel dictionary size N was set to 8, 16, 32 and
64. Supervoxel maps were calculated for all of the 230 training CT scans, class
numbers were assigned to the supervoxels, and for each image a supervoxel class
co-occurrence matrix was calculated, which was further used as a feature vector
for prediction.</p>
        <p>Only the feature vector elements which were correlated with drug resistance at
significance level p &lt; 0.05 were selected for further prediction. Finally,
assessment of a patient's drug resistance probability was performed using a Logistic
Regression classifier within a 5-fold cross-validation procedure. The area under the
ROC curve (AUC) was used as the measure of prediction performance. The results are
shown in Table 3.3. According to the table of results, the best prediction
performance was achieved for the following combination of parameters: Sz = 32,
Reg = 1 and supervoxel dictionary size N = 64. However, it should be noticed
that even the highest achieved performance is quite low in general and, with
230 study cases, corresponds to a statistical significance level of roughly p = 0.01.</p>
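The feature-selection and evaluation steps might be sketched as below. To keep the example dependency-free, the significance test at the 0.05 level is replaced by a fixed top-n correlation ranking, and the AUC is computed via the rank-sum identity rather than a library call; all names are illustrative:

```python
import numpy as np

def select_correlated_features(X, y, n_keep=10):
    """Rank features by the absolute Pearson correlation of each column
    with the binary drug-resistance label and keep the strongest ones
    (a stand-in for keeping features significant at the 0.05 level)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()) + 1e-12)
    order = np.argsort(np.abs(r))[::-1]
    return order[:n_keep]

def auc_score(scores, y):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    pos = scores[y == 1]
    neg = scores[y == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# toy data: 100 cases, 64 co-occurrence features, one made informative
rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=100)
X = rng.random((100, 64))
X[:, 3] += 0.5 * y  # feature 3 correlates with the label
keep = select_correlated_features(X, y, n_keep=5)
auc = auc_score(X[:, 3], y)
```

In the paper the selected features feed a Logistic Regression classifier evaluated by 5-fold cross-validation; that fitting step is omitted here.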
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Subtask #2: tuberculosis type detection</title>
      <p>
        For this subtask, an extended multi-sort, multi-dimensional co-occurrence
matrix approach was used, which has proven to be powerful and flexible enough to
capture a broad range of structural properties of both 2D and 3D medical
images [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. For describing lung CT image structure we employed the most general,
six-dimensional matrices of IIGGAD type, counting voxel pairs with certain
intensities (I), gradient magnitudes (G), and mutual angles (A) between the
gradient vectors. The CT image intensity range from -1000 to +500 Hounsfield Units
(HU) was quantized into 12 bins. Gradient magnitudes and angles between gradients
were both quantized into 6 bins. The matrices consider co-occurrence of voxels in
3D at distances from 1 to 5, measured in units of the axial voxel size. The values of the
algorithm parameters were selected empirically to maximize classification performance on the
training set of images. The resultant multidimensional co-occurrence matrices
contained in total 12^2 x 6^2 x 6 x 5 = 155,520 elements, most of which were,
however, zeros.
      </p>
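A simplified sketch of accumulating such a six-dimensional IIGGAD matrix is given below. For brevity it considers voxel pairs along a single axis only, and the binning of intensities, gradients and angles is an illustrative assumption (the volume is assumed normalized to [0, 1]):

```python
import numpy as np

def iiggad_matrix(vol, n_int=12, n_grad=6, n_ang=6, distances=(1, 2, 3, 4, 5)):
    """Accumulate a 6-D co-occurrence matrix of IIGGAD type: the two voxel
    intensities (I, I), the two gradient magnitudes (G, G), the angle (A)
    between the gradient vectors, and the pair distance (D). This sketch
    uses displacements along one axis only; the method in the paper
    accumulates over full 3-D neighbourhoods."""
    g = np.gradient(vol.astype(float))
    gvec = np.stack(g, axis=-1)          # per-voxel gradient vector
    gmag = np.linalg.norm(gvec, axis=-1)
    ib = np.clip((vol * n_int).astype(int), 0, n_int - 1)  # intensity bins
    gb = np.clip((gmag / (gmag.max() + 1e-12) * n_grad).astype(int),
                 0, n_grad - 1)                             # gradient bins
    m = np.zeros((n_int, n_int, n_grad, n_grad, n_ang, len(distances)), np.int64)
    for di, d in enumerate(distances):
        a = gvec[:-d].reshape(-1, 3)
        b = gvec[d:].reshape(-1, 3)
        cosang = (a * b).sum(axis=1) / (
            np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12)
        ab = np.clip(((cosang + 1) / 2 * n_ang).astype(int), 0, n_ang - 1)
        idx = (ib[:-d].ravel(), ib[d:].ravel(),
               gb[:-d].ravel(), gb[d:].ravel(), ab, np.full(len(ab), di))
        np.add.at(m, idx, 1)  # unbuffered accumulation into the 6-D histogram
    return m

vol = np.random.default_rng(3).random((10, 10, 10))
m = iiggad_matrix(vol)  # 12*12*6*6*6*5 = 155,520 elements in total
```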
      <p>To reduce the dimensionality of the feature space, Principal Component
Analysis (PCA) was applied, and the first 50 principal components were retained for
further analysis. The prediction of a patient's tuberculosis type was performed
using a Random Forest classifier.</p>
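The dimensionality-reduction step can be sketched with an SVD-based PCA (a dependency-free stand-in for a library implementation); the reduced features would then be fed to a Random Forest classifier, e.g. scikit-learn's RandomForestClassifier:

```python
import numpy as np

def pca_reduce(X, n_components=50):
    """Project feature vectors onto their first principal components,
    obtained from the SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # PCA scores

# toy stand-in for the flattened co-occurrence feature vectors
rng = np.random.default_rng(4)
X = rng.random((500, 200))
Z = pca_reduce(X, n_components=50)
```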
      <p>The evaluation of the proposed image classification method within a 5-fold
cross-validation procedure demonstrated a classification accuracy of 57.0%, with
an un-weighted Cohen's Kappa coefficient of 0.442.</p>
    </sec>
    <sec id="sec-5">
      <title>Submission and results</title>
      <p>
        Since subtask #1 represents a very challenging and important problem, most of
the efforts of our team were focused on the drug resistance prediction subtask. Several
different approaches were tested and various descriptor types were examined.
Algorithms for automated detection of lesions [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] were used to derive additional
information on the affected portion of the lungs. Some trials were made to utilize Deep
Learning classification methods to distinguish between DS and MDR
tuberculosis lesion appearance, but no success was achieved there. Finally, since none
of the additional approaches provided any significant increase in classification
performance, only one run was submitted for subtask #1. For subtask #2,
a single run was submitted as well.
      </p>
      <p>
        In the final table of results, the submitted run for subtask #1 was ranked
7th among the 28 submitted runs, with an area under the ROC curve (AUC) equal
to 0.5415 and a prediction accuracy of 49.3% on the test image dataset. The best
result in terms of AUC was achieved by the MedGIFT team [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], yielding
AUC = 0.5825. The best drug resistance prediction accuracy, 56.8%, was achieved
by the HHU DBS team [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
      <p>
        For subtask #2, the submitted run was ranked 7th among the total of
23 runs, with a recognition accuracy of 39.0% and Cohen's Kappa equal to
0.196. The best results in this subtask were obtained by the SGEast team [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], which
corresponded to a Cohen's Kappa value of 0.2438 and a recognition accuracy of
40.3%.
      </p>
    </sec>
    <sec id="sec-6">
      <title>Conclusions</title>
      <p>In this paper, the image classification methods employed by the UIIP group for
the ImageCLEF 2017 tuberculosis task were described. Tested within the two
subtasks, the image description methods based on co-occurrence of voxels and
supervoxels proved to be efficient for describing the textural appearance of 3D
medical images. It should be noticed that even the top-performing run
submitted for subtask #1 resulted in AUC = 0.59, which is quite low for a
classification task. The conclusion which can be drawn is that the task of
predicting tuberculosis drug resistance based on a single CT image probably cannot
be solved with reasonable accuracy.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <p>This study was supported by the National Institute of Allergy and Infectious
Diseases, National Institutes of Health, U.S. Department of Health and Human
Services, USA through the CRDF project OISE-16-62631-1.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Dicente Cid</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalinovsky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liauchuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovalev</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Muller</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Overview of ImageCLEFtuberculosis 2017 - predicting tuberculosis type and drug resistances</article-title>
          .
          <source>In: CLEF2017 Working Notes. CEUR Workshop Proceedings</source>
          , Dublin, Ireland, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 11-14
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Ionescu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , Muller, H.,
          <string-name>
            <surname>Villegas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Arenas</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boato</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dicente Cid</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eickhoff</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garcia Seco de Herrera</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Islam</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovalev</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liauchuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mothe</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riegler</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schwall</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Overview of ImageCLEF 2017: Information extraction from images</article-title>
          .
          <source>In: Experimental IR Meets Multilinguality, Multimodality, and Interaction 8th International Conference of the CLEF Association</source>
          ,
          <string-name>
            <surname>CLEF</surname>
          </string-name>
          <year>2017</year>
          . Volume
          <volume>10456</volume>
          of Lecture Notes in Computer Science, Dublin, Ireland, Springer (September 11-14
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Cha</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>H.Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>K.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koh</surname>
            ,
            <given-names>W.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kwon</surname>
            ,
            <given-names>O.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yi</surname>
            ,
            <given-names>C.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>T.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chung</surname>
            ,
            <given-names>M.J.:</given-names>
          </string-name>
          <article-title>Radiological findings of extensively drug-resistant pulmonary tuberculosis in non-AIDS adults: comparisons with findings of multidrug-resistant and drug-sensitive tuberculosis</article-title>
          .
          <source>Korean journal of radiology 10(3)</source>
          (
          <year>2009</year>
          )
          <volume>207</volume>
          -
          <fpage>216</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>E.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Park</surname>
            ,
            <given-names>C.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goo</surname>
            ,
            <given-names>J.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yim</surname>
            ,
            <given-names>J.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>H.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>H.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>I.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Im</surname>
            ,
            <given-names>J.G.</given-names>
          </string-name>
          :
          <article-title>Computed tomography features of extensively drug-resistant pulmonary tuberculosis in non-HIV-infected patients</article-title>
          .
          <source>Journal of computer assisted tomography 34(4)</source>
          (
          <year>2010</year>
          )
          <volume>559</volume>
          -
          <fpage>563</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Kovalev</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liauchuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Safonau</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Astrauko</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skrahina</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tarasau</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Is there any correlation between the drug resistance and structural features of radiological images of lung tuberculosis patients</article-title>
          ? In: Computer Assisted Radiology - 27th
          <source>International Congress and Exhibition (CARS-2013)</source>
          . Volume
          <volume>8</volume>
          ., Springer, Heidelberg (
          <year>2013</year>
          )
          <volume>18</volume>
          -
          <fpage>20</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Kovalev</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liauchuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalinouski</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosenthal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gabrielian</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skrahina</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Astrauko</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tarasau</surname>
          </string-name>
          :
          <article-title>Utilizing radiological images for predicting drug resistance of lung tuberculosis</article-title>
          . In: Computer Assisted Radiology - 27th
          <source>International Congress and Exhibition (CARS-2015)</source>
          . Volume
          <volume>10</volume>
          ., Springer, Barcelona (
          <year>2015</year>
          )
          <volume>129</volume>
          -
          <fpage>130</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Klein</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Staring</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Murphy</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Viergever</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pluim</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          :
          <article-title>Elastix: a toolbox for intensity-based medical image registration</article-title>
          .
          <source>IEEE Transactions on medical imaging 29(1)</source>
          (
          <year>2010</year>
          )
          <volume>196</volume>
          -
          <fpage>205</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Achanta</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shaji</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lucchi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fua</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , Susstrunk, S.:
          <article-title>SLIC superpixels compared to state-of-the-art superpixel methods</article-title>
          .
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          <volume>34</volume>
          (
          <issue>11</issue>
          ) (
          <year>November 2012</year>
          )
          <volume>2274</volume>
          -
          <fpage>2282</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jiang</surname>
            ,
            <given-names>Y.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hauptmann</surname>
            ,
            <given-names>A.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ngo</surname>
            ,
            <given-names>C.W.</given-names>
          </string-name>
          :
          <article-title>Evaluating bag-of-visual-words representations in scene classification</article-title>
          .
          <source>In: Proceedings of the international workshop on Workshop on multimedia information retrieval</source>
          ,
          <source>ACM</source>
          (
          <year>2007</year>
          )
          <volume>197</volume>
          -
          <fpage>206</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Sicre</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tasli</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gevers</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Superpixel based angular differences as a mid-level image descriptor</article-title>
          .
          <source>In: 22nd International Conference on Pattern Recognition (ICPR-2014)</source>
          , Stockholm, Sweden (
          <year>2014</year>
          )
          <volume>3732</volume>
          -
          <fpage>3737</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Micusik</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kosecka</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Semantic segmentation of street scenes by superpixel co-occurrence and 3D geometry</article-title>
          .
          <source>In: IEEE Workshop on Video-Oriented Object and Event Classification</source>
          , Japan (
          <year>2009</year>
          )
          <fpage>625</fpage>
          –
          <lpage>632</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Tighe</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lazebnik</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Scalable nonparametric image parsing with superpixels</article-title>
          .
          <source>In: Proceedings of the 11th European conference on Computer vision (ECCV'10)</source>
          , Heidelberg, Germany (
          <year>2010</year>
          )
          <fpage>352</fpage>
          –
          <lpage>365</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Kovalev</surname>
            ,
            <given-names>V.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kruggel</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gertz</surname>
            ,
            <given-names>H.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>von Cramon</surname>
            ,
            <given-names>D.Y.</given-names>
          </string-name>
          :
          <article-title>Three-dimensional texture analysis of MRI brain datasets</article-title>
          .
          <source>IEEE Transactions on Medical Imaging</source>
          <volume>20</volume>
          (
          <issue>5</issue>
          ) (May
          <year>2001</year>
          )
          <fpage>424</fpage>
          –
          <lpage>433</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Kalinovsky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liauchuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tarasau</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Lesion detection in CT images using deep learning semantic segmentation technique</article-title>
          .
          <source>ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W4</source>
          (
          <year>2017</year>
          )
          <fpage>13</fpage>
          –
          <lpage>17</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Dicente Cid</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Batmanghelich</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Muller</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Textured graph-model of the lungs for tuberculosis type classification and drug resistance prediction: participation in ImageCLEF 2017</article-title>
          .
          <source>In: CLEF2017 Working Notes. CEUR Workshop Proceedings</source>
          , Dublin, Ireland, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 11-14,
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Braun</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Singhof</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tatusch</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Conrad</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Convolutional neural networks for multidrug-resistant and drug-sensitive tuberculosis distinction</article-title>
          .
          <source>In: CLEF2017 Working Notes. CEUR Workshop Proceedings</source>
          , Dublin, Ireland, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 11-14,
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chong</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tan</surname>
            ,
            <given-names>Y.X.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Binder</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>ImageCLEF 2017: ImageCLEF tuberculosis task - the SGEast submission</article-title>
          .
          <source>In: CLEF2017 Working Notes. CEUR Workshop Proceedings</source>
          , Dublin, Ireland, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 11-14,
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>