<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Textured Graph-model of the Lungs for Tuberculosis Type Classification and Drug Resistance Prediction: Participation in ImageCLEF 2017</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yashin Dicente Cid</string-name>
          <email>yashin.dicente@hevs.ch</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kayhan Batmanghelich</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Henning Muller</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Applied Sciences Western Switzerland (HES-SO)</institution>
          ,
          <addr-line>Sierre</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Geneva</institution>
          ,
          <country country="CH">Switzerland</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Pittsburgh</institution>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In 2017, the ImageCLEF benchmark proposed a task based on CT (Computed Tomography) images of patients with tuberculosis (TB). This task was divided into two subtasks: multi-drug resistance prediction, and TB type detection. In this work we present a graph-model of the lungs capable of characterizing TB patients with different lung problems. This graph contains a fixed number of nodes with weighted edges based on distance measures between texture descriptors computed on the nodes. This model attempts to encode the texture distribution along the lungs, making it suitable for describing patients with different tuberculosis types. The results show the strength of the technique, leading to the best results in the competition for multi-drug resistance (AUC = 0.5825) and good results in tuberculosis type detection (Cohen's Kappa coefficient = 0.1623), with many of the good runs being fairly close.</p>
      </abstract>
      <kwd-group>
        <kwd>lung graph-model</kwd>
        <kwd>3D texture analysis</kwd>
        <kwd>tuberculosis</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        ImageCLEF (the image retrieval and analysis evaluation campaign of the
Cross-Language Evaluation Forum, CLEF) has organized challenges on image
classification and retrieval since 2003 [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Since 2004, a medical image analysis and
retrieval task has been organized [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ].
      </p>
      <p>
        The ImageCLEF 2017 [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] challenge included a task based on tuberculosis CT
(Computed Tomography) volumes, the ImageCLEF 2017 tuberculosis task [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
In this task, a dataset of lung CT scans was provided and two subtasks were
proposed. When tuberculosis affects the lungs, several visual patterns can be
seen in a CT image. However, the final diagnosis usually requires analyses
other than the images alone [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. A preliminary visualization of the CT volumes available
in the ImageCLEF task showed lighter regions forming patterns that could be
characterized with a holistic description of the lung. In [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], a graph-model of the
lungs capable of differentiating between pulmonary hypertension and pulmonary
embolism patients is presented. Both diseases present somewhat similar visual
defects in lung CT scans, with different shapes and distributions. The graph was
based on dividing the lung into several regions and using these as nodes of a graph.
The edges of the graph encoded the difference between HU (Hounsfield unit) distributions in the
lung regions. However, Dual Energy CT (DECT) scans were used in that study,
where the HU distribution can be described in more detail than in a standard
single-energy CT. Preliminary results showed that a single energy level did not
contain enough information about the HU distribution to differentiate between
the pulmonary hypertension and embolism patients.
      </p>
      <p>Following the same approach, more complex features are used in this work.
The descriptors based on the HU distribution were replaced by 3D texture
features. Moreover, a deeper analysis of the graph edges helps to describe
the tuberculosis patterns. Our hypothesis is that a holistic analysis of the
relations between regional texture features can encode subtle differences
between patients with different tuberculosis types and drug resistances.</p>
      <p>
        The following section contains a brief overview of the subtasks and dataset of
the ImageCLEF 2017 tuberculosis task. More detailed information on the task
can be found in the overview article [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Section 3 explains the process of building
the textural graph-model of the lungs and all the variations tested for this task
in detail. The results obtained by this approach in both subtasks are shown in
Section 4. Finally, Section 5 concludes our participation in this challenge.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Subtasks and Datasets</title>
      <p>
        The ImageCLEF 2017 TB task proposed two subtasks: i) Multi-drug resistance
(MDR) prediction, and ii) Tuberculosis type (TBT) detection. The MDR task
is a 2-class problem and the TBT task contains 5 classes. For both subtasks,
volumetric chest CT images with different voxel sizes and automatic segmentations
of the lungs were provided. No other lung segmentation was attempted in this
work and the masks provided were used. These masks were obtained with the
method described in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>The competition was divided into two phases. In the first phase, the
organizers released for each subtask a set of patient CT volumes as a training set with
their lung masks and ground truth labels. In the second phase, the test set with
its lung segmentations but no labels was provided. The evaluation on the test
data was performed by the organizers after the scheduled deadline for all runs
that were submitted in time. The numbers of CT volumes for each task and set
are specified in Tables 1 and 2.</p>
    </sec>
    <sec id="sec-3">
      <title>Methods</title>
      <p>This section details the process of extracting feature vectors from the CT
images provided by the ImageCLEF TB task. The same technique was applied
to describe the patients of both subtasks. This technique consists of
creating graph-models of the lungs, with nodes based on a geometrical atlas and
weighted edges encoding dissimilarities between 3D texture descriptors of each
atlas region.</p>
      <sec id="sec-3-1">
        <title>Isometric Volumes</title>
        <p>The approach is based on 3D texture features, which require
isometric voxels. The first step of our approach is therefore to make the 3D images and
masks provided by the organizers isometric. After analyzing the multiple
resolutions and inter-slice distances found in the dataset, we opted for a voxel size
of 1 mm to capture a maximum of information.</p>
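<p>As an illustration of this resampling step, the following sketch uses SciPy's ndimage.zoom; the helper name to_isometric and the example spacing are ours, not from the paper. Nearest-neighbour interpolation keeps the lung masks binary.</p>

```python
import numpy as np
from scipy import ndimage


def to_isometric(volume, spacing_mm, target_mm=1.0, is_mask=False):
    """Resample a CT volume (z, y, x) to isometric voxels of `target_mm`.

    `spacing_mm` is the per-axis voxel size of the input volume.
    """
    zoom = [s / target_mm for s in spacing_mm]
    # order=0 (nearest neighbour) for masks, cubic spline for intensities
    order = 0 if is_mask else 3
    return ndimage.zoom(volume, zoom, order=order)


# Toy volume: 40 slices of 64x64 pixels with anisotropic spacing.
ct = np.random.randint(-1000, 400, size=(40, 64, 64)).astype(np.float32)
iso = to_isometric(ct, spacing_mm=(2.5, 0.7, 0.7))
print(iso.shape)  # (100, 45, 45): 1 mm isometric grid
```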
      </sec>
      <sec id="sec-3-2">
        <title>Atlas of the Lung</title>
        <p>
          To build a graph with a fixed structure over the lung physiology we first need
a localization system. In this case we chose the atlas developed by Depeursinge
et al. in [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. This atlas is based only on the mask of the lungs and provides 36
geometric regions dividing the lungs, as shown in Figure 1. It is not based on the
lung lobes.
      </sec>
      <sec id="sec-3-3">
        <title>3D Texture Features</title>
        <p>Two state-of-the-art 3D texture features were selected to describe the texture
inside the lung. The first method is a histogram of oriented gradients based on the Fourier
transform (FHOG), introduced in [10]. 28 3D directions are used for the
histogram, yielding a 28-dimensional feature vector per image voxel (fH ∈ R^28).</p>
        <p>The second approach is the locally-oriented 3D Riesz-wavelet transform
introduced by Dicente et al. in [11]. The parameters that obtained the best results
in that article were used here: 3rd-order Riesz transform,
4 scales, and 1st-order alignment. This configuration produces 40 filter responses
per voxel (10 filters at 4 scales). The feature vector for a single voxel is then
10-dimensional, containing the energy of each filter along the 4 scales (fR ∈ R^10).</p>
        <p>Nodes Using the 36-region atlas as a base, the texture information from the 3D
texture features was embedded in a 36-node graph. With this fixed number of
nodes, we defined several graphs varying the number of edges and their weights.
The following notation is used: given a patient p ∈ P and its 36-region atlas
Ap of the lungs with regions {r1, …, r36} ∈ Ap, we let Gp be the 36-node graph
of the patient p with nodes {N1, …, N36}. Each node Na represents one region
ra ∈ Ap.</p>
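<p>The per-region node descriptors (mean and standard deviation of the voxel features inside each atlas region) can be sketched as follows; the helper name region_descriptors and the flattened input layout are our own illustration, not the paper's implementation.</p>

```python
import numpy as np


def region_descriptors(voxel_feats, atlas_labels, n_regions=36):
    """Aggregate per-voxel texture features into per-region statistics.

    voxel_feats: (n_voxels, d) array, e.g. d=28 for FHOG or d=10 for Riesz.
    atlas_labels: (n_voxels,) atlas region index in 1..n_regions per voxel.
    Returns (mu, sigma), each of shape (n_regions, d).
    """
    d = voxel_feats.shape[1]
    mu = np.zeros((n_regions, d))
    sigma = np.zeros((n_regions, d))
    for a in range(1, n_regions + 1):
        in_region = voxel_feats[atlas_labels == a]
        if in_region.size:  # a region could be empty in a damaged lung
            mu[a - 1] = in_region.mean(axis=0)
            sigma[a - 1] = in_region.std(axis=0)
    return mu, sigma
```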
        <p>Edges Three graphs were defined.</p>
        <p>- Graph Full: This is the fully connected 36-node graph. For every pair of
nodes Na and Nb with a ≠ b there exists an undirected edge Ea,b. The total
number of edges in this case is 630 (36·35/2).
- Graph 66: Based on the region adjacency defined by the atlas Ap,
there exists an edge Ea,b between nodes Na and Nb if regions ra and rb
are 3D-adjacent in the atlas. This graph contains 66 edges in total.
- Graph 84: The last graph has the same 66 edges as Graph 66. Moreover, it
has 18 additional edges connecting each pair of nodes representing opposite
regions inside the atlas.</p>
        <p>The three graphs are shown in Figure 2.</p>
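<p>A minimal sketch of the edge sets and of the four edge-weight measures defined below; the 66 adjacent pairs and 18 opposite pairs are assumed to be available from the atlas, and the helper names are ours.</p>

```python
import numpy as np
from itertools import combinations

# Graph Full: one undirected edge per unordered pair of the 36 nodes.
full_edges = list(combinations(range(1, 37), 2))
assert len(full_edges) == 36 * 35 // 2  # 630 edges

# Graph 66 / Graph 84 require the atlas: given the 66 pairs of 3D-adjacent
# regions and the 18 pairs of opposite regions (hypothetical inputs here),
# Graph 84 is simply their union.
def graph_84(adjacent_pairs, opposite_pairs):
    return sorted(set(adjacent_pairs) | set(opposite_pairs))

# The four edge-weight measures between region feature vectors fa and fb:
def edge_weights(fa, fb):
    return {
        "corr": np.corrcoef(fa, fb)[0, 1],
        "cos": fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb)),
        "euc": np.linalg.norm(fa - fb),
        "sumNorm": np.linalg.norm(fa + fb),
    }
```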
        <p>Weights The graphs were always undirected weighted graphs. The weight wa,b
of an edge Ea,b was defined in several ways based on the relations between the
features of the corresponding nodes Na and Nb. Four measures were used to
compute the weights. Considering fa and fb the feature vectors of regions ra and
rb respectively, the measures used are:
- Correlation (corr): wa,b = corr(fa, fb)
- Cosine similarity (cos): wa,b = cos(fa, fb)
- Euclidean distance (euc): wa,b = ||fa − fb||2
- Norm of the sum (sumNorm): wa,b = ||fa + fb||2</p>
        <p>Feature Vector of a Region Several feature vectors can be extracted from a
region r. Section 3.3 introduced the features extracted for a single voxel, fH and
fR. Given a region ra, we extracted the mean (μa) and standard deviation (σa)
of the features inside the region, i.e.: μa(fH), σa(fH), μa(fR), and σa(fR).</p>
        <p>Feature Vector of a Patient Finally, the feature vector wp of a patient p
is defined as the ordered concatenation of the weights wa,b ∈ Gp. Depending on
the graph used, this feature vector can be 630-, 66-, or 84-dimensional.</p>
        <p>Feature Normalization The last step before using the feature vectors wp with
the classifier is to normalize them along the set of training patients PTRN. Each
component wp,i of a vector wp corresponds to the weight of a different edge in the
graph. The components of a feature vector can therefore not be seen independently,
and the normalization is required to keep these relations. Thus, the normalization
was done for all components simultaneously. Several normalizations were tested.
The first linearly rescales the min and max values among all components
to lie between 0 and 1, i.e.:</p>
        <p>[p2mPiTnRN fwpg; p2mPaTxRN fwpg] ! [0; 1]
. The second normalization aims to remove outliers. It considers the feature
vector components as elements of a Gaussian distribution, and it centers the
data around 0 with a standard deviation of 1, i.e.:
[ p2PTRN (wp)</p>
        <p>p2PTRN (wp); p2PTRN (wp) + p2PTRN (wp)] ! [ 1; 1]
. These normalizations were also applied to the set of test patients PTST using
the limits computed on the training set.</p>
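<p>A sketch of the two global normalizations, assuming the rows of a matrix are the patient vectors wp; note that the limits (min/max or mean/std over all components simultaneously) come from the training set only and are reused on the test set. The helper names are ours.</p>

```python
import numpy as np


def minmax_limits(W_trn):
    """Global [0, 1] normalization: one min/max over ALL components."""
    return W_trn.min(), W_trn.max()


def apply_minmax(W, lo, hi):
    return (W - lo) / (hi - lo)


def gauss_limits(W_trn):
    """Gaussian normalization: one mean/std over ALL components."""
    return W_trn.mean(), W_trn.std()


def apply_gauss(W, mu, sigma):
    # maps [mu - sigma, mu + sigma] onto [-1, 1]
    return (W - mu) / sigma


# Limits estimated on training patients, then applied to test patients.
W_trn = np.random.rand(20, 630)  # 20 training patients, Graph Full
W_tst = np.random.rand(5, 630)
lo, hi = minmax_limits(W_trn)
W_tst_n = apply_minmax(W_tst, lo, hi)
```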
        <p>Feature Concatenation Fixing a graph structure (Graph Full, Graph 66, or
Graph 84) and a measure between features (corr, cos, euc, or sumNorm), several
sets Wd of feature vectors wd,p were defined by varying the features used to
encode the 3D texture in the regions. The possible features are μa(fH), σa(fH),
μa(fR), and σa(fR). After normalizing each set of vectors Wd, concatenations
of these descriptors were tested in order to better describe each patient. These
were based on the underlying features. The combinations tested were:
- FHOG using the mean and the std (μ(fH) and σ(fH)),
- Riesz using the mean and the std (μ(fR) and σ(fR)),
- and FHOG and Riesz using the mean, the std, and both (μ(fH) and μ(fR);
σ(fH) and σ(fR); and μ(fH), μ(fR), σ(fH), and σ(fR)).</p>
        <p>Feature Space Reduction When using Graph Full and feature
concatenations, the feature space dimension was much larger than the number of
patients. To avoid the known problems of such large feature spaces, we
tested two feature space reduction techniques. Both were applied in the training
phase. In the first case, we considered the ground truth labels to be a step
function, with values {0, 1} for the MDR subtask and {1, …, 5} for the TBT
subtask. We then computed the correlation between each feature dimension and
these step functions, and selected the feature dimensions that correlated best
with them. The threshold was set as the mean absolute correlation of the
feature dimensions with the labels. The second technique is based on the
standard deviation of each feature: only the components with a standard deviation
higher than the mean of the standard deviations were selected. Both techniques
reduced the size of the feature space by approximately a factor of 2.</p>
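<p>The two reduction criteria can be sketched as follows; both keep the dimensions whose score (absolute correlation with the labels, or standard deviation) exceeds the mean score, which removes roughly half of them. The helper names are ours.</p>

```python
import numpy as np


def select_by_correlation(W, y):
    """Keep dimensions whose |correlation| with the labels is above the
    mean absolute correlation (the paper's threshold)."""
    corrs = np.array([abs(np.corrcoef(W[:, j], y)[0, 1])
                      for j in range(W.shape[1])])
    return corrs > corrs.mean()


def select_by_std(W):
    """Keep dimensions whose std is above the mean of all stds."""
    stds = W.std(axis=0)
    return stds > stds.mean()
```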
      </sec>
      <sec id="sec-3-4">
        <title>Classification</title>
        <p>Multi-class support vector machine (SVM) classifiers with an RBF kernel were used
for each run of both subtasks: 2-class for the MDR task and 5-class
for the TBT task. Grid search over the RBF parameters cost C and gamma (γ)
was applied. Since the data were normalized, both C and γ ranged over
[2^-10, 2^-9, …, 2^10]. The best C and γ combination for a run was chosen as the one
with the highest cross-validation accuracy on the training set of each subtask.</p>
        <p>The procedure explained in Section 3.4 resulted in 648 runs per subtask. Table 3
summarizes all possible options for each step using the same name coding as in
the results tables.</p>
        <p>Submitted Runs A total of 10 runs could be submitted in the ImageCLEF
2017 TB task; 5 runs were submitted for each subtask. The 5 runs selected for
each task were either among the 5 best runs with respect to the cross-validation
accuracy on the training set (AccTRN), or a late fusion of a few of them. The
late fusion used the probabilities returned by the SVM classifier,
taking the mean probability of belonging to each class. The same procedure was
applied in both tasks. Tables 4 and 5 show the identifier and run setup of the 5
best runs for each task, respectively.</p>
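<p>A sketch of the classification and late-fusion steps with scikit-learn (an assumption: the paper does not name the SVM implementation). The data here are toy stand-ins, and the grid is reduced for brevity; the paper searches powers of two from 2^-10 to 2^10 for both C and γ.</p>

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy stand-in data: 60 patients with 84-dimensional Graph 84 vectors.
X = np.random.rand(60, 84)
y = np.array([0, 1] * 30)  # MDR labels; the TBT task uses 5 classes

# Reduced grid for the sketch; the full search covers 2^-10 .. 2^10.
grid = {"C": [2.0 ** k for k in (-5, 0, 5)],
        "gamma": [2.0 ** k for k in (-5, 0, 5)]}
search = GridSearchCV(SVC(kernel="rbf", probability=True), grid, cv=5)
search.fit(X, y)  # best params = highest cross-validation accuracy

# Late fusion of several runs: average the per-class SVM probabilities
# and predict the class with the highest mean probability.
probs = [search.predict_proba(X)]  # in practice, one entry per fused run
fused = np.mean(probs, axis=0)
pred = fused.argmax(axis=1)
```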
        <table-wrap id="tab4">
          <label>Table 4</label>
          <caption><p>Setup and training accuracy (AccTRN) of the 5 best runs for the MDR subtask.</p></caption>
          <table>
            <thead>
              <tr><th>Run Id.</th><th>Graph</th><th>Features</th><th>F. measure</th><th>E. weight</th><th>F. norm.</th><th>F. reduct.</th><th>AccTRN</th></tr>
            </thead>
            <tbody>
              <tr><td>MDR Top1</td><td>Graph 84</td><td>FHOG and Riesz</td><td>mean and std</td><td>corr</td><td>Gauss[-1,1]</td><td>mostCorr</td><td>0.6900</td></tr>
              <tr><td>MDR Top2</td><td>Graph 66</td><td>FHOG and Riesz</td><td>std</td><td>cos</td><td>[0,1]</td><td>mostCorr</td><td>0.6856</td></tr>
              <tr><td>MDR Top3</td><td>Graph 84</td><td>FHOG</td><td>mean</td><td>corr</td><td>[0,1]</td><td>none</td><td>0.6812</td></tr>
              <tr><td>MDR Top4</td><td>Graph 66</td><td>FHOG and Riesz</td><td>mean and std</td><td>corr</td><td>[0,1]</td><td>mostCorr</td><td>0.6725</td></tr>
              <tr><td>MDR Top5</td><td>Graph 66</td><td>FHOG</td><td>mean</td><td>corr</td><td>Gauss[-1,1]</td><td>mostCorr</td><td>0.6725</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <table-wrap id="tab5">
          <label>Table 5</label>
          <caption><p>Setup and training accuracy (AccTRN) of the 5 best runs for the TBT subtask.</p></caption>
          <table>
            <thead>
              <tr><th>Run Id.</th><th>Graph</th><th>Features</th><th>F. measure</th><th>E. weight</th><th>F. norm.</th><th>F. reduct.</th><th>AccTRN</th></tr>
            </thead>
            <tbody>
              <tr><td>TBT Top1</td><td>Graph 66</td><td>FHOG and Riesz</td><td>mean and std</td><td>sumNorm</td><td>Gauss[-1,1]</td><td>none</td><td>0.5276</td></tr>
              <tr><td>TBT Top2</td><td>Graph 84</td><td>FHOG and Riesz</td><td>mean and std</td><td>sumNorm</td><td>Gauss[-1,1]</td><td>none</td><td>0.5174</td></tr>
              <tr><td>TBT Top3</td><td>Graph 66</td><td>FHOG and Riesz</td><td>mean and std</td><td>sumNorm</td><td>[0,1]</td><td>none</td><td>0.5112</td></tr>
              <tr><td>TBT Top4</td><td>Graph 66</td><td>FHOG and Riesz</td><td>mean and std</td><td>sumNorm</td><td>Gauss[-1,1]</td><td>mostCorr</td><td>0.5112</td></tr>
              <tr><td>TBT Top5</td><td>Graph 84</td><td>FHOG and Riesz</td><td>mean and std</td><td>sumNorm</td><td>[0,1]</td><td>none</td><td>0.5092</td></tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Results</title>
      <p>This section shows the results obtained on the training set by the submitted runs
(AccTRN) and the final performance in the competition (AccTST). The final
ranking was based on the Area Under the ROC Curve (AUC) for the MDR task,
and on the unweighted Cohen's Kappa coefficient (Kappa) for the TBT task.
Table 6 shows the results for the MDR subtask ordered by the ranking provided
by the task organizers. The run identifiers MDR TopBest3 and MDR TopBest5
were obtained by late fusion of the 3 and 5 best runs, respectively (see
Section 3.6). The results for the TBT task are shown in Table 7. Again, the run
identifiers TBT TopBest3 and TBT TopBest5 correspond to the late fusion of
the 3 and 5 best runs, respectively. The best run of the competition is shown in
the same table.</p>
      <p>This work presents a new graph-model of the lung based on regional 3D texture
features for describing lungs affected by tuberculosis. The participation in the
ImageCLEF 2017 tuberculosis task allows for an objective comparison between
methods, since the ground truth for the test set was never released. For the MDR
task, our method participated with 5 runs and obtained the 1st, 2nd, and 3rd
places in the challenge. In the TBT subtask 5 runs were also submitted,
but the best rank obtained was 10th. The results underline the difficulty of both
tasks and the suitability of our approach for describing TB patients. However,
the results also suggest some overfitting by our method when comparing the
accuracies obtained on the training and test sets.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgements</title>
      <p>This work was partly supported by the Swiss National Science Foundation in
the project PH4D (320030-146804).</p>
      <p>10. Liu, K., Skibbe, H., Schmidt, T., Blein, T., Palme, K., Brox, T., Ronneberger,
O.: Rotation-invariant HOG descriptors using Fourier analysis in polar and spherical
coordinates. International Journal of Computer Vision 106(3) (2014) 342-364</p>
      <p>11. Dicente Cid, Y., Muller, H., Platon, A., Poletti, P.A., Depeursinge, A.: 3-D solid
texture classification using locally-oriented wavelet transforms. IEEE Transactions
on Image Processing 26(4) (April 2017) 1899-1910</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. Muller, H., Clough, P., Deselaers, T., Caputo, B., eds.: ImageCLEF - Experimental Evaluation in Visual Information Retrieval. Volume 32 of The Springer International Series On Information Retrieval. Springer, Berlin Heidelberg (2010)</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. Kalpathy-Cramer, J., Garcia Seco de Herrera, A., Demner-Fushman, D., Antani, S., Bedrick, S., Muller, H.: Evaluating performance of biomedical image retrieval systems: Overview of the medical image retrieval task at ImageCLEF 2004-2014. Computerized Medical Imaging and Graphics 39(0) (2015) 55-61</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>3. Villegas, M., Muller, H., Gilbert, A., Piras, L., Wang, J., Mikolajczyk, K., Garcia Seco de Herrera, A., Bromuri, S., Amin, M.A., Kazi Mohammed, M., Acar, B., Uskudarli, S., Marvasti, N.B., Aldana, J.F., Roldan Garcia, M.d.M.: General overview of ImageCLEF at the CLEF 2015 labs. In: Working Notes of CLEF 2015. Lecture Notes in Computer Science. Springer International Publishing (2015)</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>4. Ionescu, B., Muller, H., Villegas, M., Arenas, H., Boato, G., Dang-Nguyen, D.T., Dicente Cid, Y., Eickhoff, C., Garcia Seco de Herrera, A., Gurrin, C., Islam, B., Kovalev, V., Liauchuk, V., Mothe, J., Piras, L., Riegler, M., Schwall, I.: Overview of ImageCLEF 2017: Information extraction from images. In: Experimental IR Meets Multilinguality, Multimodality, and Interaction 8th International Conference of the CLEF Association, CLEF 2017. Volume 10456 of Lecture Notes in Computer Science, Dublin, Ireland, Springer (September 11-14 2017)</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>5. Dicente Cid, Y., Kalinovsky, A., Liauchuk, V., Kovalev, V., Muller, H.: Overview of ImageCLEFtuberculosis 2017 - predicting tuberculosis type and drug resistances. In: CLEF 2017 Labs Working Notes. CEUR Workshop Proceedings, Dublin, Ireland, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 11-14 2017)</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>6. Jeong, Y.J., Lee, K.S.: Pulmonary tuberculosis: up-to-date imaging and management. American Journal of Roentgenology 191(3) (2008) 834-844</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>7. Dicente Cid, Y., Muller, H., Platon, A., Janssens, J.P., Lador, F., Poletti, P.A., Depeursinge, A.: A lung graph-model for pulmonary hypertension and pulmonary embolism detection on DECT images. In: MICCAI Workshop on Medical Computer Vision: Algorithms for Big Data, MCV 2016. (2016)</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>8. Dicente Cid, Y., Jimenez-del-Toro, O., Depeursinge, A., Muller, H.: Efficient and fully automatic segmentation of the lungs in CT volumes. In Goksel, O., Jimenez-del-Toro, O., Foncubierta-Rodriguez, A., Muller, H., eds.: Proceedings of the VISCERAL Challenge at ISBI. Number 1390 in CEUR Workshop Proceedings (Apr 2015)</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>9. Depeursinge, A., Zrimec, T., Busayarat, S., Muller, H.: 3D lung image retrieval using localized features. In: Medical Imaging 2011: Computer-Aided Diagnosis. Volume 7963, SPIE (2011) 79632E</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>