<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Overview of ImageCLEFtuberculosis 2018 – Detecting Multi-Drug Resistance, Classifying Tuberculosis Types and Assessing Severity Scores</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yashin Dicente Cid</string-name>
          <email>yashin.dicente@hevs.ch</email>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vitali Liauchuk</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vassili Kovalev</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Henning Muller</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>United Institute of Informatics Problems</institution>
          ,
          <addr-line>Minsk</addr-line>
          ,
          <country country="BY">Belarus</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Applied Sciences Western Switzerland (HES-SO)</institution>
          ,
          <addr-line>Sierre</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Geneva</institution>
          ,
          <country country="CH">Switzerland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>ImageCLEF is the image retrieval task of the Conference and Labs of the Evaluation Forum (CLEF). ImageCLEF has historically focused on the multimodal and language-independent retrieval of images. Many tasks are related to image classification and the annotation of image data, as well as the retrieval of images. The tuberculosis task was held for the first time in 2017 and had a very encouraging participation, with 9 groups submitting results to these very challenging tasks. In 2018 the participation was slightly higher. Three tasks were proposed in 2018: (1) the detection of drug resistances among tuberculosis cases, (2) the classification of the cases into five types of tuberculosis and (3) the assessment of a tuberculosis severity score. The participants used many different techniques, ranging from deep learning to graph-based approaches, and the best results were obtained by a variety of approaches with no clear technique dominating. Both the detection of drug resistances and the classification of tuberculosis types had results similar to those of the previous edition, the former remaining a very difficult task. In the case of the severity score task, the results support the suitability of assessing the severity based only on the CT image, as the results obtained were very good.</p>
      </abstract>
      <kwd-group>
        <kwd>Tuberculosis</kwd>
        <kwd>Computed Tomography</kwd>
        <kwd>Image Classification</kwd>
        <kwd>Drug Resistance</kwd>
        <kwd>Severity Scoring</kwd>
        <kwd>3D Data Analysis</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>ImageCLEF (http://www.imageclef.org/) is the image retrieval task of CLEF (Conference and Labs of the
Evaluation Forum). ImageCLEF was first held in 2003 and in 2004 a medical
task was added that has been held every year since then [1-4]. More information</p>
    </sec>
    <sec id="sec-2">
      <title/>
      <p>
        on the other tasks organized in 2018 can be found in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and the past editions
are described in [6-9].
      </p>
      <p>
        Tuberculosis (TB) is a bacterial infection caused by the germ
Mycobacterium tuberculosis. About 130 years after its discovery, the disease remains a
persistent threat and a leading cause of death worldwide [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The bacterium
usually attacks the lungs, but it can also damage other parts of the body. Generally,
TB can be cured with antibiotics. However, the greatest disaster that can
happen to a patient with TB is that the organisms become resistant to two or more
of the standard drugs. In contrast to drug-sensitive (DS) TB, its multi-drug
resistant (MDR) form is much more difficult and expensive to recover from. Thus,
early detection of the MDR status is fundamental for an effective treatment. The
most commonly used methods for MDR detection are either expensive or take
too much time (up to several months) to really help in this scenario. Therefore,
there is a need for quick and at the same time cheap methods of MDR detection.
In 2017, ImageCLEF organized the first challenge based on Computed
Tomography (CT) image analysis of TB patients [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], with a dedicated subtask for the
detection of MDR cases. The classification of TB subtypes was also proposed
in 2017. This is another important task for TB analysis, since different types of
TB should be treated in different ways. Both subtasks were also proposed in
the 2018 edition, where we extended their respective datasets. Moreover, a new
subtask was added based on assessing a severity score of the disease given a CT
image.
      </p>
      <p>This article first describes the three tasks proposed around TB in 2018. Then,
the datasets, evaluation methodology and participation are detailed. The results
section describes the submitted runs and the results obtained for the three
subtasks. A discussion and conclusion section ends the paper.</p>
      <sec id="sec-2-1">
        <title>Tasks, Datasets, Evaluation, Participation</title>
        <sec id="sec-2-1-1">
          <title>The Tasks in 2018</title>
          <p>Three subtasks were organized in 2018. Two were in common with the 2017 edition
and one new subtask was added:
- Multi-Drug Resistance detection (MDR subtask);
- Tuberculosis Type classification (TBT subtask);
- Severity Scoring assessment (SVR subtask).</p>
          <p>This section gives an overview of each of the three subtasks.</p>
          <p>Multi-drug Resistance Detection: As in 2017, the goal of the MDR subtask
was to assess the probability of a TB patient having a resistant form of TB
based on the analysis of a chest CT scan alone. The dataset for this subtask
was increased from the previous year, but the subtask remained a binary
classification problem even though several levels of resistance exist.
Tuberculosis Type Classification: This subtask is also in common with the
2017 edition and, as in the MDR subtask, we increased the dataset. The goal
of the TBT subtask is to automatically categorize each TB case into one of the
following five TB types: Infiltrative, Focal, Tuberculoma, Miliary, and
Fibro-cavernous. The distribution of cases among the classes is not balanced, but the
distributions are similar in the training and the test data.</p>
          <p>Severity Scoring: This subtask aims at assessing a TB severity score based
only on a chest CT image. The severity score is a cumulative score of the severity
of a TB case assigned by a medical doctor. Originally, the score varied from 1
("critical/very bad") to 5 ("very good"). In the process of scoring, the medical
doctors considered many factors such as the pattern of the lesions, results of
microbiological tests, duration of treatment, patient age and other criteria.</p>
        </sec>
        <sec id="sec-2-1-2">
          <title>Datasets</title>
          <p>For each of the three subtasks, a separate dataset was provided, all containing
3D CT images stored in the NIfTI (Neuroimaging Informatics Technology
Initiative) file format with a slice resolution of 512 × 512 pixels and a number of slices
varying from about 50 to 400. A set of relevant meta-data such as age and
gender was provided for each subtask. The entire dataset including CT images and
associated meta-data was provided by the Republican Research and Practical
Center for Pulmonology and Tuberculosis located in Minsk, Belarus. The
data were collected in the framework of several projects that aim at the creation
of information resources on lung TB and drug resistance challenges. The projects
were conducted by a multi-disciplinary team and funded by the National
Institute of Allergy and Infectious Diseases, National Institutes of Health (NIH), U.S.
Department of Health and Human Services, USA, through the Civilian Research
and Development Foundation (CRDF). The dedicated web portal (http://tbportals.niaid.nih.gov/) developed in
the framework of the projects stores information on more than 940 TB patients
from five countries: Azerbaijan, Belarus, Georgia, Moldova and Romania. The
information includes CT scans, X-ray images, genome data, clinical and social
data.</p>
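          <p>As an illustration of the data format, a minimal sketch of combining a scan with its lung mask; the array contents, shapes and function names below are ours, and a real scan would be loaded with a NIfTI reader such as nibabel:</p>
          <preformat>
```python
import numpy as np

def apply_lung_mask(ct_volume, lung_mask, background_hu=-1024.0):
    """Keep only voxels inside the provided lung mask; everything
    else is set to a background HU value (air)."""
    return np.where(lung_mask > 0, ct_volume, background_hu)

# Synthetic stand-in for a 512 x 512 x N CT scan loaded from NIfTI
# (a real scan would come from e.g. nibabel: nib.load(path).get_fdata()).
ct = np.full((512, 512, 60), 40.0)       # soft tissue everywhere
mask = np.zeros_like(ct, dtype=np.uint8)
mask[100:200, 100:400, :] = 1            # label 1: one lung
mask[300:400, 100:400, :] = 2            # label 2: the other lung

masked = apply_lung_mask(ct, mask)
```
          </preformat>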
          <p>
            In the framework of the ImageCLEF 2018 TB task, automatically extracted
masks of the lungs were provided for all CT images. These masks were extracted
using the method described in [
            <xref ref-type="bibr" rid="ref12">12</xref>
            ]. The segmentations were analyzed based
on the number of lungs found and the size ratio of the lungs in a supervised
manner. Only those segmentations with anomalies in these two metrics were
visualized and evaluated accordingly. A total of 32 images out of 2,287 presented
a problematic mask, 8 including areas outside the lungs and 24 containing only
one lung. The 8 inaccurate masks were corrected by fusing the above-mentioned
method and the registration-based segmentation used in [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ]. The other 24 masks
(20 from the TBT subtask and 4 from the MDR subtask) could not be properly
          </p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title/>
      <p>labeled due to the size and/or damage of one lung. In these cases, the masks
provided to the participants only contained one label (right lung).
Multi-drug Resistance Detection: The dataset for this task is an extension
of the one used in the 2017 edition. In particular, the training and test sets of this
subtask were extended by adding patients with extensively drug-resistant (XDR)
TB, which is a rare and more severe subtype of MDR TB. Along with the 3D CT
images and lung masks, the age and gender of each patient were provided. The
dataset includes only HIV-negative patients with no relapses. Each patient was
classified into one of the two classes: drug sensitive (DS) or multi-drug resistant
(MDR). A patient was considered DS if the TB bacteria were sensitive to all the
anti-tuberculosis drugs tested. All XDR patients were considered to belong to
the MDR class. Table 1 contains the number of patients in each set.
Tuberculosis Type Classification: The dataset used in this subtask includes
chest CT scans of TB patients along with the TB type and patient age at the
moment of the scan. Like the MDR dataset, the TBT 2017 dataset was extended
for the 2018 edition. In this case, new CT scans of the same patients involved
in 2017 were added, as well as some CT images of new patients. In the TBT 2018
dataset, there are between 1 and 9 CT scans per patient, acquired at different
time points. All scans of the same patient were diagnosed with the same TB type
by expert radiologists. Figure 1 shows one example for each of the five TB types.
Moreover, Figure 2 shows examples of two patients with three CT scans each.
The CT slices in both figures are shown using a Hounsfield Unit (HU) window
with center at -500 HU and width of 1400 HU. The number of CT scans and
patients in each TB type set are shown in Table 2.</p>
      <p>Severity Scoring: The data for the SVR subtask include 279 CT scans with
known TB severity scores ranging from 1 to 5 assigned by medical doctors. Each
CT scan corresponds to a specific TB patient. To treat this subtask as a binary
classification problem, the severity scores were grouped so that values 1, 2 and 3
corresponded to the "high severity" class, and values 4 and 5 corresponded to the
"low severity" class. Table 3 contains the number of patients of each severity class in the
sets.</p>
      <p>[Figure 1: one example CT slice per TB type: Infiltrative, Focal, Tuberculoma, Miliary and Fibro-cavernous. Figure 2: three CT scans each for two patients (panel labels: 46, 53, 54, 55, 58 and 59 years old).]</p>
      <p>Similar to 2017, the participants were allowed to submit up to 10 runs to each of
the three TB subtasks. In the case of the MDR subtask, the participants had to
provide the probability of each TB case belonging to the MDR class, ranging from
0 to 1. These probabilities were used to build Receiver Operating Characteristic
(ROC) curves. Since the MDR dataset was not perfectly balanced and had a
relatively small size, the Area Under the ROC Curve (AUC) was used to evaluate
the participant runs. We also provided the accuracy of the binary classification using
a standard threshold of 0.50.</p>
      <p>In the case of the TBT task, the participants had to predict the TB type of
each patient and submit a run containing a category label in the set {1, 2, 3,
4, 5}. Considering that a high number of patients in the dataset had multiple
CT scans with the same TB type, the evaluation was performed patient-wise.
Cohen's Kappa coefficient was provided for each run along with the 5-class
prediction accuracy. Cohen's Kappa is not sensitive to unbalanced datasets, which
is the case for the data used here.</p>
      <p>The runs submitted for the severity scoring subtask were evaluated in two
ways. One used the original severity scores from 1 to 5, and the task was to
predict those numerical scores as precisely as possible (a regression problem).
Here, the Root Mean Square Error (RMSE) was computed between the ground truth
severity scores and the predicted scores provided by the participants. Alternatively, the
original severity score was transformed into two classes, where scores from 1 to
3 corresponded to the "high severity" class and scores 4 and 5 corresponded to the
"low severity" class. In this case the participants had to provide the probability
of a TB case belonging to the "high severity" class. The corresponding results
were evaluated using the AUC.</p>
      <sec id="sec-3-1">
        <title>Participation</title>
        <p>In 2018 there were 85 registered teams and 33 signed the end user agreement.
Finally, 11 groups from 9 countries participated in one or more subtasks and
submitted results. These numbers are similar to 2017, when there were 94
registered teams, 48 teams that signed the end user agreement, and 9 teams from 9 countries
submitting results. Table 4 shows the list of participants and the subtasks in which
they participated. One of the groups (HHU-DBS) participated in two subtasks
with different approaches developed by different sets of authors. Therefore,
their approaches are referred to as HHU-DBS 1 and HHU-DBS 2 in the following
sections.</p>
        <sec id="sec-3-1-1">
          <title>Results</title>
          <p>This section provides the results obtained by the participants in each of the
subtasks.</p>
          <p>[Figure 3: best results of the participating groups in the MDR subtask (group labels include SDVAHCS/UCSD, UniversityAlicante and UIIP_BioMed).]</p>
          <p>
            It is worth noting that the image-based detection of MDR TB
remains very challenging and so far has no solution with a sufficiently high
prediction accuracy to be useful in clinical practice. Recent articles report the
presence of statistically significant links between drug resistance and multiple
thick-walled caverns [
            <xref ref-type="bibr" rid="ref14">14</xref>
            ]. However, computerized methods show a performance
of image-based MDR TB detection barely beyond the level of statistical
significance compared to a random classifier [
            <xref ref-type="bibr" rid="ref15 ref16 ref6">6, 15, 16</xref>
            ].
          </p>
          <p>
            The best result in terms of AUC was achieved by the VISTA@UEvora team
with an AUC of 0.6178 [
            <xref ref-type="bibr" rid="ref17">17</xref>
            ]. The team used conventional approaches for the
extraction of quantitative image descriptors, such as statistical moments, fractal
dimension, gray-level co-occurrence matrices and their derivative features. A set
of conventional classification methods was used for prediction in all three
subtasks. Their best run in terms of classification accuracy (0.5763) ranked 3rd
among the participant runs and is not the same run that had the best
AUC. The second highest AUC of 0.6114 was achieved by the San Diego VA
HCS/UCSD group [
            <xref ref-type="bibr" rid="ref18">18</xref>
            ] with an approach based on splitting the 3D CT scans into a set
of 2D images and using a pre-trained ResNeXt deep network for classification.
This run achieved the highest MDR detection accuracy (0.6144). The third
highest AUC was obtained by HHU-DBS 1 [
            <xref ref-type="bibr" rid="ref19">19</xref>
            ]. They used 3D deep Convolutional
Neural Networks (CNNs) combined with decision trees and obtained a 0.5810 AUC
and a 0.5720 classification accuracy with their best run. The UniversityAlicante
group used two approaches: one based on 2D CNNs and the other based on
Optical Flow (OF) [
            <xref ref-type="bibr" rid="ref20">20</xref>
            ]. The best AUC among this group's runs was obtained using
only patient age and gender information and ranked 10th among all participant
runs with an AUC of 0.5669. Their other runs obtained lower AUCs. The OF-based
approach for CT image analysis resulted in an accuracy of 0.5339 and ranked
20th. The single run submitted by the UIIP BioMed group ranked 14th in AUC
and 36th in accuracy, with an AUC of 0.5558 and an accuracy of 0.4576 [
            <xref ref-type="bibr" rid="ref21">21</xref>
            ]. A
technique for automatic detection of lesions of different types in a six-region
division of the CT lung volume was used. A separate dataset with labeled lesions in
CT was used for training the lesion detection algorithm. A Random Forest (RF)
classifier was used for the prediction of the final classes and scores in all three
subtasks. Methods based on a graph model of the lungs and 3D texture analysis
were used by the MedGIFT group [
            <xref ref-type="bibr" rid="ref22">22</xref>
            ]. Their best runs resulted in the 22nd highest
AUC (0.5237) and the 2nd highest accuracy (0.5932). Finally, the LIST group
used a hybrid approach that combined 3D CNNs with linear SVM classifiers for
MDR detection and TB type classification [
            <xref ref-type="bibr" rid="ref23">23</xref>
            ]. The single run submitted by the
group obtained an AUC of 0.5029 and an accuracy of 0.4576, ranking 28th
and 37th, respectively. The information about age and gender of TB patients
was used only by two participating groups: HHU-DBS 1 and UIIP BioMed.
          </p>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>Tuberculosis Type Classification</title>
        <p>Table 6 shows the results obtained for the TBT subtask. The runs were
evaluated on the test set of images using the unweighted Cohen's Kappa coefficient
and the overall classification accuracy. The results are sorted by Cohen's Kappa in
descending order. Figure 4 shows the highest Kappa values achieved by the
participants. The true positive rates for the different TB types are shown in Figure 5.</p>
        <p>
          In the TBT subtask, most of the teams used the same methods as they used
for the MDR detection. The best result in terms of both Kappa and classification
accuracy was achieved by the UIIP BioMed group with the use of a
lesion-based TB descriptor and an RF classifier. The run resulted in a Kappa of 0.2312
and a classification accuracy of 0.4227. Instead of using all the available CT
series, this group only used the first scan of a patient for the classification of
the TB type. The second highest Kappa was achieved by the fau ml4cv group,
which participated only in the TBT subtask [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]. An ensemble of 3D CNNs was
used, achieving a Kappa of 0.1736 and an accuracy of 0.3533 with their best
run. The graph-based approach of the MedGIFT team resulted in the 2nd best
classification accuracy (0.3849) and the 3rd highest Kappa (0.1706). The best
runs of VISTA@UEvora, San Diego VA HCS/UCSD, UniversityAlicante and
LIST resulted in Kappa values of 0.1664, 0.1474, 0.0204, and -0.0024, respectively.
The MostaganemFSEI group participated in the TBT classification and the SVR
subtasks. The algorithm they employed was based on splitting the 3D CT
scans into 2D slices, extracting semantic descriptors using a trained CNN and
applying conventional classification methods [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ]. They obtained a Kappa of
0.0629 and an accuracy of 0.2744. It is worth highlighting that only the fau ml4cv
and San Diego VA HCS/UCSD groups obtained a true positive rate higher than
that of a random classifier in all five TB types (see Figure 5).
        </p>
        <p>
          The results obtained for the severity scoring subtask are shown in Table 7.
The best RMSE achieved by the participating groups and the corresponding
AUCs are shown in Figures 6 and 7. The best results in terms of regression
were obtained by the UIIP BioMed group with an RMSE of 0.7840, which also
achieved the 6th best classification result with an AUC of 0.7025. The highest
classification result was achieved by the MedGIFT group with an AUC of 0.7708.
The MedGIFT group's best regression obtained an RMSE of 0.8513, which is
the second best result. The third best RMSE (0.8883) was obtained by the
VISTA@UEvora group. The same run ranked 21st for classification
        </p>
        <p>
          with an AUC of 0.6239. The third best result for classification was obtained by
the San Diego VA HCS/UCSD group with an AUC of 0.6984, which corresponds
to the 7th best result. Their best regression is an RMSE of 1.2153, which is
at rank 30. The HHU-DBS 2 team used a feature-based approach for scoring
the severity of TB based on a set of conventional methods [26]. The approach
employed image binarization and the extraction of features including the presence of
calcifications, lung wateriness, cavities, infection ratio, HU histograms and lung
shape to characterize the volumes. The group obtained the 10th best RMSE
(0.9626) and the 8th best AUC (0.6862). The MostaganemFSEI group achieved an
RMSE of 0.9721 and an AUC of 0.6127. Middlesex University participated only in
the SVR subtask. The group employed an approach based on deep residual
learning, training on a set of overlapping 128 × 128 depth blocks, assessing the
TB severity for each block and gathering the results [27]. This allowed them to achieve
an RMSE of 1.0921 and an AUC of 0.6534, corresponding to the 24th and 14th
positions. It is important to highlight that all groups obtained an AUC higher
than that of a random classifier (AUC of 0.50) with all their runs.
        </p>
        <p>
          Similar to 2017, the results obtained by the participants in the MDR detection
subtask demonstrate that the task of a fully automatic image-based detection of
drug resistance is extremely difficult. Despite the addition of XDR TB cases to
the dataset and the inclusion of information about patient age and gender, the
MDR detection performance still remains at a level relatively close to random
classification, with the highest reached AUC of 0.6178 and a 61.4% prediction
accuracy. The overall increase of prediction performance with respect to the
2017 edition might be caused by the addition of more severe cases with XDR
TB to the dataset. Using information about patient age and gender could also
improve the MDR detection results, as suggested by the baseline submitted by the
UniversityAlicante group [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ].
        </p>
        <p>In the second subtask, the overall results of TB type classification are slightly
worse than in 2017. This might be caused by the decreased balance of TB classes
in the dataset. Using more than one CT scan per patient could also confuse
the prediction methods and worsen the final results. However, there is a certain
improvement in the prediction of class T2 (Focal TB) demonstrated by most of the
participants' results.</p>
        <p>The results of the SVR subtask are encouraging, since the actual assessment
of the TB severity score is done using various clinical information sources, not
only CT image data. Most of the results achieved by the participants obtained an
RMSE of the severity score below 1 in a 5-grade scoring system. The best results
obtained using only CT volumes are close to the results reported in [28], where
the authors used clinical and laboratory data including drug resistance, presence
of TB symptoms, etc. in addition to the images. Extension of the dataset and
usage of clinical and laboratory data are expected to improve the severity scoring
results.</p>
        <p>Overall, the 2018 edition of the ImageCLEF TB task showed an improvement
with respect to the 2017 edition in terms of the number of participants, the data
provided, the results obtained and the variety of methods proposed. This shows a high
interest in this topic and also the importance of the data that were generated.</p>
        <sec id="sec-3-2-1">
          <title>Acknowledgements</title>
          <p>This work was partly supported by the Swiss National Science Foundation in
the project PH4D (320030-146804) and by the National Institute of Allergy and
Infectious Diseases, National Institutes of Health, U.S. Department of Health
and Human Services, USA through the CRDF project DAA3-17-63599-1 "Year
6: Belarus TB Database and TB Portals".
26. Bogomasov, K., Himmelspach, L., Klassen, G., Tatusch, M., Conrad, S.:
Feature-based approach for severity scoring of lung tuberculosis from CT images. In:
CLEF2018 Working Notes. CEUR Workshop Proceedings, Avignon, France,
CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 10-14 2018)
27. Gao, X., James-Reynolds, C., Currie, E.: Scoring TB severity with an enhanced
deep residual learning depth-resnet. In: CLEF2018 Working Notes. CEUR
Workshop Proceedings, Avignon, France, CEUR-WS.org &lt;http://ceur-ws.org&gt;
(September 10-14 2018)
28. Kovalev, V., Liauchuk, V., Skrahina, A., Astrauko, A., Rosenthal, A., Gabrielian,
A.: Examining the utility of clinical, laboratory and radiological data for scoring
severity of pulmonary tuberculosis. In: Computer Assisted Radiology and Surgery
- 32nd International Congress and Exhibition (CARS-2018). Volume 13., Springer,
Heidelberg (2018) 143-144</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. Kalpathy-Cramer, J., García Seco de Herrera, A., Demner-Fushman, D., Antani, S., Bedrick, S., Müller, H.: Evaluating performance of biomedical image retrieval systems: Overview of the medical image retrieval task at ImageCLEF 2004-2014. Computerized Medical Imaging and Graphics 39(0) (2015) 55-61</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. Müller, H., Clough, P., Deselaers, T., Caputo, B., eds.: ImageCLEF - Experimental Evaluation in Visual Information Retrieval. Volume 32 of The Springer International Series On Information Retrieval. Springer, Berlin Heidelberg (2010)</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>3. García Seco de Herrera, A., Schaer, R., Bromuri, S., Müller, H.: Overview of the ImageCLEF 2016 medical task. In: Working Notes of CLEF 2016 (Cross Language Evaluation Forum) (September 2016)</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>4. Müller, H., Clough, P., Hersh, W., Geissbuhler, A.: ImageCLEF 2004-2005: Results, experiences and new ideas for image retrieval evaluation. In: International Conference on Content-Based Multimedia Indexing (CBMI 2005), Riga, Latvia, IEEE (June 2005)</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Ionescu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , Muller, H.,
          <string-name>
            <surname>Villegas</surname>
          </string-name>
          , M.,
          <string-name>
            <surname>de Herrera</surname>
            ,
            <given-names>A.G.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eickhoff</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Andrearczyk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cid</surname>
            ,
            <given-names>Y.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liauchuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovalev</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hasan</surname>
            ,
            <given-names>S.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ling</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Farri</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lungren</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riegler</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lux</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Overview of ImageCLEF 2018: Challenges, datasets and evaluation</article-title>
          .
          <source>In: Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Ninth International Conference of the CLEF Association (CLEF 2018)</source>
          , Avignon, France, Lecture Notes in Computer Science, Springer (September 10-14
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Ionescu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , Muller, H.,
          <string-name>
            <surname>Villegas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Arenas</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boato</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dang-Nguyen</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dicente Cid</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eickhoff</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garcia Seco de Herrera</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Islam</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovalev</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liauchuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mothe</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riegler</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schwall</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Overview of ImageCLEF 2017: Information extraction from images</article-title>
          .
          <source>In: Experimental IR Meets Multilinguality, Multimodality, and Interaction. 8th International Conference of the CLEF Association, CLEF 2017</source>
          . Volume
          <volume>10456</volume>
          of Lecture Notes in Computer Science, Dublin, Ireland, Springer (September 11-14
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Villegas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , Muller, H., Garcia Seco de Herrera,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Schaer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            ,
            <surname>Bromuri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Gilbert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Piras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            ,
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            ,
            <surname>Ramisa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Dellandrea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Gaizauskas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            ,
            <surname>Mikolajczyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            ,
            <surname>Puigcerver</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Toselli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.H.</given-names>
            ,
            <surname>Sanchez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.A.</given-names>
            ,
            <surname>Vidal</surname>
          </string-name>
          , E.:
          <article-title>General overview of ImageCLEF at the CLEF 2016 labs</article-title>
          .
          <source>In: CLEF 2016 Proceedings. Lecture Notes in Computer Science</source>
          , Evora, Portugal, Springer (
          <year>September 2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Villegas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , Muller, H.,
          <string-name>
            <surname>Gilbert</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piras</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mikolajczyk</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garcia Seco de Herrera</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bromuri</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Amin</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kazi Mohammed</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Acar</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Uskudarli</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marvasti</surname>
            ,
            <given-names>N.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aldana</surname>
            ,
            <given-names>J.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roldan Garcia</surname>
            ,
            <given-names>M.d.M.</given-names>
          </string-name>
          :
          <article-title>General overview of ImageCLEF at the CLEF 2015 labs</article-title>
          .
          <source>In: Working Notes of CLEF 2015. Lecture Notes in Computer Science</source>
          . Springer International Publishing (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Caputo</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , Muller, H.,
          <string-name>
            <surname>Thomee</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Villegas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paredes</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zellhofer</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goeau</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Joly</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bonnet</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martinez Gomez</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garcia Varea</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cazorla</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>ImageCLEF 2013: the vision, the data and the open challenges</article-title>
          .
          <source>In: Working Notes of CLEF 2013 (Cross Language Evaluation Forum)</source>
          (September
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10. World Health Organization, et al.:
          <source>Global tuberculosis report 2016</source>
          . (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Dicente Cid</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalinovsky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liauchuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovalev</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          , Muller, H.:
          <article-title>Overview of ImageCLEFtuberculosis 2017 - predicting tuberculosis type and drug resistances</article-title>
          .
          <source>In: CLEF 2017 Labs Working Notes. CEUR Workshop Proceedings</source>
          , Dublin, Ireland, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 11-14
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Dicente Cid</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jimenez-del-Toro</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Depeursinge</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , Muller, H.:
          <article-title>Efficient and fully automatic segmentation of the lungs in CT volumes</article-title>
          . In: Goksel, O.,
          <string-name>
            <surname>Jimenez-del-Toro</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Foncubierta-Rodriguez</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , Muller, H., eds.:
          <source>Proceedings of the VISCERAL Challenge at ISBI. Number 1390 in CEUR Workshop Proceedings (Apr</source>
          <year>2015</year>
          )
          <fpage>31</fpage>
          -
          <lpage>35</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Liauchuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovalev</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>ImageCLEF 2017: Supervoxels and co-occurrence for tuberculosis CT image classification</article-title>
          .
          <source>In: CLEF2017 Working Notes. CEUR Workshop Proceedings</source>
          , Dublin, Ireland, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 11-14
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Y.X.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chung</surname>
            ,
            <given-names>M.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skrahin</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosenthal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gabrielian</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tartakovsky</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Radiological signs associated with pulmonary multi-drug resistant tuberculosis: an analysis of published evidences</article-title>
          .
          <source>Quantitative Imaging in Medicine and Surgery</source>
          <volume>8</volume>
          (
          <issue>2</issue>
          ) (
          <year>2018</year>
          )
          <fpage>161</fpage>
          -
          <lpage>173</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Kovalev</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liauchuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Safonau</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Astrauko</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skrahina</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tarasau</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Is there any correlation between the drug resistance and structural features of radiological images of lung tuberculosis patients</article-title>
          ?
          <source>In: Computer Assisted Radiology - 27th International Congress and Exhibition (CARS-2013)</source>
          . Volume
          <volume>8</volume>
          , Springer, Heidelberg (
          <year>2013</year>
          )
          <fpage>18</fpage>
          -
          <lpage>20</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Kovalev</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liauchuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalinouski</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosenthal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gabrielian</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Skrahina</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Astrauko</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tarasau</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Utilizing radiological images for predicting drug resistance of lung tuberculosis</article-title>
          .
          <source>In: Computer Assisted Radiology - 27th International Congress and Exhibition (CARS-2015)</source>
          . Volume
          <volume>10</volume>
          , Springer, Barcelona (
          <year>2015</year>
          )
          <fpage>129</fpage>
          -
          <lpage>130</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Ahmed</surname>
            ,
            <given-names>M.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Obaidullah</surname>
            ,
            <given-names>S.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jayatilake</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goncalves</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rato</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Texture analysis from 3D model and individual slice extraction for tuberculosis MDR detection, type classification and severity scoring</article-title>
          .
          <source>In: CLEF2018 Working Notes. CEUR Workshop Proceedings</source>
          , Avignon, France, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 10-14
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Gentili</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>ImageCLEF2018: Transfer learning for deep learning with CNN for tuberculosis classification</article-title>
          .
          <source>In: CLEF2018 Working Notes. CEUR Workshop Proceedings</source>
          , Avignon, France, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 10-14
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Tatusch</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Conrad</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Detection of multidrug-resistant tuberculosis using convolutional neural networks and decision trees</article-title>
          .
          <source>In: CLEF2018 Working Notes. CEUR Workshop Proceedings</source>
          , Avignon, France, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 10-14
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Llopis</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fuster-Guillo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rico-Juan</surname>
            ,
            <given-names>J.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Azorin-Lopez</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Llopis</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Tuberculosis detection using optical flow and the activity description vector</article-title>
          .
          <source>In: CLEF2018 Working Notes. CEUR Workshop Proceedings</source>
          , Avignon, France, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 10-14
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Liauchuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tarasau</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Snezhko</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovalev</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gabrielian</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosenthal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>ImageCLEF 2018: Lesion-based TB-descriptor for CT image analysis</article-title>
          .
          <source>In: CLEF2018 Working Notes. CEUR Workshop Proceedings</source>
          , Avignon, France, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 10-14
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Dicente Cid</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , Muller, H.:
          <article-title>Texture-based graph model of the lungs for drug resistance detection, tuberculosis type classification, and severity scoring: Participation in ImageCLEF 2018 tuberculosis task</article-title>
          .
          <source>In: CLEF2018 Working Notes. CEUR Workshop Proceedings</source>
          , Avignon, France, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 10-14
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Allaouzi</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Benamrou</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ben Ahmed</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>3D-CNN in drug resistance detection and tuberculosis classification</article-title>
          .
          <source>In: CLEF2018 Working Notes. CEUR Workshop Proceedings</source>
          , Avignon, France, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 10-14
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Ishay</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marques</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          :
          <article-title>Ensemble of 3D CNNs with multiple inputs for tuberculosis type classification</article-title>
          .
          <source>In: CLEF2018 Working Notes. CEUR Workshop Proceedings</source>
          , Avignon, France, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 10-14
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Hamadi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yagoub</surname>
            ,
            <given-names>D.E.</given-names>
          </string-name>
          : ImageCLEF
          <year>2018</year>
          :
          <article-title>Semantic descriptors for tuberculosis CT image classification</article-title>
          .
          <source>In: CLEF2018 Working Notes. CEUR Workshop Proceedings</source>
          , Avignon, France, CEUR-WS.org &lt;http://ceur-ws.org&gt; (September 10-14
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>