<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>ImageCLEF 2018: Lesion-based TB-descriptor for CT Image Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vitali Liauchuk</string-name>
          <email>vitali.liauchuk@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aleh Tarasau</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eduard Snezhko</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vassili Kovalev</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Scientific and Practical Center for Pulmonology and Tuberculosis</institution>
          ,
          <addr-line>Minsk</addr-line>
          ,
          <country country="BY">Belarus</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>United Institute of Informatics Problems</institution>
          ,
          <addr-line>Minsk</addr-line>
          ,
          <country country="BY">Belarus</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The paper presents the image description and classification method which was used by the United Institute of Informatics Problems (UIIP BioMed) group for accomplishing the three subtasks of the ImageCLEFtuberculosis task. The image description method employed is based on automated detection of tuberculosis (TB) lesions of different types in 3D lung Computed Tomography (CT) scans. The lesion detection method is based on a Coder-Decoder Convolutional Neural Network trained on a third-party dataset of 149 CT scans with lesions labeled by a qualified radiologist. It was shown that the combination of the lesion-based TB-descriptor and a Random Forests classifier allows achieving the best performance in the TB type classification and TB severity scoring subtasks.</p>
      </abstract>
      <kwd-group>
        <kwd>tuberculosis</kwd>
        <kwd>TB-descriptor</kwd>
        <kwd>lesions</kwd>
        <kwd>CT</kwd>
        <kwd>image analysis</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The tuberculosis task [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] of ImageCLEF 2018 Challenge [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] considers three
subtasks, all dealing with 3D CT images. Subtask #1 is dedicated to the problem
of distinguishing, based on a single image, between multi-drug resistant tuberculosis
(MDR TB) cases and drug sensitive (DS) ones. The task remains very
challenging and so far has no solution with sufficient prediction accuracy. A recent
analysis of published evidence reports the presence of statistically significant links
between drug resistance and multiple thick-walled caverns [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. So far,
computerized methods demonstrate performance of image-based detection of MDR TB
barely beyond the level of statistical significance [
        <xref ref-type="bibr" rid="ref4 ref8 ref9">4, 8, 9</xref>
        ]. Compared to 2017
data [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], the datasets for the MDR detection subtask were extended by adding
several cases with extensively drug-resistant tuberculosis (XDR TB), which is
a rare and more severe subtype of MDR TB. Thus, the training data for the MDR
detection subtask included 259 CT images: 134 drug sensitive and 125 drug
resistant cases. The test set consisted of 236 CT images: 101 drug sensitive and 135
drug resistant cases.
      </p>
      <p>Subtask #2 of the ImageCLEFtuberculosis task is aimed at automatic
categorization of CT images into one of five types of tuberculosis: Infiltrative, Focal,
Tuberculoma, Miliary and Fibro-cavernous. Compared to 2017, the datasets were
extended by adding new CT scans of the patients involved earlier, and also by
introducing CT images of some new patients. However, in this study only the
first CT scan of each patient was used.</p>
      <p>The newly introduced subtask #3 was dedicated to the assessment of TB
severity based on a single CT image of a patient. The severity score has the meaning
of a cumulative severity score of a TB case assigned by a medical doctor. Originally,
the severity scores were assigned using natural numbers between 1 ("critical/very
bad") and 5 ("very good"). Additionally, for the case of binary classification the
scores were converted to binary values, where scores from 1 to 3 corresponded to
"high severity" and the remaining 4 and 5 corresponded to "low severity". In the
process of scoring, the medical doctors considered many factors such as patterns of
lung lesions, results of microbiological tests, duration of treatment, patient's age
and some others. One of the goals of this subtask is to distinguish "low severity"
from "high severity" based solely on the CT scan.</p>
    </sec>
    <sec id="sec-2">
      <title>Detection of lung lesions in CT, TB-descriptor</title>
      <p>
        In this section, a method for automated detection of lung lesions in 3D CT images
is described. The method is based on training the Deep Convolutional Neural
Network (CNN) on a set of data derived from 3D CT images with manually
labeled lesions of different types. The method utilizes the slice-wise image
segmentation technique previously described in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. This technique considers splitting
the original 3D image into a number of smaller 2D regions, processing the
regions one-by-one and collecting the CNN output into a 3D probability map (see
Fig. 1). Finally, a quantitative TB-descriptor is built based on the lesion
probability maps.
      </p>
      <p>
        TB lesions were labeled manually on a total of 198 3D CT scans. The
labeling was performed in two stages. The first stage was performed by a qualified
radiologist and was aimed at coarse localization of TB lesions of different types in
the lungs without exact delineation. The second stage was aimed at correction
of the initial lesion labeling and more precise segmentation of the lesions (see
Fig. 2). Both stages of labeling were performed using an auxiliary software tool
designed by the authors (see Fig. 3).
      </p>
      <p>
        The developed software tool allows labeling of 10 different types of TB
lesions. Some types of lesions were well represented in the dataset, whilst lesions
of some other types (Pleuritis, Atelectasis, Pneumothorax) were present in only
a few images. The list of lesion types and the corresponding frequencies
of occurrence in the dataset images are shown in Table 1. As a result of the labeling
process, 3D masks with the corresponding lesion indices were obtained.
      </p>
      <p>
        For the extraction of lung regions, both for lesion detection and for the
ImageCLEFtuberculosis subtasks, an in-house implementation of a conventional
segmentation-by-registration approach [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] was employed instead of the one proposed by the
organizers. In our case the method utilized 130 reference CT scans with
manually segmented lungs. Projections along the X, Y and Z axes are calculated for
each reference CT scan. The three normalized projections are concatenated into
a quantitative descriptor of the reference image. For a target CT scan, a similarity
measure is calculated between the target image and the reference images based
on the quantitative descriptors of all images. The top-5 most similar reference
images are selected. The selected images, along with the corresponding lung masks,
are non-rigidly registered to the target image using the 'elastix' software tool [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], and the
final segmentation mask is obtained by means of averaging. The implemented
method demonstrates high robustness to the presence of large lesions in the lungs
(see Fig. 4).
      </p>
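      <p>The following is a minimal sketch of this retrieval step (an illustration, not the authors' exact code; whether the projections are 1D profiles or full 2D projection images is not specified in the text, so 1D profiles and a fixed resampling length are used here as assumptions):</p>
      <preformat><![CDATA[
import numpy as np

def projection_descriptor(ct_volume, length=64):
    """Concatenate normalized projections of a 3D CT volume along
    the X, Y and Z axes into a single quantitative descriptor."""
    parts = []
    for axis in range(3):
        profile = ct_volume.mean(axis=axis).mean(axis=0)  # 1D profile
        # resample to a fixed length so scans of different sizes
        # yield comparable descriptors (assumption)
        xs = np.linspace(0, profile.size - 1, length)
        profile = np.interp(xs, np.arange(profile.size), profile)
        profile -= profile.mean()                 # normalize
        norm = np.linalg.norm(profile)
        parts.append(profile / norm if norm > 0 else profile)
    return np.concatenate(parts)

def top5_similar(target_desc, reference_descs):
    """Indices of the 5 reference scans most similar to the target
    (Euclidean distance between descriptors)."""
    dists = np.linalg.norm(reference_descs - target_desc, axis=1)
    return np.argsort(dists)[:5]
]]></preformat>
      <p>The lung masks of the five selected references would then be warped to the target with 'elastix' and averaged into the final mask.</p>
      <p>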
        One of the possible ways to employ Deep Learning algorithms for 3D images is
to operate at the slice level by representing each 3D CT image as a set of 2D slices.
One of the advantages of such an approach is relatively low usage of computer
memory, since the large 3D image is processed slice-by-slice. In the current study, 2D
image regions of size 128×128 pixels were extracted from slices of the original CT
images with a 64-pixel stride. Three neighboring slices were used to compose a
single RGB image in order to use spatial information along the Z axis of the original CT
images. Finally, the image regions were up-sized to 256×256 pixels using bicubic
interpolation. The up-sizing was performed to improve the detection of small
lesions, since the first convolutional layer of the network used (AlexNet)
has a 4-pixel stride, and some lesions present in the images have a size of 2-3 pixels.
      </p>
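      <p>The tile extraction described above can be sketched as follows (a simplified illustration with our own function names; border handling and intensity windowing are omitted):</p>
      <preformat><![CDATA[
import numpy as np
from scipy.ndimage import zoom

def extract_tiles(ct_volume, tile=128, stride=64, out_size=256):
    """Cut 128x128 regions with a 64-pixel stride from axial slices,
    stack three neighboring slices as RGB channels, and up-size each
    region to 256x256 with bicubic (order-3 spline) interpolation."""
    depth, height, width = ct_volume.shape
    scale = out_size / float(tile)
    tiles = []
    for z in range(1, depth - 1):                  # need slices z-1, z+1
        rgb = np.stack([ct_volume[z - 1],
                        ct_volume[z],
                        ct_volume[z + 1]], axis=-1)
        for y in range(0, height - tile + 1, stride):
            for x in range(0, width - tile + 1, stride):
                region = rgb[y:y + tile, x:x + tile, :]
                region = zoom(region, (scale, scale, 1), order=3)
                tiles.append(((z, y, x), region))  # keep origin for later
    return tiles
]]></preformat>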
      <p>From the total number of 198 labeled 3D scans, 149 were used for training the
algorithms and the remaining 49 were used for validation. Lesion types with indices
1-5 were merged together into one class "Foci" as having a similar nature and/or
being a mixture of classes. From the 149 training CT images, 268,278 2D image
tiles were extracted. For each tile a corresponding label image was composed
using the manually labeled lesion data (see Fig. 5). Image regions which lie beyond
the lung segmentation masks are marked with a special "don't care" label. The neural
network omits these regions at both the training and validation stages, which allows
the available computational resources to be better focused on the actual regions of
interest. On the label images such regions are marked with gray color.</p>
      <p>
        For segmentation of lesions in 2D slice regions, a Fully Convolutional
Network version of AlexNet [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] was used. In order to increase the convergence rate and overall
accuracy, a publicly available ILSVRC2012-trained model was used to initialize
the network's weights. The net was set to recognize multiple lesion types at the
same time.
      </p>
      <p>Training was performed on a personal computer equipped with an Intel i7-6700K
CPU and a dedicated Nvidia TITAN X GPU with 3072 CUDA cores
and 12 GB of GDDR5 onboard memory. The NVIDIA DIGITS interface and the Caffe
framework were used. The network training parameters were set to the following
values: number of epochs=60, activation function=ReLU, batch size=64, solver
type=SGD Caffe solver. The learning rate was set to 0.001 for the first 20 epochs,
0.0001 for the next 20 and 0.00001 for the last 20.</p>
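      <p>The step learning-rate schedule amounts to the following (a sketch; in Caffe this corresponds to a step-wise learning-rate policy in the solver settings):</p>
      <preformat><![CDATA[
def learning_rate(epoch):
    """Step schedule used for training: 0.001 for epochs 0-19,
    0.0001 for epochs 20-39 and 0.00001 for epochs 40-59."""
    if epoch < 20:
        return 1e-3
    if epoch < 40:
        return 1e-4
    return 1e-5
]]></preformat>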
      <sec id="sec-2-1">
        <title>Obtaining probability maps</title>
        <p>Once the training process is finished, the trained network model can be used
for detection of lesions in an arbitrary 3D CT scan. In this case the CT image
undergoes the same procedures as the training images:</p>
        <list list-type="bullet">
          <list-item><p>segmentation of lung regions;</p></list-item>
          <list-item><p>extraction of 2D tiles;</p></list-item>
          <list-item><p>processing the tiles with the trained CNN and obtaining probability maps for each lesion type considered;</p></list-item>
          <list-item><p>collecting the obtained 2D probability maps into 3D probability maps for each lesion type separately.</p></list-item>
        </list>
        <p>Additionally, the probability maps can be smoothed to reduce the number of
falsely detected lesions, or thresholded so that all probability values
below a minimum allowed value are zeroed; a minimal sketch of this assembly and
thresholding step is given below. Fig. 6 demonstrates the detected
lesions on test CT scans. Lesion regions were obtained from the corresponding
probability maps by means of thresholding with P<sub>thres</sub> = 0.5. The resultant
lesion regions are marked with colors in correspondence with the colormap from
Fig. 5.</p>
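        <p>A minimal sketch of this step, assuming the per-tile network outputs have been resized back to the 128×128 tile grid and carry their origin coordinates (names are ours):</p>
        <preformat><![CDATA[
import numpy as np

def assemble_probability_map(shape, tile_outputs, tile=128, p_thres=0.5):
    """Average overlapping per-tile 2D probability maps into one 3D
    map for a single lesion type, then zero values below P_thres."""
    prob = np.zeros(shape, dtype=np.float32)
    count = np.zeros(shape, dtype=np.float32)
    for (z, y, x), tile_prob in tile_outputs:    # tile_prob: 128x128
        prob[z, y:y + tile, x:x + tile] += tile_prob
        count[z, y:y + tile, x:x + tile] += 1.0
    prob = np.divide(prob, count,
                     out=np.zeros_like(prob), where=count > 0)
    prob[prob < p_thres] = 0.0                   # thresholding
    return prob
]]></preformat>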
      </sec>
      <sec id="sec-2-2">
        <title>Building TB-descriptor</title>
        <p>Once the probability maps are built, the TB-descriptor proposed in this study
is constructed as follows. The lung region of the CT image is divided into 6 parts, as
shown in Fig. 7. The parts are of equal height along the Z axis. For every type
of lesion, its presence in each of the six parts is calculated as the sum of probabilities
over the corresponding voxels divided by the number of lung voxels within the
considered part. Since all the probabilities range from 0 to 1, the lesion
presence score for each part is also a number from 0 to 1. Finally, the presence
scores obtained for each lesion type and each lung part are concatenated into a
single TB-descriptor of size N<sub>lesion types</sub> × N<sub>parts</sub>.</p>
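        <p>A sketch of the descriptor computation under these definitions (assuming one 3D probability map per lesion type and a binary lung mask; the left/right split at the volume midline along the X axis is our simplifying assumption):</p>
        <preformat><![CDATA[
import numpy as np

def tb_descriptor(prob_maps, lung_mask):
    """Presence score of each lesion type in each of 6 lung parts:
    sum of probabilities over the part's lung voxels divided by the
    number of lung voxels in that part. Scores lie in [0, 1]."""
    z_edges = np.linspace(0, lung_mask.shape[0], 4).astype(int)  # 3 bands
    x_mid = lung_mask.shape[2] // 2                              # 2 sides
    scores = []
    for pmap in prob_maps:                    # one 3D map per lesion type
        for i in range(3):                    # upper / middle / lower
            band = slice(z_edges[i], z_edges[i + 1])
            for side in (slice(0, x_mid), slice(x_mid, None)):
                part = lung_mask[band, :, side] > 0
                n_lung = part.sum()
                s = pmap[band, :, side][part].sum()
                scores.append(s / n_lung if n_lung > 0 else 0.0)
    return np.array(scores)  # length = N_lesion_types * N_parts
]]></preformat>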
        <p>Thus, the proposed TB-descriptor indicates the presence of lesions of certain
types in different parts of the lungs: upper left, middle right, etc. The portion of the
affected lung volume is taken into account as well. This TB-descriptor was used for
recognition of the drug resistance status, type and severity of tuberculosis in the
ImageCLEF challenge subtasks.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Submissions and results</title>
      <p>For all the ImageCLEFtuberculosis subtasks the following prediction scheme was
used:</p>
      <list list-type="bullet">
        <list-item><p>segmentation of lung regions for each CT image;</p></list-item>
        <list-item><p>detection of lesions;</p></list-item>
        <list-item><p>calculation of TB-descriptors for each image;</p></list-item>
        <list-item><p>prediction of the desired values using a suitable classifier.</p></list-item>
      </list>
      <p>The subtasks of ImageCLEFtuberculosis considered different types of predictions:
multiple-class prediction, where only the index of the predicted class must be
provided; two-class prediction, where the probability of belonging to the positive class must
be provided as well; and regression, where the corresponding method needs to
predict the value of a continuous variable as precisely as possible. For all three
subtasks, a Random Forests classifier was used, which is capable of handling all the
above-mentioned tasks. Assessment of the algorithms' performance was carried
out on the training data using a k-fold cross-validation procedure with k = 5.</p>
      <sec id="sec-3-1">
        <title>Subtask #1: MDR detection</title>
        <p>Following the above-mentioned prediction scheme, TB-descriptors were
calculated for all the available CT images. A Random Forests classifier was trained
on the set of TB-descriptors with concatenated meta-data values: patients' age
and gender. Based on a series of experiments, the number of trees in the classifier
was chosen to be 150 for this subtask. Accuracy assessment within 5-fold
cross-validation demonstrated an Area Under ROC-Curve (AUC) value of 0.6385. One
run was submitted as the result of prediction on the test data.</p>
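        <p>This step can be illustrated with scikit-learn (a sketch rather than the authors' exact code; descriptors, age, gender and the MDR labels are assumed to be precomputed arrays):</p>
        <preformat><![CDATA[
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# TB-descriptors with age and gender appended as extra features
X = np.hstack([descriptors, age[:, None], gender[:, None]])

clf = RandomForestClassifier(n_estimators=150, random_state=0)
# 5-fold cross-validation scored by area under the ROC curve
auc = cross_val_score(clf, X, y_mdr, cv=5, scoring='roc_auc').mean()

clf.fit(X, y_mdr)
p_mdr = clf.predict_proba(X_test)[:, 1]  # probability of the MDR class
]]></preformat>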
        <p>A total of 39 runs were submitted by 7 different participating groups
for the MDR detection subtask. Table 2 shows the top-15 participants' results in
terms of AUC value. Utilizing the lesion-based TB-descriptor resulted in an AUC of
0.5558 and ranked 14-th among the 39 runs. The best result, achieved by the
VISTA@UEvora team with an AUC value of 0.6178, outperforms the previous year's
result of 0.5825 AUC. However, MDR detection performance still remains at a
level close to random classification. The increase in prediction performance might be
due to the addition of a number of more severe XDR TB cases to the dataset
and also to utilizing information about patients' age and gender.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Subtask #2: TB type classification</title>
        <p>For the TB type classification subtask, a similar procedure was carried out, with the
difference that the Random Forests classifier was trained for the case of multiple
image classes. The number of trees for this subtask was chosen to be 150. Instead
of using all the available data, only the first CT scan of each patient was used
both for training the algorithms and for the final prediction of the patient's TB class.</p>
        <p>
          In total, 39 runs were submitted by 8 participating groups for the TB type
classification subtask. The results were evaluated and ranked by accuracy and by
Cohen's Kappa coefficient [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], which is preferable in the case of an unbalanced dataset.
Among the submitted runs, our method based on lesion detection demonstrated
the best TB type recognition performance in terms of both the Kappa coefficient
(0.2312) and accuracy (0.4227) (see Table 3). Compared to 2017, the overall TB
type classification results are less accurate. This is probably caused by the
increased imbalance between TB types. Using more than one CT scan per patient
might also confuse the prediction methods and worsen the final results.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>Subtask #3: Severity scoring</title>
        <p>In contrast to the two previous subtasks, the TB severity scoring subtask was
evaluated in two principally different ways.</p>
        <p>One way of evaluation used the original severity scores from 1 to 5 as provided
by the doctors, and the task for participants was to predict those numerical scores
as precisely as possible. Here, the Root Mean Square Error (RMSE) was computed
between the ground truth and the predicted severity scores provided by participants.
The goal was to achieve the lowest possible RMSE value.</p>
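        <p>For this regression variant, a Random Forests regressor and the RMSE metric could be combined as follows (a sketch under the assumption that the feature matrices are built as for the other subtasks):</p>
        <preformat><![CDATA[
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

reg = RandomForestRegressor(n_estimators=100, random_state=0)
reg.fit(X_train, severity_train)       # original severity scores 1..5
pred = reg.predict(X_test)
rmse = np.sqrt(mean_squared_error(severity_test, pred))
]]></preformat>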
        <p>The other way of evaluation considered a binary classification problem. The
original severity index was transformed into two class values: cases with scores
from 1 to 3 were labeled as "high severity" cases, and the other cases with scores 4
and 5 corresponded to the "low severity" class. With this way of evaluation, the
participants were to provide probabilities of TB cases belonging to the "high severity"
class. The results were ranked using the AUC value. The top-10 runs for both evaluation
methods are shown in Tables 4 and 5.</p>
        <p>In total, 36 runs were submitted by 7 participants for this subtask. As can
be seen from the tables, the lesion-based TB-descriptor appeared to be extremely
useful for assessing TB severity, with the best result in terms of regression
(minimum RMSE among all runs) and the 6-th best result in terms of "low severity"/"high
severity" classification. The number of trees for these experiments was set to 100. The
highest binary classification performance, with an AUC value of 0.7708, was achieved
by the MedGIFT group.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>The results of this study allow drawing the following conclusions:</p>
      <list list-type="bullet">
        <list-item><p>The combination of the lesion-based TB-descriptor and a Random Forests classifier allowed achieving the best performance in the TB type classification and TB severity scoring subtasks.</p></list-item>
        <list-item><p>Similar to the 2017 results, image-based MDR TB detection performance remains low (AUC 0.6178, accuracy 55.93%) despite the addition of XDR TB cases to the dataset and the use of information about patients' age and gender.</p></list-item>
        <list-item><p>The lesion-based TB-descriptor derived from lung CT scans conveys valuable information on the patient's state and is worth considering in CT image analysis of TB patients.</p></list-item>
        <list-item><p>Extending the training data for lesion detection is desirable for further improvements of computerized TB diagnosis.</p></list-item>
      </list>
      <p>In this paper, an image description and analysis method based on automatic
detection of TB lesions in the lungs and composition of a TB-descriptor has been
presented. The method was employed by the UIIP BioMed group in all three subtasks of
the ImageCLEFtuberculosis 2018 challenge.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgements</title>
      <p>This study was supported by the National Institute of Allergy and Infectious
Diseases, National Institutes of Health, U.S. Department of Health and Human
Services, USA through the CRDF project DAA3-17-63599-1 "Year 6: Belarus
TB Database and TB Portals".</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. Cohen, J.: A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20(1), 37-46 (1960)</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. Dicente Cid, Y., Kalinovsky, A., Liauchuk, V., Kovalev, V., Müller, H.: Overview of ImageCLEFtuberculosis 2017 - predicting tuberculosis type and drug resistances. In: CLEF2017 Working Notes. CEUR Workshop Proceedings, CEUR-WS.org &lt;http://ceur-ws.org&gt;, Dublin, Ireland (September 11-14 2017)</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>3. Dicente Cid, Y., Liauchuk, V., Kovalev, V., Müller, H.: Overview of ImageCLEFtuberculosis 2018 - detecting multi-drug resistance, classifying tuberculosis type, and assessing severity score. In: CLEF2018 Working Notes. CEUR Workshop Proceedings, CEUR-WS.org &lt;http://ceur-ws.org&gt;, Avignon, France (September 10-14 2018)</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>4. Ionescu, B., Müller, H., Villegas, M., Arenas, H., Boato, G., Dang-Nguyen, D.T., Dicente Cid, Y., Eickhoff, C., Garcia Seco de Herrera, A., Gurrin, C., Islam, B., Kovalev, V., Liauchuk, V., Mothe, J., Piras, L., Riegler, M., Schwall, I.: Overview of ImageCLEF 2017: Information extraction from images. In: Experimental IR Meets Multilinguality, Multimodality, and Interaction, 8th International Conference of the CLEF Association, CLEF 2017. Lecture Notes in Computer Science, vol. 10456. Springer, Dublin, Ireland (September 11-14 2017)</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>5. Ionescu, B., Müller, H., Villegas, M., de Herrera, A.G.S., Eickhoff, C., Andrearczyk, V., Cid, Y.D., Liauchuk, V., Kovalev, V., Hasan, S.A., Ling, Y., Farri, O., Liu, J., Lungren, M., Dang-Nguyen, D.T., Piras, L., Riegler, M., Zhou, L., Lux, M., Gurrin, C.: Overview of ImageCLEF 2018: Challenges, datasets and evaluation. In: Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Ninth International Conference of the CLEF Association (CLEF 2018), Lecture Notes in Computer Science, Springer, Avignon, France (September 10-14 2018)</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>6. Kalinovsky, A., Liauchuk, V., Tarasau, A.: Lesion detection in CT images using Deep Learning semantic segmentation technique. In: International Workshop "Photogrammetric and computer vision techniques for video surveillance, biometrics and biomedicine". The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLII-2/W4, pp. 13-17. Moscow, Russia (May 2017). https://doi.org/10.5194/isprs-archives-XLII-2-W4-13-2017, http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLII2-W4/13/2017/</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>7. Klein, S., Staring, M., Murphy, K., Viergever, M.A., Pluim, J.P.: elastix: a toolbox for intensity-based medical image registration. IEEE Transactions on Medical Imaging 29(1), 196-205 (2010)</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>8. Kovalev, V., Liauchuk, V., Kalinouski, A., Rosenthal, A., Gabrielian, A., Skrahina, A., Astrauko, A., Tarasau, A.: Utilizing radiological images for predicting drug resistance of lung tuberculosis. In: Computer Assisted Radiology - 27th International Congress and Exhibition (CARS-2015), vol. 10, pp. 129-130. Springer, Barcelona (2015)</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>9. Kovalev, V., Liauchuk, V., Safonau, I., Astrauko, A., Skrahina, A., Tarasau, A.: Is there any correlation between the drug resistance and structural features of radiological images of lung tuberculosis patients? In: Computer Assisted Radiology - 27th International Congress and Exhibition (CARS-2013), vol. 8, pp. 18-20. Springer, Heidelberg (2013)</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>10. Shelhamer, E., Long, J., Darrell, T.: Fully convolutional networks for semantic segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 39(4), 640-651 (April 2017). https://doi.org/10.1109/TPAMI.2016.2572683</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>11. Sluimer, I., Prokop, M., van Ginneken, B.: Toward automated segmentation of the pathological lung in CT. IEEE Transactions on Medical Imaging 24(8), 1025-1038 (Aug 2005). https://doi.org/10.1109/TMI.2005.851757</mixed-citation>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>12. Wang, Y.X.J., Chung, M.J., Skrahin, A., Rosenthal, A., Gabrielian, A., Tartakovsky, M.: Radiological signs associated with pulmonary multi-drug resistant tuberculosis: an analysis of published evidences. Quantitative Imaging in Medicine and Surgery 8(2), 161-173 (2018)</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>