<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Estimating Severity from CT Scans of Tuberculosis Patients using 3D Convolutional Nets and Slice Selection</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hasib Zunair</string-name>
<email>hasibzunair@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aimon Rahman</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nabeel Mohammed</string-name>
<email>nabeel.mohammed@northsouth.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>North South University</institution>
          ,
          <addr-line>Dhaka 1229</addr-line>
          ,
          <country country="BD">Bangladesh</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this work, we propose a 16-layer 3D convolutional neural network combined with a slice selection technique for estimating the severity of Tuberculosis (TB) from 3D Computed Tomography (CT) images of TB patients, which attained 10th place in the ImageCLEF 2019 Tuberculosis - Severity scoring challenge. The best result reported in this work is an Area Under the ROC Curve (AUC) of 0.61 and a binary accuracy of 61.5%. Code for this work can be found at https://github.com/hasibzunair/tuberculosis-severity.</p>
      </abstract>
      <kwd-group>
        <kwd>Deep learning</kwd>
        <kwd>Convolutional Neural Networks</kwd>
        <kwd>Analysis</kwd>
        <kwd>Computed Tomography</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Tuberculosis is a potentially serious infectious disease that affects the lungs and
sometimes other parts of the body. Mycobacterium tuberculosis, the
bacterium that causes the infection, can spread from one person to another via
coughs, spit or sneezes. Most infections do not cause any symptoms, a state
referred to as latent tuberculosis, and about 10% of them turn into potentially fatal
active disease. People with HIV/AIDS and smokers are more vulnerable to active
TB. The symptoms of the infection are cough with blood-containing mucus,
night sweats, fever, chills, loss of appetite and severe weight loss. About
one-quarter of the world population has been infected with the disease. In 2010,
1.2-1.45 million deaths occurred due to the disease, mostly in developing
countries, which makes it the second most common cause of death from
infectious disease [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Tuberculosis is diagnosed by conducting chest X-rays and
microscopic examination of bodily fluids. A computed tomography (CT) scan provides
      </p>
      <p><sup>1</sup> https://github.com/hasibzunair/tuberculosis-severity</p>
      <p>
        more detailed information about the infection than X-ray images [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Several
techniques have been used to automate the detection of Mycobacterium
infection using different machine learning techniques on chest radiographs and CT
scan images. Deep learning [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] has also been used to diagnose lung diseases
in CT scan images. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] used a deep belief network and a convolutional neural
network to classify lung nodules in 3D tomography images, where the deep belief
network performed better than the convolutional neural network and SIFT. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] shows that
an ensemble of AlexNet and GoogLeNet DCNNs performed best, with an AUC
score of 0.99, for detecting pulmonary tuberculosis on radiograph images.
Prognosis of diseases such as chronic obstructive pulmonary disease and acute
respiratory disease in smokers' lungs has also been predicted using convolutional
neural networks on computed tomography images [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Three different deep
learning techniques, convolutional neural networks (CNN), deep neural networks (DNN)
and sparse autoencoders (SAE), have also been applied to detect lung cancer from CT scan
images, where the CNN performed best with an accuracy
of 84.15%, sensitivity of 83.96%, and specificity of 84.32% [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Most computer-aided diagnosis for tuberculosis uses radiographic images
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]-[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. A challenging part of working with CT scans is the fact that the data
points comprise depth information, which makes them not only three-dimensional
but also computationally expensive to process for images with a large
depth size. Several works address the detection of 3D objects; for
example, [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] introduced a network named VoxNet, which integrates a volumetric grid
representation with a 3D convolutional neural network and is validated
on LiDAR, RGBD, and CAD data. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], on the other hand, uses 2D
representations from various angles of a 3D object and trains a multi-view
convolutional neural network classifier with a view-pooling method, which performed
better than a 3D CNN. They show the use of transfer learning [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], an
effective technique where a model that has previously learnt a set of
features for one visual recognition task is used as a starting point for solving another task.
      </p>
      <p>
        Copyright © 2019 for this paper by its authors. Use permitted under Creative
Commons License Attribution 4.0 International (CC BY 4.0). CLEF 2019, 9-12
September 2019, Lugano, Switzerland.
      </p>
      <p>
        Training 3D convolutional nets on volumetric data is computationally
expensive because the depth dimension requires additional feature learning and
hence increases the number of learnable parameters. Moreover, the data
samples do not all have the same depth size, which complicates training. To
address this problem, we introduce a novel data partitioning technique which makes
the training method not only effective but, most importantly, feasible. We show that,
using partial depth information from each volumetric data point, it is
possible to achieve good AUC and accuracy values. Our approach
achieved 10th place among a total of 100 participants in the ImageCLEF 2019
Tuberculosis - Severity scoring challenge [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>Experimental Overview</title>
      <p>This section provides details of the setup used in the different experiments.
A brief description of the data set used for the experimentation is given,
followed by a description of the network architecture. Moreover, details of the
training regimen and the evaluation metrics are provided.</p>
      <p>For replication of this work, it is relevant to mention that all of the
experiments were performed on a Windows machine with an Intel Core(TM)
i7-7700 CPU @ 3.60GHz, 32 GB RAM, a single CUDA-enabled NVIDIA
GTX 1050 4GB graphics processing unit (GPU), Python 3.6.7, Keras 2.2.4 with
TensorFlow 1.12.0 backend, and CUDA compilation tools (release 10.0, V10.0.130)
for GPU acceleration.</p>
      <sec id="sec-2-1">
        <title>Dataset Description</title>
        <p>
          The dataset for the severity scoring (SVR) task is provided by ImageCLEF
Tuberculosis 2019 [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ][
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. The dataset consists of a total of 335 chest CT scans
of tuberculosis patients along with clinically relevant metadata. From the
dataset, 218 data points are used for training and the remaining 117 are held
out for the final evaluation. The selected metadata includes the following binary
measures: disability, relapse, symptoms of TB, comorbidity, bacillary, drug
resistance, higher education, ex-prisoner, alcoholic, smoking. The provided
3D CT images have a slice size of 512 × 512 pixels and a number of slices
varying from 50 to 400. All the CT images are stored in the NIfTI file format,
which stores raw voxel intensities in Hounsfield units (HU) as well as the
corresponding image metadata such as image dimensions, voxel size in physical
units, slice thickness, etc. Figure 1 shows an instance of this.
        </p>
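        <p>Since the NIfTI files store raw voxel intensities in Hounsfield units, a typical first step before feeding a network is to clip the HU values to a window of interest and rescale them to [0, 1]. The paper does not specify its intensity normalization; the following is a minimal numpy sketch in which the window bounds of -1000 and 400 HU are our own illustrative assumptions.</p>
        <preformat>
```python
import numpy as np

def normalize_hu(volume, hu_min=-1000.0, hu_max=400.0):
    """Clip raw Hounsfield-unit voxels to a window and rescale to [0, 1].

    The window bounds are illustrative assumptions, not values from the paper.
    """
    volume = np.clip(volume.astype(np.float32), hu_min, hu_max)
    return (volume - hu_min) / (hu_max - hu_min)

# Example: intensities ranging from below air (-1200 HU) to bone-like (+700 HU)
scan = np.array([[-1200.0, -1000.0], [0.0, 700.0]])
print(normalize_hu(scan))  # all values in [0, 1]; out-of-window values are clipped
```
        </preformat>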
        <p>
          The original severity score, assigned by a medical doctor, is included as
training metadata, annotated on a scale from 1 ("critical/very bad") to
5 ("very good"). This grade is converted to "LOW" (scores 4 and 5) and "HIGH"
(scores 1, 2 and 3), thereby reducing the task to a binary classification problem,
as per the task description provided by ImageCLEF Tuberculosis 2019 [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ][
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. In
addition, the 117 test set labels are hidden, which makes the task
more challenging.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Data Preprocessing - Slice Selection Technique</title>
        <p>In order to prepare the data for training, we first resize individual slices of
the 3D input volume to 128 × 128 with bicubic interpolation.</p>
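        <p>This per-slice resizing can be sketched with scipy's cubic spline interpolation (order=3, a close stand-in for bicubic resampling; the exact library used is not stated in the paper). The 512 to 128 scale factor follows the slice sizes given in Section 2.1.</p>
        <preformat>
```python
import numpy as np
from scipy.ndimage import zoom

def resize_slice(slice_2d, target=128):
    """Resize one 2D CT slice to target x target using cubic spline interpolation."""
    h, w = slice_2d.shape
    return zoom(slice_2d, (target / h, target / w), order=3)

slice_2d = np.random.rand(512, 512)   # one original 512 x 512 slice
print(resize_slice(slice_2d).shape)   # (128, 128)
```
        </preformat>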
        <p>This is followed by a slice selection technique that we introduce in this work,
where depth sizes of 32 and 16, resulting in the two experimental settings
discussed in Table 1, are chosen for our final submissions.</p>
        <p>In Figure 2, we show a visual representation of the slice selection technique.
For a given input volume, the first 4 slices, the middle 8 slices and the last 4 slices
are extracted. The middle slice of the input volume is obtained by taking
half of the input volume depth. These three sub-components are then stacked
to reconstruct the desired input volume, which in this case has a depth size
of 16. In other words, the input volume consists of a total of 16 slices. The
main motivation behind the proposed technique is to eliminate the problem of GPU
memory exhaustion during optimization. Since the default input volume consists of a large
number of slices, it was entirely impossible to allocate tensors for computation in
our experimental setup described at the beginning of Section 2.</p>
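        <p>The slice selection described above can be expressed directly in numpy. The paper specifies the first 4, middle 8 and last 4 slices, with the middle located at half the volume depth; taking the 8-slice block as 4 slices on either side of the half-depth index is our reading of Figure 2.</p>
        <preformat>
```python
import numpy as np

def select_slices(volume):
    """Build a 16-slice volume from the first 4, middle 8, and last 4 slices.

    volume: array of shape (depth, height, width), depth >= 16.
    The middle block is centred on depth // 2, as described in the paper.
    """
    mid = volume.shape[0] // 2
    first = volume[:4]
    middle = volume[mid - 4:mid + 4]   # 8 slices around the middle slice
    last = volume[-4:]
    return np.concatenate([first, middle, last], axis=0)

scan = np.random.rand(300, 128, 128)  # e.g. a scan with 300 slices
print(select_slices(scan).shape)      # (16, 128, 128)
```
        </preformat>
        <p>For the depth-32 setting, the same idea applies with proportionally larger blocks.</p>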
      </sec>
      <sec id="sec-2-3">
        <title>Configuration of the proposed Convolutional Neural Network</title>
        <p>
          The network used in this work was inspired by the architecture used for
real-time object recognition by integrating a volumetric occupancy grid
representation with a supervised convolutional net, named VoxNet [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. All the 3D input
volumes were transformed to 128 × 128 × depth, where depth is 32 or 16 for
the two different settings, with cubic interpolation, for the network input. The
network consists of three convolutional layers with 32 filters of size 2 × 2 × 2.
After every convolutional layer, a maxpooling layer is added with a stride of 2.
Maxpooling layers halve the size of their input by taking the maximum value
from a window of size 2 × 2. Rectified Linear Units (ReLU) were used as the
activation function for both the convolutional and fully connected layers. The activation
function is governed by Equation 1:
a = max(0, x)    (1)
where a is the output of the activation for a given input x.
        </p>
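        <p>Equation 1 is applied element-wise, which can be checked in a couple of lines of numpy:</p>
        <preformat>
```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: a = max(0, x), applied element-wise (Equation 1)."""
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 3.0])))  # [0. 0. 0. 3.]
```
        </preformat>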
        <p>The convolutional blocks in Figure 3a are then followed by a batch
normalization layer. The deep learning community has quickly adopted batch
normalization as it introduces a form of regularization which restrains the
network from simply memorizing the training dataset, meaning the network is
expected to generalize better on unseen data.
The output of the batch normalization layer is flattened and passed to a series
of fully connected layers: two dense layers with 1028 neurons each and one with
512 neurons. Each of the dense layers is followed by a dropout layer which
drops neuron connections with a probability of 40%. The output of the
final dropout layer is followed by a dense layer of 2 neurons. The network
architecture is shown in Figure 3.</p>
        <p>Softmax activation, shown in Equation 2, was applied on the last layer to
obtain probability outputs for the binary classification problem. The output of
the softmax function is equivalent to a categorical probability distribution: it
gives the probability that each of the classes is true.</p>
        <p>σ(z)_j = e^{z_j} / Σ_{k=1}^{K} e^{z_k}    (2)</p>
        <p>where z is the vector of inputs to the output layer (if there are 10 output
units, then there are 10 elements in z) and j indexes the output units,
so j = 1, 2, ..., K.</p>
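        <p>Equation 2 can be implemented in a few lines of numpy; the max-subtraction below is a standard numerical-stability trick (our addition, it does not change the result):</p>
        <preformat>
```python
import numpy as np

def softmax(z):
    """Softmax over a vector of K logits (Equation 2)."""
    e = np.exp(z - np.max(z))  # subtracting max(z) avoids overflow, result unchanged
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0]))
print(probs, probs.sum())  # two class probabilities summing to 1
```
        </preformat>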
      </sec>
      <sec id="sec-2-4">
        <title>Training Regimen</title>
        <p>Stochastic gradient descent was used to optimize the weights of the network
via backpropagation, with a learning rate of 10^-4 and a momentum of 0.9.
The cross-entropy error between the predictions and the ground truth, shown
in Equation 3, was used as the loss function, and the weights were updated using
mini-batches at every iteration. The weights were initialized at random
and the biases were initialized to zero.</p>
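        <p>A single SGD-with-momentum update, using the stated learning rate 10^-4 and momentum 0.9, can be sketched in numpy; the toy quadratic loss below is purely illustrative, not part of the paper's training:</p>
        <preformat>
```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=1e-4, momentum=0.9):
    """One SGD-with-momentum update (the classic velocity form)."""
    v = momentum * v - lr * grad
    return w + v, v

# Toy example: minimize L(w) = w^2, whose gradient is 2w
w, v = np.array([1.0]), np.array([0.0])
for _ in range(10000):
    w, v = sgd_momentum_step(w, v, 2.0 * w)
print(w)  # converges towards the minimum at 0
```
        </preformat>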
        <p>L(y, ŷ) = -(1/N) Σ_{i=1}^{N} Σ_{c=1}^{C} 1[y_i ∈ C_c] log p_model[y_i ∈ C_c]    (3)</p>
        <p>In Equation 3, the double sum runs over the observations i, whose number is
N, and the categories c, whose number is C. The term 1[y_i ∈ C_c] is the indicator
function of the ith observation belonging to the cth category, and p_model[y_i ∈ C_c] is the
probability predicted by the model for the ith observation to belong to the cth
category. When there are more than two categories, the neural network outputs
a vector of C probabilities, each giving the probability that the network input
belongs to the respective category. When the number of categories is just two,
the neural network outputs a single probability, with the other one being 1 minus
that output.</p>
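        <p>For one-hot labels, the indicator in Equation 3 picks out exactly the probability assigned to each true class, so the loss reduces to the mean negative log-probability of the true classes; a small numpy check:</p>
        <preformat>
```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean categorical cross-entropy (Equation 3).

    probs: (N, C) predicted probabilities; labels: (N,) integer class indices.
    """
    n = probs.shape[0]
    # The indicator selects the true-class probability for each observation
    return -np.mean(np.log(probs[np.arange(n), labels]))

probs = np.array([[0.8, 0.2],
                  [0.3, 0.7]])
labels = np.array([0, 1])
print(cross_entropy(probs, labels))  # -(log 0.8 + log 0.7) / 2
```
        </preformat>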
        <p>Training was continued for 300 epochs on layers L1 to L16 in Figure 3 with a
validation split of 0.1, and the final evaluation was done on the test set. For the
final submissions, we only changed the input volume depth size, which also
required changing the batch size, resulting in the two different settings shown in
Table 1. All the other parameters were kept the same in the two settings.
It is noteworthy that, in both configurations, the number of learnable parameters is
23,808,378.</p>
        <p>The task is evaluated as a binary classification problem, with measures
including the Area Under the ROC Curve (AUC) and accuracy. An AUC of 1 represents
a perfect classification system, where the true positive rate is 1 and the false positive
rate is 0. Since the ranking of the techniques is based first on the AUC and
then on the accuracy, AUC is the optimizing metric and binary accuracy is
the satisfying metric.</p>
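        <p>The AUC used for ranking is equivalent to the probability that a randomly chosen positive ("HIGH") case receives a higher score than a randomly chosen negative ("LOW") case; a rank-free numpy sketch via the Mann-Whitney U statistic (our illustration, not the challenge's evaluation code):</p>
        <preformat>
```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    scores: predicted probabilities for the positive class; labels: 0/1 ground
    truth. Tied scores count as half a correctly-ranked pair.
    """
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

print(roc_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0 for a perfect ranking
```
        </preformat>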
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Result and Discussion</title>
      <p>From Table 2 it can be seen that CFG-A achieves the highest AUC and
accuracy values among the experiments conducted in this work. CFG-A, in which the
network is trained with an input volume of 128 × 128 × 32, achieves an average
test AUC of 0.611 and a test accuracy of 61.5%. Also note that the batch size for
this configuration was set to 4, since any larger value resulted in GPU memory
exhaustion; it is surprising that this setting still yielded the best result in the set of
experiments conducted. CFG-B was trained and evaluated with an
input volume of 128 × 128 × 16. From the experiments it can be said that the
degradation in CFG-B is due to the lower number of slices in the input volume, which
results in information loss. Even though the batch size was 16 in this setting,
the information loss outweighs any benefit to the overall performance.</p>
      <p>Name   Input volume      Batch size
CFG-A  128 × 128 × 32    4
CFG-B  128 × 128 × 16    16</p>
      <p>In Figures 4 and 5 we show the training logs for both configurations. Both
CFG-A and CFG-B were trained for 300 epochs. In Figure 4, which portrays the
training log for CFG-A, it can be seen that the network starts to overfit after
only 25 epochs. CFG-A achieves a highest validation accuracy of 68%, while on
the test set an accuracy of 61.5% is achieved, a test-validation accuracy
margin of 6.5%, even with the batch size set to 4.</p>
      <p>In the case of CFG-B, shown in Figure 5, the network starts overfitting
after 60 epochs and achieves a highest validation accuracy of 82.5%. It is
surprising that this configuration achieves a test accuracy of only 53.8%, a
test-validation accuracy margin of 28.7%. From this behaviour we can say that the
preprocessing employed in CFG-B, with a depth size of 16, causes the
validation set to not be representative of the test set. This setup also causes
information loss, which results in significantly lower performance than CFG-A.</p>
      <p>
        We demonstrate a 3D convolutional neural network with a newly proposed
preprocessing technique, slice selection from volumetric data, used to
estimate severity from CT images of tuberculosis patients. This work
achieved 10th place, with a test AUC of 0.611 and a test accuracy of 61.5%, in the
ImageCLEF 2019 Tuberculosis - Severity Scoring challenge [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ][
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. We show
that, even without using all the slices from the training set, the slice selection
technique makes it possible to achieve rather good AUC and accuracy values
on the final test set.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Future Work</title>
      <p>In future work, the results will be further analyzed to gain a better
understanding of the reasons behind them. In addition, various network
architectures will be experimented with, and further improvements to the proposed slice
selection technique will be made, in an attempt to build a robust deep learning
model to estimate the severity of TB patients.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1. Tuberculosis, https://www.who.int/en/news-room/fact-sheets/detail/ tuberculosis.
          <source>Last accessed 8 Sept 2018</source>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Skoura</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zumla</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bomanji</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>Imaging in tuberculosis</article-title>
          .
          <source>International Journal of Infectious Diseases</source>
          ,
          <volume>32</volume>
          ,
          <fpage>87</fpage>
          -
          <lpage>93</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Lakhani</surname>
            , Paras, and
            <given-names>Baskaran</given-names>
          </string-name>
          <string-name>
            <surname>Sundaram</surname>
          </string-name>
          .
          <article-title>"Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks</article-title>
          .
          <source>" Radiology</source>
          <volume>284</volume>
          , no.
          <issue>2</issue>
          (
          <year>2017</year>
          ):
          <fpage>574</fpage>
          -
          <lpage>582</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Hua</surname>
          </string-name>
          , Kai-Lung,
          <string-name>
            <surname>Che-Hao</surname>
            <given-names>Hsu</given-names>
          </string-name>
          , Shintami Chusnul Hidayati, Wen-Huang Cheng, and
          <string-name>
            <surname>Yu-Jen Chen</surname>
          </string-name>
          .
          <article-title>"Computer-aided classification of lung nodules on computed tomography images via deep learning technique." OncoTargets and therapy 8 (</article-title>
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Gonzlez</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ash</surname>
            ,
            <given-names>S.Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vegas-</surname>
            Snchez-Ferrero,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Onieva</surname>
            <given-names>Onieva</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Rahaghi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.N.</given-names>
            ,
            <surname>Ross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.C.</given-names>
            ,
            <surname>Daz</surname>
          </string-name>
          ,
          <string-name>
            <surname>A.</surname>
          </string-name>
          , San Jos Estpar,
          <string-name>
            <surname>R.</surname>
          </string-name>
          and Washko,
          <string-name>
            <surname>G.R.</surname>
          </string-name>
          ,
          <year>2018</year>
          .
          <article-title>Disease staging and prognosis in smokers using deep learning in chest computed tomography</article-title>
          .
          <source>American journal of respiratory and critical care medicine</source>
          ,
          <volume>197</volume>
          (
          <issue>2</issue>
          ), pp.
          <fpage>193</fpage>
          -
          <lpage>203</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6. Song, QingZeng, Lei Zhao, XingKe
          <string-name>
            <surname>Luo</surname>
            , and
            <given-names>XueChen</given-names>
          </string-name>
          <string-name>
            <surname>Dou</surname>
          </string-name>
          .
          <article-title>"Using deep learning for classification of lung nodules on computed tomography images</article-title>
          .
          <source>" Journal of healthcare engineering</source>
          <year>2017</year>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Hwang</surname>
          </string-name>
          , Sangheum,
          <string-name>
            <surname>Hyo-Eun</surname>
            <given-names>Kim</given-names>
          </string-name>
          , Jihoon Jeong, and
          <string-name>
            <surname>Hee-Jin Kim</surname>
          </string-name>
          .
          <article-title>"A novel approach for tuberculosis screening based on deep convolutional neural networks."</article-title>
          <source>In Medical Imaging</source>
          <year>2016</year>
          :
          <article-title>Computer-Aided Diagnosis</article-title>
          , vol.
          <volume>9785</volume>
          , p.
          <fpage>97852W</fpage>
          .
          <source>International Society for Optics and Photonics</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Lopes</surname>
          </string-name>
          , U. K., and Joo Francisco Valiati.
          <article-title>"Pre-trained convolutional neural networks as feature extractors for tuberculosis detection." Computers in biology</article-title>
          and medicine
          <volume>89</volume>
          (
          <year>2017</year>
          ):
          <fpage>135</fpage>
          -
          <lpage>143</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Qin</surname>
            , Chunli, Demin Yao,
            <given-names>Yonghong</given-names>
          </string-name>
          <string-name>
            <surname>Shi</surname>
            , and
            <given-names>Zhijian</given-names>
          </string-name>
          <string-name>
            <surname>Song</surname>
          </string-name>
          .
          <article-title>"Computer-aided detection in chest radiography based on artificial intelligence: a survey." Biomedical engineering online 17</article-title>
          , no.
          <issue>1</issue>
          (
          <year>2018</year>
          ):
          <fpage>113</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Vajda</surname>
            , Szilrd, Alexandros Karargyris, Stefan Jaeger,
            <given-names>K. C.</given-names>
          </string-name>
          <string-name>
            <surname>Santosh</surname>
            , Sema Candemir, Zhiyun Xue, Sameer Antani, and
            <given-names>George</given-names>
          </string-name>
          <string-name>
            <surname>Thoma</surname>
          </string-name>
          .
          <article-title>"Feature selection for automatic tuberculosis screening in frontal chest radiographs</article-title>
          .
          <source>" Journal of medical systems 42, no. 8</source>
          (
          <year>2018</year>
          ):
          <fpage>146</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11. Maturana, Daniel, and
          <string-name>
            <given-names>Sebastian</given-names>
            <surname>Scherer</surname>
          </string-name>
          .
          <article-title>"Voxnet: A 3d convolutional neural network for real-time object recognition."</article-title>
          <source>In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</source>
          , pp.
          <fpage>922</fpage>
          -
          <lpage>928</lpage>
          . IEEE,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Su</surname>
          </string-name>
          , Hang, Subhransu Maji, Evangelos Kalogerakis, and
          <string-name>
            <surname>Erik</surname>
          </string-name>
          Learned-Miller.
          <article-title>"Multi-view convolutional neural networks for 3d shape recognition."</article-title>
          <source>In Proceedings of the IEEE international conference on computer vision</source>
          , pp.
          <fpage>945</fpage>
          -
          <lpage>953</lpage>
          .
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Pan</surname>
            ,
            <given-names>S. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>A survey on transfer learning</article-title>
          .
          <source>IEEE Transactions on knowledge and data engineering</source>
          ,
          <volume>22</volume>
          (
          <issue>10</issue>
          ),
          <fpage>1345</fpage>
          -
          <lpage>1359</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>LeCun</surname>
          </string-name>
          , Y.,
          <string-name>
            <surname>Bengio</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hinton</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>Deep learning</article-title>
          .
          <source>nature</source>
          ,
          <volume>521</volume>
          (
          <issue>7553</issue>
          ),
          <fpage>436</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15. Yashin Dicente Cid, Vitali Liauchuk, Dzmitri Klimuk, Aleh Tarasau, Vassili Kovalev, Henning Muller,
          <source>Overview of ImageCLEFtuberculosis 2019 - Automatic CT-based Report Generation and Tuberculosis Severity Assessment, CLEF 2019 Working Notes. CEUR Workshop Proceedings (CEUR- WS.org)</source>
          ,
          <source>ISSN 1613-0073</source>
          , http://ceur-ws.
          <source>org/</source>
          Vol-
          <volume>2380</volume>
          /.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Bogdan</surname>
            <given-names>Ionescu</given-names>
          </string-name>
          , Henning Muller, Renaud Peteri, Yashin Dicente Cid, Vitali Liauchuk, Vassili Kovalev, Dzmitri Klimuk, Aleh Tarasau, Asma Ben Abacha, Sadid A.
          <string-name>
            <surname>Hasan</surname>
          </string-name>
          , Vivek Datla, Joey Liu, Dina Demner-Fushman,
          <string-name>
            <surname>Duc-Tien</surname>
            <given-names>DangNguyen</given-names>
          </string-name>
          , Luca Piras, Michael Riegler,
          <string-name>
            <surname>Minh-Triet</surname>
            <given-names>Tran</given-names>
          </string-name>
          , Mathias Lux, Cathal Gurrin, Obioma Pelka,
          <string-name>
            <surname>Christoph M. Friedrich</surname>
          </string-name>
          , Alba Garc a Seco de Herrera, Narciso Garcia, Ergina Kavallieratou,
          <source>Carlos Roberto del Blanco</source>
          ,
          <article-title>Carlos Cuevas Rodr guez</article-title>
          , Nikos Vasillopoulos, Konstantinos Karampidis, Jon Chamberlain, Adrian Clark, Antonio Campello, ImageCLEF 2019:
          <article-title>Multimedia Retrieval in Medicine, Lifelogging, Security and Nature In: Experimental IR Meets Multilinguality, Multimodality, and Interaction</article-title>
          .
          <source>Proceedings of the 10th International Conference of the CLEF Association (CLEF</source>
          <year>2019</year>
          ), Lugano, Switzerland,
          <source>LNCS Lecture Notes in Computer Science</source>
          ,
          <source>Springer (September 9-12</source>
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>