<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Tuberculosis detection using optical flow and the activity description vector</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Fernando Llopis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andres Fuster-Guillo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Juan Ramon Rico-Juan</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jorge Azorin-Lopez</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Irene Llopis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Alicante.</institution>
          <addr-line>Carretera San Vicente del Raspeig s/n 03690 San Vicente del Raspeig - Alicante</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Early detection of tuberculosis can save many lives, as the disease remains one of the leading causes of death half a century after its discovery. The analysis of chest CT scan images can be a quick and economical mechanism for detecting not only the type of tuberculosis but also whether or not the disease is multi-drug resistant. These are two of the objectives of the ImageCLEF Tuberculosis task of 2018, and they are the ones studied by the group of the University of Alicante in this edition. We have followed two approaches: the first based exclusively on the use of Deep Learning techniques on a sequence of 2D images extracted from a 3D tomography, and the second using Optical Flow to convert the 3D tomography into a motion representation in order to calculate the ADV (a descriptor previously proposed by the group). This descriptor is able to synthesize the information of a sequence into one image. This article presents the experiments carried out and the results obtained within the task.</p>
      </abstract>
      <kwd-group>
        <kwd>Tuberculosis</kwd>
        <kwd>Deep Learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The ImageCLEF Tuberculosis task is one of the tasks of ImageCLEF 2018 [11]. The
2018 edition of the task [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] includes three independent subtasks.
1. Subtask 1: MDR detection. The goal of this subtask is to assess the
probability of a TB patient having a resistant form of tuberculosis based on the
analysis of a chest CT scan.
2. Subtask 2: TBT classification. The goal of this subtask is to automatically
categorize each TB case into one of the following five types: (1) Infiltrative,
(2) Focal, (3) Tuberculoma, (4) Miliary, (5) Fibro-cavernous.
3. Subtask 3: Severity scoring. This subtask is aimed at assessing a TB severity
score based on a chest CT image. The severity score is a cumulative score of
the severity of a TB case assigned by a medical doctor.
      </p>
      <p>In this first participation our initial objective was to compare two models,
Deep Learning and Optical Flow, and to check their results on subtask 1. Finally, we
submitted a run for subtask 2 using the second model, which had given us better
results in the experimentation for subtask 1.</p>
      <p>This document is structured as follows: in Section 2 we present the
architectures of the models used, Deep Learning and Optical Flow. In Section 3 we
show the experimentation done with both models. Section 4 presents the official
results of the experiments and Section 5 summarizes the document and offers a
series of proposals for future work.</p>
      <p>Our approaches to the solution</p>
    </sec>
    <sec id="sec-2">
      <title>Deep Learning</title>
      <p>
        Deep neural networks have managed to solve problems, or increase efficiency on
problems, related to image processing [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. On the one hand, convolutional
layers manage to extract discriminative characteristics from images so that
they can be evaluated by subsequent layers [12]. On the other hand, recurrent
neural networks have also evolved in their approach and are mainly used in
sequence analysis [15].
      </p>
      <p>To address the first task (detecting the resistant form of tuberculosis), 3D chest
CT scan images are used. In a first stage, each 3D image is transformed
into a sequence of 2D images that represent the input of the neural
networks.</p>
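      <p>As a minimal sketch of this first stage, assuming evenly spaced axial slices (the selection strategy and the helper name are ours for illustration; the paper only states that 7 images are extracted per tomography), the 2D sequence could be obtained as follows:</p>

```python
import numpy as np

def extract_slices(volume, n_slices=7):
    """Pick n_slices evenly spaced axial slices from a 3D CT volume.

    Illustrative helper: the paper extracts 7 2D images per tomography,
    but does not specify the selection strategy, so even spacing along
    the axial axis is assumed here.
    """
    depth = volume.shape[0]
    # Evenly spaced indices from the first to the last slice.
    idx = np.linspace(0, depth - 1, n_slices).round().astype(int)
    return volume[idx]

# Toy volume: 128 axial slices of 64x64 pixels.
vol = np.zeros((128, 64, 64))
slices = extract_slices(vol, n_slices=7)  # shape (7, 64, 64)
```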
      <p>(Figure 1: CNN architecture, with a multichannel input image followed by Conv1, Pool1, Conv2, Pool2, a dense layer and the output.)</p>
      <sec id="sec-2-4">
        <title>Network configurations</title>
        <p>1. Convolutional neural network (CNN) with data augmentation: the main
idea is to use the advantages of convolutional layers on a single multi-channel
image. In this case, each channel is a 2D gray image (see Figure 1).
2. Convolutional layers combined with a recurrent neural network: the
natural way to combine the advantages of convolutional layers and sequential
treatment is to combine them in networks with multiple inputs per tomography.</p>
        <p>(Figure 2: each of the n input images goes through its own stack of convolutional layers; the resulting sequence feeds an LSTM, a dense layer and the output.)</p>
        <p>
          Figure 2 shows a basic scheme of this approach.
3. Pretrained network and classification: as a first approximation to extract
features from an image, the VGG16 deep convolutional neural network [16] is used
with the learned ImageNet [12] weights (4096 features per image). The main
idea is to concatenate the features of each input image belonging to the same
tomography to obtain the final feature vector, which is then classified in a
classical way.
4. Pretrained network and classification as a sequence using a recurrent neural
network: in this case, the extraction of features is similar to the one described
in the previous paragraph, and each feature vector is considered a
component of a sequence to be treated by the well-known recurrent neural
network called Long Short-Term Memory (LSTM) [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
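        <p>Approach 3 reduces to simple bookkeeping: each slice yields a 4096-dimensional VGG16 feature vector (the neural codes), and the per-tomography descriptor is their concatenation. A sketch with random vectors standing in for the real VGG16 features:</p>

```python
import numpy as np

# Each of the 7 slices yields a 4096-dim "neural codes" vector; the final
# descriptor per tomography concatenates them: 7 * 4096 = 28672 features.
rng = np.random.default_rng(0)
codes = [rng.standard_normal(4096) for _ in range(7)]  # stand-ins for VGG16 codes
tomography_features = np.concatenate(codes)            # 28672-dim vector
```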
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Optical Flow plus ADV</title>
      <p>In this section we propose a combined method based on optical flow and a
characterization method called ADV to deal with the classification of chest CT
scan images affected by different types of tuberculosis. The key point of this
method is the interpretation of the set of cross-sectional chest images provided
by the CT scan not as a volume but as a sequence of video images. We can extract
movement descriptors capable of classifying tuberculosis affections by analyzing
the deformations or movements produced in these video sequences.</p>
      <p>
        The concept of optical flow refers to the estimation of displacements of
intensity patterns. This concept has been extensively used in computer vision in
different application domains: robot navigation, car driving, video
surveillance or facial expression analysis [6]. In the biomedical context, optical flow has
been used to analyze organ deformations with different methods in
[
        <xref ref-type="bibr" rid="ref9">9,17</xref>
        ]. We can find different
methods in the literature to obtain the optical flow [3]. One of the most used methods to
estimate motion at each pixel is Lucas Kanade [13]. In this work we will use
the Lucas Kanade method to extract optical flow by comparing sequences of
consecutive images. Nevertheless, we need not only to estimate motion but also to
describe it. Several methods have been used for this in different
computer vision contexts, like human behavior recognition [8]. A successful method
to describe human behavior based on trajectory analysis is presented in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. That
paper proposes a descriptor vector called ADV (Activity Description Vector).
In summary, the ADV vector describes the activity in an image
sequence by counting, for each region of the image, the movements produced in the four
directions of the 2D space. A detailed description of the method can be found in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In this paper we propose the use of ADV to describe
motion in the optical flow obtained from sequences of cross-sectional chest images
provided by CT scan.
      </p>
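      <p>The Lucas Kanade estimate can be illustrated with a deliberately simplified numpy sketch that solves the least-squares system over a single window covering the whole image (real implementations, including the Matlab one used later, solve it per pixel over small windows):</p>

```python
import numpy as np

def lucas_kanade_global(frame1, frame2):
    """Single-window Lucas Kanade sketch: estimate one global (u, v)
    displacement between two frames by least squares. A real implementation
    solves the same 2x2 system per pixel over small windows."""
    iy, ix = np.gradient(frame1)   # spatial gradients (axis 0 = y, axis 1 = x)
    it = frame2 - frame1           # temporal gradient
    # Normal equations of ix*u + iy*v = -it
    a = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    return np.linalg.solve(a, b)   # estimated (u, v)

# Smooth test pattern shifted one pixel to the right.
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 100.0)
u, v = lucas_kanade_global(img, np.roll(img, 1, axis=1))
```

For a pattern displaced one pixel along x, the recovered flow is close to (1, 0).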
      <p>(Figure 3: cross-sectional chest CT scan images are transformed into a video sequence (chest XxYxn images), the Lucas Kanade optical flow is computed (chest 64x64xn images), the Activity Description Vector ADV (3x3x5) is extracted and normalized, and the normalized ADV is classified (SVM, k-nn) into a TB label.)</p>
      <sec id="sec-3-13">
        <title>TB Label</title>
        <p>Figure 3 summarizes the successive stages of the process for extracting the activity
descriptors (optical flow plus ADV) that will be the input of a classifier. In
the first stage, a transformation over the cross-sectional chest images provided
by the CT scan is performed in order to transform image formats into video
sequences adapted to calculate optical flow. The second stage implements the
Lucas Kanade method to obtain optical flow. The third stage calculates the
activity description vector ADV (3x3x5), accumulating within each 3x3 region
of the image the displacements of the optical flow in the four directions of the 2D
space (right, left, up, down). The fifth component of the ADV accumulates the
frequencies of direction changes. In the fourth stage, a normalization of the ADV
vector is performed. Finally, the last stage uses the normalized ADV vector as
the input for a generic classifier in order to evaluate the results.</p>
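        <p>A simplified sketch of the third stage (ours for illustration; the fifth ADV component, the direction-change frequencies, is left as a zero plane here) accumulates the flow displacements per 3x3 region:</p>

```python
import numpy as np

def adv(flow_u, flow_v, grid=3, eps=1e-3):
    """Accumulate, within each cell of a grid x grid partition, the optical
    flow displacements in four directions (right, left, up, down). This is
    an illustrative sketch of the ADV; the fifth component (direction-change
    frequencies) is left as a zero plane."""
    h, w = flow_u.shape
    out = np.zeros((grid, grid, 5))
    ys = np.linspace(0, h, grid + 1).astype(int)
    xs = np.linspace(0, w, grid + 1).astype(int)
    for i in range(grid):
        for j in range(grid):
            u = flow_u[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            v = flow_v[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            nu, nv = -u, -v
            out[i, j, 0] = np.sum(u[u > eps])    # rightward
            out[i, j, 1] = np.sum(nu[nu > eps])  # leftward
            out[i, j, 2] = np.sum(nv[nv > eps])  # upward (negative v)
            out[i, j, 3] = np.sum(v[v > eps])    # downward
    return out

# Uniform rightward flow on a 9x9 field: only the "right" bins accumulate.
descriptor = adv(np.ones((9, 9)), np.zeros((9, 9)))
```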
        <p>Experimentation</p>
        <p>
          Preliminary experiments using Deep Learning.
In order to validate the results, the widely used 10-fold cross-validation (10-CV)
technique is applied, and seven 2D images are extracted from each original 3D
tomography. For the experiments, the Keras v2.1.6 [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] and scikit-learn v0.19.1 [14] Python
libraries are used, in order to build deep neural networks and apply classifiers,
respectively.
        </p>
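        <p>The 10-CV setup amounts to the following index bookkeeping (scikit-learn's KFold provides this out of the box; the dataset size below is hypothetical, for illustration only):</p>

```python
import numpy as np

# 10-fold cross-validation bookkeeping: patient indices are shuffled, split
# into 10 disjoint folds, and each fold serves once as the test set while
# the remaining nine folds are used for training.
n_patients = 50  # hypothetical dataset size
rng = np.random.default_rng(42)
order = rng.permutation(n_patients)
folds = np.array_split(order, 10)

splits = []
for k in range(10):
    test_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(10) if j != k])
    splits.append((train_idx, test_idx))
```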
        <p>Table 1 shows the first approach using a CNN. The results are close to 50%,
which means that the network has not learned the difference between the two
classes.</p>
        <p>The second approach consists of a combination of a CNN with an RNN. In this case,
2 convolutional layers are used with filters 32 (3x3) and 64 (3x3). The accuracy is 0.50,
and the individual fold results are [0.54 0.58 0.42 0.50 0.48 0.52 0.52 0.44 0.48 0.52]. The
results are also unsatisfactory, so we tried a new approach.</p>
        <p>(Table 1: CNN configurations, with columns for the convolutional layers, the filters x kernel detail, and the mean accuracy with its 10-CV results.)</p>
        <p>The third try uses a pretrained network (VGG16) with the ImageNet weight
configuration. In this case, VGG16 is used to extract the weights of the
penultimate layer as image descriptors. The features extracted from the last
layers of a neural network are called neural codes. The number of final
characteristics is 28672, corresponding to 7 images times 4096 neural codes per
image. Table 2 summarizes the experiments using classifiers belonging to different
families of algorithms, applied to the neural codes directly or after normalizing with the L2
function.</p>
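        <p>The L2 normalization of the neural codes is a one-liner: each 28672-dimensional feature vector is scaled to unit Euclidean length before classification (a sketch with random stand-in features for 10 hypothetical patients):</p>

```python
import numpy as np

# L2 normalization of the neural codes: each 28672-dim feature vector is
# scaled to unit Euclidean length. Random vectors stand in for the real
# VGG16 features here.
rng = np.random.default_rng(1)
features = rng.standard_normal((10, 28672))
norms = np.linalg.norm(features, axis=1, keepdims=True)
features_l2 = features / norms
```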
        <p>The last approach using deep learning architectures consists of obtaining the neural
codes as in the previous try and classifying the sequence of 7 images with a recurrent
neural network (LSTM). Again, the accuracy is 0.49, and the detailed fold results are [0.62
0.54 0.42 0.46 0.28 0.48 0.48 0.48 0.56 0.56].</p>
        <p>In general, the results per fold (10-CV) are very different, probably due
to the nature of neural networks with random initialization of neurons, the
optimizers that have to adjust thousands of parameters and finally find local
minima, and also the small amount of images available to train a
neural network, where small differences between the training and test sets
allow generating sets that are easier to classify in some cases than in others. On the
other hand, no preprocessing has been applied to the 2D images, which could also
influence the high variation in results.</p>
        <p>(Table 2: mean 10-CV accuracy for the neural codes, original or L2-normalized, with the Nearest Neighbors, Linear SVM, RBF SVM, Decision Tree, Random Forest, AdaBoost, Naive Bayes, Logistic Regression and XGBoost classifier algorithms.)</p>
        <p>Preliminary experiments using Optical Flow plus ADV.
For these experiments, the widely used 10-fold cross-validation (10-CV) technique has
been applied again. All images of the original 3D tomography are used to calculate
the optical flow for each patient. For the experiments, Matlab R2013b has been
used to calculate the optical flow, the ADV and the classifiers.</p>
        <p>Table 3 shows the performance results of the proposed method.</p>
        <p>Classifier  OF size  ADV  Accuracy  MDR Accuracy  DS Accuracy
SVM         64x64    3x3  0.5097    0.312         0.6567
3-knn       64x64    3x3  0.5135    0.52          0.4627</p>
        <p>Table 3. Classification results using Optical Flow plus ADV</p>
        <p>Frequency Matrix with Deep Learning.
A modification of the Optical Flow experiment was to use the frequency matrices
generated as input to a neural network.</p>
        <p>In Figure 1 you can see an example of a Frequency Matrix.</p>
        <p>Results.
1. Run 1: MDR Baseline. The baseline is a probabilistic model in which the
image was not analyzed and only the sex and age data have been taken
into account.
2. Run 2: ADV 3x3, SVM, 1000 SMOTE upsampling.</p>
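        <p>The SMOTE upsampling mentioned in Run 2 can be sketched as follows (an illustrative re-implementation, not the code actually used in the experiments; the function name is ours): each synthetic sample is interpolated between a sample and one of its k nearest neighbours.</p>

```python
import numpy as np

def smote_like(samples, n_new, k=3, seed=0):
    """SMOTE-style upsampling sketch: each synthetic sample lies on the
    segment between a randomly chosen sample and one of its k nearest
    neighbours. Illustrative only, not the code used in the experiments."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(samples))
        d = np.linalg.norm(samples - samples[i], axis=1)
        d[i] = np.inf                      # exclude the sample itself
        j = rng.choice(np.argsort(d)[:k])  # one of the k nearest neighbours
        t = rng.random()                   # interpolation factor in [0, 1)
        out.append(samples[i] + t * (samples[j] - samples[i]))
    return np.array(out)

# Four corner points of the unit square; synthetic samples stay inside it.
samples = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synthetic = smote_like(samples, n_new=5)
```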
        <p>As can be seen in Table 4, the Optical Flow SVM model obtains the best
results, despite using only selected images.</p>
        <p>Since we had little time available for the second task, we only present the two
Optical Flow models, SVM and 3-nn.</p>
        <p>Run 1: ADV 3x3, SVM, 1000 SMOTE upsampling. Run 2: ADV 3x3, 3-nn,
1000 SMOTE upsampling.</p>
        <p>The results were significantly better using the 3-nn, but very far from those of the
rest of the participants.</p>
        <p>Conclusions and future work.
Early detection of tuberculosis is a major social challenge, given the devastating
effects of the disease. On the other hand, it represents a scientific challenge of
the highest level. As the organizers claim, "you have to work to get methods
that allow a correct detection of the disease that kills thousands and thousands
of people". In this paper we have proposed two different approaches to face the
problem. The first one is based on the use of Deep Learning techniques on a
sequence of 2D images extracted from a 3D tomography. The second approach
uses Optical Flow to convert the 3D tomography into a motion representation in
order to calculate the ADV (a descriptor previously proposed by the group). This
descriptor is able to synthesize the information of a sequence into one image.
The experiments carried out with these two approaches allow us to confirm the
interest of these lines of research and encourage us to seek improvements in the
proposed methodologies.</p>
        <p>11. Ionescu, B., Muller, H., Villegas, M., de Herrera, A.G.S., Eickhoff, C.,
Andrearczyk, V., Cid, Y.D., Liauchuk, V., Kovalev, V., Hasan, S.A., Ling, Y., Farri, O.,
Liu, J., Lungren, M., Dang-Nguyen, D.T., Piras, L., Riegler, M., Zhou, L., Lux, M.,
Gurrin, C.: Overview of ImageCLEF 2018: Challenges, datasets and evaluation. In:
Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings
of the Ninth International Conference of the CLEF Association (CLEF 2018),
vol. 11018, LNCS Lecture Notes in Computer Science. Springer, Avignon, France
(September 10-14, 2018)
12. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep
convolutional neural networks. In: Advances in Neural Information Processing
Systems, pp. 1097-1105 (2012)
13. Patel, D., Saurahb, U.: Optical flow measurement using Lucas Kanade method.
Int J Comput Appl 61(10), 6-10 (2013)
14. Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O.,
Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A.,
Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, E.: Scikit-learn: Machine
learning in Python. Journal of Machine Learning Research 12, 2825-2830 (2011)
15. Rumelhart, D., Hinton, G., Williams, R.: Learning sequential structure in simple
recurrent networks. Parallel Distributed Processing: Experiments in the
Microstructure of Cognition 1 (1986)
16. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale
image recognition. arXiv preprint arXiv:1409.1556 (2014)
17. Xavier, M., Lalande, A., Walker, P.M., Brunotte, F., Legrand, L.: An adapted
optical flow algorithm for robust quantification of cardiac wall motion from
standard cine-MR examinations. IEEE Transactions on Information Technology in
Biomedicine 16(5), 859-868 (2012). https://doi.org/10.1109/TITB.2012.2204893</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Azorin-Lopez</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saval-Calvo</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fuster-Guillo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garcia-Rodriguez</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Human behaviour recognition based on trajectory analysis using neural networks</article-title>
          .
          <source>In: Proceedings of the International Joint Conference on Neural Networks</source>
          (
          <year>2013</year>
          ). https://doi.org/10.1109/IJCNN.
          <year>2013</year>
          .6706724
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Azorin-Lopez</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saval-Calvo</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fuster-Guillo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garcia-Rodriguez</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cazorla</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Signes-Pont</surname>
          </string-name>
          , M.T.:
          <article-title>Group activity description and recognition based on trajectory analysis and neural networks</article-title>
          .
          <source>In: 2016 International Joint Conference on Neural Networks (IJCNN)</source>
          .
          <source>vol. 2016-Octob</source>
          , pp.
          <volume>1585</volume>
          {
          <issue>1592</issue>
          (
          <year>2016</year>
          ). https://doi.org/10.1109/IJCNN.
          <year>2016</year>
          .7727387
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Chao</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Napolitano</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>A survey of optical flow techniques for robotics navigation applications</article-title>
          .
          <source>Journal of Intelligent and Robotic Systems: Theory and Applications</source>
          <volume>73</volume>
          (
          <issue>1-4</issue>
          ),
          <volume>361</volume>
          {
          <fpage>372</fpage>
          (
          <year>2014</year>
          ). https://doi.org/10.1007/s10846-013-9923-6
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Chollet</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , et al.: Keras. https://keras.io (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>Dicente</given-names>
            <surname>Cid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            ,
            <surname>Liauchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            ,
            <surname>Kovalev</surname>
          </string-name>
          ,
          <string-name>
            <surname>V.</surname>
          </string-name>
          , , Muller, H.:
          <article-title>Overview of ImageCLEFtuberculosis 2018 - detecting multi-drug resistance, classifying tuberculosis type, and assessing severity score</article-title>
          .
          <source>In: CLEF2018 Working Notes. CEUR Workshop Proceedings</source>
          , CEUR-WS.org &lt;http://ceur-ws.
          <source>org&gt;</source>
          , Avignon,
          <source>France (September 10- 14</source>
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Fortun</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bouthemy</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kervrann</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Optical flow modeling and computation: A survey</article-title>
          .
          <source>Computer Vision and Image Understanding</source>
          <volume>134</volume>
          ,
          <issue>1</issue>
          {
          <fpage>21</fpage>
          (
          <year>2015</year>
          ). https://doi.org/10.1016/j.cviu.
          <year>2015</year>
          .
          <volume>02</volume>
          .008
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Goodfellow</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bengio</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Courville</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Deep Learning</article-title>
          . MIT Press (
          <year>2016</year>
          ), http: //www.deeplearningbook.org
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Gowsikhaa</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abirami</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baskaran</surname>
          </string-name>
          , R.:
          <article-title>Automated human behavior analysis from surveillance videos: a survey</article-title>
          .
          <source>Artificial Intelligence Review</source>
          <volume>42</volume>
          (
          <issue>4</issue>
          ),
          <volume>747</volume>
          {
          <fpage>765</fpage>
          (
          <year>2014</year>
          ). https://doi.org/10.1007/s10462-012-9341-3, https://doi.org/ 10.1007/s10462-012-9341-3
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Hata</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nabavi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wells</surname>
            ,
            <given-names>W.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Warfield</surname>
          </string-name>
          , S.K.,
          <string-name>
            <surname>Kikinis</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Black</surname>
            ,
            <given-names>P.M.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jolesz</surname>
            ,
            <given-names>F.A.</given-names>
          </string-name>
          :
          <article-title>Three-dimensional optical flow method for measurement of volumetric brain deformation from intraoperative MR images</article-title>
          .
          <source>Journal of Computer Assisted Tomography</source>
          <volume>24</volume>
          (
          <issue>4</issue>
          ),
          <volume>531</volume>
          {
          <fpage>538</fpage>
          (
          <year>2000</year>
          ). https://doi.org/10.1097/
          <fpage>00004728</fpage>
          -200007000- 00004
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Hochreiter</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schmidhuber</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>Long short-term memory</article-title>
          .
          <source>Neural computation 9(8)</source>
          ,
          <volume>1735</volume>
          {
          <fpage>1780</fpage>
          (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>