     Using improved optical flow model to detect
                   Tuberculosis

 Fernando Llopis, Andrés Fuster-Guilló, Jorge Azorı́n-López, and Irene Llopis

University of Alicante, Carretera San Vicente del Raspeig s/n, 03690 San Vicente
          del Raspeig - Alicante, Spain (informacio@ua.es, http://www.ua.es)
     {fernando.llopis, fuster, jazorin}@ua.es, ilq2@alu.ua.es



        Abstract. In 2017, 10 million people suffered from tuberculosis and 1.3
        million deaths were recorded worldwide. Nowadays, a quarter of the
        world's population is estimated to carry a latent infection.
        Early detection of tuberculosis can save many lives. There are many
        methods to detect this disease, but one of the cheapest and quickest is
        the analysis of CT images of the chest. This is one of the objectives of
        the ImageCLEF Tuberculosis 2019 task, and it is the one studied by the
        University of Alicante's research group in this edition. Last year we
        used two approaches: one based exclusively on the use of Deep Learning
        techniques on a sequence of 2D images extracted from a 3D tomography,
        and another based on the use of Optical Flow to convert the 3D
        tomography into a motion representation from which to calculate the
        ADV (a descriptor previously proposed by the group). This descriptor
        synthesizes the information of a sequence into a single image. This
        year we have tried to improve the results of the second model. This
        article presents the experiments carried out and the results obtained
        within the task.

        Keywords: Tuberculosis · Optical Flow · Activity Description · Deep
        Learning.


1     Introduction
Tuberculosis is a disease caused by the bacterium Mycobacterium tuberculosis,
also known as Koch's bacillus. The main organs affected are the lungs, but we
can also find conditions in the kidneys, spine, and brain. It is one of the
deadliest diseases in the world:
 – In 2017, 10 million people were affected and 1.3 million deaths were recorded
   worldwide. A quarter of the world's population is estimated to carry a latent
   infection.
 – It causes more deaths than malaria and AIDS combined.
    Copyright © 2019 for this paper by its authors. Use permitted under Creative Com-
    mons License Attribution 4.0 International (CC BY 4.0). CLEF 2019, 9-12 Septem-
    ber 2019, Lugano, Switzerland.

People who have symptoms (even if they have a negative test result) or a positive
TB test result should be screened for tuberculosis. There are two types of tests
to find out whether a person has been infected with TB bacteria:
 – The tuberculin skin test: a small amount of tuberculin is injected into the
   lower arm and, after 48-72 hours, the patient must return so that medical
   personnel can measure the size of the raised, hardened, or swollen area.
 – Blood tests.
In the case of a positive result, further tests need to be performed, since the
tests mentioned above cannot confirm whether a person has a latent tuberculosis
infection or active tuberculosis disease. Other diagnostic methods are used for
this purpose:
 – Medical history. It is important to take into account:
    1. History of exposure to tuberculosis.
    2. Demographic factors such as country of origin, age, race or occupation,
       as these may increase the risk of exposure to the disease.
    3. Other conditions of the patient, such as HIV infection or diabetes.
 – Physical examination. It provides information about the patient's condition
   and other factors that may influence tuberculosis treatment.
 – Diagnostic microbiology (bacilloscopy). Several samples from a sputum smear
   or other specimens are cultured to test for the presence of acid-fast bacilli
   (AFB), which must be identified as M. tuberculosis. A positive result would
   confirm the diagnosis.
 – Anteroposterior chest X-ray. It is used to detect abnormalities in the chest:
   lesions that can appear anywhere in the lungs, with different sizes, shapes,
   densities and degrees of cavitation, the apical lesion being the most common.
   Although this test cannot provide a definitive diagnosis, it is used to rule
   out the possibility of pulmonary tuberculosis in a person who has had a
   positive reaction to the tuberculin skin test or blood test. Chest imaging is
   considered fundamental in the diagnosis, so we will focus on this kind of
   test later, processing images from different scans to determine whether a
   patient has tuberculosis and, if so, which type.
    Computers can support the automatic detection of patients with tuberculosis.
Along these lines, CLEF (Conference and Labs of the Evaluation Forum) has
developed several tasks within this field.
    CLEF is a series of evaluation campaigns carried out since 2000, focusing on
the systematic evaluation of information access through various tasks. Most of
the tasks are related to image classification and annotation (ImageCLEF) [8].
ImageCLEF is the name given to the tasks that involve image processing. They
were first proposed in 2003, and medical tasks have been added every year since
2004. In 2017 a specific task for the detection of tuberculosis, called
ImageCLEF Tuberculosis, was proposed, with the participation of 9 groups.
    This year ImageCLEF Tuberculosis [4] includes two independent subtasks.
1. Subtask 1: Severity scoring.
   This subtask is aimed at assessing the TB severity score. The severity score
   is a cumulative score of the severity of a TB case assigned by a medical
   doctor. Originally, the score varied from 1 ("critical/very bad") to 5
   ("very good").
2. Subtask 2: CT report.
   In this subtask the participants have to generate an automatic report based
   on the CT image. This report should include the following information in
   binary form (0 or 1): left lung affected, right lung affected, presence of
   calcifications, presence of caverns, pleurisy, lung capacity decrease (a
   minimal sketch of such a report is shown right after this list).
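
For illustration only, the following minimal Python sketch shows one possible way
to represent such a binary report; the class and field names are our own
assumptions and do not correspond to the official submission format of the task.

from dataclasses import dataclass, asdict

@dataclass
class CTReport:
    """Hypothetical per-patient binary CT report (field names are ours)."""
    left_lung_affected: int = 0       # 0 or 1
    right_lung_affected: int = 0      # 0 or 1
    calcifications: int = 0           # 0 or 1
    caverns: int = 0                  # 0 or 1
    pleurisy: int = 0                 # 0 or 1
    lung_capacity_decrease: int = 0   # 0 or 1

# Example: report for a scan showing a cavern in the right lung
print(asdict(CTReport(right_lung_affected=1, caverns=1)))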

Last year we tested deep learning and Optical Flow models [9]. In this second
participation, our objective was to improve the Optical Flow model we used last
year by exploiting information from the three axes. We have tested the models
developed, with slight variations, on the two subtasks.
    This document is structured as follows: in Section 2 we present the
architecture of the Optical Flow based model used. In Section 3 we present the
official results of the experiments, and Section 4 summarizes the document and
offers a series of proposals for future work.


2   Our approaches to the solution

In this section we propose a combined method, based on optical flow and a
characterization method called ADV, to deal with the classification of chest CT
scans affected by different types of tuberculosis. The key point of this method
is the interpretation of the set of cross-sectional chest images provided by the
CT scan not as a volume but as a sequence of video images. We can then extract
movement descriptors capable of classifying tuberculosis affections by analysing
the deformations or movements produced in these video sequences.
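
As a minimal sketch of this re-interpretation (assuming the CT volume is
available as a NumPy array with axes ordered (Z, Y, X); this axis convention and
the function name are our own assumptions, not part of the task data format),
the volume can be re-sliced into three video-like sequences, one per plane:

import numpy as np

def axis_sequences(volume: np.ndarray) -> dict:
    """Re-slice a 3D CT volume into three 'video' sequences, one per plane.
    Axis order (Z, Y, X) is assumed for the input array."""
    return {
        "XY": [volume[z, :, :] for z in range(volume.shape[0])],  # axial sections
        "XZ": [volume[:, y, :] for y in range(volume.shape[1])],
        "YZ": [volume[:, :, x] for x in range(volume.shape[2])],
    }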
    The concept of optical flow refers to the estimation of the displacements of
intensity patterns. This concept has been extensively used in computer vision in
different application domains: robot or vehicle navigation, car driving, video
surveillance or facial expression analysis [5]. In the biomedical context,
optical flow has been used to analyse organ deformations [7,11]. Different
methods to obtain the optical flow can be found in the literature [3]. One of
the most used methods to estimate the motion at each pixel is Lucas-Kanade [10].
In this work we use the Lucas-Kanade method to extract the optical flow by
comparing consecutive images of the sequences. Nevertheless, we need not only
to estimate the motion but also to describe it.
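
The following sketch shows how the Lucas-Kanade flow between two consecutive
slices could be obtained with OpenCV; it is an illustrative approximation under
our own assumptions (uint8 grayscale slices, sparse corner tracking, illustrative
parameter values) rather than the exact configuration used in our runs.

import cv2
import numpy as np

def slice_pair_flow(prev_slice: np.ndarray, next_slice: np.ndarray):
    """Sparse Lucas-Kanade optical flow between two consecutive CT slices
    (expected as uint8 grayscale arrays). Returns the tracked point positions
    and their displacement vectors."""
    # Detect corner-like intensity patterns on the first slice
    prev_pts = cv2.goodFeaturesToTrack(prev_slice, maxCorners=500,
                                       qualityLevel=0.01, minDistance=5)
    if prev_pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    # Track them into the next slice with pyramidal Lucas-Kanade
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_slice, next_slice,
                                                   prev_pts, None,
                                                   winSize=(15, 15), maxLevel=2)
    ok = status.ravel() == 1
    prev_pts = prev_pts[ok].reshape(-1, 2)
    next_pts = next_pts[ok].reshape(-1, 2)
    return prev_pts, next_pts - prev_pts

# Flow over a whole sequence of slices:
# flows = [slice_pair_flow(a, b) for a, b in zip(slices[:-1], slices[1:])]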
    To describe motion there are several methods used in different computer
vision contexts, such as human behaviour recognition [6]. A successful method to
describe human behaviour based on trajectory analysis is presented in [1]. That
paper proposes a description vector called the Activity Description Vector
(ADV), tested in several contexts [2]. In summary, the ADV describes the
activity in an image sequence by counting, for each region of the image, the
movements produced in the four directions of the 2D space. A detailed
description of the method can be found in [1].
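
To make the descriptor concrete, the sketch below gives a simplified
re-implementation of an ADV-like accumulation for sparse optical flow. It
follows the 3x3x5 structure described in this paper but is our own
approximation, not the exact formulation of [1]: in particular, the
direction-change count is simplified to changes of the dominant direction of
each cell between consecutive frames, and the normalization is folded into the
last line.

import numpy as np

RIGHT, LEFT, UP, DOWN, CHANGES = 0, 1, 2, 3, 4

def adv_descriptor(flow_per_frame, img_shape, grid=3):
    """Accumulate a (grid x grid x 5) ADV-like descriptor from a sequence of
    sparse optical-flow estimates.

    flow_per_frame : list of (points, displacements) pairs, one per consecutive
                     slice pair; points and displacements are (N, 2) arrays.
    img_shape      : (height, width) of the slices.
    """
    h, w = img_shape
    adv = np.zeros((grid, grid, 5))
    prev_dominant = np.full((grid, grid), -1)        # dominant direction so far

    for points, disp in flow_per_frame:
        frame_counts = np.zeros((grid, grid, 4))
        for (x, y), (dx, dy) in zip(points, disp):
            gx = min(int(x * grid / w), grid - 1)    # grid cell of the point
            gy = min(int(y * grid / h), grid - 1)
            if abs(dx) >= abs(dy):                   # dominant axis of motion
                frame_counts[gy, gx, RIGHT if dx >= 0 else LEFT] += 1
            else:
                frame_counts[gy, gx, DOWN if dy >= 0 else UP] += 1
        adv[:, :, :4] += frame_counts

        # Fifth component: count cells whose dominant direction changed
        dominant = frame_counts.argmax(axis=2)
        active = frame_counts.sum(axis=2) > 0
        changed = active & (prev_dominant >= 0) & (dominant != prev_dominant)
        adv[:, :, CHANGES] += changed
        prev_dominant = np.where(active, dominant, prev_dominant)

    # Simple normalization of the accumulated descriptor
    return (adv / adv.sum()).ravel() if adv.sum() > 0 else adv.ravel()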
    In this paper we propose the use of the ADV to describe the motion in the
optical flow obtained from the sequences of cross-sectional chest images
provided by the CT scan.

                    Fig. 1. Optical flow plus ADV process stages


    In the first stage, a transformation of the cross-sectional chest images
provided by the CT scan is performed to convert them into three video sequences.
Each video sequence corresponds to the sections of the CT scan volume along one
axis: the XY, XZ and YZ axes. The second stage calculates the optical flow of
the video sequences for each axis using the Lucas-Kanade method. The third stage
calculates the activity description vector ADV (3x3x5) independently for each
optical flow extracted from the sections, accumulating within each 3x3 region of
the image the displacements of the optical flow in the four directions of the 2D
space (right, left, up, down). The fifth component of the ADV accumulates the
frequency of direction changes. In the fourth stage a normalization of the ADV
vector is performed. The fifth stage uses the normalized ADV vector as the input
of a generic classifier to evaluate the results; in this paper, the SVM and LDA
classifiers are used. Finally, the last stage ensembles the individual
classification results for each axis into a single result. The results can be
combined using the statistical mode (the most voted label) or using an SVM
classifier to provide a boosting-based combination. In addition, some results
for the combination of the different classification architectures have been
provided as a multiclassifier (MC). This method uses the combination of the
individual SVM and LDA classifiers for each axis, together with the combination
of the ensemble layer (mode or SVM), to provide a meta-classifier that combines
all the results into a single label.
    Figure 1 summarizes the successive stages of the process for extracting the
activity descriptors (optical flow + ADV) that will be the input of a classifier.
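
Finally, as a compressed sketch of the last two stages (assuming a hypothetical
features_by_axis dictionary mapping each axis to its matrix of normalized ADV
vectors, and using scikit-learn classifiers as stand-ins for the SVM/LDA models
actually trained for the submitted runs), one classifier is fitted per axis and
the three predictions are combined by the mode:

import numpy as np
from scipy import stats
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_per_axis(features_by_axis: dict, labels, use_lda: bool = False) -> dict:
    """Stage 5: fit one classifier per axis ('XY', 'XZ', 'YZ') on its
    (n_patients, 45) matrix of normalized ADV vectors."""
    make = LinearDiscriminantAnalysis if use_lda else SVC
    return {axis: make().fit(X, labels) for axis, X in features_by_axis.items()}

def predict_by_mode(models: dict, features_by_axis: dict) -> np.ndarray:
    """Stage 6: ensemble the per-axis predictions with the statistical mode
    (the most voted label); training an SVM on the votes instead gives the
    boosting-based combination mentioned above."""
    votes = np.array([models[axis].predict(features_by_axis[axis])
                      for axis in models])
    return stats.mode(votes, axis=0).mode.ravel()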


3     Results

3.1   Task 1

As can be seen in Table 1, the approach that learns the predictions from the ADV
calculated on the sequence of slices with the SVM classifier, and combines them
by the mode, obtains our best results. Using the LDA as classifier produces very
similar results. Finally, the combination of the different classifiers and
combination strategies (SVR-MC) also yields significant results, but at the cost
of increasing the complexity of the prediction.


      Table 1. Results of the University of Alicante runs vs. the best result at Subtask 1

                   Run                     AUC ACC Rank
                   UIIP BioMed             0.7877 0.7179 1
                   SVR-SVM-axis-mode-4.txt 0.7013 0.7009 12
                   SVR-MC                  0.7003 0.7009 14
                   SVR-LDA-axis-mode-4.txt 0.6842 0.6838 18
                   SVR-SVM-axis-svm-4.txt 0.6761 0.6752 20
                   SVR-LDA-axis-svm-4.txt 0.6499 0.6496 23




3.2   Task 2


      Table 2. Results of the University of Alicante runs vs. the best result at Subtask 2

                       Run               AUC ACC Rank
                       UIIP BioMed       0.7968 0.6860 1
                       svm-axis-svm.txt 0.6190 0.5366 15
                       MC                0.6104 0.5250 16
                       svm-axis-mode.txt 0.6043 0.5340 18
                       lda-axis-mode.txt 0.5975 0.4860 20
                       lda-axis-svm.txt 0.5787 0.4851 22
    In the case of the second task (see the results in Table 2), the best
results are obtained using the SVM both as the classifier per axis and for
combining the different predictions. Again, the MC is close to our best result
but increases the complexity of the model. Finally, the LDA classifier produces
poor results, very far from those of UIIP BioMed.
    To sum up, the results obtained are very promising for the first task. There
are no large differences between the classification methods used, and the ADV
looks like a model that can offer acceptable results. However, for the second
task, more research should be done on the ADV to get closer to UIIP BioMed. In
future editions, we will combine the use of the ADV with deep learning
techniques.


4   Conclusions and future work
Early detection of tuberculosis is a major social challenge, given the
devastating effects of the disease. As the organizers state, "we have to work
towards methods that allow a correct detection of the disease that kills
thousands and thousands of people". In this paper we have proposed an approach
based on Optical Flow to convert the 3D tomography into a motion representation
from which to calculate the ADV (a descriptor previously proposed by the group).
This year we used the information from the three axes and improved last year's
system. The experiments carried out and the results obtained allow us to confirm
the interest of this line of research and encourage us to continue making
improvements to the proposed model.


Acknowledgements
This research work has been partially funded by the University of Alicante
(Spain), the Generalitat Valenciana and the Spanish Government through the
projects PROMETEU/2018/089, HUMANO (RTI2018-094653-B-C22) and INTEGER:
Intelligent Text Generation (RTI2018-094649-B-I00).


References
 1. Azorin-Lopez, J., Saval-Calvo, M., Fuster-Guillo, A., Garcia-Rodriguez, J.: Hu-
    man behaviour recognition based on trajectory analysis using neural networks.
    In: Proceedings of the International Joint Conference on Neural Networks (2013).
    https://doi.org/10.1109/IJCNN.2013.6706724
 2. Azorin-Lopez, J., Saval-Calvo, M., Fuster-Guillo, A., Garcia-Rodriguez, J., Ca-
    zorla, M., Signes-Pont, M.T.: Group activity description and recognition based
    on trajectory analysis and neural networks. In: 2016 International Joint Con-
    ference on Neural Networks (IJCNN). pp. 1585–1592 (2016).
    https://doi.org/10.1109/IJCNN.2016.7727387
 3. Chao, H., Gu, Y., Napolitano, M.: A survey of optical flow techniques for robotics
    navigation applications. Journal of Intelligent and Robotic Systems: Theory and
    Applications 73(1-4), 361–372 (2014). https://doi.org/10.1007/s10846-013-9923-6
 4. Dicente Cid, Y., Liauchuk, V., Klimuk, D., Tarasau, A., Kovalev, V., Müller, H.:
    Overview of ImageCLEFtuberculosis 2019 - automatic ct-based report genera-
    tion and tuberculosis severity assessment. In: CLEF2019 Working Notes. CEUR
    Workshop Proceedings (ISSN 1613-0073), CEUR-WS.org, Lugano, Switzerland (September 9-12, 2019)
 5. Fortun, D., Bouthemy, P., Kervrann, C.: Optical flow modeling and computa-
    tion: A survey. Computer Vision and Image Understanding 134, 1–21 (2015).
    https://doi.org/10.1016/j.cviu.2015.02.008
 6. Gowsikhaa, D., Abirami, S., Baskaran, R.: Automated human behavior anal-
    ysis from surveillance videos: a survey. Artificial Intelligence Review 42(4),
    747–765 (2014). https://doi.org/10.1007/s10462-012-9341-3
 7. Hata, N., Nabavi, A., Wells, W.M., Warfield, S.K., Kikinis, R., Black, P.M.L.,
    Jolesz, F.A.: Three-dimensional optical flow method for measurement of volumetric
    brain deformation from intraoperative MR images. Journal of Computer Assisted
    Tomography 24(4), 531–538 (2000). https://doi.org/10.1097/00004728-200007000-
    00004
 8. Ionescu, B., Müller, H., Péteri, R., Cid, Y.D., Liauchuk, V., Kovalev, V., Klimuk,
    D., Tarasau, A., Abacha, A.B., Hasan, S.A., Datla, V., Liu, J., Demner-Fushman,
    D., Dang-Nguyen, D.T., Piras, L., Riegler, M., Tran, M.T., Lux, M., Gurrin, C.,
    Pelka, O., Friedrich, C.M., de Herrera, A.G.S., Garcia, N., Kavallieratou, E., del
    Blanco, C.R., Rodrı́guez, C.C., Vasillopoulos, N., Karampidis, K., Chamberlain,
    J., Clark, A., Campello, A.: ImageCLEF 2019: Multimedia retrieval in medicine,
    lifelogging, security and nature. In: Experimental IR Meets Multilinguality, Mul-
    timodality, and Interaction. Proceedings of the 10th International Conference of
    the CLEF Association (CLEF 2019), LNCS Lecture Notes in Computer Science,
    Springer, Lugano, Switzerland (September 9-12 2019)
 9. Pascual, F.L., López, J.A., Rico-Juan, J.R., Guilló, A.F., Llopis, I.: Tuberculosis
    detection using optical flow and the activity description vector. In: Working Notes
    of CLEF 2018 - Conference and Labs of the Evaluation Forum, Avignon, France,
    September 10-14, 2018. (2018), http://ceur-ws.org/Vol-2125/paper_128.pdf
10. Patel, D., Saurahb, U.: Optical flow measurement using Lucas Kanade method.
    Int J Comput Appl 61(10), 6–10 (2013)
11. Xavier, M., Lalande, A., Walker, P.M., Brunotte, F., Legrand, L.: An adapted
    optical flow algorithm for robust quantification of cardiac wall motion from stan-
    dard cine-MR examinations. IEEE Transactions on Information Technology in
    Biomedicine 16(5), 859–868 (2012). https://doi.org/10.1109/TITB.2012.2204893