<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Understanding Automatic COVID-19 Classification using Chest X-ray images</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Mathematics and Computer Science, University of Calabria</institution>
          ,
          <addr-line>Rende</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The COVID-19 disease caused by the SARS-CoV-2 virus first appeared in Wuhan, China, and is considered a serious disease due to its high permeability and contagiousness. The similarity of the COVID-19 disease with other lung infections, along with its high spreading rate, makes the diagnosis difficult. Solutions based on machine learning techniques achieved relevant results in identifying the correct disease and providing early diagnosis, and can hence provide significant clinical decision support; however, such approaches suffer from the lack of proper means for interpreting the choices made by the models, especially in the case of deep learning ones. With the aim to improve interpretability and explainability in the process of making qualified decisions, we designed a system that allows a partial opening of this black box by means of proper investigations on the rationale behind the decisions. We tested our approach over artificial neural networks trained for multiple classification based on Chest X-ray images; our tool analyzed the internal processes performed by the networks during the classification tasks to identify the most important elements involved in the training process that influence the network's decisions. We report the results of an experimental analysis aimed at assessing the viability of the proposed approach.</p>
      </abstract>
      <kwd-group>
        <kwd>GradCAM</kwd>
        <kwd>Convolutional Neural Networks</kwd>
        <kwd>Chest X-ray images</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The Novel Coronavirus, which reportedly started to infect human individuals at
the end of 2019, rapidly caused a pandemic, as the infection can spread quickly
from individual to individual in the community [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Signs of infection include
respiratory symptoms, fever, cough and dyspnea. In more serious cases, the
infection can cause Pneumonia, severe acute respiratory syndrome, septic shock,
multi-organ failure, and death [
        <xref ref-type="bibr" rid="ref13 ref15">15, 13</xref>
        ].
      </p>
      <p>
        Early and automatic diagnoses are relevant to control the epidemic, paving
the way to timely referral of patients to quarantine, rapid intubation of serious
cases in specialized hospitals, and monitoring of the spread of the disease. Since
the disease heavily affects human lungs, analyzing Chest X-ray images of the
lungs may prove to be a powerful tool for disease investigation. Several methods
have been proposed in the literature in order to perform disease classification
from Chest X-ray images, especially based on deep learning approaches [
        <xref ref-type="bibr" rid="ref29 ref4 ref9">29, 9, 4</xref>
        ]. Notably, in this context, solutions featuring interpretability and
explainability approaches can significantly help at improving disease classification and
providing context-aware assistance and understanding. Indeed, interpreting the
decision-making processes of neural networks can be of great help at
enhancing the diagnostic capabilities and providing direct patient- and process-specific
support to diagnosis and surgical tool detection. However, interpretability and
explainability represent critical points in approaches based on deep learning
models, which have achieved great results in disease classification.
Copyright © 2020 for this paper by its authors. Use permitted under Creative
Commons License Attribution 4.0 International (CC BY 4.0).
      </p>
      <p>
        In this work, we investigate the use of convolutional neural networks (CNNs)
with the aim to perform multiple-disease classification from Chest X-ray images.
The diseases that are a matter of concern for our experiments are COVID-19,
Viral Pneumonia and Streptococcus Pneumonia. Notably, although these diseases
are characterized by pulmonary inflammation caused by different pathogens,
Streptococcus Pneumonia and Viral Pneumonia have clinical symptoms similar
to those of COVID-19 [
        <xref ref-type="bibr" rid="ref24 ref28">24, 28</xref>
        ], such as fever, chills, cough, and dyspnea. The
symptom-based similarity among diseases is a critical factor that could affect a proper
diagnosis and treatment plan. Moreover, we include in our experiments Healthy
patients to learn how they differ from symptomatic patients.
      </p>
      <p>We analyze the CNN-based models to identify the mechanisms and the
motivations steering neural network decisions in the classification task. In particular, we
use gradient visualization techniques to produce coarse localization maps
highlighting the image regions most likely referred to by the model when the
classification decision is taken. The highlighted areas are then used to discover
(i) patterns in Chest X-ray images related to a specific disease, and (ii) the
correlation between these areas and classification accuracy, by analyzing a possible
performance worsening after their removal.</p>
      <p>The remainder of the paper is structured as follows. We first briefly report on
related work in Section 2; in Section 3 we then provide a detailed description of
our approach, which has been assessed via a careful experimental activity,
discussed in Section 4; we analyze and discuss results in Section 5, eventually
drawing our conclusions in Section 6.</p>
    </sec>
    <sec id="sec-2">
      <title>Related work</title>
      <p>In this section we present state-of-the-art methods used to (i) perform disease
classification through Chest X-ray images and (ii) provide interpretability and
explainability of the rationale behind the decisions performed.</p>
      <p>
        Disease Classification. Deep learning-based models recently achieved
promising results in image-based disease classification. These models, such as CNNs [
        <xref ref-type="bibr" rid="ref14 ref20 ref25 ref6">14, 6, 20, 25</xref>
        ], are proven to be appropriate and effective when compared to
conventional methods; indeed, CNNs currently represent the most widely used method
for image processing. Abbas et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] proposed a deep learning approach
(DeTraC) to perform disease classification using X-ray images. The approach was
used to distinguish COVID-19 X-ray images from normal ones, achieving an
accuracy of 95.12%. An improvement in terms of binary classification accuracy
was presented by Ozturk et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. The authors proposed a deep learning model
(DarkCovidNet) for automatic diagnosis of COVID-19 based on Chest X-ray
images. They performed both a binary and a multiclass classification, dealing with
patients with COVID-19, no-findings and Pneumonia; the accuracy achieved
is 98.08% and 87.02%, respectively. Similarly, Wang et al. [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] proposed a
deep learning-based approach (COVID-Net) to detect distinctive abnormalities
in Chest X-ray images among patients with non-COVID-19 viral infections,
bacterial infections, and healthy patients, achieving an overall accuracy of 92.6%.
All these approaches showed limitations related to the low number of image samples
and imprecise localization on the chest region. More accurate localization of
the model's predictions was proposed by Mangal et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and Haghanifar et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
The authors proposed deep learning-based approaches to classify COVID-19
patients from others/normal ones. They also generated saliency maps to show the
classification score obtained during the prediction and to validate the results.
      </p>
      <p>
        Explainability of deep learning models. In the last years, attempts at
understanding neural network decision-making have raised a lot of interest in
the scientific community. Several approaches have been proposed to visualize the
behavior of a CNN by sampling image patches that maximize the activation of
hidden units [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ], and by backpropagation to identify or generate salient image
features [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Other researchers tried to solve this problem by explaining
neural network decisions through informative heatmaps, such as
Gradient-weighted Class Activation Mapping (GradCAM) [
        <xref ref-type="bibr" rid="ref19 ref3">19, 3</xref>
        ], or through layer-wise
relevance propagation [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. However, these methods present some limitations;
indeed, the generated heatmaps are basically qualitative, and not informative
enough to specify which concepts have been detected. An improvement was
provided by semantic explanations from visual representations [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], which
decompose the evidence for an image classification prediction into semantically
interpretable components, each with an identified purpose, a heatmap, and a
ranked contribution.
      </p>
      <p>In this work, we propose the use of a Deep Learning approach to perform
multiple-disease classification using Chest X-ray images. Additionally, we take advantage of
a novel technique for analyzing the internal processes and the decisions performed
by a neural network during the training phase.</p>
    </sec>
    <sec id="sec-3">
      <title>Proposed Approach</title>
      <p>Classification. Considering that other diseases appear similar to COVID-19 Pneumonia,
including other Coronavirus infections and community-acquired Pneumonia such
as Streptococcus, the distinction between these is extremely important and
necessary, especially during a pandemic. Therefore, our purpose is to automatically
identify the "correct" disease in Chest X-ray images.</p>
      <p>Fig. 2: Architecture of the network Inception-v3.</p>
      <p>In order to achieve this goal, we train CNNs to classify patients according to
3 diseases with similar symptoms (i.e., COVID-19, Viral Pneumonia,
Streptococcus Pneumonia) and Healthy patients.</p>
      <p>The herein proposed approach, illustrated in Fig. 1, is based on: (i)
multiple-disease classification using CNNs, and (ii) Visual Explanations using GradCAM
to indicate the discriminative image regions used by the CNN.</p>
      <p>
        In order to classify patients, we used and compared the results of four neural
networks chosen on the basis of the good performance obtained on the
ImageNet data set over several competitions [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. We make use of DenseNet 121,
DenseNet 169, DenseNet 201 and Inception V3 [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ].
      </p>
      <p>
        DenseNet networks [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] are made of dense blocks, as shown in Table 1, where
for each layer the inputs are the feature maps of all the previous layers, with
the aim of improving the information flow, on 224 × 224 input images. More in
detail, for convolutional layers with kernel size 3 × 3, each side of the inputs is
zero-padded by one pixel to keep the feature-map size fixed. The layers between
two contiguous dense blocks are referred to as transition layers for convolution and
pooling, and contain a 1 × 1 convolution and a 2 × 2 average pooling. A 1 × 1
convolution is introduced as a bottleneck layer before each 3 × 3 convolution to
reduce the number of input feature-maps, and thus to improve computational
efficiency. At the end of the last dense block, a global average pooling and a
softmax classifier are applied.
      </p>
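      <p>The dense-connectivity pattern described above can be sketched in a few lines. The following is an illustrative numpy sketch, not the paper's code: a random channel-mixing map stands in for the BN-ReLU-Conv(3 × 3) operation, and the growth rate of 32 is the DenseNet default.</p>

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """Each layer consumes the concatenation of ALL previous feature maps."""
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)  # channels-first: (C, H, W)
        # stand-in for BN-ReLU-Conv(3x3): a random linear map over channels
        w = rng.standard_normal((growth_rate, inp.shape[0]))
        out = np.einsum('oc,chw->ohw', w, inp)
        features.append(out)
    return np.concatenate(features, axis=0)

rng = np.random.default_rng(0)
x = np.zeros((64, 28, 28))                      # 64 input maps, 28x28 spatial size
y = dense_block(x, num_layers=6, growth_rate=32, rng=rng)
print(y.shape)  # (256, 28, 28): 64 input channels + 6 layers x 32 new maps
```

      <p>The channel count grows linearly with depth, which is why the 1 × 1 bottleneck and transition layers mentioned above are needed to keep the computation manageable.</p>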
      <p>The structure of Inception-v3 is shown in Fig. 2. The Inception modules
(Inception A, Inception B and Inception C) are well-designed convolution
modules that can both generate discriminative features and reduce the number of
parameters. Each Inception module is composed of several convolutional layers
and pooling layers in parallel. The network is composed of 3 Inception A
modules, 5 Inception B modules, and 2 Inception C modules that are stacked in
series. The input size used is 224 × 224 and, after the Inception modules and
convolutional layers, the feature map dimensions were 5 × 5 with 2,048
channels. Afterwards, we added 3 fully connected layers to the end of the Inception
modules and, finally, a softmax layer was added as a classifier outputting a
probability for each class; the one with the highest probability was chosen
as the predicted class.</p>
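      <p>The classifier head described above can be sketched as follows. This is a numpy illustration with hypothetical layer widths (the paper does not report the sizes of the three fully connected layers), applied to a globally pooled feature vector:</p>

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())       # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
features = rng.standard_normal(2048)  # e.g. globally pooled 5x5x2048 feature map
widths = [256, 64, 4]                 # hypothetical: three FC layers, 4 output classes

x = features
for i, w_out in enumerate(widths):
    W = rng.standard_normal((w_out, x.size)) * 0.05
    x = W @ x
    if i < len(widths) - 1:
        x = np.maximum(x, 0)          # ReLU on the hidden layers only

probs = softmax(x)                    # probability for each of the 4 classes
pred = int(np.argmax(probs))          # class with the highest probability
print(probs.shape, abs(probs.sum() - 1.0) < 1e-9)  # (4,) True
```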
      <p>Visual Explanations. We used GradCAM to identify visual features in the input that explain the
results achieved during the multiple classification. The overall structure of
GradCAM is shown in Fig. 3. In particular, it uses the gradient information
flowing into the last convolutional layer of the CNN to assign importance values
to each neuron. GradCAM is applied to a trained neural network with fixed
weights. Given a class of interest c, let y^c be the raw output of the neural network,
that is, the value obtained before the application of the softmax used to transform
the raw score into a probability. GradCAM performs the following three steps:</p>
      <p>1. Compute Gradient of y^c with respect to the feature map activations A^k, for any
arbitrary k, of a convolutional layer (i.e., ∂y^c/∂A^k). This gradient value depends on
the input image chosen; indeed, the input image determines both the feature
maps A^k and the final class score y^c that is produced.</p>
      <p>2. Calculate Alphas by averaging the gradients over the width dimension
(indexed by i) and the height dimension (indexed by j) to obtain the neuron
importance weights α_k^c, as follows:</p>
      <p>α_k^c = (1/Z) Σ_i Σ_j ∂y^c/∂A^k_{ij},</p>
      <p>where Z is a constant (i.e., the number of pixels in the activation map); the double
sum is the global average pooling, and the gradients are obtained via backpropagation.</p>
      <p>3. Calculate the Final GradCAM Heatmap by performing a weighted (linear)
combination of the feature map activations A^k, as follows:</p>
      <p>L^c_GradCAM = ReLU(Σ_k α_k^c A^k),</p>
      <p>where α_k^c is a different weight for each k, and ReLU is the Rectified Linear
Unit operation used to emphasize only the positive values and to convert
the negative values into 0.</p>
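      <p>The three steps can be sketched numerically. The following is a minimal numpy illustration assuming the feature maps and the gradients ∂y^c/∂A^k are already available; in a real setting they come from a backward pass through the trained CNN:</p>

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: (K, H, W) arrays, A^k and dy^c/dA^k."""
    Z = activations.shape[1] * activations.shape[2]
    # step 2: global-average-pool the gradients -> one alpha_k^c per channel k
    alphas = gradients.sum(axis=(1, 2)) / Z
    # step 3: weighted combination of the feature maps, then ReLU
    heatmap = np.maximum((alphas[:, None, None] * activations).sum(axis=0), 0.0)
    return heatmap

rng = np.random.default_rng(1)
A = rng.standard_normal((2048, 5, 5))  # e.g. last block: 2048 maps of size 5x5
G = rng.standard_normal((2048, 5, 5))  # gradients w.r.t. those maps
cam = grad_cam(A, G)
print(cam.shape, bool((cam >= 0).all()))  # (5, 5) True
```

      <p>The resulting 5 × 5 map is then upsampled to the input resolution to obtain the coarse localization heatmaps shown in the figures.</p>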
      <p>[Figure: example Chest X-ray images for (b) Viral Pneumonia, (c) Streptococcus Pneumonia, (d) Healthy patients.]</p>
      <p>We describe next the setting of the experimental analysis performed in order to
assess the viability of our approach.
The dataset was split into training (80%) and testing (20%) sets; 20% of the
training set is used as a validation set, in order to monitor the training process
and prevent overfitting.</p>
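      <p>The 80/20 split with a further 20% validation hold-out can be sketched as follows (an illustrative index-shuffling sketch; the paper does not name its tooling):</p>

```python
import numpy as np

def split_indices(n, rng):
    idx = rng.permutation(n)
    n_test = int(n * 0.20)            # 20% held-out test set
    test, rest = idx[:n_test], idx[n_test:]
    n_val = int(len(rest) * 0.20)     # 20% of the training set for validation
    val, train = rest[:n_val], rest[n_val:]
    return train, val, test

rng = np.random.default_rng(42)
train, val, test = split_indices(1000, rng)
print(len(train), len(val), len(test))  # 640 160 200
```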
      <p>All experiments have been performed on a machine equipped with 12 x86_64
Intel(R) Core(TM) CPUs @ 3.50GHz, running GNU/Linux Debian 7 and using
CUDA compilation tools, release 7.5, V7.5.17, on an NVIDIA GM204
(GeForce GTX 970).</p>
      <p>Fine-tuning. For the training phase we performed hyperparameter
optimization. DenseNet 169 was trained with both the Adam and SGD optimizers, and
for each optimizer 7 learning rates were tried. The best performance is obtained
with the following configuration, trained for 300 epochs: Adam optimizer,
learning rate 10^-5, batch size 16, and binary cross-entropy as loss function.</p>
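      <p>The hyperparameter search described above can be sketched as a simple grid; the seven learning rates and the error function below are hypothetical stand-ins, since the paper only reports the winning configuration:</p>

```python
from itertools import product

optimizers = ["adam", "sgd"]
learning_rates = [10 ** -e for e in range(2, 9)]  # 7 hypothetical rates: 1e-2 .. 1e-8

def validation_error(optimizer, lr):
    # stand-in for training DenseNet 169 and evaluating on the validation set;
    # built so the hypothetical error is minimized by Adam at lr = 1e-5
    return abs(lr - 1e-5) + (0.0 if optimizer == "adam" else 0.1)

best = min(product(optimizers, learning_rates),
           key=lambda cfg: validation_error(*cfg))
print(best[0])  # the stand-in error picks the Adam optimizer
```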
      <p>The configuration of the networks was modified in terms of the number of nodes
or levels to optimize the performance. We empirically changed the number of
layers and we trimmed the network size by pruning nodes, to improve computational
performance and identify those nodes which would not noticeably affect network
performance. However, since we performed the experiments using well-known
networks already optimized, we achieved the best performance using the standard
configuration as originally proposed by the respective authors.</p>
      <p>We performed 10-fold cross-validation in order to choose the parameter values
that give the lowest average cross-validation error; experiments were performed
on the very same machine with the same configuration as the other approaches.</p>
      <p>Performance Metrics. We assessed the effectiveness of our approach by measuring the Area Under the
Curve (AUC) and Recall, especially focusing on the latter; indeed, in this
context, the most important thing is to minimize False Negatives (i.e., the disease
is present but is not identified).</p>
      <p>Let TP be a True Positive, TN a True Negative, FN a False Negative,
and FP a False Positive; a ROC curve is a plot of true positive fractions
(Se = TP/(TP+FN)) versus false positive fractions (Sp = 1 − TN/(TN+FP)) obtained by varying
the threshold on the probability map. The closer a curve approaches the top left
corner, the better the performance of the system.</p>
      <p>
        The Area Under the Curve (AUC), which is 1 for a perfect system, is a single
measure to quantify this behavior [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>Recall (Rec = TP/(TP+FN)) considers prediction accuracy among only actual
positives, and expresses how correct our predictions are among all the actually positive cases.</p>
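      <p>These two definitions can be sketched directly. The following is a small numpy illustration on toy score/label vectors (not the paper's data); the AUC is obtained with the trapezoidal rule over the threshold sweep:</p>

```python
import numpy as np

def recall(tp, fn):
    return tp / (tp + fn)                  # Rec = TP / (TP + FN)

def roc_auc(scores, labels):
    """Sweep thresholds over the scores; labels are 0/1 ground truth."""
    order = np.argsort(-scores)            # highest score first
    labels = labels[order]
    tpr = np.cumsum(labels) / labels.sum()            # Se at each threshold
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()  # 1 - Sp at each threshold
    # trapezoidal integration of TPR over FPR
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

print(recall(tp=90, fn=10))                # 0.9
y = np.array([1, 1, 0, 1, 0, 0])
s = np.array([0.9, 0.8, 0.7, 0.6, 0.3, 0.2])
print(round(roc_auc(s, y), 3))             # 0.889
```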
    </sec>
    <sec id="sec-4">
      <title>Results and Discussion</title>
      <p>Table 2 and Table 3 report classification results after 10-fold cross-validation for
all datasets in terms of Recall and AUC, respectively. Even though promising
results are achieved in all DenseNet-based experiments, DenseNet 169 shows the
most efficient architecture: it reports an AUC mean value of 0.95 and a Recall mean
value of 0.90 over all the classes; hence, it was the one selected for the study.</p>
      <p>The herein proposed approach achieves promising results; in particular,
DenseNet 169 achieves the best performance on the COVID-19 dataset (i.e., Recall
mean value: 0.99 and AUC: 0.99), while performance decreases on the Viral Pneumonia
dataset (i.e., Recall mean value: 0.83 and AUC: 0.92) and in classifying Healthy
patients (i.e., Recall mean value: 0.80 and AUC: 0.89).</p>
      <p>Analyzing the results, we see that Viral Pneumonia is often confused with
Streptococcus Pneumonia, due to overlapping imaging characteristics, thus
resulting in a Recall mean value always less than 0.90 in all the experiments
performed. It is worth noting that, in general, the extraction of CT scan images from
published articles, rather than from actual sources, might lessen image quality,
thus affecting the performance of the machine learning model.</p>
      <p>A visual inspection of the GradCAM results confirms the quality of the
model; indeed, our model exhibits strong classification criteria in the Chest
region (see Fig. 5). In particular, red areas refer to the parts where the attention
is strong, while blue areas refer to weaker attention. In general, the warmer the
color, the more important the highlighted features are for the network.</p>
      <p>Moreover, in order to confirm that the identified portions are actually significant,
for each dataset we selected and removed 40% of the highlighted elements;
this threshold was selected empirically after several experiments. A substantial
decrease of Recall (on average around 10%) is shown using COVID-19, Viral
Pneumonia and Streptococcus Pneumonia (i.e., p-value &lt; 0.05 for a paired t-test
computed before and after image cutting); no statistically significant changes are shown
using the dataset of Healthy patients. This result suggests that GradCAM is able
to identify the important elements involved in the training process and,
consequently, that the responsibility for this diminishment lies with the image cutting, by which
we removed the peculiar characteristics of the disease.</p>
      <p>[Figure: (b) Viral Pneumonia, (c) Streptococcus Pneumonia, (d) Healthy patients.]</p>
      <p>In this work we exploit the use of CNNs and visual explanation techniques to
estimate diagnoses using Chest X-ray images and to analyze the internal processes
performed by a neural network during the training phase, with the aim of improving
explainability in the process of making qualified decisions. Basically, we try to
identify the most important regions that influence the network's decisions.</p>
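      <p>The significance check described above can be sketched as a paired t-test over per-fold Recall values before and after removing the highlighted regions. The fold values below are invented for illustration; 2.262 is the standard two-sided 5% critical value of the t distribution with 9 degrees of freedom:</p>

```python
import numpy as np

def paired_t(before, after):
    """t statistic and degrees of freedom for a paired t-test."""
    d = before - after
    n = len(d)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# hypothetical per-fold Recall from 10-fold cross-validation
before = np.array([0.99, 0.98, 0.99, 0.97, 0.99, 0.98, 0.99, 0.99, 0.98, 0.99])
drops  = np.array([0.11, 0.09, 0.10, 0.12, 0.08, 0.10, 0.11, 0.09, 0.10, 0.10])
after  = before - drops                   # ~10% average Recall decrease

t, df = paired_t(before, after)
print(df, t > 2.262)  # 9 True: |t| above the critical value -> p < 0.05
```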
      <p>We fine-tuned the approach by means of accurate experimental activities; in
particular, we worked on four different disease datasets, and compared four different CNNs
for the classification.</p>
      <p>Experimental results show that our proposal is robust and able to identify
specific regions that are crucial in the neural network decision-making process,
thus improving explainability. Indeed, classification accuracy is lower when the
highlighted regions are removed from the input images; this suggests the importance
of these areas in disease classification, and the possibility of considering the set of
identified elements as potential disease markers.</p>
      <p>In contexts where early and accurate medical diagnosis of specific pathologies
is essential, our method shows that visual explanation techniques combined with
machine learning can be used to provide solid disease classifications
and to automatically discover new bio-markers by interpreting network decisions.</p>
      <p>As far as future work is concerned, we plan to investigate misclassification errors
and improve the generalization capability of the model. Our efforts will also
include the interaction with physicians, so that proper medical expertise can be
used to judge and better assess the quality of the regions highlighted by the
proposed approach.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Abbas</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abdelsamea</surname>
            ,
            <given-names>M.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gaber</surname>
            ,
            <given-names>M.M.:</given-names>
          </string-name>
          <article-title>Classi cation of covid-19 in chest x-ray images using detrac deep convolutional neural network</article-title>
          . arXiv preprint arXiv:
          <year>2003</year>
          .
          <volume>13815</volume>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Bach</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Binder</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Montavon</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klauschen</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , Muller,
          <string-name>
            <given-names>K.R.</given-names>
            ,
            <surname>Samek</surname>
          </string-name>
          , W.:
          <article-title>On pixel-wise explanations for non-linear classi er decisions by layer-wise relevance propagation</article-title>
          .
          <source>PloS one 10(7)</source>
          ,
          <year>e0130140</year>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Bruno</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Calimeri</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kitanidis</surname>
            ,
            <given-names>A.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Momi</surname>
          </string-name>
          , E.:
          <article-title>Understanding automatic diagnosis and classi cation processes with data visualization</article-title>
          .
          <source>In: 2020 IEEE International Conference on Human-Machine Systems (ICHMS)</source>
          . pp.
          <volume>1</volume>
          –
          <issue>6</issue>
          .
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Bullock</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cuesta-Lazaro</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Quera-Bofarull</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Xnet: A convolutional neural network (cnn) implementation for medical x-ray image segmentation suitable for small datasets</article-title>
          .
          <source>In: Medical Imaging</source>
          <year>2019</year>
          :
          <article-title>Biomedical Applications in Molecular, Structural, and Functional Imaging</article-title>
          . vol.
          <volume>10953</volume>
          , p.
          <fpage>109531Z</fpage>
          .
          <source>International Society for Optics and Photonics</source>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Cohen</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morrison</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dao</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roth</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Duong</surname>
            ,
            <given-names>T.Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ghassemi</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>Covid19 image data collection: Prospective predictions are the future</article-title>
          . arXiv preprint arXiv:
          <year>2006</year>
          .
          <volume>11988</volume>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Colleoni</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moccia</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Du</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Momi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stoyanov</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Deep learning based robotic tool detection and articulation estimation with spatio-temporal layers</article-title>
          .
          <source>IEEE Robotics and Automation Letters</source>
          <volume>4</volume>
          (
          <issue>3</issue>
          ),
          <volume>2714</volume>
          –
          <fpage>2721</fpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Haghanifar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Majdabadi</surname>
            ,
            <given-names>M.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ko</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Covid-cxnet: Detecting covid-19 in frontal chest x-ray images using deep learning</article-title>
          .
          <source>arXiv preprint arXiv:2006</source>
          .
          <volume>13807</volume>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Van Der Maaten</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weinberger</surname>
            ,
            <given-names>K.Q.</given-names>
          </string-name>
          :
          <article-title>Densely connected convolutional networks</article-title>
          .
          <source>In: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          . pp.
          <volume>4700</volume>
          –
          <issue>4708</issue>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nan</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jin</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Sdfn: Segmentation-based deep fusion network for thoracic disease classification in chest x-ray images</article-title>
          .
          <source>Computerized Medical Imaging and Graphics</source>
          <volume>75</volume>
          ,
          <fpage>66</fpage>
          –
          <lpage>73</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Mahendran</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vedaldi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Understanding deep image representations by inverting them</article-title>
          .
          <source>In: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          . pp.
          <fpage>5188</fpage>
          –
          <lpage>5196</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Mangal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalia</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rajgopal</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rangarajan</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Namboodiri</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Banerjee</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Arora</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Covidaid: Covid-19 detection using chest x-ray</article-title>
          . arXiv preprint arXiv:2004.09803
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Marín</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aquino</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gegundez-Arias</surname>
            ,
            <given-names>M.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bravo</surname>
            ,
            <given-names>J.M.</given-names>
          </string-name>
          :
          <article-title>A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features</article-title>
          .
          <source>IEEE Transactions on Medical Imaging</source>
          <volume>30</volume>
          (
          <issue>1</issue>
          ),
          <fpage>146</fpage>
          –
          <lpage>158</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>McKeever</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Here's what coronavirus does to the body</article-title>
          .
          <source>National Geographic</source>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Moccia</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Banali</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martini</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Muscogiuri</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pontone</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pepi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Caiani</surname>
            ,
            <given-names>E.G.</given-names>
          </string-name>
          :
          <article-title>Development and testing of a deep learning-based strategy for scar segmentation on cmr-lge images</article-title>
          .
          <source>Magnetic Resonance Materials in Physics, Biology and Medicine</source>
          <volume>32</volume>
          (
          <issue>2</issue>
          ),
          <fpage>187</fpage>
          –
          <lpage>195</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Organization</surname>
            ,
            <given-names>W.H.</given-names>
          </string-name>
          , et al.:
          <article-title>Health topics: coronavirus</article-title>
          . Coronavirus: symptoms. World Health Organization,
          <year>2020a</year>
          . Available at: https://www.who.int/health-topics/coronavirus#tab=tab_3.
          <source>Accessed on 7</source>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16. Ozturk, S., Ozkaya, U.,
          <string-name>
            <surname>Barstugan</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Classification of coronavirus (covid-19) from x-ray and ct images using shrunken features</article-title>
          .
          <source>International Journal of Imaging Systems and Technology</source>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Ozturk</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Talo</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yildirim</surname>
            ,
            <given-names>E.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baloglu</surname>
            ,
            <given-names>U.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yildirim</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Acharya</surname>
            ,
            <given-names>U.R.</given-names>
          </string-name>
          :
          <article-title>Automated detection of covid-19 cases using deep neural networks with x-ray images</article-title>
          .
          <source>Computers in Biology and Medicine</source>
          p.
          <fpage>103792</fpage>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Rosebrock</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Imagenet: Vggnet, resnet, inception, and xception with keras</article-title>
          .
          <source>Mars</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Selvaraju</surname>
            ,
            <given-names>R.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cogswell</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Das</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vedantam</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parikh</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Batra</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Grad-cam: Visual explanations from deep networks via gradient-based localization</article-title>
          .
          <source>In: Proceedings of the IEEE international conference on computer vision</source>
          . pp.
          <fpage>618</fpage>
          –
          <lpage>626</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Spadea</surname>
            ,
            <given-names>M.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pileggi</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zaffino</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Salome</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Catana</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Izquierdo-Garcia</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Amato</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Seco</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Deep convolution neural network (dcnn) multiplane approach to synthetic ct generation from mr images – application in brain proton therapy</article-title>
          .
          <source>International Journal of Radiation Oncology* Biology* Physics</source>
          <volume>105</volume>
          (
          <issue>3</issue>
          ),
          <fpage>495</fpage>
          –
          <lpage>503</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Szegedy</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vanhoucke</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ioffe</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shlens</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wojna</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>Rethinking the inception architecture for computer vision</article-title>
          .
          <source>In: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          . pp.
          <fpage>2818</fpage>
          –
          <lpage>2826</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wong</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images</article-title>
          . arXiv preprint arXiv:2003.09871
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peng</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lu</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bagheri</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Summers</surname>
            ,
            <given-names>R.M.</given-names>
          </string-name>
          :
          <article-title>Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases</article-title>
          .
          <source>In: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          . pp.
          <fpage>2097</fpage>
          –
          <lpage>2106</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McGoogan</surname>
            ,
            <given-names>J.M.</given-names>
          </string-name>
          :
          <article-title>Characteristics of and important lessons from the coronavirus disease 2019 (covid-19) outbreak in china: summary of a report of 72 314 cases from the chinese center for disease control and prevention</article-title>
          .
          <source>JAMA</source>
          <volume>323</volume>
          (
          <issue>13</issue>
          ),
          <fpage>1239</fpage>
          –
          <lpage>1242</lpage>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Zaffino</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pernelle</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mastmeyer</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mehrtash</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kikinis</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kapur</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Spadea</surname>
            ,
            <given-names>M.F.</given-names>
          </string-name>
          :
          <article-title>Fully automatic catheter segmentation in mri with 3d convolutional neural networks: application to mri-guided gynecologic brachytherapy</article-title>
          .
          <source>Physics in Medicine &amp; Biology</source>
          <volume>64</volume>
          (
          <issue>16</issue>
          ),
          <fpage>165008</fpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Zeiler</surname>
            ,
            <given-names>M.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fergus</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Visualizing and understanding convolutional networks</article-title>
          .
          <source>In: European conference on computer vision</source>
          . pp.
          <fpage>818</fpage>
          –
          <lpage>833</lpage>
          . Springer (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bau</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Torralba</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Interpretable basis decomposition for visual explanation</article-title>
          .
          <source>In: Proceedings of the European Conference on Computer Vision (ECCV)</source>
          . pp.
          <fpage>119</fpage>
          –
          <lpage>134</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liao</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cao</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ling</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Long</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          , et al.:
          <article-title>Differential diagnosis between the coronavirus disease 2019 and streptococcus pneumoniae pneumonia by thin-slice ct features</article-title>
          .
          <source>Clinical Imaging</source>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Zotin</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hamad</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Simonov</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kurako</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Lung boundary detection for chest x-ray images classification based on glcm and probabilistic neural networks</article-title>
          .
          <source>Procedia Computer Science</source>
          <volume>159</volume>
          ,
          <fpage>1439</fpage>
          –
          <lpage>1448</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>