<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Considerations for Applying Logical Reasoning to Explain Neural Network Outputs</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Federico Maria Cau</string-name>
          <email>federicom.cau@unica.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lucio Davide Spano</string-name>
          <email>davide.spano@unica.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nava Tintarev</string-name>
          <email>n.tintarev@maastrichtuniversity.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Maastricht University</institution>
          ,
          <addr-line>DKE, Maastricht</addr-line>
          ,
          <country country="NL">the Netherlands</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Cagliari, Department of Mathematics and Computer Science</institution>
          ,
          <addr-line>Cagliari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>We discuss the impact of presenting explanations to people for Artificial Intelligence (AI) decisions powered by Neural Networks, according to three types of logical reasoning (inductive, deductive, and abductive). We start from examples in the existing literature on explaining artificial neural networks. We see that abductive reasoning is (unintentionally) the most commonly used default in user testing for comparing the quality of explanation techniques. We discuss whether this may be because this reasoning type balances the technical challenges of generating the explanations and the effectiveness of the explanations. Also, by illustrating how the original (abductive) explanation can be converted into the remaining two reasoning types, we identify considerations needed to support these kinds of transformations.</p>
      </abstract>
      <kwd-group>
<kwd>Explainable User Interfaces</kwd>
        <kwd>XAI</kwd>
        <kwd>Reasoning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
        In the last decade, eXplainable AI (XAI) research has made great advances,
introducing new explanation techniques like Grad-CAM [18], SHAP [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], LRP
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], LIME [17], LORE [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], RETAIN [
        <xref ref-type="bibr" rid="ref6">6</xref>
], DeepLIFT [19], among others. However, much of
this previous work does not consider which kind of logical reasoning is presented
to users, or how this interacts with the characteristics of the task, much less with individual
differences between users. Given the possible implications of the reasoning type for
the effectiveness of the XAI system, we inspect the transformation from one kind
of reasoning to another. We do this intending to draw possible considerations for
selecting between the types of reasoning, starting with neural network models.
While the categorization of the reasoning types applies to explanations for other
probabilistic models, the available explanation techniques differ.
      </p>
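      <p>To make concrete the kind of output such techniques produce, the following minimal sketch shows how a feature-attribution explanation could be obtained with LIME for a text classifier; the pipeline object and class names are illustrative assumptions, not taken from any cited study.</p>
      <preformat>
# Minimal sketch: obtaining a feature-attribution explanation with LIME.
# Assumes a trained scikit-learn text pipeline exposing predict_proba;
# the pipeline and class names are illustrative only.
from lime.lime_text import LimeTextExplainer

explainer = LimeTextExplainer(class_names=["negative", "positive"])

def explain(text, pipeline, num_features=6):
    # LIME perturbs the input text and fits a local surrogate model
    # around the prediction, yielding per-word weights.
    exp = explainer.explain_instance(text, pipeline.predict_proba,
                                     num_features=num_features)
    return exp.as_list()  # [(word, weight), ...]
      </preformat>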
      <p>The paper is organised as follows: in Section 2 we provide background
on XAI taxonomies, evaluation tasks, and reasoning types; in Section 3 we
investigate the reasoning types; in Section 4 we establish guidelines for reasoning
transformation; in Section 5 we conclude the paper and discuss possible ideas
for future work.</p>
      <p>Copyright © 2020 for this paper by its authors. Use permitted under Creative
Commons License Attribution 4.0 International (CC BY 4.0).</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>In this section, we explore three topics: the first concerns the existing XAI
taxonomies, used to catalogue state-of-the-art techniques that explain neural
networks. After that, we discuss the task types with which explanation
techniques are proposed and their importance for evaluation by humans. Finally,
we define the reasoning types and their relationship with XAI explanations.</p>
      <sec id="sec-2-1">
        <title>Explanation Taxonomies</title>
        <p>
          There have been articles that have categorized explanation methods for neural
networks. Among them, [
          <xref ref-type="bibr" rid="ref12 ref3">24, 3, 12, 20</xref>
          ] were very useful to lay the foundations of
our research: they describe a comprehensive taxonomy of interpretability
methods for Deep Neural Networks (DNNs), including goals, properties and
architecture, together with guiding principles for their safety and trustworthiness.
Furthermore, other surveys go beyond the analysis of neural network models
and help us to expand our knowledge of explanation methods and the models
used [
          <xref ref-type="bibr" rid="ref11 ref2">2, 11</xref>
          ]. However, two surveys also focus on the impact of
explanations on users [14, 22]. The former supplies a categorization of design
goals for interpretable algorithms considering different XAI user groups. The
latter introduces a conceptual framework explaining how human reasoning
processes inform XAI techniques, which we explore further in the next sections.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Types of Tasks</title>
        <p>
          In addition to determining which XAI techniques to use, another key step
is to identify what type of task the user will accomplish. We started studying
task types from articles [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] and [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], following the distinction present in the latter
and taking into account two types of tasks, proxy and real. In studies that use
proxy tasks, the user mainly evaluates how well they perceive the AI's
explanations and what it has learned, focusing on the AI rather than on the actual goals
users have in interacting with the system [16, 25, 21]. Conversely, studies that
use real tasks evaluate the cooperation between users and the AI: the user has a
primary role in the decision to make and can decide whether or not to use the AI's
advice to complete the task [
          <xref ref-type="bibr" rid="ref4 ref8">8, 4, 23</xref>
          ]. The paper in [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] also criticises the current
evaluation methodology of XAI based on proxy tasks, demonstrating that conclusions
drawn from them may not reflect the usage of the system on real tasks. Given this
finding, we consider real tasks in the transformations explained in Section 4.
        </p>
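        <p>As a schematic sketch of this distinction (the question wording and record structure below are our own illustrative assumptions, not taken from the cited studies), a proxy-task trial asks the user to reason about the AI, whereas a real-task trial asks the user to decide, with the AI's advice available but optional:</p>
        <preformat>
# Illustrative sketch of the proxy vs. real task distinction;
# the questions and the record structure are assumptions for exposition.
def proxy_trial(instance, ai_prediction, explanation):
    # Proxy task: the user reasons about the AI itself.
    return {
        "question": "What will the AI predict for this instance?",
        "shown": {"instance": instance, "explanation": explanation},
        "ground_truth": ai_prediction,  # success = simulating the AI
    }

def real_trial(instance, ai_prediction, explanation, true_label):
    # Real task: the user makes the actual decision and may
    # choose whether to rely on the AI's advice.
    return {
        "question": "What is your decision for this instance?",
        "shown": {"instance": instance, "ai_advice": ai_prediction,
                  "explanation": explanation},
        "ground_truth": true_label,  # success = deciding correctly
    }
        </preformat>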
      </sec>
      <sec id="sec-2-3">
        <title>Types of Reasoning</title>
        <p>
          During the evaluation phase, where the interaction between the user and the AI takes
place, one fundamental factor comes into play: the reasoning type. We started
analyzing this subject from the article [22], cited above. The authors highlight
that the AI's role is to facilitate the user's connection with its decisions, starting
from the reasoning expressed through the AI's explanations. Accordingly, a
reasonable choice is to deeply explore a subset of the reasoning types, namely
the logical ones: inductive, deductive, and abductive. Here, we consider Peirce's
syllogistic theory [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], and note that we can translate between these by exchanging
the conclusion (or result), the major premise (the rule), and the minor premise
(the cause). We will investigate these reasoning types in the next section.
        </p>
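        <p>Peirce's classic bean example makes this exchange concrete; the schematic sketch below restates it, labelling which two components are given and which one is inferred for each reasoning type:</p>
        <preformat>
# Peirce's syllogistic triad, following [9]: each reasoning type is
# obtained by exchanging which component is inferred from the other two.
RULE = "All the beans from this bag are white."   # major premise
CASE = "These beans are from this bag."           # minor premise (cause)
RESULT = "These beans are white."                 # conclusion (effect)

REASONING = {
    # given components            ->  inferred component
    "deduction": {"given": (RULE, CASE), "inferred": RESULT},
    "induction": {"given": (CASE, RESULT), "inferred": RULE},
    "abduction": {"given": (RULE, RESULT), "inferred": CASE},
}
        </preformat>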
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Investigating the Reasoning Types in Explainable</title>
    </sec>
    <sec id="sec-4">
      <title>Intelligent Interfaces</title>
      <p>
        In this section, we investigate the reasoning types previously mentioned,
borrowing some examples from the literature. Article [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] briefly discusses inductive and
deductive reasoning, explaining how to integrate them in a user evaluation
context but without an in-depth exploration. Furthermore, abductive reasoning is
often (unintentionally) used to compare novel explanation techniques with
state-of-the-art techniques, where the user's role is to identify the best generated
explanation during the evaluation. Before starting with the definitions, we
introduce the three components identified in all types of reasoning (cf. Peirce's
syllogistic theory [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]): one (or more) Cause (or Case/Explanation/Reason), an Effect (or Observation/Result), and a Rule (or Generalization/Theory).
      </p>
      <table-wrap id="tab1">
        <label>Table 1</label>
        <caption>
          <p>Examples from the literature by reasoning type, network type, and task type.</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>Year</th>
              <th>Article</th>
              <th>Reasoning of task</th>
              <th>Type/s of network</th>
              <th>Type/s of task</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>2018</td>
              <td>[15]</td>
              <td>deductive</td>
              <td>MLP</td>
              <td>proxy</td>
            </tr>
            <tr>
              <td>2019</td>
              <td>[<xref ref-type="bibr" rid="ref5">5</xref>]</td>
              <td>inductive</td>
              <td>RNN</td>
              <td>real</td>
            </tr>
            <tr>
              <td>2017</td>
              <td>[21]</td>
              <td>abductive</td>
              <td>LSTM, CNN</td>
              <td>proxy</td>
            </tr>
          </tbody>
        </table>
      </table-wrap>
      <p>
        When applying this theory to XAI interfaces, it is important to identify
whether the representation of these components is implicit or explicit. For example, a rule or a
cause is implicit when it comes from the user's mental model and not from the
AI. Instead, it is explicit when we consider the AI's mental model, i.e. what it
has learned in the training process and its explanations of the predicted
output. The reference paper for the concepts we are going to describe is [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
(see Table 1). The order in which the components are described is unimportant, except for
the last one: we use the latter to highlight the reasoning to elicit
from the user.
      </p>
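      <p>One way to make the implicit/explicit distinction operational is to record, for each component, whether it is supplied by the AI or left to the user's mental model. The record type below is our own sketch for exposition, not an artifact of the cited papers:</p>
      <preformat>
from dataclasses import dataclass

# Sketch of an explanation as a Cause/Rule/Effect triple in which each
# component is flagged as explicit (shown by the AI) or implicit
# (left to the user's mental model). Names are illustrative.
@dataclass
class Component:
    text: str
    explicit: bool  # True if presented by the AI, False if implicit

@dataclass
class Explanation:
    cause: Component
    rule: Component
    effect: Component

    def elicited(self):
        # The implicit components are what the user must supply,
        # i.e. the reasoning the interface elicits.
        return [c for c in (self.cause, self.rule, self.effect)
                if not c.explicit]
      </preformat>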
      <p>Deduction: given a cause and a rule, deduce an effect. This type of reasoning
starts from general rules and examines the possibilities to reach a specific, logical
conclusion. Deductive reasoning is alternatively referred to as "top-down" logic
because it usually starts with a general statement and ends with a narrower,
specific conclusion. The article [15] contains an example of this reasoning, as
depicted in Figure 1. Cause: the words the AI marks in red as identifying a negative or
positive sentiment. Rule: certain words contribute to the sentiment of a text
(implicit). Effect: the sentiment prediction.</p>
      <p>
        Induction: given a cause and an effect, induce a rule. This type of reasoning
involves drawing a general conclusion from a set of specific observations. It is
alternatively referred to as "bottom-up" logic because it involves widening specific
premises out into broader generalizations. Article [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] is an example of inductive
reasoning, shown in Figure 2. Cause: the AI's example-based explanations.
Effect: the AI was unable to recognize the user's sketch. Rule: certain
properties of sketches represent an object (implicit).
      </p>
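      <p>A minimal sketch of such an example-based (inductive) explanation follows, assuming feature vectors for the training sketches are available; the embeddings here are random placeholders:</p>
      <preformat>
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Sketch of the inductive pattern in [5]: show the nearest training
# examples (Cause) next to the model's outcome (Effect), so the user
# can induce the Rule. Feature vectors here are random placeholders.
train_feats = np.random.rand(100, 32)   # e.g. sketch embeddings
nn = NearestNeighbors(n_neighbors=3).fit(train_feats)

def example_based_explanation(query_feat):
    _, idx = nn.kneighbors(query_feat.reshape(1, -1))
    return idx[0]  # indices of the training examples to display

print(example_based_explanation(np.random.rand(32)))
      </preformat>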
      <p>Abduction: given an effect and a rule, abduct a cause. This type of reasoning
typically begins with an incomplete set of observations and proceeds to the
likeliest possible explanation. An example of abductive reasoning is [21], as
depicted in Figure 3. Effect: the AI's sentiment prediction. Rule:
the chart in the explanation boxes gives the user an intuition of the weights the
AI uses for computing the valence of the sentence (implicit). Cause: the user
selects the weights they consider the best (proxy task).</p>
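      <p>The abductive pattern can be sketched as choosing, among candidate causes, the one most consistent with the observed effect; the candidate weightings below are illustrative assumptions:</p>
      <preformat>
# Sketch of the abductive pattern in [21]: given the prediction (Effect)
# and the weight chart (Rule), the user selects the weighting that best
# explains it (Cause). Candidate weightings are illustrative.
candidates = {
    "A": {"not": -0.6, "bad": -0.8},   # reads the phrase word by word
    "B": {"not bad": 0.4},             # treats "not bad" as one unit
}

def abduce_cause(effect, candidates):
    # Choose the candidate whose total weight agrees with the prediction.
    sign = 1 if effect == "positive" else -1
    return max(candidates,
               key=lambda c: sign * sum(candidates[c].values()))

print(abduce_cause("positive", candidates))  # prints "B"
      </preformat>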
    </sec>
    <sec id="sec-5">
      <title>Transforming Explanations to di erent Reasoning</title>
    </sec>
    <sec id="sec-6">
      <title>Types</title>
      <p>
        The goal of transforming the reasoning task is to analyze possible preferences
or performance differences for users during the evaluation phase. Moreover,
obtaining all three reasoning types allows us to find underrepresented reasoning
types in the literature and to study whether they work better with users than the
original task's reasoning. Abductive reasoning explanations (Fig. 3) are quite easy
to generate and offer good understandability: this compromise could
be the reason why they are the most used reasoning for comparing the quality
of explanation techniques. As for inductive explanations (Fig. 2), we can easily
generate examples based on data, but the understandability is bounded by the
selected examples. Deductive explanations (Fig. 1) are more challenging to
generate when we have to create explicit rules, but they are very understandable
to the user. As mentioned in Section 2, the resulting transformation's task will
be a real one, to avoid the pitfall highlighted by article [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. This can be
achieved by revisiting the task's question from the AI's perspective to the user's perspective.
Now, let us outline some ideas to formulate the transformation for the three
reasoning types described previously, considering that we often have an explicit
or implicit rule or cause, and nearly always an effect given by the AI (a suggestion).
      </p>
      <p>Deductive to Inductive and Abductive. To adapt to inductive reasoning, we
need to replace the cause with similar or dissimilar examples concerning the
data present in the task and based on the output of the AI, thus generating
example-based explanations. In this way, users grasp the Rule (which becomes
implicit) that brought the AI to that result, and draw their own conclusion (Effect) about
the given task data. To switch to abductive reasoning, the AI should provide a
Cause based on the task data. After that, we need to make the Rule implicit
so as not to confuse the reasoning with the deductive one.</p>
      <p>Inductive to Deductive and Abductive. To switch to the deductive case, the AI
needs to explicitly define a Rule that also includes a Cause. We can
accomplish this by leveraging the properties of the AI model or by adding a
complementary model to obtain a rule. Additionally, sometimes the user may make the
decision without obtaining the Effect explicitly from the AI, instead deducing
it from the rules and causes. To pass instead from inductive to
abductive reasoning, we use the common traits in the inductive examples to create
a Cause.</p>
      <p>Abductive to Inductive and Deductive. Starting from this reasoning type, we
assume we already have an Effect given by the AI. To translate to the
inductive case, we need to replace the Cause given by the task's data with
that of example-based explanations and make the Rule implicit.
To move to deductive reasoning, we need to explicitly define the AI's Rule,
which may change the original Cause, and, if we want, hide the Effect.</p>
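      <p>A compact way to summarize these transformations is as operations on the explicit/implicit status of the three components. The sketch below is our own schematic reading of the guidelines above, not an implementation from the literature:</p>
      <preformat>
# Schematic sketch of the transformations: each reasoning type shows
# two components explicitly and elicits the third. Components are
# plain dicts {"text": ..., "explicit": ...} for brevity.
def set_visibility(expl, cause, rule, effect):
    expl["cause"]["explicit"] = cause
    expl["rule"]["explicit"] = rule
    expl["effect"]["explicit"] = effect
    return expl

def to_deductive(expl):
    # Explicit Rule and Cause; the user deduces the Effect.
    return set_visibility(expl, cause=True, rule=True, effect=False)

def to_inductive(expl):
    # Explicit examples (Cause) and Effect; the Rule becomes implicit.
    return set_visibility(expl, cause=True, rule=False, effect=True)

def to_abductive(expl):
    # Explicit Effect, implicit Rule; the user abduces the Cause.
    return set_visibility(expl, cause=False, rule=False, effect=True)
      </preformat>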
    </sec>
    <sec id="sec-7">
      <title>Conclusion and Future Work</title>
      <p>In sum, we investigated the considerations that arise when transferring between
different types of logical reasoning, considering real tasks as the resulting
transformation's tasks. We identified the importance of differentiating between
implicit and explicit rule representation. We also considered whether the choice of
reasoning type balances the technical challenges of generating the explanations
and the effectiveness of the explanations for humans. As future work, we plan
to validate these ideas in user evaluations for different reasoning types. Also, we
plan to create a taxonomy considering reasoning and task types, in addition to
other useful metrics related to XAI explanation, and to further explore logical
reasoning on other black-box models beyond neural networks.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Bach</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Binder</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Montavon</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klauschen</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , Muller,
          <string-name>
            <given-names>K.R.</given-names>
            ,
            <surname>Samek</surname>
          </string-name>
          , W.:
          <article-title>On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation</article-title>
          .
          <source>PLOS ONE</source>
          10(7), 1-46 (07
          <year>2015</year>
          ). https://doi.org/10.1371/journal.pone.0130140
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>Barredo</given-names>
            <surname>Arrieta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Diaz Rodriguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            ,
            <surname>Del Ser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Bennetot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Tabik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Barbado</surname>
          </string-name>
          <string-name>
            <surname>Gonzalez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Garcia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Gil-Lopez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Molina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Benjamins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.R.</given-names>
            ,
            <surname>Chatila</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            ,
            <surname>Herrera</surname>
          </string-name>
          ,
          <string-name>
            <surname>F.</surname>
          </string-name>
          :
          <article-title>Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI</article-title>
          .
          <source>Information Fusion</source>
          (12
          <year>2019</year>
          ). https://doi.org/10.1016/j.inffus.2019.12.012
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Buhrmester</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          , Munch,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Arens</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.:</surname>
          </string-name>
          <article-title>Analysis of explainers of black box deep neural networks for computer vision: A survey</article-title>
          . ArXiv abs/
          <year>1911</year>
          .12116 (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Bucinca</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gajos</surname>
            ,
            <given-names>K.Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Glassman</surname>
            ,
            <given-names>E.L.</given-names>
          </string-name>
          :
          <article-title>Proxy tasks and subjective measures can be misleading in evaluating explainable ai systems</article-title>
          .
          <source>Proceedings of the 25th International Conference on Intelligent User Interfaces (Mar</source>
          <year>2020</year>
          ). https://doi.org/10.1145/3377325.3377498, http://dx.doi.org/10.1145/3377325.3377498
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Cai</surname>
            ,
            <given-names>C.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jongejan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Holbrook</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>The effects of example-based explanations in a machine learning interface</article-title>
          .
          <source>In: Proceedings of the 24th International Conference on Intelligent User Interfaces</source>
          . pp. 258-262. IUI '19, Association for Computing Machinery, New York, NY, USA (
          <year>2019</year>
          ). https://doi.org/10.1145/3301275.3302289
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Choi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bahadori</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schuetz</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stewart</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
          </string-name>
          , J.: Retain:
          <article-title>Interpretable predictive model in healthcare using reverse time attention mechanism (08</article-title>
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Doshi-Velez</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Towards a rigorous science of interpretable machine learning (</article-title>
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Fang</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gupta</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Iandola</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Srivastava</surname>
            ,
            <given-names>R.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deng</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dollar</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gao</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>He</surname>
            , X., Mitchell,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Platt</surname>
            ,
            <given-names>J.C.</given-names>
          </string-name>
          , et al.:
          <article-title>From captions to visual concepts and back</article-title>
          .
          <source>2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Jun</source>
          <year>2015</year>
          ). https://doi.org/10.1109/cvpr.2015.7298754
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Flach</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kakas</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Abductive and inductive reasoning: Background and issues (01</article-title>
          <year>2000</year>
          ). https://doi.org/10.1007/978-94-017-0606-3-1
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Guidotti</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Monreale</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ruggieri</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pedreschi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Turini</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giannotti</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Local rule-based explanations of black box decision systems</article-title>
          . ArXiv abs/
          <year>1805</year>
          .10820 (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Guidotti</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Monreale</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ruggieri</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Turini</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giannotti</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pedreschi</surname>
            ,
            <given-names>D.:</given-names>
          </string-name>
          <article-title>A survey of methods for explaining black box models</article-title>
          .
          <source>ACM Computing Surveys</source>
          <volume>51</volume>
          (
          <issue>5</issue>
          ),
          1-42
          (Jan
          <year>2019</year>
          ). https://doi.org/10.1145/3236009, http://dx.doi.org/10.1145/3236009
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kroening</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ruan</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sharp</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thamo</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yi</surname>
            ,
            <given-names>X.:</given-names>
          </string-name>
          <article-title>A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability</article-title>
          .
          <source>Computer Science Review</source>
          <volume>37</volume>
          ,
          <issue>100270</issue>
          (Aug
          <year>2020</year>
          ). https://doi.org/10.1016/j.cosrev.2020.100270
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Lundberg</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>S.I.:</given-names>
          </string-name>
          <article-title>A unified approach to interpreting model predictions (12</article-title>
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>14. Mohseni, S., Zarei, N., Ragan, E.D.: A multidisciplinary survey and framework for design and evaluation of explainable AI systems (2018)</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>15. Nguyen, D.: Comparing automatic and human evaluation of local explanations for text classification. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). pp. 1069-1078. Association for Computational Linguistics, New Orleans, Louisiana (Jun 2018). https://doi.org/10.18653/v1/N18-1097, https://www.aclweb.org/anthology/N18-1097</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>16. Rajani, N.F., Mooney, R.J.: Ensembling visual explanations for VQA. In: Proceedings of the NIPS 2017 workshop on Visually-Grounded Interaction and Language (ViGIL) (December 2017), http://www.cs.utexas.edu/users/ai-lab/pub-view.php?PubID=127684</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>17. Ribeiro, M., Singh, S., Guestrin, C.: "Why should I trust you?": Explaining the predictions of any classifier. pp. 97-101 (02 2016). https://doi.org/10.18653/v1/N16-3020</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>18. Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: Visual explanations from deep networks via gradient-based localization. International Journal of Computer Vision 128(2), 336-359 (Oct 2019). https://doi.org/10.1007/s11263-019-01228-7</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>19. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences (04 2017)</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>20. Tjoa, E., Guan, C.: A survey on explainable artificial intelligence (XAI): Towards medical XAI (2019)</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>21. Tsang, M., Sun, Y., Ren, D., Liu, Y.: Can I trust you more? Model-agnostic hierarchical explanations (2018)</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>22. Wang, D., Yang, Q., Abdul, A., Lim, B.: Designing theory-driven user-centric explainable AI (05 2019). https://doi.org/10.1145/3290605.3300831</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>23. Yin, M., Vaughan, J., Wallach, H.: Understanding the effect of accuracy on trust in machine learning models. pp. 1-12 (04 2019). https://doi.org/10.1145/3290605.3300509</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>24. Yu, R., Shi, L.: A user-based taxonomy for deep learning visualization. Visual Informatics 2 (09 2018). https://doi.org/10.1016/j.visinf.2018.09.001</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>25. Zhou, B., Sun, Y., Bau, D., Torralba, A.: Interpretable basis decomposition for visual explanation. In: Proceedings of the European Conference on Computer Vision (ECCV) (September 2018)</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>