<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>IUI Workshops'19, Los Angeles, USA, March 2019</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Why these Explanations? Selecting Intelligibility Types for Explanation Goals</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Brian Y. Lim</string-name>
          <email>brianlim@comp.nus.edu.sg</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Qian Yang</string-name>
          <email>yangqian@cmu.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ashraf Abdul</string-name>
          <email>ashrafabdul@u.nus.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Danding Wang</string-name>
          <email>wangdanding@u.nus.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Carnegie Mellon University</institution>
          ,
          <addr-line>Pittsburgh, PA</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>National University of Singapore</institution>
          ,
          <country country="SG">Singapore</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>20</volume>
      <issue>2019</issue>
      <abstract>
        <p>The increasing ubiquity of artificial intelligence (AI) has spurred the development of explainable AI (XAI) to make AI more understandable. Even as novel algorithms for explanation are being developed, researchers have called for more human interpretability. While empirical user studies can be conducted to evaluate explanation effectiveness, it remains unclear why specific explanations are helpful for understanding. We leverage a recently developed conceptual framework for user-centric reasoned XAI that draws from foundational concepts in philosophy, cognitive psychology, and AI to identify pathways for how user reasoning drives XAI needs. We identified targeted strategies for applying XAI facilities to improve understanding, trust and decision performance. We discuss how our framework can be extended and applied to other domains that need user-centric XAI. This position paper seeks to promote the design of XAI features based on human reasoning needs.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>CCS CONCEPTS</title>
      <p>• Human-centered computing ~ Human computer interaction</p>
      <sec id="sec-1-1">
        <title>Intelligibility; Explanations; Explainable artificial intelligence;</title>
      </sec>
      <sec id="sec-1-2">
        <title>Decision making</title>
        <sec id="sec-1-2-1">
          <title>1 Introduction</title>
          <p>
            The recent success of artificial intelligence (AI) is driving its
prevalence and pervasiveness in many domains of decision
making from supporting healthcare intervention decisions to
informing criminal justice. However, to ensure that we
understand how these models and algorithms work, and to better
control them, these models need to be explainable. As a result,
explainable AI research has been burgeoning with many
algorithmic approaches being developed to explain AI and many
HCI driven empirical studies to understand the impact of these
explanations. We refer the interested reader to several literature
reviews [
            <xref ref-type="bibr" rid="ref11 ref44 ref6">1, 5, 10, 43</xref>
            ].
          </p>
          <p>
            To help end users to understand, trust, and effectively manage
their intelligent partners, HCI and AI research have produced
many user-centered, innovative algorithm visualizations,
interfaces and toolkits (e.g., [
            <xref ref-type="bibr" rid="ref21 ref26 ref37 ref8">7, 20, 25, 36</xref>
            ]). To make sense of the
variety of explanations, several explanation frameworks have
been proposed for knowledge-based systems [
            <xref ref-type="bibr" rid="ref11">10</xref>
            ], recommender
systems [
            <xref ref-type="bibr" rid="ref13">12</xref>
            ], case-based reasoning [
            <xref ref-type="bibr" rid="ref40">39</xref>
            ], intelligent decision aids
[
            <xref ref-type="bibr" rid="ref41">40</xref>
            ], tutoring systems [
            <xref ref-type="bibr" rid="ref11">10</xref>
            ], intelligible context-aware systems
[
            <xref ref-type="bibr" rid="ref25">24</xref>
            ], etc. These frameworks are mostly taxonomic or driven by
clearly defined principles (e.g. [
            <xref ref-type="bibr" rid="ref22">21</xref>
            ]). In this work, we aim to
identify theories in human thinking that drive the need for
different types of explanations.
          </p>
          <p>
            Indeed, some work has drawn from more formal theories.
Recent writings by Miller, Hoffman and Klein discussed relevant
theories from philosophy, cognitive psychology, social science,
and AI to inform the design of eXplainable AI (XAI) [
            <xref ref-type="bibr" rid="ref14 ref15 ref16 ref20 ref32">13, 14, 15, 19,
31</xref>
            ]. Miller noted that much of XAI research tended to use the
researchers’ intuition of what constitutes a “good” explanation.
He argued that to make XAI usable, it is important to draw from
social sciences. Hoffman et al. [
            <xref ref-type="bibr" rid="ref14 ref15 ref16">13, 14, 15</xref>
            ] and Klein [
            <xref ref-type="bibr" rid="ref20">19</xref>
            ]
summarized several theoretical foundations of how people
formulate and accept explanations, empirically identified several
purposes and patterns for causal reasoning, and proposed ways
that users can generate self-explanations to answer contrastive
questions. However, it is not clear how best to operationalize this
rich body of work in the context of XAI-based decision support
systems for specific user reasoning goals. Hence, adding on to this
line of inquiry, we have recently proposed a theory-driven,
user-centric XAI framework that connects XAI explanation features to
underlying reasoning processes that users have for explanations
[
            <xref ref-type="bibr" rid="ref43">42</xref>
            ]. Drawing on this framework, XAI researchers and designers
can identify pathways along which human cognitive patterns
drive the needs for building XAI. By articulating a detailed design
space of technical features of XAI and user requirements of
human reasoning, we intend that our framework will help
developers build more user-centric explainable AI systems.
          </p>
          <p>[Figure 1. Framework overview: Understanding People informs Explaining AI.
• Explanation goals: filter causes | transparency | improve decisions | generalize and learn | debug model | predict and control | moderate trust.
• Inquiry and reasoning: induction | analogy | deduction | abduction | hypothetico-deductive model.
• Causal explanation and causal attribution: contrastive | counterfactual | transfactual | attribution.]</p>
        </sec>
        <sec id="sec-1-2-2">
          <title>2 XAI Framework of Reasoned Explanations</title>
          <p>
            We performed a literature review and synthesized a conceptual
framework from rationalizing logical connections. Rather than
perform a comprehensive encyclopedic literature review of
relevant concepts in XAI [
            <xref ref-type="bibr" rid="ref11 ref44 ref6">1, 5, 10, 43</xref>
            ], our goal was to create an
operational framework that developers of XAI interfaces
and systems can use, and we started with an existing literature review.
This section describes how XAI can support different explanation
types by articulating how people understand events or
observations through explanations. We drew these insights from
the fields of philosophy and cognitive psychology, specifically: 1)
different ways of knowing, 2) what structures contain knowledge,
3) how to reason logically, and 4) why we seek explanations.
          </p>
          <p>
            2.1.1 Explanation Goals. The need for explanations is
triggered by a deviation from expected behavior [
            <xref ref-type="bibr" rid="ref32">31</xref>
            ], such as a
curious, inconsistent, discrepant or anomalous event.
Alternatively, users may also seek to monitor for an expected,
important or costly event. Miller identified that the main reason
why people want explanations is to facilitate learning by allowing
the user to (i) filter to a small set of causes to simplify their
observation, and to (ii) generalize these observations into a
conceptual model where they can predict and control future
phenomena [
            <xref ref-type="bibr" rid="ref32">31</xref>
            ]. The latter goal of prediction is also described as
human-simulatability [
            <xref ref-type="bibr" rid="ref31">30</xref>
            ]. We orient our discussion of
explanations with respect to these broad goals of finding causes
and concept generalization.
          </p>
          <p>
            From the AI research perspective, a recent review by Nunes
and Jannach summarized several purposes for explanations [
            <xref ref-type="bibr" rid="ref33">32</xref>
            ].
Explanations are provided to support transparency, where users
can see some aspects of the inner state or functionality of the AI
system. When AI is used as a decision aid, users would seek to use
explanations to improve their decision making. If the system
behaved unexpectedly or erroneously, users would want
explanations for scrutability and debugging to be able to identify
the offending fault and take control to make corrections. Indeed,
this goal is very important and has been well studied regarding
user models [
            <xref ref-type="bibr" rid="ref17">3, 16</xref>
            ] and debugging intelligent agents [
            <xref ref-type="bibr" rid="ref22">21</xref>
            ]. Finally,
explanations are often proposed to improve trust in the system
and specifically moderate trust to an appropriate level [
            <xref ref-type="bibr" rid="ref27 ref5 ref7">4, 6, 26</xref>
            ].
          </p>
          <p>
            2.1.2 Inquiry and Reasoning. With the various goals of
explanations, the user would then seek to find causes or generalize
their knowledge and reason about the information or
explanations received. Peirce defined three kinds of inferences
[
            <xref ref-type="bibr" rid="ref35">34</xref>
            ]: deduction, induction, and abduction. Deductive reasoning
(“top-down logic”) is the process of reasoning from premises to a
conclusion. Inductive reasoning (“bottom-up logic”) is the reverse
process of reasoning from a single observation or instance to a
probable explanation or generalization. Abductive reasoning is
also the reverse of deductive reasoning and reasons from an
observation to the most likely explanation. This is also known as
“inference to the best explanation”. It is more selective than
inductive reasoning, since it prioritizes hypotheses.
          </p>
          <p>
            Popper combined these reasoning forms into the
Hypothetico-Deductive model as a description of the scientific method [
            <xref ref-type="bibr" rid="ref36">2, 35</xref>
            ].
The model describes the steps of inquiry as (1) observe and
identify a new problem, (2) form a hypothesis as induction from
observations, (3) deduce consequent predictions from the
hypotheses, and (4) test (run experiments) or look for (or fail to
find) further observations that falsify the hypotheses. It is
commonly used and taught in medical reasoning [
            <xref ref-type="bibr" rid="ref10 ref34 ref9">8, 9, 33</xref>
            ]. A key
aspect of the HD model is hypothesis generation where
observation of the current state can help the user decide whether
to test for relationships between potential causes and the outcome
effect.
          </p>
          <p>
            Finally, analogical reasoning is the process of reasoning from
one instance to another. It is a weak form of inductive reasoning
since only one instance is considered instead of many examples
[
            <xref ref-type="bibr" rid="ref42">41</xref>
            ]. Nevertheless, it is often used in case-based reasoning and in
legal reasoning to explain based on precedence (same case) or
analogy (similar case) [
            <xref ref-type="bibr" rid="ref23">22</xref>
            ].
          </p>
          <p>
            2.1.3 Causal Attribution and Explanations. As users inquire for
more information to understand an observation, they may seek
different types of explanations. Miller identified causal
explanations as a key type of explanation, but also distinguished
them from causal attribution and non-causal explanations [
            <xref ref-type="bibr" rid="ref32">31</xref>
            ].
          </p>
          <p>
            Causal attribution refers to the articulation of internal or
external factors that could be attributed to influence the outcome
or observation [
            <xref ref-type="bibr" rid="ref12">11</xref>
            ]. Miller argues that this is not strictly a causal
explanation, since it does not precisely identify key causes.
Nevertheless, it provides broad information from which users
can judge and identify potential causes. Combining attribution
across time and sequence would lead to a causal chain, which is
sometimes considered a trace explanation or line of reasoning.
          </p>
          <p>
            Causal explanation refers to an explanation that is focused on
the selected causes relevant to interpreting the observation with
respect to existing knowledge. This requires that the explanation
be contrastive between a fact (what happened) and a foil (what is
expected or plausible to happen). Users can ask why not to
understand why a foil did not happen. The selected subset of
causes thus provides a counterfactual explanation of what needs
to change for the alternative outcome to happen. This helps
people to identify causes, on the scientific basis that manipulating
a cause will change the effect. This also provides a more usable
explanation than causal attribution, because it presents fewer
factors (reduces information overload) and can provide users with
a greater perception of control, i.e., how to control the system. A
similar method is to ask what if the factors were different, then
what the effect would be. Since this asks about prospective future
behavior, Hoffman and Klein call this transfactual reasoning;
conversely, counterfactual reasoning asks retrospectively [
            <xref ref-type="bibr" rid="ref14 ref15">13, 14</xref>
            ].
This articulation highlights the importance of contrastive (Why
Not) and counterfactual (How To) explanations instead of simple
trace or attribution explanations typically used for transparency.
          </p>
          <p>2.1.4 Summary. We have identified different inquiry and
explanation goals, rational methods for reasoning, causal and
non-causal explanation types, and evaluation with decisions to
describe a chain of reasoning that people make. We next describe
various explanations and AI facilities and how they support
reasoning.
</p>
        </sec>
        <sec id="sec-1-2-3">
          <title>2.2 How XAI Generates Explanations</title>
          <p>Now we turn to how algorithms generate explanations, in
searching for connections with human explanation facilities. We
characterize AI and XAI techniques by how they (1) semantically
support specific methods of scientific inquiry in human reasoning,
such as Bayesian probability, similarity modeling, and queries;
and (2) represent explanations with visualization methods,
data structures and atomic elements. Where relevant we link AI
techniques back to concepts (green text) in rational reasoning.
Bold text refers to key constructs in each module in the
framework, and italic text refers to sub-constructs.</p>
          <p>2.2.1 Bayesian Probability. Due to the stochastic nature of
events, reasoning with probability and statistics is important in
decision making. People use inductive reasoning to infer events
and test hypotheses. Particularly influential is Bayes’ theorem, which
describes how the probability of an event depends on prior
knowledge of observed conditions. This covers specific concepts
of prior and posterior probabilities, and likelihood. Understanding
outcome probabilities can inform users about the expected utility.</p>
          <p>Bayesian reasoning helps decision makers to reason by noting
the prevalence of events. For example, doctors should not quickly conclude
that a rare disease is probable, and they would be interested to
know how influential a factor or feature is to a decision outcome.</p>
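          <p>As a concrete illustration (with assumed prevalence and test accuracies, not values from this paper), the following Python sketch applies Bayes’ theorem to a hypothetical rare-disease test and shows why the prior matters:</p>
          <preformat>
# Illustrative sketch: Bayes' theorem for a rare-disease test, showing why
# prevalence (the prior) matters. All numbers are assumed.
def posterior(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    p_pos_given_disease = sensitivity            # likelihood
    p_pos_given_healthy = 1.0 - specificity      # false-positive rate
    p_pos = prior * p_pos_given_disease + (1.0 - prior) * p_pos_given_healthy
    return prior * p_pos_given_disease / p_pos   # posterior

# Assumed values: 1-in-1000 prevalence, a 99%-sensitive, 95%-specific test.
print(posterior(prior=0.001, sensitivity=0.99, specificity=0.95))
# ~0.019: even after a positive test the disease remains improbable, the kind
# of base-rate insight a probability-based explanation can convey.
          </preformat>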
          <p>2.2.2 Similarity Modeling. As people learn general concepts,
they seek to group similar objects and identify distinguishing
features to differentiate between objects. Several classes of AI
approaches have been developed, including modeling similarity
with distance-based methods (e.g., case base reasoning, clustering
models), classification into different kinds (e.g., supervised
models, nearest neighbors), and dimensionality reduction to find
latent relationships (e.g., collaborative filtering, principal
components analysis, matrix factorization, and autoencoders).
Many of these methods are data-driven to match candidate objects
with previously seen data (training set), where characterization
depends on the features engineered and the model which frames
an assumed structure of the concepts. Explanations of these
mechanisms are driven by inductive and analogical reasoning to
understand why certain objects are considered similar or
different. Identifying causal attributions can then help users
ascertain the potential causes for the matching and grouping.
Note that while rules appear to be a distinct explanation type, we
could consider them as descriptions of the boundary conditions
between dissimilar groups.</p>
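          <p>As a minimal sketch of such example-based, analogical explanation (with assumed toy activity data), a nearest-neighbor match and its per-feature gaps can be surfaced to show why two cases are considered similar:</p>
          <preformat>
# Illustrative sketch: explaining a classification by analogy, i.e. retrieving
# the most similar stored case and the features that make it similar.
import math

def explain_by_analogy(query, cases):
    """Return the label of the nearest stored case and per-feature gaps."""
    def distance(a, b):
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))
    nearest = min(cases, key=lambda c: distance(query, c["features"]))
    gaps = {k: abs(query[k] - nearest["features"][k]) for k in query}
    return nearest["label"], gaps

cases = [  # assumed toy training cases
    {"label": "walking",  "features": {"speed": 1.4, "step_rate": 1.9}},
    {"label": "standing", "features": {"speed": 0.0, "step_rate": 0.0}},
]
print(explain_by_analogy({"speed": 1.3, "step_rate": 1.8}, cases))
          </preformat>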
          <p>
            2.2.3 Intelligibility Queries. Lim and Dey identified several
queries (called intelligibility queries) that a user may ask of a
smart system [
            <xref ref-type="bibr" rid="ref25 ref26">24, 25</xref>
            ]. Starting from a usability-centric
perspective, the authors developed a suite of colloquial questions
about the system state (Inputs, What Output, What Else Outputs,
Certainty), and inference mechanism (Why, Why Not, What If,
How To). While they initially found that Why and Why Not
explanations were most effective in promoting system
understanding and trust [
            <xref ref-type="bibr" rid="ref24">23</xref>
            ], they later found that users may
exploit different strategies to check model behavior and thus use
different intelligibility queries for the same interpretability goals
[
            <xref ref-type="bibr" rid="ref27 ref30">26, 29</xref>
            ].
          </p>
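          <p>A minimal sketch of how such queries could be routed in an interface is shown below; the methods on the model object are assumptions for illustration, not the API of an existing toolkit:</p>
          <preformat>
# Illustrative sketch: routing the intelligibility queries of Lim and Dey to
# explanation facilities. The methods on `model` are assumed, not a real API.
def intelligibility_query(model, query, instance, foil=None, edits=None):
    handlers = {
        "inputs":    lambda: instance,                        # current input values
        "what":      lambda: model.predict(instance),         # current output
        "what_else": lambda: model.output_labels(),           # other possible outputs
        "certainty": lambda: model.predict_proba(instance),   # confidence
        "why":       lambda: model.attributions(instance),    # evidence for the output
        "why_not":   lambda: model.contrast(instance, foil),  # fact vs. foil
        "what_if":   lambda: model.predict({**instance, **(edits or {})}),
        "how_to":    lambda: model.counterfactual(instance, foil),
    }
    return handlers[query]()
          </preformat>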
          <p>2.2.4 XAI Elements. We identify several building blocks that
compose many XAI explanations. By identifying these elements,
we can determine if an explanation strategy has covered
information that could provide key or useful information to users.
This reveals how some explanations are just reformulations of the
same explanation types but with different representations, such
that the information provided and interpretability may be similar.
Currently, showing feature attribution or influence is very
popular, but this only indicates which input feature of a model is
important or whether it had positive or negative influence
towards an outcome. Other important elements include the name
and value of inputs or outputs (generally shown by default in
explanations, but fundamental to transparency), and the clause to
describe if the value of a feature is above or below a threshold (i.e.,
a rule).</p>
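          <p>To make these elements concrete, the sketch below (a toy linear scoring model with assumed weights, thresholds and instance values) assembles the name, value, attribution, and threshold clause for each feature:</p>
          <preformat>
# Illustrative sketch: the atomic XAI elements named above, assembled for a
# toy linear model. Weights, thresholds and the instance are assumed values.
weights = {"age": 0.8, "blood_pressure": 1.2, "exercise_hours": -0.6}
thresholds = {"age": 50, "blood_pressure": 130, "exercise_hours": 3}
instance = {"age": 62, "blood_pressure": 145, "exercise_hours": 1}

elements = []
for name, value in instance.items():
    attribution = weights[name] * value   # influence toward the outcome
    side = " above " if value > thresholds[name] else " at or below "
    elements.append({
        "name": name,
        "value": value,
        "attribution": attribution,
        "sign": "positive" if attribution > 0 else "negative",
        "clause": name + side + str(thresholds[name]),
    })
for element in sorted(elements, key=lambda e: -abs(e["attribution"])):
    print(element)
          </preformat>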
        </sec>
        <sec id="sec-1-2-4">
          <title>3 Intelligibility Types</title>
          <p>
            We employ the taxonomy of Lim and Dey [
            <xref ref-type="bibr" rid="ref25 ref29">24, 28</xref>
            ] due to its
pragmatic usefulness to operationalize in applications and to
leverage the Intelligibility Toolkit [
            <xref ref-type="bibr" rid="ref26">25</xref>
            ] that makes it convenient
to implement a wide range of explanations. While it does not
currently generate recent state-of-the-art explanations and
models, the explanation data structures allow it to be extended to
support feature attribution and rules explanations. We reapply the
original definitions to more general applications of machine
learning beyond context-aware systems and introduce new types.
We also describe and situate the explanation types in context of
underlying reasoning processes.
          </p>
        </sec>
      </sec>
      <sec id="sec-1-3">
        <title>Inputs, What Output, and Certainty explanations</title>
        <p>Inputs explanations inform users what input values, from data
instances or sensors, the application is reasoning with for the
current case. When a user asks a why question, she may naively
be asking for the Inputs state. We also consider this to be the basic
form of explanation to support transparency by showing the
current measured input or internal state of the application.</p>
        <p>What Output explanations inform users what the current
outcome, inference, or prediction is and what possible output
options the application can produce. For applications that can
have different outcome values (multiclass or multilabel), we can
also show Outputs explanations. This lets users know what it can
do or what states it can be in (e.g., activity recognized as one of
three options: sitting, standing, walking). This helps users
understand the extent of the application’s capabilities.</p>
        <p>Certainty explanations inform users how (un)certain the
application is of the output value produced. They help the user
determine how much to trust the output value and whether to
consider an alternative outcome. While Lim and Dey originally
considered the confidence outcome of a predictive model, this can
now include stochastic uncertainty from Bayesian modeling
approaches, which is essentially the posterior probability.
Furthermore, we have found that users may reason with prior and
conditional probability, so three types of uncertainty should be
supported: prior, conditional, and posterior.</p>
      </sec>
      <sec id="sec-1-4">
        <title>Why and Why Not explanations</title>
        <p>Why explanations inform users why the application derived
its output value from the current (or previous) input values. This
is typically represented as a set of triggered rules (rule trace) for
rule-based systems or feature attributions (or weights of
evidence) for why the inferred value was inferred over
alternative values. Compared to Inputs explanations, Why
explanations focus on highlighting a subset of key variables or
clauses, though this does not specifically support counterfactual
reasoning, especially for multi-class classification systems.</p>
        <p>Why Not explanations inform users why an alternative
outcome (foil) was not produced, with respect to the inferred
outcome (fact), given the current input values. Why Not
explanations provide a pairwise comparison between the inferred
outcome and an alternative outcome. Similar to Why explanations
that help users to focus on key inputs, Why Not explanations
focus on salient inputs that matter for contrasting between the
fact and foil. With fewer features highlighted, this can support
counterfactual reasoning, where the user learns how to change
key input values to achieve the alternative outcome. Hence, such
Why Not explanations are essentially How To explanations. Note
that we can also interpret a Why explanation as a contrast
between the inferred outcome and all other alternative outcomes.
However, also note that Why Not explanations generated as
feature attribution or weights of evidence are not particularly
useful for How To explanations. As with Lim and Dey, we note
that Why Not explanations are important to support, since users
would typically ask for explanations when something unexpected
happens, i.e., they expect the foil to happen. This agrees with
Miller that most users truly ask for contrastive explanations (Why
Not) and that these should be explained with counterfactuals (How To) [31].</p>
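        <p>As a minimal sketch of this point (with assumed toy rules and values), the unmet clauses of the expected outcome’s rule form the Why Not explanation and double as the How To explanation:</p>
        <preformat>
# Illustrative sketch: a rule-based Why Not explanation lists which clauses of
# the expected outcome (foil) are unmet; those clauses read as How To advice.
# The rule set and instance below are assumed toy values.
rules = {
    "approve": [("income", "above", 3.0), ("debt", "below", 2.0)],
    "reject":  [("debt", "above", 2.0)],
}
instance = {"income": 2.0, "debt": 4.0}

def unmet_clauses(label):
    out = []
    for feature, direction, threshold in rules[label]:
        value = instance[feature]
        holds = value > threshold if direction == "above" else threshold > value
        if not holds:
            out.append((feature, direction, threshold, value))
    return out

# Why Not "approve"? The clauses below; How To approve? Satisfy them.
for feature, direction, threshold, value in unmet_clauses("approve"):
    print(feature, "is", value, "but needs to be", direction, threshold)
        </preformat>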
      </sec>
      <sec id="sec-1-9">
        <title>What If and When explanations</title>
        <p>What If explanations allow users to anticipate or simulate
what the application will do given a set of user-set input values.
While this straightforward explanation type has received little
attention in recent AI research, it is an intuitive technique to
support human simulatability defined by Lipton [
          <xref ref-type="bibr" rid="ref31">30</xref>
          ] and support
transfactual reasoning defined by Hoffman and Klein [
          <xref ref-type="bibr" rid="ref14">13</xref>
          ].
        </p>
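        <p>A minimal sketch of this simulation loop follows; the predict function stands in for whatever model the application uses:</p>
        <preformat>
# Illustrative sketch: a What If explanation simply re-runs the model on
# input values that the user sets, supporting transfactual reasoning.
def what_if(predict, current_inputs, user_edits):
    simulated = dict(current_inputs)
    simulated.update(user_edits)   # user-set input values
    return predict(simulated)

# e.g. "what would the activity be if the speed sensor read 0?"
# outcome = what_if(model.predict, {"speed": 1.4, "step_rate": 1.9}, {"speed": 0.0})
        </preformat>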
        <p>
          When explanations (new) indicate under what circumstance or
scenario, or with what instance case, a particular outcome would
happen. This can be used to explain inferred or alternative
outcomes. Unlike Why or Why Not explanations, which focus on
input feature attributions or values, this focuses on the instance
entity as a whole. Thus, it is suitable for exemplar, prototype, or
case-based explanations. Unlike How To explanations, this does
not describe counterfactuals of a subset of inputs to change a
scenario to have a different outcome. Note that we use a different
definition from that originally defined by Lim [
          <xref ref-type="bibr" rid="ref29">28</xref>
          ], which referred to the
timestamp of the inference event.
        </p>
        <sec id="sec-1-10-1">
          <title>4 Selecting Intelligibility for Explanation Goals</title>
          <p>We had previously summarized several goals or reasons why
people ask for explanations. These are primarily to improve their
understanding of the AI-driven application, or the situation, or to
improve their current or future ability to act predictably and
correctly. In this section, we describe how to support three
explanation goals — filter causes, generalize and learn, and predict
and control — with the Intelligibility explanation types. By
relating the use of these explanations explicitly back to user goals,
we identify pathways to justify the use of various explanations in
explainable AI. While our text and Figure 1 already describe some
pathways, we articulate them more clearly in this section.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>4.1 Find and Filter Causes</title>
      <p>
        We identified three pathways to help users narrow down and
identify specific causes for a particular system outcome (see
Figure 2). While Input explanations are most basic and
colloquially queried by users, we identify that users would inspect
the input feature values to find anomalies, discrepancies, or
surprising values, then generate hypotheses for what could be
wrong. This is not particularly efficient, since users are not
directed to any salient cause, but it can allow users to determine
their own hypotheses for causes. Going even further and giving
users more choice for hypothesis generation, we can support the
discovery of latent factors. While not originally defined in the
Intelligibility framework [
        <xref ref-type="bibr" rid="ref25 ref26 ref29">24, 25, 28</xref>
        ], recent work by Kim et al. on
TCAV [
        <xref ref-type="bibr" rid="ref19">18</xref>
        ] allows users to specify their own concepts of interest
and test if they are influential in a model’s inference.
      </p>
      <p>The second pathway involves showing Why explanations as
either a rule trace or feature attribution (or importance). This is
driven by the users identifying the influence or attribution due to
various causes (features) or by tracing deductive paths in the
system rule logic.</p>
      <p>The third pathway involves contrasting the inferred outcome
(fact) with the expected outcome (foil). Salient large feature
attribution differences can call the user’s attention to potential
causal features, but rules provide a more actionable method that
explains how specific feature values could have led to the
counterfactual case outcome.
</p>
    </sec>
    <sec id="sec-3">
      <title>4.2 Generalize and Learn</title>
      <p>We identified two main pathways to help the user learn a general
mental model of how the system would behave (see Figure 3). The
first pathway involves reasoning by induction: the user could be
interested to know the likelihood of the outcome i) in general
(overall), ii) the system’s confidence or certainty of the outcome
prediction for the current instance, or iii) an intermediate
certainty where only some features matter (e.g., disease risk for
all males, given that a patient is male).</p>
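      <p>A small sketch of this pathway (with assumed toy records, counted rather than modeled) contrasts the overall and intermediate likelihoods; the per-instance certainty would come from the model itself:</p>
      <preformat>
# Illustrative sketch: the likelihoods a user may inspect when reasoning by
# induction, estimated by counting over assumed toy records.
records = [
    {"sex": "male",   "risk": 1}, {"sex": "male",   "risk": 0},
    {"sex": "male",   "risk": 1}, {"sex": "female", "risk": 0},
    {"sex": "female", "risk": 0}, {"sex": "female", "risk": 1},
]

def rate(rows):
    return sum(r["risk"] for r in rows) / len(rows)

overall = rate(records)                             # i) likelihood in general
males = [r for r in records if r["sex"] == "male"]
intermediate = rate(males)                          # iii) given only that the patient is male
# ii) the per-instance certainty would come from the model's own confidence
#     for the current patient, e.g. its predicted probability.
print(overall, intermediate)
      </preformat>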
      <p>
        The second pathway involves simpler but narrower reasoning
by analogy, where the user looks at one instance at a time to form
a detailed understanding of similar specific cases. Here, the
system proposes the examples, such as i) prototypes to indicate
median instances for each outcome type, ii) critique examples to
indicate examples of a desired outcome that are close to the
decision boundary [
        <xref ref-type="bibr" rid="ref18">17</xref>
        ], or iii) counter-examples that are similar
to the current instance but have different predicted outcomes [
        <xref ref-type="bibr" rid="ref39">38</xref>
        ].
      </p>
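      <p>A minimal sketch of such example selection (a single assumed feature for brevity) picks a prototype per outcome as the group medoid and a counter-example as the nearest case with a different predicted outcome:</p>
      <preformat>
# Illustrative sketch: selecting a prototype (medoid) per outcome and a
# counter-example for the current instance, over assumed toy cases.
cases = [
    {"x": 1.0, "y": "low"},  {"x": 1.2, "y": "low"},  {"x": 1.4, "y": "low"},
    {"x": 3.0, "y": "high"}, {"x": 3.3, "y": "high"}, {"x": 3.9, "y": "high"},
]

def prototype(label):
    group = [c for c in cases if c["y"] == label]
    # medoid: the member minimising total distance to the rest of its group
    return min(group, key=lambda c: sum(abs(c["x"] - o["x"]) for o in group))

def counter_example(instance, predicted):
    others = [c for c in cases if c["y"] != predicted]
    return min(others, key=lambda c: abs(c["x"] - instance["x"]))

print(prototype("low"), prototype("high"))
print(counter_example({"x": 2.6}, predicted="low"))  # nearest case with a different outcome
      </preformat>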
    </sec>
    <sec id="sec-4">
      <title>4.3 Predict and Control</title>
      <sec id="sec-4-1">
        <title>We identified three pathways to help users to predict the system’s</title>
        <p>
          future behavior and control current behavior (see Figure 4). First,
using deductive reasoning, users can read the full rule-set of the
general How To explanation to predict how the system would
make inferences. However, this can be tedious for a large rule-set.
Second, focusing on a specific contrast case, users can use the
How To counterfactual explanations of rule-based Why Not
explanations to understand how they could try to change the
situation for a different outcome. Anchors by Ribeiro et al. provide
a good recent method for counterfactual explanations to support
How To explanations [
          <xref ref-type="bibr" rid="ref38">37</xref>
          ]. Third, users could use a What If
explanation to test specific instances that they are interested in;
i.e., they set input values and observe the simulated outcomes.
This is similar to When explanations, but the user chooses the
input states and example.
        </p>
        <p>
          Note that we consider explanations that build tree explainer
models to be equivalent to rule-based explanations, since we can
use first-order logic to transform them [
          <xref ref-type="bibr" rid="ref26 ref29">25, 28</xref>
          ]. Furthermore, we
do not know of any feature attribution-based explanations that
can specifically satisfy the explanation goal of prediction and
control. The popularity of feature attributions thus presents a big gap
in XAI research, which tends not to produce actionable
explanations.
        </p>
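        <p>As a minimal sketch of this equivalence (a hand-built toy tree, not the output of a real explainer), collecting the clauses along each root-to-leaf path yields the rule form of a tree explainer:</p>
        <preformat>
# Illustrative sketch: transforming a tree explainer into rule-form
# explanations by collecting clauses along each root-to-leaf path.
tree = {"feature": "debt", "threshold": 2.0,           # assumed toy tree
        "below": {"leaf": "approve"},
        "above": {"feature": "income", "threshold": 3.0,
                  "below": {"leaf": "reject"},
                  "above": {"leaf": "approve"}}}

def to_rules(node, clauses=()):
    if "leaf" in node:
        return [(list(clauses), node["leaf"])]
    f, t = node["feature"], node["threshold"]
    return (to_rules(node["below"], clauses + ((f, "below", t),)) +
            to_rules(node["above"], clauses + ((f, "above", t),)))

for clauses, outcome in to_rules(tree):
    print(" AND ".join(f + " " + side + " " + str(t) for f, side, t in clauses),
          "leads to", outcome)
        </preformat>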
        <sec id="sec-4-1-1">
          <title>5 Conclusion</title>
          <p>We have described a theory-driven conceptual framework for
designing explainable facilities by drawing from philosophy,
cognitive psychology and artificial intelligence (AI) to develop
user-centric explainable AI (XAI). Using this framework, we can
identify specific pathways for how some explanations can be
useful, how certain reasoning methods fail due to cognitive biases,
and how to apply different elements of XAI to mitigate these
failures. By articulating a detailed design space of technical
features of XAI and connecting them with user requirements of
human reasoning, our framework aims to help developers build
more user-centric explainable AI-based systems.</p>
        </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Abdul</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vermeulen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>B. Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kankanhalli</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda</article-title>
          .
          <source>In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI '18.</source>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>Scientific Method. The Stanford Encyclopedia of Philosophy</article-title>
          . https://plato.stanford.edu/entries/scientific-method/.
          <source>Retrieved 10 September</source>
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Assad</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Carmichael</surname>
            ,
            <given-names>D. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kay</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Kummerfeld</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          (
          <year>2007</year>
          , May).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <article-title>PersonisAD: Distributed, active, scrutable model framework for contextaware services</article-title>
          .
          <source>In International Conference on Pervasive Computing</source>
          (pp.
          <fpage>55</fpage>
          -
          <lpage>72</lpage>
          ). Springer, Berlin, Heidelberg.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Antifakos</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schwaninger</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Schiele</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          (
          <year>2004</year>
          ,
          <article-title>September). Evaluating the effects of displaying uncertainty in context-aware applications</article-title>
          .
          <source>In International Conference on Ubiquitous Computing</source>
          (pp.
          <fpage>54</fpage>
          -
          <lpage>69</lpage>
          ). Springer, Berlin, Heidelberg.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Biran</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Cotton</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Explanation and justification in machine learning: A survey</article-title>
          .
          <source>In IJCAI-17 Workshop on Explainable AI (XAI)</source>
          (p.
          <fpage>8</fpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Bussone</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stumpf</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>O'Sullivan</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2015</year>
          ,
          <article-title>October)</article-title>
          .
          <article-title>The role of explanations on trust and reliance in clinical decision support systems</article-title>
          .
          <source>In Healthcare Informatics (ICHI)</source>
          , 2015 International Conference on (pp.
          <fpage>160</fpage>
          -
          <lpage>169</lpage>
          ). IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Coppers</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , Van den Bergh, J.,
          <string-name>
            <surname>Luyten</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Coninx</surname>
          </string-name>
          , K.,
          <string-name>
            <surname>van der</surname>
            Lek-Ciudin,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vanallemeersch</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Vandeghinste</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          (
          <year>2018</year>
          , April).
          <article-title>Intellingo: An Intelligible Translation Environment</article-title>
          .
          <source>In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems</source>
          (p.
          <fpage>524</fpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Croskerry</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2009a</year>
          ).
          <article-title>A universal model of diagnostic reasoning</article-title>
          . Academic medicine,
          <volume>84</volume>
          (
          <issue>8</issue>
          ),
          <fpage>1022</fpage>
          -
          <lpage>1028</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Elstein</surname>
            ,
            <given-names>A. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shulman</surname>
            ,
            <given-names>L. S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Sprafka</surname>
            ,
            <given-names>S. A.</given-names>
          </string-name>
          (
          <year>1978</year>
          ).
          <source>Medical Problem Solving: An Analysis of Clinical Reasoning</source>
          . Cambridge, MA: Harvard University Press.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Graesser</surname>
            ,
            <given-names>A.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Person</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huber</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>1992</year>
          ).
          <article-title>Mechanisms that generate questions</article-title>
          . In: Lauer,
          <string-name>
            <given-names>T.W.</given-names>
            ,
            <surname>Peacock</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            ,
            <surname>Graesser</surname>
          </string-name>
          , A.C. (Eds.),
          <source>Questions and Information Systems</source>
          . Lawrence Erlbaum, Hillsdale, NJ, pp.
          <fpage>167</fpage>
          -
          <lpage>187</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Heider</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>The psychology of interpersonal relations</article-title>
          . Psychology Press.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Herlocker</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Konstan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2000</year>
          ).
          <article-title>Explaining collaborative filtering recommendations</article-title>
          .
          <source>In Proceedings of the 2000 ACM conference on Computer supported cooperative work (CSCW'00)</source>
          . ACM, New York, NY, USA,
          <fpage>241</fpage>
          -
          <lpage>250</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Hoffman</surname>
            ,
            <given-names>R. R.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Klein</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2017a</year>
          ).
          <article-title>Explaining explanation, part 1: theoretical foundations</article-title>
          .
          <source>IEEE Intelligent Systems, (3)</source>
          ,
          <fpage>68</fpage>
          -
          <lpage>73</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Hoffman</surname>
            ,
            <given-names>R. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mueller</surname>
            ,
            <given-names>S. T.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Klein</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2017b</year>
          ).
          <article-title>Explaining Explanation, Part 2: Empirical Foundations</article-title>
          .
          <source>IEEE Intelligent Systems</source>
          ,
          <volume>32</volume>
          (
          <issue>4</issue>
          ),
          <fpage>78</fpage>
          -
          <lpage>86</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Hoffman</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mueller</surname>
            ,
            <given-names>S. T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klein</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Clancey</surname>
            ,
            <given-names>W. J.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Explaining Explanation, Part 4: A Deep Dive on Deep Nets</article-title>
          .
          <source>IEEE Intelligent Systems</source>
          ,
          <volume>33</volume>
          (
          <issue>3</issue>
          ),
          <fpage>87</fpage>
          -
          <lpage>95</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Kay</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2001</year>
          ).
          <article-title>Learner control. User modeling and user-adapted interaction</article-title>
          ,
          <volume>11</volume>
          (
          <issue>1-2</issue>
          ),
          <fpage>111</fpage>
          -
          <lpage>127</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khanna</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Koyejo</surname>
            ,
            <given-names>O. O.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Examples are not enough, learn to criticize! criticism for interpretability</article-title>
          .
          <source>In Advances in Neural Information Processing Systems</source>
          (pp.
          <fpage>2280</fpage>
          -
          <lpage>2288</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wattenberg</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gilmer</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cai</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wexler</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Viegas</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2018</year>
          ,
          <article-title>July)</article-title>
          .
          <article-title>Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)</article-title>
          .
          <source>In International Conference on Machine Learning</source>
          (pp.
          <fpage>2673</fpage>
          -
          <lpage>2682</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Klein</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Explaining Explanation, Part 3: The Causal Landscape</article-title>
          .
          <source>IEEE Intelligent Systems</source>
          ,
          <volume>33</volume>
          (
          <issue>2</issue>
          ),
          <fpage>83</fpage>
          -
          <lpage>88</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Krause</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perer</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Ng</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2016</year>
          , May).
          <article-title>Interacting with Predictions: Visual Inspection of Black-box Machine Learning Models</article-title>
          .
          <source>In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems</source>
          (pp.
          <fpage>5686</fpage>
          -
          <lpage>5697</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Kulesza</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Burnett</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wong</surname>
            ,
            <given-names>W. K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Stumpf</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2015</year>
          , March).
          <article-title>Principles of explanatory debugging to personalize interactive machine learning</article-title>
          .
          <source>In Proceedings of the 20th international conference on intelligent user interfaces</source>
          (pp.
          <fpage>126</fpage>
          -
          <lpage>137</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Lamond</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Precedent and analogy in legal reasoning</article-title>
          .
          <source>The Stanford Encyclopedia of Philosophy</source>
          . https://plato.stanford.edu/entries/legal-reasprec/.
          <source>Retrieved 10 September</source>
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [23]
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>B. Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dey</surname>
            ,
            <given-names>A. K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Avrahami</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (2009a, April).
          <article-title>Why and why not explanations improve the intelligibility of context-aware intelligent systems</article-title>
          .
          <source>In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems</source>
          (pp.
          <fpage>2119</fpage>
          -
          <lpage>2128</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>B. Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Dey</surname>
            ,
            <given-names>A. K.</given-names>
          </string-name>
          (2009b,
          <article-title>September)</article-title>
          .
          <article-title>Assessing demand for intelligibility in context-aware applications</article-title>
          .
          <source>In Proceedings of the 11th international conference on Ubiquitous computing</source>
          (pp.
          <fpage>195</fpage>
          -
          <lpage>204</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [25]
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>B. Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Dey</surname>
            ,
            <given-names>A. K.</given-names>
          </string-name>
          (
          <year>2010</year>
          ,
          <article-title>September)</article-title>
          .
          <article-title>Toolkit to support intelligibility in context-aware applications</article-title>
          .
          <source>In Proceedings of the 12th ACM international conference on Ubiquitous computing</source>
          (pp.
          <fpage>13</fpage>
          -
          <lpage>22</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [26]
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>B. Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Dey</surname>
            ,
            <given-names>A. K.</given-names>
          </string-name>
          (2011a, September).
          <article-title>Investigating intelligibility for uncertain context-aware applications</article-title>
          .
          <source>In Proceedings of the 13th international conference on Ubiquitous computing</source>
          (pp.
          <fpage>415</fpage>
          -
          <lpage>424</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [27]
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>B. Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Dey</surname>
            ,
            <given-names>A. K.</given-names>
          </string-name>
          (
          <year>2011b</year>
          ,
          August
          ).
          <article-title>Design of an intelligible mobile context-aware application</article-title>
          .
          <source>In Proceedings of the 13th international conference on human computer interaction with mobile devices and services</source>
          (pp.
          <fpage>157</fpage>
          -
          <lpage>166</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [28]
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>B. Y.</given-names>
          </string-name>
          (
          <year>2012</year>
          ).
          <article-title>Improving understanding and trust with intelligibility in context-aware applications</article-title>
          .
          <source>PhD dissertation</source>
          . CMU.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [29]
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>B. Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Dey</surname>
            ,
            <given-names>A. K.</given-names>
          </string-name>
          (
          <year>2013</year>
          ,
          July)
          .
          <article-title>Evaluating Intelligibility Usage and Usefulness in a Context-Aware Application</article-title>
          .
          <source>In International Conference on Human-Computer Interaction</source>
          (pp.
          <fpage>92</fpage>
          -
          <lpage>101</lpage>
          ). Springer, Berlin, Heidelberg.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [30]
          <string-name>
            <surname>Lipton</surname>
            ,
            <given-names>Z. C.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>The mythos of model interpretability</article-title>
          .
          <source>arXiv preprint arXiv:1606.03490</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [31]
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Explanation in artificial intelligence: insights from the social sciences</article-title>
          .
          <source>arXiv preprint arXiv:1706.07269</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [32]
          <string-name>
            <surname>Nunes</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Jannach</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>A systematic review and taxonomy of explanations in decision support and recommender systems</article-title>
          .
          <source>User Modeling and User-Adapted Interaction</source>
          ,
          <volume>27</volume>
          (
          <issue>3-5</issue>
          ),
          <fpage>393</fpage>
          -
          <lpage>444</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [33]
          <string-name>
            <surname>Patel</surname>
            ,
            <given-names>V. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Arocha</surname>
            ,
            <given-names>J. F.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2005</year>
          ).
          <article-title>Thinking and reasoning in medicine</article-title>
          .
          <source>The Cambridge handbook of thinking and reasoning</source>
          ,
          <volume>14</volume>
          ,
          <fpage>727</fpage>
          -
          <lpage>750</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [34]
          <string-name>
            <surname>Peirce</surname>
            ,
            <given-names>C. S.</given-names>
          </string-name>
          (
          <year>1903</year>
          ). Harvard lectures on pragmatism, Collected Papers v.
          <volume>5</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [35]
          <string-name>
            <surname>Popper</surname>
          </string-name>
          ,
          <string-name>
            <surname>Karl</surname>
          </string-name>
          (
          <year>2002</year>
          ),
          <source>Conjectures and Refutations: The Growth of Scientific Knowledge</source>
          , London, UK: Routledge.
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [36]
          <string-name>
            <surname>Ribeiro</surname>
            ,
            <given-names>M. T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Guestrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2016</year>
          ,
          August)
          .
          <article-title>Why should I trust you?: Explaining the predictions of any classifier</article-title>
          .
          <source>In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</source>
          (pp.
          <fpage>1135</fpage>
          -
          <lpage>1144</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [37]
          <string-name>
            <surname>Ribeiro</surname>
            ,
            <given-names>M. T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Guestrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2018a</year>
          ).
          <article-title>Anchors: High-precision model-agnostic explanations</article-title>
          .
          <source>In AAAI Conference on Artificial Intelligence</source>.
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [38]
          <string-name>
            <surname>Ribeiro</surname>
            ,
            <given-names>M. T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Guestrin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2018b</year>
          ).
          <article-title>Semantically Equivalent Adversarial Rules for Debugging NLP Models</article-title>
          .
          <source>In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)</source>
          (Vol.
          <volume>1</volume>
          , pp.
          <fpage>856</fpage>
          -
          <lpage>865</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [39]
          <string-name>
            <surname>Roth-Berghofer</surname>
            ,
            <given-names>T. R.</given-names>
          </string-name>
          (
          <year>2004</year>
          ,
          August)
          .
          <article-title>Explanations and case-based reasoning: Foundational issues</article-title>
          .
          <source>In European Conference on Case-Based Reasoning</source>
          (pp.
          <fpage>389</fpage>
          -
          <lpage>403</lpage>
          ). Springer, Berlin, Heidelberg.
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [40]
          <string-name>
            <surname>Silveira</surname>
            ,
            <given-names>M.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>de Souza</surname>
            ,
            <given-names>C.S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Barbosa</surname>
            ,
            <given-names>S.D.J.</given-names>
          </string-name>
          (
          <year>2001</year>
          ).
          <article-title>Semiotic engineering contributions for designing online help systems</article-title>
          .
          <source>In Proceedings of the 19th annual international conference on Computer documentation (SIGDOC '01)</source>
          . ACM, New York, NY, USA,
          <fpage>31</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [41]
          <string-name>
            <surname>Vickers</surname>
            ,
            <given-names>John</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>The Problem of Induction</article-title>
          .
          <source>The Stanford Encyclopedia of Philosophy</source>
          . https://plato.stanford.edu/entries/induction-problem/.
          Retrieved 10 September
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [42]
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abdul</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>B.Y.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Designing Theory-Driven User-Centric Explainable AI</article-title>
          .
          <source>In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI '19</source>
          . https://doi.org/10.1145/3290605.3300831
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          [43]
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Q. S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>S. C.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Visual interpretability for deep learning: a survey</article-title>
          .
          <source>Frontiers of Information Technology &amp; Electronic Engineering</source>
          ,
          <volume>19</volume>
          (
          <issue>1</issue>
          ),
          <fpage>27</fpage>
          -
          <lpage>39</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>