<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>IUI Workshops'19, March 20, 2019, Los Angeles, USA</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Horses For Courses: Making The Case For Persuasive Engagement In Smart Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>S. Stumpf</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Centre for HCI Design, City, University of London, London</institution>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <abstract>
        <p>Current thrusts in explainable AI (XAI) have focused on using interpretability or explanatory debugging as frameworks for developing explanations. We argue that for some systems a different paradigm - persuasive engagement - needs to be adopted, in order to affect trust and user satisfaction. In this paper, we will briefly provide an overview of the current approaches to explain smart systems and their scope of application. We then introduce the theoretical basis for persuasive engagement, and show through a use case how explanations might be generated. We then discuss future work that might shed more light on how to best explain different kinds of smart systems.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>CCS CONCEPTS</title>
      <p>• Human-centered computing → HCI theory, concepts and
models • Human-centered computing → Interaction design
theory, concepts and paradigms</p>
    </sec>
    <sec id="sec-2">
      <title>INTRODUCTION</title>
      <p>
        Explainable AI (XAI) has gained attention in recent years, with
significant research efforts being expended to investigate how to
generate interpretable explanations [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ][
        <xref ref-type="bibr" rid="ref30">30</xref>
        ][
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], how to manage
and structure the explanation design process [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ][
        <xref ref-type="bibr" rid="ref36">36</xref>
        ], and the
principles and important concepts underlying various
approaches to provide explanations for smart systems [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ][
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] in
order to increase user satisfaction [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ][
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], user trust and/or
reliance [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], decrease misuse or disuse [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], make users’ mental
models more sound [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], or more deeply involve the user in
interactive machine learning, human-in-the-loop learning, and
decision-making [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ][
        <xref ref-type="bibr" rid="ref38">38</xref>
        ][
        <xref ref-type="bibr" rid="ref13">13</xref>
        ][
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. A long-standing focus of
research in XAI has been what and how to explain to users of AI
[
        <xref ref-type="bibr" rid="ref34">34</xref>
        ][
        <xref ref-type="bibr" rid="ref26">26</xref>
        ][
        <xref ref-type="bibr" rid="ref32">32</xref>
        ][
        <xref ref-type="bibr" rid="ref20">20</xref>
        ][
        <xref ref-type="bibr" rid="ref18">18</xref>
        ][
        <xref ref-type="bibr" rid="ref35">35</xref>
        ][
        <xref ref-type="bibr" rid="ref27">27</xref>
], both in terms of content (e.g. data,
details of the algorithm used) and presentation (e.g. textual,
graphical, or visualizations).
      </p>
      <p>
        However, there is increasing evidence that explanations
might have differing and even conflicting effects on users [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ],
and that they have to be carefully crafted to the context in which
explanations are provided [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ][
        <xref ref-type="bibr" rid="ref33">33</xref>
        ]. This position paper reviews
existing XAI frameworks which currently shape the design and
deployment of explanations. We will show that these
frameworks rest on underlying assumptions that make them
unsuitable for some situations. Instead, designers and developers
would do well to consider the purpose and intended effects of
explanations that are provided, in order to inform the content
and presentation. We will introduce persuasive engagement as
an alternative framework for shaping explanations, and provide
a use case that shows how explanations arise from this
framework. We close by discussing the road ahead for work in
XAI and potential future work investigating the persuasive
engagement framework.
      </p>
    </sec>
    <sec id="sec-3">
      <title>EXISTING EXPLANATION FRAMEWORKS</title>
      <p>There are currently two main frameworks that guide how
researchers shape explanations: interpretability (sometimes also
called intelligibility or transparency) and explanatory debugging.
We will provide a brief overview of each of these frameworks
and show that they make various assumptions that determine
the contexts in which they might be usefully deployed. Table 1
gives an overview of the main differences between these two
frameworks.</p>
    </sec>
    <sec id="sec-4">
      <title>The Interpretability Framework</title>
      <p>
        Interpretability [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] applies to machine learning systems that
have “the ability to explain or to present in understandable terms
to a human.” It has been argued that only those systems in
which incompleteness arises in optimization or evaluation
require an explanation; systems which do not have “significant
consequences for unacceptable results” or which are
“commonplace” will not need an explanation [
        <xref ref-type="bibr" rid="ref4">4</xref>
]. Once an AI system is
interpretable, other desirable aspects of AI systems, such as
fairness, reliability, and trust, are expected to follow.
      </p>
      <p>
        Aligned with this framework is work developed in the
context of context-aware and pervasive systems [
        <xref ref-type="bibr" rid="ref19">19</xref>
], which sense,
learn and adapt themselves to their environment and users. In
this context, a number of explanation types, such as What,
Certainty, Why, Why Not and Inputs, have been identified that
should be presented to users in order to increase interpretability.
Explanations are judged on their quality when compared to
human explanations, and thus the main thrust of research in this
framework is to find generic dimensions of interpretability that
could lead to quality being optimized, such as how well patterns
in data or reasons for specific decisions are communicated, how
easily biases and errors are identified, and how much user
information processing constraints are taken into account.
Working within this framework, research efforts have
concentrated on how best to expose the workings of AI
algorithms to their users, either through making algorithms more
interpretable (e.g. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]) or investigating ways in which patterns,
data, biases, etc. could be communicated to users (e.g. [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ][
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]).
      </p>
    </sec>
    <sec id="sec-5">
      <title>The Explanatory Debugging Framework</title>
      <p>
        A different approach to providing explanations is the
framework of explanatory debugging [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Key aims in this
framework are to help the user identify the “bugs” in machine
learning and communicate enough of the machine learning
system so that the user can make targeted and useful changes to
improve the system to address these bugs. Explanations are
provided to users in order to build better mental models of how
the intelligent system behaves to support interactive machine
learning. Ideally, this is also associated with increased user
satisfaction if system performance improves, for example by the
system personalizing itself to user preferences or making better
decisions, but this is only a corollary of the main aim of
improving system performance. Research has suggested that
explanations should be presented iteratively and be as sound and
complete as possible while not overwhelming the user; the user
feedback should be able to incrementally modify the system
behavior in a meaningful way while also being reversible [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
Particular ways to expose the logic of these systems in aid of
interactive machine learning have been investigated, including
how to allow the user to interact with the explanations
to provide feedback to the system
[
        <xref ref-type="bibr" rid="ref32">32</xref>
        ][
        <xref ref-type="bibr" rid="ref16">16</xref>
        ][
        <xref ref-type="bibr" rid="ref8">8</xref>
        ][
        <xref ref-type="bibr" rid="ref1">1</xref>
        ][
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
    </sec>
    <sec id="sec-6">
      <title>PERSUASIVE ENGAGEMENT</title>
      <p>
        We are not suggesting that one of these frameworks is better
than the other; in fact, we argue that the choice of framework is
dependent on the context and purpose in which explanations are
to be deployed. There is not one right explanation framework
and instead we need to consider the best ‘horses for courses’.
There is some evidence [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] that not all applications need an
explanation which would accord with the interpretability
framework. We argue that neither framework serves well smart
systems that sit outside of its scope: those that might
have “inconsequential” effects, those that are commonplace but
need to gain the trust of users, or ones that do not learn from
user interactions. For example, many smart heating systems do
not have “significant consequences for unacceptable results”; all
you do is change the heating setting. Siri’s and Alexa’s mistakes
provide for much hilarity and viral Internet memes but rarely do
we want to turn to interpretability or explanatory debugging
frameworks for creating explanations for them. Eiband et al.’s [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]
work on a fitness app also does not fit neatly within either of
these existing frameworks. Yet, users (and industry developing
these applications) want explanations for these kinds of systems,
especially if they go wrong.
      </p>
      <p>
        We have previously argued [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ] that these kinds of systems
need to be compatible with constrained engagement [
        <xref ref-type="bibr" rid="ref38">38</xref>
        ], where
the user can engage with the system to input their preferences or
override the system if necessary, but communication from the
system is constrained so that it does not overwhelm the user or
push itself to the front. The main aim of the communication
between the system and the user is to increase user trust and
satisfaction.
Explanations in these situations about system decisions need to
be as concise and light-weight as possible and do not need to be
as detailed as in the interpretability and explanatory debugging
frameworks that hope to increase the understanding of users.
We argue that to help shape explanations for these kinds of
applications and situations, a framework of persuasive
engagement might be helpful. Table 2 outlines the main aspects
of persuasive engagement. This framework draws heavily on
previous seminal work in argumentation and rhetoric, which
will be outlined next.
      </p>
      <p>[Table 2. Main aspects of the persuasive engagement framework, compared along: Context of Use, Main Goals, Secondary Goals, Explanation design – What to include, and Explanation design – How to present.]</p>
    </sec>
    <sec id="sec-7">
      <title>Argumentation and Rhetoric</title>
      <p>
        Previous work in AI using argumentation approaches [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ][
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]
[
        <xref ref-type="bibr" rid="ref23">23</xref>
        ][
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] has mainly focused on how to represent and reason
about decisions, to generate explanations automatically using
arguments for and against a decision, or how to draw on
inference categories to enrich the persuasiveness of
explanations. In contrast, our work uses argumentation and
rhetoric to provide guidance about what to include in an
explanation, and possibly how to present it.
      </p>
      <p>
        Argumentation has been significantly influenced by the work
of Toulmin [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ] who proposed that an argument has the
structure shown in Figure 1. The most basic form of an argument
is data (also known as premises or facts) and its link to a
qualified conclusion (i.e. the conclusion could be more or less
certain), and it usually suffices because it draws on accepted
inference steps for the targeted audience. An argument is thus a
set of premises that support a conclusion with some degree of
plausibility; an explanation contains arguments for and against
the conclusion, often without needing to give the actual details
of the inference step [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. Rhetorical argumentation is concerned
with “increasing the adherence of a particular audience” to a
conclusion [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ] and therefore focuses its research on what
inference steps are persuasive for certain audiences [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ].
      </p>
      <p>
        In the argument structure proposed by [
        <xref ref-type="bibr" rid="ref37">37</xref>
], further ‘why’
questions by the person the argument is directed at might trigger
additional elements to be provided: warrants and backing
provide further reasons as to why the inference is valid, whereas
rebuttals might be drawn out that affect the certainty of the
conclusion.
      </p>
      <p>[Figure 1. Toulmin’s argument structure: Data leads, via Warrants (supported by Backing), to a Qualifier and Conclusion, subject to Rebuttals.]</p>
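      <p>To make this structure concrete, the following minimal sketch (in Python) models an argument as a simple data type; the field and method names are our own illustrative choices, not part of [<xref ref-type="bibr" rid="ref37">37</xref>]:</p>
      <preformat>
from dataclasses import dataclass, field
from typing import List

@dataclass
class ToulminArgument:
    # One argument in Toulmin's structure; the optional elements surface
    # only in response to further 'why' questions.
    data: List[str]                 # premises or facts
    conclusion: str                 # the claim being argued for
    qualifier: str = "probably"     # how certain the conclusion is
    warrants: List[str] = field(default_factory=list)   # why the inference holds
    backing: List[str] = field(default_factory=list)    # support for the warrants
    rebuttals: List[str] = field(default_factory=list)  # conditions that weaken it

    def basic_form(self) -> str:
        # The most basic argument: data linked to a qualified conclusion.
        return f"{'; '.join(self.data)}; so, {self.qualifier}, {self.conclusion}."
      </preformat>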
    </sec>
    <sec id="sec-8">
      <title>Application to a Smart Heating Use Case</title>
      <p>We now present how to generate an explanation within the
persuasive engagement framework, with reference to the
argument structure presented in the previous section (Table 3).</p>
      <sec id="sec-8-1">
        <title>Present in easily understandable form</title>
        <p>To generate an explanation for a decision (i.e. the conclusion) in
this framework, we simply expose the inputs (i.e. the data) that
are used to make the decision and the reason for making the
decision or behavior (i.e. inference step). Only if the user
requests more information does the explanation provide further,
more detailed input values (i.e. warrants, backing, and rebuttals).
The inference step draws on reasons that the intended user
group finds “agreeable” or persuasive, and thus might change
depending on the targeted user group. Ideally, these
explanations are in a form that the intended user will easily
understand, such as text, or simple graphics or visualization, etc.</p>
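        <p>As a hedged sketch of this layering, an explanation might be assembled along the following lines; the function and parameter names are hypothetical, not the deployed system’s API:</p>
        <preformat>
from typing import Dict, Sequence

def explain(decision: str, reason: str, inputs: Dict[str, str],
            detail: Sequence[str] = (), want_detail: bool = False) -> str:
    # First layer: the persuasive reason (inference step) plus the
    # inputs (data) that the decision draws on.
    text = f"{decision} {reason}\nBased on: " + ", ".join(
        f"{name}: {value}" for name, value in inputs.items())
    # Deeper layers (warrants, backing, rebuttals) appear only when
    # the user asks a further 'why' question.
    if want_detail:
        text += "\n" + "\n".join(detail)
    return text
        </preformat>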
        <p>We now present a worked use case in smart heating systems
for persuasive engagement. Our research investigated increasing
trust and understanding of the smart heating system, specifically
the app that allowed users to manage and control their heating
(Figure 2).</p>
        <p>Our program of work was set in a UK project to understand
the overall value and user experience of hybrid heat pump
deployment in demand-response settings. This project,
FREEDOM1 (Flexible REsidential Energy Demand Optimisation
and Management), was led by Passiv Systems Ltd. and funded by
Western Power Distribution and Wales and West Utilities.
1 https://www.westernpower.co.uk/projects/freedom</p>
        <p>
          Our endeavor sought to explain system behavior through
transparency design [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. The results of a previous user study [
          <xref ref-type="bibr" rid="ref31">31</xref>
          ]
indicated that the system needed to provide explanations to
users when unexpected behavior occurred, at the point of or
even prior to starting this behavior. We also found that textual
explanations and simple visualizations were preferred by users,
and they wanted reasons for system behavior that included
reference to their comfort and cost.
        </p>
        <p>To start, we collaborated with an expert heating engineer
employed by our collaborative partner, Passiv Systems Ltd., to
generate a list of all system behaviors (enumerated in the sketch
below). These were when the system decides to:
• pre-heat the home to reach a temperature setpoint for a
period in the user’s schedule when they have indicated
that they will be at home;
• heat to maintain a temperature setpoint if the user is at
home;
• not heat and run at a lower temperature than the
setpoint if the user is at home;
• heat at a higher temperature than the one set for when
the user is at home;
• switch between heat sources;
• not heat when the user is not at home or asleep;
• implement demand-response (i.e. shift the heating
pattern due to network demand and variable energy
tariffs).</p>
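        <p>For illustration, these seven behaviors could be enumerated as follows; the identifier names are ours, not Passiv Systems’:</p>
        <preformat>
from enum import Enum, auto

class Behavior(Enum):
    PRE_HEAT = auto()            # reach a setpoint before a scheduled period
    MAINTAIN_SETPOINT = auto()   # hold the setpoint while the user is home
    RUN_BELOW_SETPOINT = auto()  # allow a lower temperature while home
    RUN_ABOVE_SETPOINT = auto()  # heat above the temperature set for home
    SWITCH_HEAT_SOURCE = auto()  # switch between the available heat sources
    NO_HEAT = auto()             # no heating when away or asleep
    DEMAND_RESPONSE = auto()     # shift heating for network demand and tariffs
        </preformat>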
        <p>
          For each of these behaviors, we then needed to explain their
respective inputs (i.e. the data that is drawn on to make a
conclusion). For each of the 7 decisions, we again interviewed
the heating expert from Passiv Systems Ltd. to investigate the
inputs that the algorithm used to make each of the above
decisions. Pre-heating was one of the most complex behaviors in
terms of inputs and also one of the most misunderstood system
behaviors in a previous study [
          <xref ref-type="bibr" rid="ref31">31</xref>
]; all other decisions used
a considerably smaller variety of inputs. We therefore illustrate the
explanation design using this rich example. For pre-heating
the home, we found that the following inputs mattered (see the
sketch after this list):
• Current internal temperature;
• Current external temperature;
• Learnt properties around the rate of heating of the
home;
• Schedule and associated temperature setpoint;
• User preference to optimize comfort versus cost;
• 24-hour weather forecast;
• Tariff information for heat sources.
        </p>
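        <p>These inputs might be collected into a single structure that an explanation can draw on; this is a sketch under our own naming, not the algorithm’s actual representation:</p>
        <preformat>
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class PreHeatInputs:
    internal_temp_c: float             # current internal temperature
    external_temp_c: float             # current external temperature
    heating_rate_c_per_h: float        # learnt rate of heating of the home
    schedule: List[Tuple[str, float]]  # (period start, temperature setpoint)
    comfort_vs_cost: float             # preference: 0 = cost ... 1 = comfort
    forecast_24h_c: List[float]        # 24-hour weather forecast (temperature)
    tariffs: Dict[str, float]          # tariff information per heat source
        </preformat>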
        <p>Once we had all of this information, we began to iteratively
design and prototype the presentation of explanations for all
these behaviors, following the persuasive engagement
framework. In its simplest form, a textual explanation included
at least one statement explaining the reasons (i.e. inference step)
that gave a motivation for linking the data to the decision, and a
set of inputs that underlie the decision. For example, in the
textual explanation for pre-heating (Figure 3), we included the
overall motivation for pre-heating (Figure 3 A). We drew on
comfort reasons by drawing attention to “so you are comfortable
in the &lt;morning&gt;”. In addition, if a demand-response situation
arises where tariffs increase based on network demand, we add
an additional reason about reducing energy costs: “Plus, […] this
means pre-heating now is better value for you.” The interface
also lists the key components and data that the behavior was
based on, as shown in Figure 3 B.</p>
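        <p>A minimal sketch of the kind of template that could produce this text; the quoted wording follows the paper, while the function itself is our own illustration:</p>
        <preformat>
def preheat_explanation(period: str, demand_response: bool) -> str:
    # Comfort reason (Figure 3 A); the period slot is filled from the schedule.
    text = f"Pre-heating so you are comfortable in the {period}."
    if demand_response:
        # Cost reason, added only when tariffs rise with network demand.
        # The clause elided as [...] in the paper is not reproduced here.
        text += " Plus, [...] this means pre-heating now is better value for you."
    return text

print(preheat_explanation("morning", demand_response=True))
        </preformat>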
        <p>[Figure 3. Textual explanation for pre-heating in the app: (A) the motivating reasons; (B) the key components and data the behavior was based on.]</p>
        <p>The user can switch to a graphical explanation on request by
pressing a control at the bottom of the screen, thereby indicating
that they want to have a deeper explanation of the heating
system’s behavior, akin to asking a further ‘why’ question. A
graphical explanation visualizes the main inputs underlying the
system behavior with their concrete values. Each timeline shows
the current time in the middle of the x-axis. The left part of the
graph shows the input values up to the current time, on the right
is a projected forecast of what the system will do in the future,
shown partially transparent and in dashed lines, to indicate
uncertainty. For example, in pre-heating (Figure 4) a wide
variety of data determines pre-heating to reach an indoor
temperature setpoint. The graph depicts the current schedule, the
current outdoor temperature, the tariff information (in case of
demand-response situations), and the current trade-off setpoints for
comfort versus savings. It shows the period of time when the
system has been or will be pre-heating to achieve the set
temperature points when people are expected to be in the home.</p>
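        <p>A minimal matplotlib sketch (ours, not the deployed app) of one such timeline, with the current time centered and the forecast drawn dashed and semi-transparent to indicate uncertainty:</p>
        <preformat>
import matplotlib.pyplot as plt

hours = list(range(-6, 7))                  # hours relative to 'now'
temps = [16, 16, 16.5, 17, 18, 19, 19.5,    # observed indoor temperature
         20, 20.5, 21, 21, 21, 21]          # system's projection
split = hours.index(0) + 1                  # boundary at the current time

fig, ax = plt.subplots()
ax.plot(hours[:split], temps[:split], "b-", label="observed")
ax.plot(hours[split - 1:], temps[split - 1:], "b--", alpha=0.4,
        label="forecast (uncertain)")
ax.axvline(0, color="grey")                 # current time in the middle
ax.axvspan(-2, 1, color="orange", alpha=0.15, label="pre-heating period")
ax.set_xlabel("hours relative to now")
ax.set_ylabel("indoor temperature (°C)")
ax.legend()
plt.show()
        </preformat>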
      </sec>
    </sec>
    <sec id="sec-9">
      <title>DISCUSSION AND FUTURE WORK</title>
      <p>
We have described the main existing explanation frameworks
and their scope of application. We have introduced a new
framework – persuasive engagement – by drawing on
argumentation theory, and shown how it might be applied in a
use case. Our view fits well with how explanations are seen in
the social sciences as information about causality and
counterfactuals in answer to a ‘why’ question [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. In addition, it
comes closest to what is termed a “pragmatic view” of
explanations [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. We firmly believe that any advances in XAI
need to involve inter-disciplinary efforts to contribute new
thoughts and research directions.
      </p>
    </sec>
    <sec id="sec-10">
      <title>ACKNOWLEDGMENTS</title>
      <p>This work was supported by the FREEDOM project, funded by
Western Power Distribution, and Wales and West Utilities. We
thank Graeme Aymer from City, University of London, and Tom
Veli, Edwin Carter, Frasier Harding and Tim Cooper from Passiv
Systems Ltd. for their help with this research.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Saleema</given-names>
            <surname>Amershi</surname>
          </string-name>
          , James Fogarty, Ashish Kapoor, and
          <string-name>
            <given-names>Desney</given-names>
            <surname>Tan</surname>
          </string-name>
          .
          <year>2010</year>
          .
          <article-title>Examining multiple potential models in end-user interactive concept learning</article-title>
          .
          <source>In Proceedings of the 28th international conference on Human factors in computing systems</source>
          ,
          <volume>1357</volume>
          -
          <fpage>1360</fpage>
          . https://doi.org/10.1145/1753326.1753531
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Andrea</given-names>
            <surname>Bunt</surname>
          </string-name>
          , Matthew Lount, and
          <string-name>
            <given-names>Catherine</given-names>
            <surname>Lauzon</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Are explanations always important?: a study of deployed, low-cost intelligent interactive systems</article-title>
          .
          <source>In Proceedings of the 2012 ACM international conference on Intelligent User Interfaces (IUI '12)</source>
          ,
          <fpage>169</fpage>
          -
          <lpage>178</lpage>
          . https://doi.org/10.1145/2166966.2166996
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bussone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Stumpf</surname>
          </string-name>
          , and
          <string-name>
            <surname>D. O'Sullivan</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>The Role of Explanations on Trust and Reliance in Clinical Decision Support Systems</article-title>
          .
          <source>In 2015 International Conference on Healthcare Informatics</source>
          ,
          <fpage>160</fpage>
          -
          <lpage>169</lpage>
          . https://doi.org/10.1109/ICHI.2015.26
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Finale</given-names>
            <surname>Doshi-Velez</surname>
          </string-name>
          and
          <string-name>
            <given-names>Been</given-names>
            <surname>Kim</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Towards a rigorous science of interpretable machine learning</article-title>
          .
          <source>arXiv preprint arXiv:1702</source>
          .
          <fpage>08608</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Mary</surname>
            <given-names>T.</given-names>
          </string-name>
          <string-name>
            <surname>Dzindolet</surname>
            ,
            <given-names>Scott A.</given-names>
          </string-name>
          <string-name>
            <surname>Peterson</surname>
          </string-name>
          , Regina A.
          <string-name>
            <surname>Pomranky</surname>
            , Linda G. Pierce, and
            <given-names>Hall P.</given-names>
          </string-name>
          <string-name>
            <surname>Beck</surname>
          </string-name>
          .
          <year>2003</year>
          .
          <article-title>The role of trust in automation reliance</article-title>
          .
          <source>International Journal of Human-Computer Studies 58</source>
          ,
          <issue>6</issue>
          :
          <fpage>697</fpage>
          -
          <lpage>718</lpage>
          . https://doi.org/10.1016/S1071-5819(03)00038-7
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Malin</given-names>
            <surname>Eiband</surname>
          </string-name>
          , Hanna Schneider,
          <string-name>
            <given-names>Mark</given-names>
            <surname>Bilandzic</surname>
          </string-name>
          , Julian Fazekas-Con,
          <string-name>
            <given-names>Mareike</given-names>
            <surname>Haug</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Heinrich</given-names>
            <surname>Hussmann</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Bringing Transparency Design into Practice</article-title>
          .
          <source>In 23rd International Conference on Intelligent User Interfaces (IUI '18)</source>
          ,
          <fpage>211</fpage>
          -
          <lpage>223</lpage>
          . https://doi.org/10.1145/3172944.3172961
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Malin</given-names>
            <surname>Eiband</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Hanna</given-names>
            <surname>Schneider</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Daniel</given-names>
            <surname>Buschek</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Normative vs. Pragmatic: Two Perspectives on the Design of Explanations in Intelligent Systems</article-title>
          .
          <source>In Joint Proceedings of the ACM IUI 2018 Workshops co-located with the 23rd ACM Conference on Intelligent User Interfaces (ACM IUI 2018)</source>
          . http://ceur-ws.org/Vol-2068/exss7.pdf
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Groce</surname>
          </string-name>
          , T. Kulesza, Chaoqiang Zhang, S. Shamasunder, M. Burnett, WengKeen Wong,
          <string-name>
            <given-names>S.</given-names>
            <surname>Stumpf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shinsel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bice</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>McIntosh</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>You Are the Only Possible Oracle: Effective Test Selection for End Users of Interactive Machine Learning Systems</article-title>
          .
          <source>IEEE Transactions on Software Engineering</source>
          <volume>40</volume>
          ,
          <issue>3</issue>
          :
          <fpage>307</fpage>
          -
          <lpage>323</lpage>
          . https://doi.org/10.1109/TSE.2013.59
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Jonathan</surname>
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Herlocker</surname>
          </string-name>
          , Joseph A.
          <string-name>
            <surname>Konstan</surname>
            ,
            <given-names>and John</given-names>
          </string-name>
          <string-name>
            <surname>Riedl</surname>
          </string-name>
          .
          <year>2000</year>
          .
          <article-title>Explaining collaborative filtering recommendations</article-title>
          .
          <source>In Proceedings of the 2000 ACM conference on Computer supported cooperative work</source>
          ,
          <fpage>241</fpage>
          -
          <lpage>250</lpage>
          . https://doi.org/10.1145/358916.358995
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Yuening</surname>
            <given-names>Hu</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jordan</surname>
            Boyd-Graber,
            <given-names>Brianna</given-names>
          </string-name>
          <string-name>
            <surname>Satinoff</surname>
            , and
            <given-names>Alison</given-names>
          </string-name>
          <string-name>
            <surname>Smith</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>Interactive topic modeling</article-title>
          .
          <source>Machine Learning</source>
          <volume>95</volume>
          , 3:
          <fpage>423</fpage>
          -
          <lpage>469</lpage>
          . https://doi.org/10.1007/s10994-013-5413-0
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Ashish</surname>
            <given-names>Kapoor</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Bongshin</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Desney</given-names>
            <surname>Tan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Eric</given-names>
            <surname>Horvitz</surname>
          </string-name>
          .
          <year>2010</year>
          .
          <article-title>Interactive optimization for steering machine classification</article-title>
          .
          <source>In Proceedings of the 28th international conference on Human factors in computing systems</source>
          ,
          <volume>1343</volume>
          -
          <fpage>1352</fpage>
          . https://doi.org/10.1145/1753326.1753529
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Been</surname>
            <given-names>Kim</given-names>
          </string-name>
          , Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, and
          <string-name>
            <given-names>Rory</given-names>
            <surname>Sayres</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)</article-title>
          .
          <source>In International Conference on Machine Learning</source>
          ,
          <fpage>2668</fpage>
          -
          <lpage>2677</lpage>
          . Retrieved December 11,
          <year>2018</year>
          from http://proceedings.mlr.press/v80/kim18d.html
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Todd</surname>
            <given-names>Kulesza</given-names>
          </string-name>
          , Margaret Burnett, Simone Stumpf,
          <string-name>
            <surname>Weng-Keen</surname>
            <given-names>Wong</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shubhomoy Das</surname>
          </string-name>
          ,
          <string-name>
            <surname>Alex Groce</surname>
          </string-name>
          , Amber Shinsel, Forrest Bice, and
          <string-name>
            <surname>Kevin McIntosh</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>Where Are My Intelligent Assistant's Mistakes? A Systematic Testing Approach</article-title>
          . In
          <source>End-User Development</source>
          , Maria Francesca Costabile, Yvonne Dittrich, Gerhard Fischer and Antonio Piccinno (eds.)
          . Springer Berlin Heidelberg,
          <fpage>171</fpage>
          -
          <lpage>186</lpage>
          . Retrieved December 16, 2013 from http://link.springer.com/chapter/10.1007/978-3-642-21530-8_14
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Todd</surname>
            <given-names>Kulesza</given-names>
          </string-name>
          , Margaret Burnett,
          <string-name>
            <surname>Weng-Keen Wong</surname>
            , and
            <given-names>Simone</given-names>
          </string-name>
          <string-name>
            <surname>Stumpf</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Principles of Explanatory Debugging to Personalize Interactive Machine Learning</article-title>
          .
          <source>In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI '15)</source>
          ,
          <fpage>126</fpage>
          -
          <lpage>137</lpage>
          . https://doi.org/10.1145/2678025.2701399
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Todd</surname>
            <given-names>Kulesza</given-names>
          </string-name>
          , Simone Stumpf, Margaret Burnett, and
          <string-name>
            <given-names>Irwin</given-names>
            <surname>Kwan</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Tell me more?: the effects of mental model soundness on personalizing an intelligent agent</article-title>
          .
          <source>In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI '12)</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          . https://doi.org/10.1145/2207676.2207678
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Todd</surname>
            <given-names>Kulesza</given-names>
          </string-name>
          , Simone Stumpf, Margaret Burnett,
          <string-name>
            <surname>Weng-Keen</surname>
            <given-names>Wong</given-names>
          </string-name>
          , Yann Riche, Travis Moore, Ian Oberst, Amber Shinsel, and
          <string-name>
            <surname>Kevin McIntosh</surname>
          </string-name>
          .
          <year>2010</year>
          .
          <article-title>Explanatory Debugging: Supporting End-User Debugging of Machine-Learned Programs</article-title>
          .
          <source>In Proceedings of the 2010 IEEE Symposium on Visual Languages and Human-Centric Computing (VLHCC '10)</source>
          ,
          <fpage>41</fpage>
          -
          <lpage>48</lpage>
          . https://doi.org/10.1109/VLHCC.2010.15
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Todd</surname>
            <given-names>Kulesza</given-names>
          </string-name>
          , Simone Stumpf,
          <string-name>
            <surname>Weng-Keen</surname>
            <given-names>Wong</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Margaret M. Burnett</surname>
            , Stephen Perona,
            <given-names>Andrew</given-names>
          </string-name>
          <string-name>
            <surname>Ko</surname>
            , and
            <given-names>Ian</given-names>
          </string-name>
          <string-name>
            <surname>Oberst</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>Why-oriented End-user Debugging of Naive Bayes Text Classification</article-title>
          .
          <source>ACM Trans. Interact. Intell. Syst. 1</source>
          ,
          <issue>1</issue>
          : 2:1-2:31. https://doi.org/10.1145/2030365.2030367
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Brian</surname>
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>Lim and Anind</surname>
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Dey</surname>
          </string-name>
          .
          <year>2010</year>
          .
          <article-title>Toolkit to Support Intelligibility in Context-aware Applications</article-title>
          .
          <source>In Proceedings of the 12th ACM International Conference on Ubiquitous Computing (UbiComp '10)</source>
          ,
          <fpage>13</fpage>
          -
          <lpage>22</lpage>
          . https://doi.org/10.1145/1864349.1864353
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Brian</surname>
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>Lim and Anind</surname>
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Dey</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>Investigating Intelligibility for Uncertain Context-aware Applications</article-title>
          .
          <source>In Proceedings of the 13th International Conference on Ubiquitous Computing (UbiComp '11)</source>
          ,
          <fpage>415</fpage>
          -
          <lpage>424</lpage>
          . https://doi.org/10.1145/2030112.2030168
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Brian</surname>
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>Lim</surname>
          </string-name>
          ,
          <string-name>
            <surname>Anind K. Dey</surname>
            , and
            <given-names>Daniel</given-names>
          </string-name>
          <string-name>
            <surname>Avrahami</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Why and why not explanations improve the intelligibility of context-aware intelligent systems</article-title>
          .
          <source>In Proceedings of the 27th international conference on Human factors in computing systems (CHI '09)</source>
          ,
          <fpage>2119</fpage>
          -
          <lpage>2128</lpage>
          . https://doi.org/10.1145/1518701.1519023
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>Tim</given-names>
            <surname>Miller</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Explanation in Artificial Intelligence: Insights from the Social Sciences</article-title>
          .
          <source>arXiv:1706</source>
          .07269 [cs]. Retrieved from http://arxiv.org/abs/1706.07269
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Martin</given-names>
            <surname>Možina</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Arguments in Interactive Machine Learning</article-title>
          .
          <source>Informatica 42</source>
          ,
          <fpage>1</fpage>
          . Retrieved December 14, 2018 from http://www.informatica.si/ojs2.4.3/index.php/informatica/article/view/2231
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <surname>Martin</surname>
            <given-names>Možina</given-names>
          </string-name>
          , Jure Žabkar, and Ivan Bratko.
          <year>2007</year>
          .
          <article-title>Argument based machine learning</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>171</volume>
          ,
          <fpage>10</fpage>
          :
          <fpage>922</fpage>
          -
          <lpage>937</lpage>
          . https://doi.org/10.1016/j.artint.2007.04.007
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Sidra</surname>
            <given-names>Naveed</given-names>
          </string-name>
          , Tim Donkers, and
          <string-name>
            <given-names>Jürgen</given-names>
            <surname>Ziegler</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Argumentation-Based Explanations in Recommender Systems: Conceptual Framework and Empirical Results</article-title>
          .
          <source>In Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization (UMAP '18)</source>
          ,
          <fpage>293</fpage>
          -
          <lpage>298</lpage>
          . https://doi.org/10.1145/3213586.3225240
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>Raja</given-names>
            <surname>Parasuraman</surname>
          </string-name>
          and
          <string-name>
            <given-names>Victor</given-names>
            <surname>Riley</surname>
          </string-name>
          .
          <year>1997</year>
          . Humans and Automation: Use, Misuse, Disuse, Abuse.
          <source>Human Factors</source>
          <volume>39</volume>
          ,
          <issue>2</issue>
          :
          <fpage>230</fpage>
          -
          <lpage>253</lpage>
          . https://doi.org/10.1518/001872097778543886
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <surname>Michael</surname>
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Pazzani</surname>
          </string-name>
          .
          <year>2000</year>
          .
          <article-title>Representation of electronic mail filtering profiles: a user study</article-title>
          .
          <source>In IUI</source>
          ,
          <fpage>202</fpage>
          -
          <lpage>206</lpage>
          . https://doi.org/10.1145/325737.325843
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <surname>Sean</surname>
            <given-names>Penney</given-names>
          </string-name>
          , Jonathan Dodge, Claudia Hilderbrand,
          <string-name>
            <surname>Andrew Anderson</surname>
            ,
            <given-names>Logan</given-names>
          </string-name>
          <string-name>
            <surname>Simpson</surname>
            , and
            <given-names>Margaret</given-names>
          </string-name>
          <string-name>
            <surname>Burnett</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Toward Foraging for Understanding of StarCraft Agents: An Empirical Study</article-title>
          .
          <source>In 23rd International Conference on Intelligent User Interfaces (IUI '18)</source>
          ,
          <fpage>225</fpage>
          -
          <lpage>237</lpage>
          . https://doi.org/10.1145/3172944.3172946
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>Chaim</given-names>
            <surname>Perelman</surname>
          </string-name>
          and
          <string-name>
            <given-names>Louise</given-names>
            <surname>Olbrechts-Tyteca</surname>
          </string-name>
          .
          <year>1971</year>
          .
          <article-title>The New Rhetoric: a treatise on Argumentation</article-title>
          . University of Notre Dame Press.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>Iyad</given-names>
            <surname>Rahwan and Guillermo R. Simari</surname>
          </string-name>
          .
          <year>2009</year>
          . Argumentation in artificial intelligence. Springer.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>Marco</given-names>
            <surname>Tulio</surname>
          </string-name>
          <string-name>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Sameer</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>and Carlos</given-names>
            <surname>Guestrin</surname>
          </string-name>
          .
          <year>2016</year>
          . “
          <article-title>Why Should I Trust You?”: Explaining the Predictions of Any Classifier</article-title>
          .
          <source>In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16)</source>
          ,
          <fpage>1135</fpage>
          -
          <lpage>1144</lpage>
          . https://doi.org/10.1145/2939672.2939778
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>Simonas</given-names>
            <surname>Skrebe</surname>
          </string-name>
          and
          <string-name>
            <given-names>Simone</given-names>
            <surname>Stumpf</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>An exploratory study to design constrained engagement in smart heating systems</article-title>
          .
          <source>In Proceedings of the 31st British Human Computer Interaction Conference.</source>
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <surname>Simone</surname>
            <given-names>Stumpf</given-names>
          </string-name>
          , Vidya Rajaram,
          <string-name>
            <given-names>Lida</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <surname>Weng-Keen</surname>
            <given-names>Wong</given-names>
          </string-name>
          , Margaret Burnett, Thomas Dietterich,
          <string-name>
            <given-names>Erin</given-names>
            <surname>Sullivan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Jonathan</given-names>
            <surname>Herlocker</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Interacting meaningfully with machine learning systems: Three experiments</article-title>
          .
          <source>Int. J. Hum.- Comput. Stud</source>
          .
          <volume>67</volume>
          ,
          <issue>8</issue>
          :
          <fpage>639</fpage>
          -
          <lpage>662</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <surname>Simone</surname>
            <given-names>Stumpf</given-names>
          </string-name>
          , Simonas Skrebe, Graeme Aymer, and
          <string-name>
            <given-names>Julie</given-names>
            <surname>Hobson</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Explaining Smart Heating Systems to Discourage Fiddling with Optimized Behavior</article-title>
          .
          <source>In Joint Proceedings of the ACM IUI</source>
          <year>2018</year>
          <article-title>Workshops co-located with the 23rd ACM Conference on Intelligent User Interfaces (ACM IUI</article-title>
          <year>2018</year>
          ). http://ceur-ws.org/Vol-2068/exss13.pdf
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <surname>William</surname>
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Swartout</surname>
          </string-name>
          .
          <year>1983</year>
          .
          <article-title>XPLAIN: a system for creating and explaining expert consulting programs</article-title>
          .
          <source>Artif. Intell</source>
          .
          <volume>21</volume>
          ,
          <issue>3</issue>
          :
          <fpage>285</fpage>
          -
          <lpage>325</lpage>
          . https://doi.org/10.1016/S0004-3702(83)80014-9
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>J.</given-names>
            <surname>Talbot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kapoor</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Tan</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>EnsembleMatrix: interactive visualization to support machine learning with multiple classifiers</article-title>
          .
          <source>In Proceedings of the 27th international conference on Human factors in computing systems</source>
          ,
          <volume>1283</volume>
          -
          <fpage>1292</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>Nava</given-names>
            <surname>Tintarev</surname>
          </string-name>
          and
          <string-name>
            <given-names>Judith</given-names>
            <surname>Masthoff</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>Effective explanations of recommendations: user-centered design</article-title>
          .
          <source>In Proceedings of the 2007 ACM conference on Recommender systems</source>
          ,
          <volume>153</volume>
          -
          <fpage>156</fpage>
          . https://doi.org/10.1145/1297231.1297259
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>Stephen</given-names>
            <surname>Toulmin</surname>
          </string-name>
          .
          <year>1958</year>
          .
          <article-title>The Uses of Argument</article-title>
          . Cambridge University Press, Cambridge,UK.
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>Rayoung</given-names>
            <surname>Yang</surname>
          </string-name>
          and
          <string-name>
            <given-names>Mark W.</given-names>
            <surname>Newman</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Learning from a Learning Thermostat: Lessons for Intelligent Systems for the Home</article-title>
          .
          <source>In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '13)</source>
          ,
          <fpage>93</fpage>
          -
          <lpage>102</lpage>
          . https://doi.org/10.1145/2493432.2493489
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>