<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Explain to whom? Putting the User in the Center of Explainable AI</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>
            <given-names>Alexandra</given-names>
            <surname>Kirsch</surname>
          </string-name>
          <xref ref-type="aff" rid="aff0"/>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Universität Tübingen</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>The ability to explain actions and decisions is often regarded as a basic ingredient of cognitive systems. But when researchers propose methods for making AI systems understandable, users are usually not involved or even mentioned. Yet the purpose of such methods is to make people willing to accept the decision of a machine or better able to interact with it. Therefore, I argue that the evaluation of explanations must involve some form of user testing.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Usability Principles</title>
      <p>
        The idea of building comprehensible technical devices is much older than the
current trend in AI. Many usability guidelines from the fields of design and
human-computer interaction include implicit or explicit explanation to users. Norman
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] proposes four basic design principles, all of which contain elements of
explainability:
      </p>
      <list list-type="bullet">
        <list-item>
          <p>conceptual model: users should understand the underlying mechanisms of the system;</p>
        </list-item>
        <list-item>
          <p>visibility: all functionality should be visible, a kind of explanation of what the system is capable of;</p>
        </list-item>
        <list-item>
          <p>mapping: the visible elements should intuitively map to functionality;</p>
        </list-item>
        <list-item>
          <p>feedback: the user should always be informed about the system's state.</p>
        </list-item>
      </list>
      <p>Thus, any device should be designed in a way that makes users aware of the
system's capabilities and current state. This can be interpreted as a kind of
explanation; one could also say that such a system needs no additional
explanation. For AI researchers this means that 1) comprehensible systems can
often be built by adhering to well-known usability principles, and 2) the design and
usability of the overall system often matter more than AI features for an
overall positive user experience.</p>
    </sec>
    <sec id="sec-2">
      <title>Legibility</title>
      <p>Embodied and virtual agents differ from typical human-computer interaction
in that they can perform more pronounced actions and often have additional
sensing capabilities. What is an explanation in this context? Of course, such
agents could generate explanations of their actions, possibly in natural language.
But in many cases a constant stream of explanations may annoy users, and its
effectiveness is questionable. When we look at human-human interaction, we
hardly ever need to explain our actions to others. So why should a machine do
so?</p>
      <p>
        A more natural way of interaction could be to implicitly communicate goals
and the necessity of actions. Lichtenthaler and Kirsch [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] define the term legibility
as follows:
      </p>
      <p>Robot behavior is legible if: (Factor 1) a human observer or interactor
is able to understand its intentions, and (Factor 2) the behavior met the
expectations of the human observer or interactor.</p>
      <p>This definition centers on the human observer, implying that legibility can
only be determined by experiments involving users. Such tests can again be
inspired by standard usability testing, but the quality criteria and setup must
often be adapted to the task and embodied situation of the agent. In addition,
if physical robots are involved, there are additional questions of how to ensure
the safety of participants.</p>
      <p>
        As an example, consider the task of legible robot navigation [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Robot
navigation has traditionally been treated as a purely technical problem, but with
humans sharing the space of the robot or directly interacting with it, legibility
becomes an issue. But then the quality of a navigation act cannot just be
characterized by its success and possibly the time needed to reach the goal point;
one must also determine whether human observers can indeed infer the goal point
and whether their expectations are met [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
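      <p>
        To make this concrete, the following minimal sketch turns the two factors of the
legibility definition above into quantitative user-study measures: goal-inference
accuracy for Factor 1 and an expectation rating for Factor 2. The data format and
all names are illustrative assumptions, not taken from the cited work.
      </p>
      <preformat>
# Minimal sketch: legibility measures from user-study data. The Trial
# format and all names are hypothetical, not from the cited experiments.
from dataclasses import dataclass

@dataclass
class Trial:
    true_goal: str           # goal point the robot actually navigated to
    predicted_goal: str      # goal point the observer inferred (Factor 1)
    expectation_rating: int  # 1 (unexpected) .. 5 (as expected) (Factor 2)

def legibility_scores(trials):
    """Return (goal-inference accuracy, mean expectation rating)."""
    accuracy = sum(t.true_goal == t.predicted_goal for t in trials) / len(trials)
    expectation = sum(t.expectation_rating for t in trials) / len(trials)
    return accuracy, expectation

trials = [Trial("kitchen", "kitchen", 5), Trial("kitchen", "door", 2)]
print(legibility_scores(trials))  # (0.5, 3.5)
      </preformat>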
      <p>
        Performing user tests is time-consuming. It would therefore be desirable to have
a general set of measures that determine legibility but can be assessed
without direct user involvement. An experiment by Papenmeier et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] identifies
two factors that influence the legibility of robot navigation: 1) the movement
direction: a robot looking towards the direction it is moving in (or towards
the goal point; both were identical in the experiment) is perceived as more
autonomous than one with a different orientation while moving; 2) the velocity
profile: deceleration of the robot causes surprise in human observers (as measured
by a viewing-time paradigm), while acceleration and constant velocities do not.
      </p>
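      <p>
        As an illustration of such user-free proxy measures, the following hypothetical
sketch computes the two factors from a recorded trajectory: the alignment of the
robot's heading with its movement direction, and the number of pronounced
decelerations. The function names and the threshold are assumptions made for
illustration, not measures from the cited experiment.
      </p>
      <preformat>
# Hypothetical proxies for the two factors reported in [6]; not code
# from the cited experiment.
import math

def heading_alignment(headings, positions):
    """Mean cosine between heading and movement direction (1.0 = aligned)."""
    scores = []
    for h, (p0, p1) in zip(headings, zip(positions, positions[1:])):
        move = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
        scores.append(math.cos(h - move))
    return sum(scores) / len(scores)

def deceleration_events(speeds, threshold=0.1):
    """Count slowdowns above `threshold` per step; each may surprise observers."""
    return sum(1 for v0, v1 in zip(speeds, speeds[1:]) if v0 - v1 > threshold)
      </preformat>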
    </sec>
    <sec id="sec-3">
      <title>Explainability and Interaction</title>
      <p>Legibility as used above is a criterion for passive interaction: even though a
person has no direct contact with the agent, the behavior should be legible. But
it is also a prerequisite for direct interaction; for example, a person trying to
initiate an interaction would have to be able to predict the robot's movements.
But direct interaction may require more than just a basic, implicit understanding
of the other's intentions; it may at times require a more explicit explanation.</p>
      <p>
        Decision methods can be designed in a way that resembles human decision
making. The Heuristic Problem Solver [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and the FORR architecture [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] follow
closely the way that humans make decisions: they explicitly generate alternatives
and evaluate them. In this way, an explanation is generated along with the
solution: the alternatives that were considered as well as the reasons for choosing
one can directly be communicated to users [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
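      <p>
        A minimal sketch of this idea, assuming a simple weighted-heuristics decision
step (not the actual HPS or FORR implementation): because the alternatives and
the per-heuristic ratings are retained, the explanation falls out of the decision
itself.
      </p>
      <preformat>
# Illustrative sketch of explicit generate-and-evaluate decision-making;
# all names are hypothetical, not the cited systems' code.
def decide(alternatives, heuristics):
    """Score each alternative with all heuristics and keep the reasons."""
    scored = []
    for alt in alternatives:
        reasons = {h.__name__: h(alt) for h in heuristics}
        scored.append((sum(reasons.values()), alt, reasons))
    total, best, reasons = max(scored, key=lambda s: s[0])
    explanation = f"Chose {best} (score {total}) because " + ", ".join(
        f"{name} rated it {score}" for name, score in reasons.items())
    return best, explanation
      </preformat>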
      <p>In addition, such an explicit decision-making paradigm enables an interactive
process of humans and machines jointly determining a solution. It is practically
impossible to model all the knowledge a person has with respect to a task. But
if people can interact with the decision process of the machine, they can use this
additional knowledge without having to formalize all of it. Thus, a person could
propose or delete alternatives, or change the evaluations of alternatives.</p>
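      <p>
        Continuing the sketch above, such interaction hooks could look as follows; the
scenario and all names are again hypothetical.
      </p>
      <preformat>
# Hypothetical interaction on top of decide(): a person adds and removes
# alternatives and overrides the machine's evaluation of one of them.
def shortness(route):             # toy machine heuristic: prefers route "A"
    return 2 if route == "A" else 1

alternatives = ["A", "B"]
alternatives.append("C")          # person proposes a new alternative
alternatives.remove("B")          # person deletes an implausible one

overrides = {"C": 5}              # person re-rates an alternative
def adjusted(route):
    return overrides.get(route, shortness(route))

best, why = decide(alternatives, [adjusted])
print(best)  # "C", chosen for the human-adjusted rating
print(why)   # "Chose C (score 5) because adjusted rated it 5"
      </preformat>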
      <p>Here again, the question is whether the explanation is adequate. Ideally,
the machine should provide enough information for a person to be satisfied
with the decision, or even be willing to take responsibility for it. In the case of
interaction, a possible measure is whether the joint decision-making is more
efficient and effective than a purely computational or purely human one.</p>
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
      <p>The ability to explain one's decisions and actions is often unquestioningly
required from cognitive systems. But the frequency, form and implementation of
explanations depend on the specific application and context. In many cases,
standard feedback mechanisms known from human-computer interaction are all
that is needed. Other applications may demand that users are able to identify
the source of errors. Still others may need an explicit dialogue between human
and machine that allows the machine to request help if necessary. Explanations
can take different forms, from standard interaction methods of flashing lights,
sounds and graphical displays, to language or the display of legible actions. In the
last section I have mentioned systems that generate an explanation along with
the solution. An alternative is the post-hoc generation of explanations for black-box
algorithms, which has become popular with the rise of statistical machine
learning.</p>
      <p>In all respects, the users are the yardstick. An explanation is not a
mathematical construct; an explanation is good if people find it helpful in the specific
context. Therefore, the AI community should expand its evaluation metrics
beyond optimization criteria to user-centered measures and evaluation procedures.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Brachman</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Systems that know what they're doing</article-title>
          .
          <source>IEEE Intelligent Systems</source>
          (November/December
          <year>2002</year>
          )
          <fpage>67</fpage>
          –
          <lpage>71</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Norman</surname>
            ,
            <given-names>D.A.</given-names>
          </string-name>
          :
          <article-title>The Design of Everyday Things</article-title>
          . Basic Books, Inc., New York, NY, USA (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Lichtenthaler</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kirsch</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Legibility of Robot Behavior: A Literature Review</article-title>
          . Preprint at https://hal.archives-ouvertes.fr/hal-01306977 (April
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Kruse</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pandey</surname>
            ,
            <given-names>A.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alami</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kirsch</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Human-aware robot navigation: A survey</article-title>
          .
          <source>Robotics and Autonomous Systems</source>
          <volume>61</volume>
          (
          <issue>12</issue>
          ) (
          <year>2013</year>
          )
          <fpage>1726</fpage>
          –
          <lpage>1743</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Lichtenthaler</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lorenz</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Karg</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kirsch</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Increasing perceived value between human and robots – measuring legibility in human aware navigation</article-title>
          .
          <source>In: IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO)</source>
          (
          <year>2012</year>
          )
          <fpage>89</fpage>
          –
          <lpage>94</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Papenmeier</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Uhrig</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kirsch</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Human understanding of robot motion: The role of velocity and orientation</article-title>
          . Preprint at https://osf.io/tphnu/ (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Kirsch</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Heuristic decision-making for human-aware navigation in domestic environments</article-title>
          .
          <source>In: 2nd Global Conference on Artificial Intelligence (GCAI)</source>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Epstein</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aroor</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Evanusa</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sklar</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Simon</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Navigation with learned spatial affordances</article-title>
          . (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Korpan</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Epstein</surname>
            ,
            <given-names>S.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aroor</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dekel</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>WHY: Natural explanations from a robot navigator</article-title>
          . In:
          <source>AAAI 2017 Fall Symposium on Natural Communication for Human-Robot Collaboration</source>
          . (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>