<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Enhancing user trust in automation through explanation dialog</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Rob Cole</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jim Jacobs</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michael J. Hirsch</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Robert L. Sedlmeyer</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Raytheon Company Network Centric Systems</institution>,
          <addr-line>Ft. Wayne, IN</addr-line>,
          <country country="US">U.S.A.</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Raytheon Company Intelligence and Information Systems</institution>,
          <addr-line>Orlando, FL</addr-line>,
          <country country="US">U.S.A.</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Indiana University - Purdue University, Department of Computer Science</institution>,
          <addr-line>Ft. Wayne, IN</addr-line>,
          <country country="US">U.S.A.</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <addr-line>State College, PA</addr-line>,
          <country country="US">U.S.A.</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Lack of trust in autonomy is a recurrent issue that is becoming increasingly acute as manpower reduction pressures increase. We address the socio-technical form of this trust problem through a novel decision explanation approach. Our approach employs a semantic representation to capture decision-relevant concepts as well as other mission-relevant knowledge, along with a reasoning approach that allows users to pose queries and get system responses that expose decision rationale to users. This representation enables a natural, dialog-based approach to decision explanation. It is our hypothesis that the transparency achieved through this dialog process will increase user trust in autonomous decisions. We tested our hypothesis in an experimental scenario set in the maritime autonomy domain. Participant responses on psychometric trust constructs were found to be significantly higher in the experimental group for the majority of constructs, supporting our hypothesis. Our results suggest the efficacy of incorporating a decision explanation facility in systems for which a socio-technical trust problem exists or might be expected to develop.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>Keywords-Semantic modeling; Maritime Autonomy; Trust in
Autonomy; Decision Explanation.</p>
      <p>Large organizations such as the Department of Defense rely
heavily on automation as a means of ensuring high-quality
product, as well as cost control through manpower reduction.
However, lack of user trust has repeatedly stood in the way of
widespread deployment. We have observed two fundamental
forms of the problem: the technical and the socio-technical
form. The technical form is characterized by user reservations
regarding the ability of a system to perform its mission due to
known or suspected technical defects. For example, an
automated detection process might have a very high false
positive rate, conditioning operators to simply ignore its
output. Trust in such a situation can only be achieved by
addressing the issue of excessive false detections, a technical
problem suggesting a purely technical solution. As another
example, consider a situation in which automation is
introduced into a purely manual process characterized by
decision making in high-pressure situations. In such a
situation, operators might reject automation in favor of the
trusted, manual process for purely non-technical reasons. In
other words, in the absence of any specific evidence of
limitations of the automation, the automation could
nonetheless be rejected for reasons stemming from the social
milieu in which the system operates. This is the
socio-technical form of the problem.</p>
      <p>This research was supported by Raytheon Corporate IR&amp;D.</p>
      <p>One might address the socio-technical problem through
education: train the operators with sufficient knowledge of
system specifications and design detail to erase doubts they
may have regarding the automation. Such an approach is
costly since every operator would have to be trained to a high
degree. Operators would essentially have to be system
specialists. Instead, we propose an approach intended for
non-specialist operators, stemming from the insight that the
socio-technical trust problem results from a lack of insight into
system decision rationale. If an operator can be made to
understand the why of system behavior, that operator can be
expected to trust the system in the future to a greater degree, if
the rationale given to the operator makes sense in the current
mission context.</p>
      <p>
        Explanation mechanisms in expert systems have focused on
the use of explicit representations of design logic and problem
solving strategies [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The early history of explanation in expert
systems saw the emergence of three types of approaches, as
described in Chandrasekaran, Tanner, and Josephson [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Type
1 systems explain how data matches local goals. Type 2
systems explain how knowledge can be justified [3]. Type 3
systems explain how control strategy can be justified [4]. A
more detailed description of these types is given by Saunders
and Dobbs [5, p. 1102]:
      </p>
      <p>Type 1 explanations are concerned with explaining why
certain decisions were or were not made during the
execution (runtime) of the system. These explanations use
information about the relationships that exist between
pieces of data and the knowledge (sets of rules for example)
available for making specific decisions or choices based on
this data. For example, Rule X fired because Data Y was
found to be true.</p>
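The Type 1 pattern ("Rule X fired because Data Y was found to be true") can be illustrated with a minimal rule trace. This is our sketch of the general idea, not code from any system described here; the rule and fact names are hypothetical.

```python
# Minimal sketch of a Type 1 (runtime trace) explanation: record which
# rules fired on which data, then answer "why?" from the trace.
def run_rules(facts, rules):
    trace = []  # (rule_name, triggering_facts) pairs
    for name, condition, action in rules:
        matched = [f for f in condition if facts.get(f)]
        if len(matched) == len(condition):  # all conditions hold
            facts.update(action)
            trace.append((name, matched))
    return facts, trace

def explain(trace, rule_name):
    for name, matched in trace:
        if name == rule_name:
            return f"Rule {name} fired because {', '.join(matched)} held."
    return f"Rule {rule_name} did not fire."

rules = [("RuleX", ["DataY"], {"alert": True})]
facts, trace = run_rules({"DataY": True}, rules)
print(explain(trace, "RuleX"))  # Rule RuleX fired because DataY held.
```

The explanation is produced entirely from the runtime trace, with no knowledge about the rules themselves, which is what distinguishes Type 1 from Types 2 and 3.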
      <p>Type 2 explanations are concerned with explaining the
knowledge base elements themselves. In order to do this,
explanations of this type must look at knowledge about
knowledge. For example, knowledge may exist about a rule
that identifies this rule (this piece of knowledge) as being
applicable ninety percent of the time. A type 2 explanation
could use this information (this knowledge about
knowledge) to justify the use of this rule. Other knowledge
used in providing this type of explanation consists of
knowledge that is used to develop the ES but which does
not affect the operation of the system. This type of
knowledge is referred to as deep knowledge.</p>
    </sec>
    <sec id="sec-2">
      <title/>
      <p>Type 3 explanations are concerned with explaining the
runtime control strategy used to solve a particular problem.
For example, explaining why one particular rule (or set of
rules) was fired before some other rule is an explanation
about the control strategy of the system. Explaining why a
certain question (or type of question) was asked of the user
in lieu of some other logical or related choice is another
example. Therefore, type 3 explanations are concerned with
explaining how and why the system uses its knowledge the
way it does, a task that also requires the use of deep
knowledge in many cases.</p>
      <p>
        Design considerations for explanations with dialog are
discussed in a number of papers by Moore and colleagues ([
        <xref ref-type="bibr" rid="ref6">6</xref>
        ],
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]). These papers describe the explainable expert
systems (EES) project which incorporates a representation for
problem-solving principles, a representation for domain
knowledge and a method to link between them. In Moore and
Swartout [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], hypertext is used to avoid the referential
problems inherent in natural language analysis. To support
dialog with hypertext, a planning approach to explanation was
developed that allowed the system to understand what part of
the explanation a user is pointing at when making further
queries. Moore and Paris [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and Carenini and Moore [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
discuss architectures for text planners that allow for
explanations that take into account the context created by prior
utterances. In Moore [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], an approach to handling
badly-formulated follow-up questions (such as a novice might
produce after receiving an incomprehensible explanation from
an expert) is presented that enables the production of clarifying
explanations. Tanner and Keuneke [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] describe an explanation
approach based on a large number of agents with well-defined
roles. A particular agent produces an explanation
of its conclusion by ordering a set of text strings in a sequence
that depends on the decision’s runtime context. Based on an
explanation from one agent, users can request elaboration from
other agents.
      </p>
      <p>
        Weiner [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] focuses on the structure of explanations with
the goal of making explanations easy to understand by avoiding
complexity. Features identified as important for this goal
include syntactic form and how the focus of attention is located
and shifted. Eriksson [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] examines answers generated through
transformation of a proof tree, with pruning of paths, such as
non-informative ones. Millet and Gilloux [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] describe the
approach in Wallis and Shortliffe [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] as employing a user
model in order to provide users with explanations tailored to
their level of understanding. The natural language aspect of
explanation is the focus of Papamichail and French [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], which
uses a library of text plans to structure the explanations.
      </p>
      <p>
        In Carenini and Moore [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], a comprehensive approach
toward the generation of evaluative arguments (called GEA) is
presented. GEA focuses on the generation of text-based
arguments expressed in natural language. The initial step of
GEA’s processing consists of a text planner selecting content
from a domain model by applying a communicative strategy to
achieve a communication goal (e.g. make a user feel more
positively toward an entity). The selected content is packaged
into sentences through the use of a computational grammar.
The underlying knowledge base consists of a domain model
with entities and their relationships and an additive
multiattribute value function (a decision-theoretic model of the
user’s preferences).
      </p>
      <p>
        In Gruber and Gautier [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] and Gautier and Gruber [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] an
approach to explaining the behavior of engineering models is
presented. Rather than causal influences that are hard-coded
[
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], this approach is based on the inference of causal
influences, inferences which are made at run time. Using a
previously developed causal ordering procedure, an influence
graph is built from which causal influences are determined. At
any point in the influence graph, an explanation can be built
based on the adjacent nodes and users can traverse the graph,
obtaining explanations at any node.
      </p>
      <p>
        Approaches to producing explanations in Markov decision
processes (MDPs) are proposed in Elizalde et al. [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] and Khan, Poupart and Black
[
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. Two strategies exist for producing explanations in
Bayesian networks (BNs).
One involves transforming the network into a qualitative
representation [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. The other approach focuses on the
graphical representation of the network. A software tool called
Elvira is presented which allows for the simultaneous display
of probabilities of different evidence cases along with a
monitor and editor of cases, allowing the user to enter evidence
and select the information they want to see [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ].
      </p>
      <p>
        An explanation application for Java debugging is
presented in Ko and Myers [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. This work describes a tool
called Whyline which supports programmer investigation of
program behavior. Users can pose “why did” and “why didn’t”
questions about program code and execution. Explanations are
derived using static and dynamic slicing, precise call graphs,
reachability analysis and algorithms for determining potential
sources of values.
      </p>
      <p>
        Explanations in case-based reasoning systems are examined
as well. Sørmo, Cassens, and Aamodt [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] present a framework
for explanation and consider specific goals that explanations
can satisfy which include transparency, justification, relevance,
conceptualization and learning. Kofod-Petersen and Cassens
[
        <xref ref-type="bibr" rid="ref27">27</xref>
        ] consider the importance of context and show how context
and explanations can be combined to deal with the different
types of explanation needed for meaningful user interaction.
      </p>
      <p>
        Explanation of decisions made via decision trees is
considered in Langlotz, Shortliffe, and Fagan [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ]. An
explanation technique is selected and applied to the most
significant variables, creating a symbolic expression that is
converted to English text. The resulting explanation contains
no mathematical formulas, probability or utility values.
      </p>
      <p>
        Lieberman and Kumar [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ] consider the problem of
mismatch between the specialized knowledge of experts
providing help and the naiveté of users seeking help.
Here, the problem consists of providing
explanations of the expert decisions in terms the users can
understand. The SuggestDesk system is described which
advises online help personnel. Using a knowledgebase,
analogies are found between technical problem-solution pairs
and everyday life events that can be used to explain them.
      </p>
      <p>
        Bader et al. [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ] use explanation facilities in recommender
systems to convince users of the relevance of recommended
items and to enable fast decision making. In previous work,
Bader found that recommendations lack user acceptance if the
rationale was not presented. This work follows the approach of
Carenini and Moore [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
      <p>
        In Pu and Chen [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ], a “Why?” form of explanation was
evaluated against what the researchers termed an Organized
View (OV) form of explanation in the context of explanations
of product recommendations. The OV approach attempts to
group decision alternatives and provide group-level summary
explanations, e.g. “these are cheaper than the recommendation
but heavier.” A trust model was used to conduct a user
evaluation in which trust-related constructs were assessed
through a Likert scale instrument. The OV approach was found
to be associated with higher levels of user trust than the
alternative approach.
      </p>
      <p>
        The importance of context in explaining the
recommendations of a recommender system was
investigated in Baltrunas et al. [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ]. In this study of
point-of-interest recommendation, customized explanation messages are
provided for a set of 54 possible contextual conditions (e.g.
“this place is good to visit with family”). Even where more
than one contextual condition holds and is factored into the
system’s decision, only one can be utilized for the explanation
(the most influential one in the predictive model is used). Only
a single explanatory statement is provided to the user.
      </p>
      <p>
        Explanation capabilities have also been shown to aid in
increasing user satisfaction with and establishing trust in
complex systems [
        <xref ref-type="bibr" rid="ref34 ref35 ref36">34, 35, 36</xref>
        ]. The key insight revealed by this
research is the need for transparency in system
decision-making. As noted by Glass et al., “users identified explanations
of system behavior, providing transparency into its reasoning
and execution, as a key way of understanding answers and thus
establishing trust” [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ]. Dijkstra [
        <xref ref-type="bibr" rid="ref38">38</xref>
        ] studied the
persuasiveness of decision aids for novices and experts. In one
experiment, lawyers examined the results of nine legal cases
supported by one of two expert systems. Both systems had
incomplete knowledge models. Because of the incomplete
models, the expert systems routinely gave opposite advice on
each legal case. This resulted in the lawyers being easily
misled. Therefore, adequate explanation facilities and a good
user interface must provide the user with the transparency needed to
make the decision of trusting the system. Rieh and Danielson
[
        <xref ref-type="bibr" rid="ref39">39</xref>
        ] outline four different explanation types of decision aids.
Line-of-reasoning explanations provide the logical justification
of the decision; justification explanations provide extensive
reference material to support the decision; control explanations
provide the problem-solving strategy to arrive at the decision;
and terminological explanations provide definition information
on the decision. In each case, the amount of transparency in
the decision-making process is a factor in the trust of the user.
      </p>
      <p>Our approach to providing transparency, the Why Agent, is
a decision explanation approach incorporating dialog between
the user and the system. Rather than attempting to provide
monolithic explanations to individual questions, our
dialog-based approach allows the user to pose a series of questions,
the responses to which may prompt additional questions.
Imitative of natural discourse, our dialog approach allows a
user to understand the behavior of the system by asking
questions about its goals, actions or observables and receiving
responses couched in similar terms. We implemented our
approach and conducted an evaluation in a maritime autonomy
scenario. The evaluation consisted of an experiment in which
two versions of an interface were shown to participants who
then answered questions related to trust. Results of the
experiment show response scores statistically consistent with
our expectations for the majority of psychometric constructs
tested, supporting our overall hypothesis that transparency
fosters trust. The rest of this paper is organized as follows.
Section II describes the problem domain and the technical
approach. Experiments and results are presented in Section III.
In Section IV, we provide some concluding remarks and future
research directions.</p>
    </sec>
    <sec id="sec-3">
      <title>II. TECHNICAL APPROACH</title>
      <sec id="sec-3-1">
        <title>A. Domain Overview</title>
        <p>Our approach to demonstrating the Why Agent
functionality and evaluating its effectiveness consisted of a
simulation-based environment centered on a maritime scenario
defined in consultation with maritime autonomy SMEs. The
notional autonomous system in our scenario was the X3
autonomous unmanned surface vehicle (AUSV) by Harbor
Wing Technologies (http://www.harborwingtech.com). Raytheon presently has a business
relationship with this vendor in which we provide ISR
packages for their AUSVs.</p>
        <p>The X3 was of necessity a notional AUSV for our
demonstration because the actual prototype was not operational
at the time of the Why Agent project. For this reason, a live,
on-system demonstration was not considered. Instead, our
demonstration environment was entirely simulation-based. An
existing route planning engine developed under Raytheon
research was modified to serve as the AUSV planner.
Additional code was developed to support the simulation
environment and Why Agent functionality, as described below.</p>
      </sec>
      <sec id="sec-3-2">
        <title>B. Software Architecture</title>
        <p>Our software architecture consists of four components
interacting in a service-oriented architecture, as shown in
Figure 1.</p>
        <p>The Planner component performed route planning functions
based on a plan of intended movement. A plan of intended
movement is input in the form of a series of waypoints. These
waypoints, along with environmental factors, such as weather
forecast data, are used in the planning algorithm to determine
an actual over-ocean route. The planner was a pre-existing
component developed on R&amp;D that the Why Agent leveraged
for the demonstration. Modifications made to the planner to
support the Why Agent project include changes to expose route
change rationale to the controller and inform the controller of
weather report information.</p>
      </sec>
      <sec id="sec-3-3">
        <title/>
        <p>When the user
selects the ConductPatrol item and the associated why? option, a query is generated that
contains IDs associated with the ConductPatrol node and the
servesPurpose link. The linked node, in this case
MissionExecution, is then returned to the user as the result of a
query against the associated OWL model.</p>
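The query mechanism just described can be sketched with a plain in-memory triple store standing in for the OWL model. The node and link names follow the example in the text; the storage format, the static link-to-query binding, and the function names are our simplifications, not the actual semantic service implementation.

```python
# Sketch of the Why Agent query pattern: a why? selection on a node looks
# up the relationship statically bound to that query and returns the
# linked node. The triple list stands in for the OWL domain model.
TRIPLES = [
    ("ConductPatrol", "servesPurpose", "MissionExecution"),
]
QUERY_FOR_LINK = {"servesPurpose": "why?"}  # static link-to-query binding

def answer(node, query):
    """Return the node linked to `node` by the relationship bound to `query`."""
    for subj, link, obj in TRIPLES:
        if subj == node and QUERY_FOR_LINK.get(link) == query:
            return obj
    return None

print(answer("ConductPatrol", "why?"))  # MissionExecution
```

Because every returned node is itself a subject in the model, the same lookup can be applied to the response, which is what makes the dialog iterative.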
        <p>The Controller represents the embodiment of the majority
of the simulated AUSV decision logic and simulation control
logic. Because we did not employ an actual AUSV for the Why
Agent project, much of the decision logic of an actual AUSV
had to be simulated for our demonstration, logic implemented
in the Controller. The input to the Controller consisted of a test
control file that defined the event timeline for the simulation. In
addition to orchestrating simulation events defined in the
control file, the Controller mediated queries and responses
between the user interface and the semantic service.</p>
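The Controller's event orchestration can be sketched as a small loop over a timeline parsed from a control file. The "time,event" line format, the event names, and the handler scheme here are assumptions for illustration, not the project's actual control-file syntax.

```python
# Hypothetical sketch of the Controller's simulation loop: events from a
# control file are dispatched in timestamp order.
def load_timeline(lines):
    """Parse 'time,event' lines into a time-ordered event list."""
    events = []
    for line in lines:
        t, name = line.strip().split(",", 1)
        events.append((float(t), name))
    return sorted(events)

def run(timeline, handlers):
    """Dispatch each event to its handler (if any) and log it."""
    log = []
    for t, name in timeline:
        handlers.get(name, lambda t: None)(t)
        log.append((t, name))
    return log

timeline = load_timeline(["10.0,TargetDetected", "2.5,PatrolStarted"])
log = run(timeline, {"PatrolStarted": lambda t: None})
print(log)  # [(2.5, 'PatrolStarted'), (10.0, 'TargetDetected')]
```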
        <p>The graphical user interface was implemented as a web
application. Two versions of the GUI were developed, one with
and one without the Why Agent explanation facility. The Why
Agent version is shown in Figure 2. It has four screen regions:
a map, a status panel, a log data panel and an explanation
panel. The map, implemented with Google Maps technology,
shows the current location and route of the AUSV. The status
panel shows various AUSV status values, such as location,
speed, current mode, etc. The log panel shows a time-stamped
series of event descriptions. Various items in the log panel are
user-selectable and have context-sensitive menus to support the
user interface functionality of the Why Agent facility. When a
user makes a selection, the response from the semantic service
is shown in the bottom (explanation) panel. Additionally,
responses in the explanation panel are also selectable for
further queries. In this manner, the user can engage in a dialog
with the system.</p>
        <p>The semantic service contains the knowledgebase
underlying the decision rationale exposed by the Why Agent.
The knowledge consists of event and domain ontology models
represented in web ontology language (OWL) format. The
semantic service provides responses to queries from the
controller through queries against its underlying models.</p>
        <p>An example of a domain model is shown in Figure 3.
Relationships in this figure encode potential queries linking
concepts and events that can be displayed in the user interface.
For example, the activity ConductPatrol relates to the function</p>
      </sec>
      <sec id="sec-3-4">
        <title/>
        <p>MissionExecution through the relationship servesPurpose. This
relationship is statically associated with the query why? at the
user level. Thus, the existence of this link connected with the
node ConductPatrol implies a why? option being made
available to the user in the context-sensitive menu for the
ConductPatrol item.</p>
        <p>Our evaluation approach consisted of an experiment in
which the Why Agent was the treatment. Two versions of a
prototype operator interface were developed. One version
incorporated the Why Agent functionality and the second did
not. The two versions were otherwise identical. Screenshots of
the two interface versions are presented in Figures 4 and 5.</p>
      </sec>
      <sec id="sec-3-5">
        <title>A. Demonstration Scenario</title>
        <p>The demonstration scenario consisted of autonomous
fishing law enforcement in the Northwestern Hawaiian Islands
Marine National Monument. The CONOP for this mission is as
follows:</p>
        <p>The AUSV operator selects waypoints corresponding to
a patrol area.</p>
        <p>The AUSV route planner finds a route through the
waypoints and a patrol is conducted.</p>
        <p>RADAR is used to detect potential illegal fishing vessels
(targets).</p>
        <p>Targets are investigated visually after the AUSV closes to
an adequate proximity.</p>
        <p>Automated analysis of the visual data is used to confirm
the target is engaged in illegal fishing.</p>
        <p>Targets engaged in illegal activity are visually identified
for subsequent manned enforcement action.</p>
        <p>Non-lethal self-defensive actions can be taken by the
AUSV in the presence of hostile targets.</p>
        <p>To support this demonstration, a software-based simulation
environment was developed. The demonstration consisted of
capturing video of user interactions with the baseline and Why
Agent versions of the operator interface while a scripted series
of events unfolded over a pre-determined timeline.</p>
        <p>Our experiment consisted of a single-factor, randomized
design. The factor is interface type and has two levels: baseline
(control) and Why Agent (experimental). Thus, we have two
treatment levels, corresponding to the two factor levels. The
experimental subjects were Raytheon employees, recruited
across multiple Raytheon locations, during the project.</p>
        <p>
          Our general hypothesis is that the Why Agent fosters a
more appropriate level of trust in users than the baseline
system. By utilizing the information provided by the Why
Agent, users will be more able to calibrate their trust [
          <xref ref-type="bibr" rid="ref33">33</xref>
          ]. To
test this hypothesis, we needed to operationalize the concept of
“more appropriate level of trust” and thereby derive one or
more testable hypotheses. We accomplished this through the
following operationalization.
        </p>
        <p>Because trust in a particular system is an unobservable mental
state of the user, it must be assessed through psychometric readings
of constructs related to the overall concept of trust. Given the
broad nature of this concept, multiple constructs should be
defined. Using our domain insight and engineering judgment,
we selected the following set of five psychometric constructs:
1) General Competence, 2) Self-Defense, 3) Navigation, 4)
Environmental Conservation and 5) Mission. Each construct is
intended to capture the users’ belief regarding the system’s
ability to effectively perform in regard to that construct, i.e. the
user’s level of trust for that construct. For example, the
the ability of the system to successfully execute its mission.
The Environmental Conservation construct was included as an
example of a construct under which we would not expect to see
a difference in psychometric responses.</p>
        <p>For each construct, we have a set of possible trust levels and a
set of psychometric participant response scores. Define these as
follows (for this study, k=5):</p>
      <p>Set of k constructs C = {cj : 1 ≤ j ≤ k}</p>
      <p>Set of trust levels L = {low, high}</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Psychometric participant response scores for each construct:</title>
      <p>Control: RC = {rjC : 1 ≤ j ≤ k }</p>
      <p>Experimental: RE = {rjE : 1 ≤ j ≤ k }</p>
      <p>Here, we take the simplest possible approach, a binary trust
level set. We simply assume that the trust level for a particular
construct should either be low or high, with nothing in
between. Clearly, many other trust models are possible. To
operationalize the notion of “more appropriate level of trust”,
we need to define, for each construct, a ground truth
assignment of trust level. Thus, we need to define the following
mapping T:</p>
    </sec>
    <sec id="sec-5">
      <title>Mapping of construct to trust level: T(j) ∈ L</title>
      <p>T(j) = low: People should not trust the system
regarding construct j.</p>
      <p>T(j) = high: People should trust the system
regarding construct j.</p>
      <p>Additionally, we need to map the elements of the trust set
to psychometric scale values. In other words, we need to
normalize the scale as follows:</p>
    </sec>
    <sec id="sec-6">
      <title>Mapping of trust level to psychometric scale values S</title>
      <p>S(low) = 1; S(high) = 5.</p>
      <p>At this point, we can define the concept of “appropriate
level of trust” in terms of the psychometric scale through a
composition of the above mappings S and T. In other words, for
each construct, the appropriate level of trust is the
psychometric value associated with the trust level assigned to
that construct:</p>
    </sec>
    <sec id="sec-7">
      <title>Appropriate Level of Trust with respect to design intent</title>
      <p>A = {aj : 1 ≤ j ≤ k }</p>
      <p>For each construct cj, the appropriate level of trust aj for
that construct is given by
aj = S(T(j)), 1 ≤ j ≤ k
(1)</p>
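<p>Equation (1) composes the two mappings. As a concrete sketch, with a hypothetical ground-truth assignment T (the study's actual assignment depends on the AUSV's design intent for each construct):</p>

```python
# Hypothetical ground-truth trust assignment T(j) for k = 5 constructs;
# the real assignment is determined by design intent, construct by construct.
T = {1: "high", 2: "high", 3: "high", 4: "low", 5: "high"}
S = {"low": 1, "high": 5}  # trust level -> psychometric scale endpoints

# Equation (1): a_j = S(T(j)) for each construct j.
A = {j: S[T[j]] for j in T}
```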
      <p>A key aspect of the above definition is the qualifier with
respect to design intent. We assume the system functions
without defects. With respect to design intent simply means "it
should be trusted to accomplish X if it is designed to
accomplish X." We make this assumption for simplification
purposes, fully acknowledging that no real system is
defect-free. In the presence of defects, the notion of appropriate
level of trust becomes more complex.</p>
      <p>
        Having defined appropriate level of trust, we are finally in
a position to define the key concept, more appropriate level of
trust. The intuition underlying this notion is the observation
that if one's trust level is not appropriate to begin with, any
intervention that moves the trust level toward the appropriate
score by a greater amount than some other intervention can be
said to provide a "more" appropriate level of trust. The Why
Agent specifically exposes information associated with the
purpose of AUSV actions. Such additional information serves
to build trust [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ]. If the psychometric score for the
experimental group is closer to the appropriate trust level than
the score for the control group, then we can say that the
experimental treatment provided a more appropriate level of
trust for that construct. Formally, we define this concept as
follows:
      </p>
    </sec>
    <sec id="sec-8">
      <title>More appropriate level of trust</title>
      <p>Given observed response scores rjC and rjE for construct j, the
experimental response rjE reflects a more appropriate
level of trust when the following holds:
rjE - rjC &lt; 0 if aj = 1 (2)
rjE - rjC &gt; 0 if aj = 5 (3)</p>
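<p>The decision rule above can be sketched as a simple predicate (a sketch under the binary trust model; the variable names are ours, not the study's):</p>

```python
def more_appropriate_trust(r_exp, r_ctrl, a_j):
    """Did the experimental score move toward the appropriate level a_j?"""
    if a_j == 1:   # trust should be low: experimental score must be lower
        return r_ctrl - r_exp > 0
    if a_j == 5:   # trust should be high: experimental score must be higher
        return r_exp - r_ctrl > 0
    raise ValueError("binary trust model: a_j must be 1 or 5")
```

For example, with a_j = 5, an experimental mean of 4.2 against a control mean of 3.5 satisfies the rule, while equal means do not.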
      <p>We expect the Why Agent to affect observed trust levels
only for those constructs for which relevant decision criteria
are exposed during the scenario. In these cases, we expect
Equations (2)-(3) to hold. In all other cases, we do not. For
example, since the AUSV is not designed to protect marine life,
we assert that the appropriate level of trust for the
Environmental Conservation construct is "low." However, we
do not expect to observe response levels consistent with
Equations (2)-(3) unless dialog exposing decision rationale
relevant to this concept is included in the scenario.</p>
      <p>Based on this reasoning, we expect the effect of decision
explanation to be one of pushing response scores up or down,
toward the appropriate trust level but only in cases where
explanation dialog related to the construct under test is
exposed. In other cases, we expect no difference in the
response scores, as indicated in Table 1. We note that the null
hypotheses are derived as the complementary sets to the
equations in Table 1. E.g., the 'low, with relevant dialog' null
hypothesis equation would be rjE - rjC ≥ 0.</p>
      <p>A total of 44 control and 50 experimental subjects were
recruited for the Why Agent study. The experiment was
designed to be completed in one hour. Following a short
orientation, a pre-study questionnaire was presented to the
participants. The pre-study questionnaire contained questions
regarding participant demographics and technology attitudes.
The purpose of the pre-study questionnaire was to determine
whether any significant differences existed between the
experimental and control groups. Following the pre-study
questionnaire, participants were given a short training
regarding the autonomous system and their role in the study.
Participants were asked to play the role of a Coast Guard
commander considering use of the autonomous system for a
drug smuggling interdiction mission. Following the training,
participants were shown the scenario video which consisted of
several minutes of user interaction with either the baseline or
Why Agent interface. Following the video, participants
completed the main study questionnaire. The system training
was provided in a series of PowerPoint slides. Screenshots
taken from the study video were provided to the participants in
hardcopy form, along with hardcopies of the training material.
This was done to minimize any dependence on memory for
participants when completing the study questionnaire.</p>
      <p>To investigate whether significant differences exist between
the control and experimental groups in terms of responses to
the technology attitudes questions, ANOVA was performed.
The results are shown in Table 2. Cronbach reliability
coefficients, construct variances and mean total response scores
are shown for the control and experimental groups in Tables 3
and 4.</p>
      <p>To investigate whether significant differences exist between
the control and experimental groups in terms of responses to
the study questions, ANOVA was performed. For this study,
we focused our analysis on individual constructs. Thus, we do
not present any statistics on, for example, correlations among
responses related to multiple constructs for either the control or
experimental group. The results are shown in Table 6.</p>
      <p>T-test results for each construct are shown in Table 5. Two
p-values are shown for each construct: p1 is the p-value
resulting from use of the pooled variance, while p2 is the
p-value resulting from use of separate variances.</p>
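<p>The two p-values correspond to the pooled-variance (Student) and separate-variance (Welch) forms of the t statistic. A minimal sketch of the two statistics follows (standard library only; p1 and p2 would then be obtained from the t distribution with the corresponding degrees of freedom):</p>

```python
from statistics import mean, variance

def t_statistics(control, experimental):
    """Return (t_pooled, t_welch) for two independent samples."""
    n1, n2 = len(control), len(experimental)
    m1, m2 = mean(control), mean(experimental)
    v1, v2 = variance(control), variance(experimental)
    # Pooled variance: assumes equal population variances (basis for p1).
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t_pooled = (m2 - m1) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5
    # Separate variances, Welch form (basis for p2).
    t_welch = (m2 - m1) / (v1 / n1 + v2 / n2) ** 0.5
    return t_pooled, t_welch
```

When the two sample variances are equal, the two statistics coincide; they diverge as the variances (or sample sizes) differ, which is why both p-values are reported.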
      <p>The ANOVA results shown in Table 2 indicate that the
experimental and control groups did not significantly differ
across any attribute in terms of their responses to the
technology attitudes questions. In other words, we do not see
any evidence of a technology attitude bias in the study
participants.
For constructs 1 and 2, the experimental response was
greater than the control response (p = 0.001 and 0.004,
respectively), consistent with our expectations. For construct
4, Environmental Conservation, we see no significant
difference between the experimental and control responses (p =
0.16), which is also consistent with our expectations, as this
construct had no associated decision-explanation content
exposed to the experimental group. The experimental response
for construct 3 was not significantly higher than the control
response, which is inconsistent with our expectations, although
the difference is only marginally outside the significance
threshold (p = 0.059).</p>
      <p>While the test results indicate moderate support for the
efficacy of the Why Agent approach, they are decidedly mixed,
so it is not possible to draw any definitive conclusions. As
discussed below, we recognize that a number of significant
limitations also hinder the application of our results. A pilot
study would have helped to create a stronger experimental
design and recruit a more representative sample population, but
this was not possible due to budget and schedule constraints.
Nevertheless, the study has provided initial evidence for how
and to what extent the Why Agent approach might influence
trust behavior in autonomous systems, and given impetus for
continued investigations.</p>
      <p>Construct Reliability: Referring to Table 4, we see that
reliability coefficients for some constructs are not above the
commonly-accepted value of 0.7. Had schedule permitted, a
pilot study could have uncovered this issue, providing an
opportunity to revise the questionnaire.</p>
      <p>Experiment Limitations: Clearly a variety of limitations
apply to our experiment. One is that participants did not
interact directly with the system interface; instead entire groups
of participants were shown a video of someone else interacting
with the system. Also, the participants were not drawn from the
population of interest. Consequently, our results may not apply
to that target group. Additionally, subjects were asked to play a
role with much less information than a real person in that role
would have. Also, as noted by a reviewer, the experimental
design does not allow us to determine whether decision
correctness is related to trust when clearly it should be; an
intervention that raises trust regardless of correctness is not
desirable. Finally, execution of the experiment could have been
improved. In particular, our maritime autonomy SME noted:
The Mode should have reflected the simulation events; The
LRAD light should have illuminated during the approach phase
with an audio warning; The subjects should have been trained
on the nonlethal defense functions.</p>
      <p>Semantic Modeling: A potentially significant drawback to
our approach is the manually-intensive nature of the semantic
modeling effort needed to populate our knowledgebase.
Identifying ways to automate this process is a key area of
potential future work related to this effort.</p>
    </sec>
    <sec id="sec-9">
      <title>CONCLUDING REMARKS</title>
      <p>We draw the following specific conclusions based on the
quantitative results reported above. First, the experimental and
control groups do not significantly differ across any attribute in
terms of their responses to the technology attitudes questions.
The experimental and control groups do not significantly differ
across any non-Group attribute in terms of their responses to
the study questions, with the exception of gender differences for
one construct. Construct reliability is low in some cases, indicating
the need for a prior pilot study to tune the psychometric
instrument. We accept the null hypothesis for construct 4 and
reject it for constructs 1 and 2, as predicted under our
assumptions. We cannot reject the null hypothesis associated with
construct 3, although this is a very marginal case. The results
for construct 5 are contrary to our expectations. Overall, we
conclude that the Why Agent approach can increase user trust
levels through decision transparency.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B.</given-names>
            <surname>Chandrasekaran</surname>
          </string-name>
          and
          <string-name>
            <given-names>W.</given-names>
            <surname>Swartout</surname>
          </string-name>
          ,
          <article-title>"Explanations in knowledge systems: the role of explicit representation of design knowledge,"</article-title>
          <source>IEEE Expert</source>
          , vol.
          <volume>6</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>47</fpage>
          -
          <lpage>49</lpage>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Chandrasekaran</surname>
          </string-name>
          et al.,
          <article-title>"Explaining control strategies in problem solving,"</article-title>
          <source>IEEE Expert</source>
          , vol.
          <volume>4</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>9</fpage>
          -
          <lpage>15</lpage>
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>William R.</given-names>
            <surname>Swartout</surname>
          </string-name>
          ,
          <article-title>"XPLAIN: a system for creating and explaining expert consulting programs,"</article-title>
          <source>Artificial Intelligence</source>
          , vol.
          <volume>21</volume>
          , no.
          <issue>3</issue>
          , pp.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>William J.</given-names>
            <surname>Clancey</surname>
          </string-name>
          ,
          <article-title>"The epistemology of a rule-based expert system - a framework for explanation,"</article-title>
          <source>Artificial Intelligence</source>
          , vol.
          <volume>20</volume>
          , no.
          <issue>3</issue>
          , pp.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>V. M.</given-names>
            <surname>Saunders</surname>
          </string-name>
          and
          <string-name>
            <given-names>V. S.</given-names>
            <surname>Dobbs</surname>
          </string-name>
          ,
          <article-title>"Explanation generation in expert systems,"</article-title>
          ,
          <source>in Proceedings of the IEEE 1990 National Aerospace and Electronics Conference</source>
          , vol.
          <volume>3</volume>
          , pp.
          <fpage>1101</fpage>
          -
          <lpage>1106</lpage>
          ,
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Moore</surname>
          </string-name>
          and
          <string-name>
            <given-names>W.</given-names>
            <surname>Swartout</surname>
          </string-name>
          ,
          <article-title>"Pointing: A Way Toward Explanation Dialog,"</article-title>
          <source>AAAI Proceedings</source>
          , pp.
          <fpage>457</fpage>
          -
          <lpage>464</lpage>
          ,
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Swartout</surname>
          </string-name>
          et al.,
          <article-title>"Explanations in knowledge systems: design for explainable expert systems,"</article-title>
          <source>IEEE Expert</source>
          , vol.
          <volume>6</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>58</fpage>
          -
          <lpage>64</lpage>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Johanna D.</given-names>
            <surname>Moore</surname>
          </string-name>
          and
          <string-name>
            <given-names>Cécile L.</given-names>
            <surname>Paris</surname>
          </string-name>
          ,
          <article-title>"Planning text for advisory dialogues,"</article-title>
          <source>in Proceedings of the 27th annual meeting on Association for Computational Linguistics</source>
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Giuseppe</given-names>
            <surname>Carenini</surname>
          </string-name>
          and
          <string-name>
            <given-names>Johanna D.</given-names>
            <surname>Moore</surname>
          </string-name>
          ,
          <article-title>"Generating explanations in context,"</article-title>
          <source>in Proceedings of the 1st international conference on Intelligent user interfaces</source>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Moore</surname>
          </string-name>
          ,
          <article-title>"Responding to 'HUH?': answering vaguely articulated follow-up questions,"</article-title>
          <source>in Proceedings of the SIGCHI conference on Human factors in computing systems: Wings for the mind</source>
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.C.</given-names>
            <surname>Tanner</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.M.</given-names>
            <surname>Keuneke</surname>
          </string-name>
          ,
          <article-title>"Explanations in knowledge systems: the roles of the task structure and domain functional models,"</article-title>
          <source>IEEE Expert</source>
          , vol.
          <volume>6</volume>
          , no.
          <issue>3</issue>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Weiner</surname>
          </string-name>
          ,
          <article-title>"BLAH, a system which explains its reasoning,"</article-title>
          <source>Artificial Intelligence</source>
          , vol.
          <volume>15</volume>
          , no.
          <issue>1-2</issue>
          , pp.
          <fpage>19</fpage>
          -
          <lpage>48</lpage>
          ,
          <year>1980</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Agneta</given-names>
            <surname>Eriksson</surname>
          </string-name>
          ,
          <article-title>"Neat explanation of Proof Trees,"</article-title>
          <source>in Proceedings of the 9th international joint conference on Artificial intelligence</source>
          , vol.
          <volume>1</volume>
          ,
          <year>1985</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>C.</given-names>
            <surname>Millet</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Gilloux</surname>
          </string-name>
          ,
          <article-title>"A study of the knowledge required for explanation in expert systems,"</article-title>
          <source>in Proceedings of Artificial Intelligence Applications</source>
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J.W.</given-names>
            <surname>Wallis</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.H.</given-names>
            <surname>Shortliffe</surname>
          </string-name>
          ,
          <article-title>"Customized explanations using causal knowledge,"</article-title>
          <source>in Rule-based Expert Systems</source>
          , Addison-Wesley,
          <year>1984</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>K. N.</given-names>
            <surname>Papamichail</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>French</surname>
          </string-name>
          ,
          <article-title>"Explaining and justifying the advice of a decision support system: a natural language generation approach,"</article-title>
          <source>Expert Systems with Applications</source>
          , vol.
          <volume>24</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>35</fpage>
          -
          <lpage>48</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Carenini</surname>
          </string-name>
          and
          <string-name>
            <surname>Moore</surname>
          </string-name>
          ,
          <article-title>"Generating and evaluating evaluative arguments,"</article-title>
          <source>Artificial Intelligence</source>
          , vol.
          <volume>170</volume>
          , no.
          <issue>11</issue>
          , pp.
          <fpage>925</fpage>
          -
          <lpage>952</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>T. R.</given-names>
            <surname>Gruber</surname>
          </string-name>
          and
          <string-name>
            <given-names>P. O.</given-names>
            <surname>Gautier</surname>
          </string-name>
          ,
          <article-title>"Machine-generated explanations of engineering models: A compositional modeling approach,"</article-title>
          <source>IJCAI</source>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Patrice O.</given-names>
            <surname>Gautier</surname>
          </string-name>
          and
          <string-name>
            <given-names>Thomas R.</given-names>
            <surname>Gruber</surname>
          </string-name>
          ,
          <article-title>"Generating Explanations of Device Behavior Using Compositional Modeling and Causal Ordering,"</article-title>
          <source>AAAI</source>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>B.</given-names>
            <surname>White</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Frederiksen</surname>
          </string-name>
          ,
          <article-title>"Causal model progressions as a foundation for intelligent learning,"</article-title>
          <source>Artificial Intelligence</source>
          , vol.
          <volume>42</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>99</fpage>
          -
          <lpage>155</lpage>
          ,
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>F.</given-names>
            <surname>Elizalde</surname>
          </string-name>
          et al.,
          <article-title>"An MDP approach for explanation generation,"</article-title>
          <source>in Workshop on Explanation-Aware Computing with AAAI</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>O. Z.</given-names>
            <surname>Khan</surname>
          </string-name>
          et al.,
          <article-title>"Explaining recommendations generated by MDPs,"</article-title>
          <source>in Workshop on Explanation-Aware Computing</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>S.</given-names>
            <surname>Renooij</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>van der Gaag</surname>
          </string-name>
          ,
          <article-title>"Decision making in qualitative influence diagrams,"</article-title>
          <source>In Proceedings of the Eleventh International FLAIRS Conference</source>
          , pp.
          <fpage>410</fpage>
          -
          <lpage>414</lpage>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>C.</given-names>
            <surname>Lacave</surname>
          </string-name>
          et al.,
          <article-title>"Graphical explanations in Bayesian networks,"</article-title>
          <source>in Lecture Notes in Computer Science</source>
          , vol.
          <volume>1933</volume>
          , pp.
          <fpage>122</fpage>
          -
          <lpage>129</lpage>
          , Springer-Verlag,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>Andrew</given-names>
            <surname>Ko</surname>
          </string-name>
          and
          <string-name>
            <given-names>Brad</given-names>
            <surname>Myers</surname>
          </string-name>
          ,
          <article-title>"Extracting and answering why and why not questions about Java program output,"</article-title>
          <source>ACM Transactions on Software Engineering and Methodology</source>
          , vol.
          <volume>20</volume>
          , no.
          <issue>2</issue>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>F.</given-names>
            <surname>Sørmo</surname>
          </string-name>
          et al.,
          <article-title>"Explanation in case-based reasoning - perspectives and goals,"</article-title>
          <source>Artificial Intelligence Review</source>
          , vol.
          <volume>24</volume>
          , pp.
          <fpage>109</fpage>
          -
          <lpage>143</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kofod-Petersen</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Cassens</surname>
          </string-name>
          ,
          <article-title>"Explanations and context in ambient intelligent systems,"</article-title>
          ,
          <source>in Proceedings of the 6th international and interdisciplinary conference on Modeling and using context</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>C. P.</given-names>
            <surname>Langlotz</surname>
          </string-name>
          et al.,
          <article-title>"A methodology for generating computer-based explanations of decision-theoretic advice,"</article-title>
          <source>Med Decis Making</source>
          , vol.
          <volume>8</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>290</fpage>
          -
          <lpage>303</lpage>
          ,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>H.</given-names>
            <surname>Lieberman</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <article-title>"Providing expert advice by analogy for on-line help,"</article-title>
          <source>in Proceedings of the IEEE/WIC/ACM International Conference on Intelligent Agent Technology</source>
          , pp.
          <fpage>26</fpage>
          -
          <lpage>32</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <surname>Baderet</surname>
          </string-name>
          et al.,
          <article-title>"Explanations in Proactive Recommender Systems in Automotive Scenarios,"</article-title>
          <source>Workshop on Decision Making and Recommendation Acceptance Issues in Recommender Systems Conference</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>P.</given-names>
            <surname>Pu</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>"Trust building with explanation interfaces,"</article-title>
          <source>in 11th International Conference on Intelligent User Interfaces</source>
          , pp.
          <fpage>93</fpage>
          -
          <lpage>100</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <surname>Baltrunas</surname>
          </string-name>
          et al.,
          <article-title>"Context-Aware Places of Interest Recommendations and Explanations,"</article-title>
          <source>in 1st Workshop on Decision Making and Recommendation Acceptance Issues in Recommender Systems (DEMRA 2011)</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Lee</surname>
          </string-name>
          and
          <string-name>
            <given-names>K. A.</given-names>
            <surname>See</surname>
          </string-name>
          ,
          <article-title>"Trust in Automation: Designing for Appropriate Reliance,"</article-title>
          <source>Human Factors</source>
          , vol.
          <volume>46</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>50</fpage>
          -
          <lpage>80</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>D. L.</given-names>
            <surname>McGuinness</surname>
          </string-name>
          et al.,
          <article-title>"Investigations into Trust for Collaborative Information Repositories: A Wikipedia Case Study,"</article-title>
          <source>in Workshop on the Models of Trust for the Web</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>I.</given-names>
            <surname>Zaihrayeu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Pinheiro da Silva</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D. L.</given-names>
            <surname>McGuinness</surname>
          </string-name>
          ,
          <article-title>"IWTrust: Improving User Trust in Answers from the Web,"</article-title>
          <source>in Proceedings of the 3rd International Conference on Trust Management</source>
          , pp.
          <fpage>384</fpage>
          -
          <lpage>392</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>B. Y.</given-names>
            <surname>Lim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Dey</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Avrahami</surname>
          </string-name>
          ,
          <article-title>"Why and why not explanations improve the intelligibility of context-aware intelligent systems,"</article-title>
          <source>in Proceedings of the 27th international conference on Human factors in computing systems</source>
          , pp.
          <fpage>2119</fpage>
          -
          <lpage>2128</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>A.</given-names>
            <surname>Glass</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. L.</given-names>
            <surname>McGuinness</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Wolverton</surname>
          </string-name>
          ,
          <article-title>"Toward establishing trust in adaptive agents,"</article-title>
          <source>in Proceedings of the 13th international conference on Intelligent user interfaces</source>
          , pp.
          <fpage>227</fpage>
          -
          <lpage>236</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Dijkstra</surname>
          </string-name>
          ,
          <article-title>"On the use of computerised decision aids: an investigation into the expert system as persuasive communicator,"</article-title>
          <source>Ph.D. dissertation</source>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>S. Y.</given-names>
            <surname>Rieh</surname>
          </string-name>
          and
          <string-name>
            <given-names>D. R.</given-names>
            <surname>Danielson</surname>
          </string-name>
          ,
          <article-title>"Credibility: a multidisciplinary framework,"</article-title>
          <source>in Annual Review of Information Science and Technology, B. Cronin (Ed.)</source>
          , vol.
          <volume>41</volume>
          , pp.
          <fpage>307</fpage>
          -
          <lpage>364</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>