<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>IUI Workshops'19</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Robots That Make Sense: Transparent Intelligence Through Augmented Reality</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alexandros Rotsidis∗</string-name>
          <email>A.Rotsidis@bath.ac.uk</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andreas Theodorou∗</string-name>
          <email>andreas.theodorou@umu.se</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Robert H. Wortham</string-name>
          <email>r.h.wortham@bath.ac.uk</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Umeå University</institution>
          ,
          <addr-line>Umeå</addr-line>
          ,
          <country country="SE">Sweden</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Bath</institution>
          ,
          <addr-line>Bath</addr-line>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <abstract>
<p>Autonomous robots can be difficult to understand, even by their developers, let alone by end users. Yet, as they become increasingly integral parts of our societies, the need for affordable, easy-to-use tools to provide transparency grows. The rise of the smartphone and the improvements in mobile computing performance have gradually allowed Augmented Reality (AR) to become more mobile and affordable. In this paper we review relevant robot systems architecture and propose a new software tool to provide robot transparency through the use of AR technology. Our new tool, ABOD3-AR, provides real-time graphical visualisation and debugging of a robot's goals and priorities as a means for both designers and end users to gain a better mental model of the internal state and decision-making processes taking place within a robot. We also report on our on-going research programme and planned studies to further understand the effects of transparency on naive users and experts.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>CCS CONCEPTS</title>
      <p>• Human-centered computing → Mixed / augmented reality;
Systems and tools for interaction design; • Computing
methodologies → Artificial intelligence ; • Software and its engineering
→ Software creation and management; Software design
engineering; • Social and professional topics → Computing / technology
policy.
robots, mobile augmented reality, transparency, artificial
intelligence</p>
    </sec>
    <sec id="sec-2">
      <title>INTRODUCTION</title>
      <p>The relationship between transparency, trust, and utility is a
complex one. By exposing the inner ‘smoke and mirrors’ of our agents,
we risk making them look less interesting. Moreover, the wide
range of application domains for AI, and the variety of
stakeholders interacting with intelligent systems, should not be
underestimated. What is effectively transparent therefore varies by who the
observer is and what their goals and obligations are. There is,
however, a need for design guidelines on how to implement transparent
systems, alongside a ‘bare minimum’ standardised
implementation [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. In the end, the goal of transparency should not be
complete comprehension; that would severely limit the scope of
human achievement. Instead, the goal of transparency is to provide
sufficient information to ensure at least human accountability [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>∗We thank the EPSRC grant [EP/L016540/1] for funding Rotsidis and Theodorou. Both authors contributed equally to the paper.</p>
      <p>
        Still, a real-time transparency implementation can help users calibrate
their trust in the machine [13, and references therein]. Calibration
refers to the correspondence between a person’s trust in the
system and the system’s capabilities [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Calibration of trust occurs
when the end user has a mental model of the system, relies
on the system within the system’s capabilities, and is aware of its
limitations. If we consider transparency as a mechanism that
exposes the decision-making of a system, then it can help users
adjust their expectations and forecast certain actions from the
system. This position on transparency is supported by Dzindolet
et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], who conducted a study in which participants decided
whether they trusted a particular piece of pattern-recognition
software. The users were given only the percentage accuracy of the
probabilistic algorithm’s prediction for each image. Yet,
with access to this easy-to-implement transparency feature,
they were able to calibrate their trust in real time. Our own studies
[discussed in 20] demonstrate how users of various demographic
backgrounds had inaccurate mental models of a mobile robot
running a BOD-based planner, Instinct [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. The robot transmits
a transparency feed to the real-time debugging software ABOD3
[
        <xref ref-type="bibr" rid="ref16 ref7">7, 16</xref>
        ]. The transparency display is customised for a high-level
end-user view of the robot’s goals and progress towards those goals.
Participants without access to the transparency software ascribe
unrealistic functionalities, potentially raising their expectations
for its intelligence and safety. When the same robot is used with
ABOD3, providing an end-user transparency visualisation, the users
are able to calibrate their mental models, leading to more realistic
expectations and, interestingly, a higher respect for the system’s
intelligence.
      </p>
      <p>Yet, despite its effectiveness, there is a major disadvantage to
the ABOD3 solution: a computer and display are required to run the
software. One solution might be to port ABOD3 to run directly on
robots with built-in screens, such as SoftBank Robotics’ Pepper.
While this is a technologically feasible and potentially interesting
approach, it would require a custom-made version of ABOD3
for each robotic system. Moreover, it is not
a workable solution for robots without a display.</p>
      <p>
        Nowadays, most people carry a smartphone. Such mobile phones
are equipped with powerful multi-core processors, capable of
running complex computational-intensive applications, in a compact
package. Modern phones also integrate high-resolution cameras,
allowing them to capture and display a feed of the real world.
That feed can be enhanced with the real-time superimposition of
computer-generated graphics to provide Augmented Reality (AR)
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Unlike Virtual Reality, which aims for complete immersion, AR
focuses on providing additional information about, and means of
interaction with, real-world objects, locations, and even other agents.
      </p>
      <p>In this paper we demonstrate new software, ABOD3-AR, which
runs on mobile phones. ABOD3-AR, as its name suggests, uses
a phone’s camera to provide an AR experience by superimposing
ABOD3’s tree-like display of Instinct plans over a tracked robot.</p>
    </sec>
    <sec id="sec-3">
      <title>TOOLS AND TECHNOLOGIES FOR</title>
    </sec>
    <sec id="sec-4">
      <title>TRANSPARENCY</title>
      <p>In this section we describe in some detail the tools and technologies
used in our transparency experiments.</p>
    </sec>
    <sec id="sec-5">
      <title>Behaviour Oriented Design</title>
      <p>
        Behaviour Oriented Design is a cognitive architecture that provides
an ontology of required knowledge and a convenient representation
for expressing timely actions as the basis for modular
decomposition for intelligent systems [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. It takes inspiration both from
the well-established programming paradigm of object-oriented
design (OOD) and its associated agile design [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], and an older but
well-known AI systems-engineering strategy, Behaviour-Based AI
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        BOD helps AI developers as it provides not only an ontology,
addressing the challenge of ‘how to link the diferent parts together’,
but also a development methodology; a solution to ‘how do I start
building this system’. It includes guidelines for modular
decomposition, documentation, refactoring, and code reuse. BOD aims to
enforce the good-coding practice ‘Don’t Repeat Yourself’, by
splitting behaviour into multiple modules. Modularisation makes
the development of intelligent agents easier, faster, more reusable, and more
cost-efficient. Behaviour modules also store their own memories,
e.g. sensory experiences. Multiple modules grouped together form
a behaviour library. This ‘library’ can be hosted on a separate
machine, for example in the cloud. The planner executing within the
agent is responsible for exploiting a plan file: stored structures
describing the agent’s priorities and behaviour. This separation of
responsibilities into two major components further enforces code
reusability. The same planner, if coded with a generic API to
connect to a behaviour library, can be deployed in multiple agents,
regardless of their goals or embodiment. For example, the Instinct
planner has been successfully used in both robots and agent-based
modelling, while POSH-Sharp has been deployed in a variety of
computer games [
        <xref ref-type="bibr" rid="ref19 ref9">9, 19</xref>
        ].
      </p>
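      <p>To make this separation of responsibilities concrete, the sketch below shows a planner coded against a generic behaviour-library interface, so the same action-selection logic can drive any embodiment. All names are our own illustration; this is not the Instinct or POSH-Sharp API.</p>
      <preformat>
// Minimal sketch of BOD's planner / behaviour-library split (hypothetical names).
public interface BehaviourLibrary {
    boolean sense(String senseName);    // e.g. "ObstacleAhead"
    void execute(String actionName);    // e.g. "TurnAway"
}

public final class GenericPlanner {
    private final BehaviourLibrary behaviours;

    public GenericPlanner(BehaviourLibrary behaviours) {
        this.behaviours = behaviours;
    }

    // One decision cycle: fire the first rule whose releaser sense holds.
    // In a real system the rules would come from the agent's plan file.
    public void tick(String[][] senseActionRules) {
        for (String[] rule : senseActionRules) {
            if (behaviours.sense(rule[0])) {
                behaviours.execute(rule[1]);
                return;
            }
        }
    }
}
      </preformat>
      <p>The same GenericPlanner can then be reused across robots or game agents simply by supplying a different BehaviourLibrary implementation, which is the reuse property described above.</p>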
    </sec>
    <sec id="sec-6">
      <title>POSH and Instinct</title>
      <p>
        POSH planning is an action-selection system introduced by Bryson
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. It is designed as a reactive planning derivative of BOD to be
used in embodied agents. POSH combines faster response times,
similar to reactive approaches to BBAI, with goal-directed plans. Its
use of fixed hierarchical representations of priorities makes it easy
to visualise as a directed graph readable by non-expert humans, and
subsequently to audit.
      </p>
      <p>
        Instinct is a lightweight alternative to POSH, incorporating
elements from the various variations and modifications of POSH
released over the years [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. The planner was first designed to
run within the low resources available on the Arduino micro-controller
system, such as the one used by the R5 robot seen in Figure 2.
      </p>
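      <p>As an illustration of the hierarchical, priority-ordered plan structure that POSH and Instinct share, the sketch below encodes a toy drive collection in Java. The element names are invented and do not reproduce the R5's actual plan.</p>
      <preformat>
// Illustrative POSH-style drive hierarchy (invented names, toy example).
public final class PoshPlanSketch {
    static final class Drive {
        final String name;
        final int priority;           // lower number = higher priority
        final String releaser;        // sense that must hold for the drive to fire
        final String[] actionSequence;

        Drive(String name, int priority, String releaser, String[] actions) {
            this.name = name;
            this.priority = priority;
            this.releaser = releaser;
            this.actionSequence = actions;
        }
    }

    // Avoid obstacles first; otherwise explore; otherwise sleep to save battery.
    static Drive[] examplePlan() {
        return new Drive[] {
            new Drive("AvoidObstacle", 1, "ObstacleAhead", new String[] { "Stop", "TurnAway" }),
            new Drive("Explore", 2, "BatteryOk", new String[] { "MoveForward" }),
            new Drive("Sleep", 3, "Always", new String[] { "Idle" }),
        };
    }
}
      </preformat>
      <p>It is this fixed representation that ABOD3 and ABOD3-AR render as a directed graph.</p>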
    </sec>
    <sec id="sec-7">
      <title>ABOD3</title>
      <p>ABOD3 is a substantial revision and extension of ABODE (A BOD
Environment), originally built by Steve Gray and Simon Jones.
ABOD3 directly reads and visualises POSH, Instinct, and UN-POSH
plans. Moreover, it reads log files containing the real-time
transparency data emanating from the Instinct Planner, in order to
provide a real-time graphical display of plan execution. Plan elements
are highlighted as they are called by the planner and glow based
on the number of recent invocations of that element. Plan elements
without recent invocations dim down over a user-defined
interval, until they return to their initial state. This offers abstracted
backtracking of the calls, and supports debugging of a common problem
in distributed systems: race conditions where two or more
subcomponents constantly trigger and interfere with or even cancel
each other. ABOD3 is also able to display a video and synchronise
it with the debug display. In this way it is possible to explore both
runtime debugging and wider issues of AI Transparency.</p>
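      <p>A minimal sketch of this highlight-and-dim behaviour is given below, assuming a normalised glow value per plan element that invocations reset to full and an opposing thread decays over the user-defined interval. The class and method names are ours, not ABOD3's internals.</p>
      <preformat>
// Sketch of per-element glow state driven by the transparency feed.
public final class GlowModel {
    private final double[] glow;            // one entry per plan element; 0 = idle, 1 = just invoked
    private final double decayPerSecond;    // derived from the user-defined dim interval

    public GlowModel(int elementCount, double dimIntervalSeconds) {
        this.glow = new double[elementCount];
        this.decayPerSecond = 1.0 / dimIntervalSeconds;
    }

    // Called when the feed reports an invocation of a plan element.
    public synchronized void onInvocation(int elementIndex) {
        glow[elementIndex] = 1.0;
    }

    // Called periodically by the dimming thread; elements fade back to idle.
    public synchronized void decay(double elapsedSeconds) {
        for (int i = 0; i &lt; glow.length; i++) {
            glow[i] = Math.max(0.0, glow[i] - decayPerSecond * elapsedSeconds);
        }
    }

    public synchronized double glowOf(int elementIndex) {
        return glow[elementIndex];
    }
}
      </preformat>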
      <p>
        The editor provides a user-customisable user interface (UI) in line
with the good practices for transparency introduced by Theodorou
et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Plan elements, their sub-trees, and debugging-related
information can be hidden, to allow different levels of abstraction
and present only relevant information to the present development
or debugging task. The application, as shown in Figure 3, allows
the user to override its default layout by moving elements and
zooming the display to suit the user’s needs and preferences. Layout
preferences can be stored in a separate file. We have successfully
used ABOD3 in both of these roles [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
      </p>
    </sec>
    <sec id="sec-8">
      <title>ABOD3-AR</title>
      <p>ABOD3-AR builds on the good practice and lessons learned through
the extended use of ABOD3. It provides a mobile-friendly interface,
facilitating transparency for both end users and experts. In this
section, we not only present the final system, but also look at the
technical challenges and design decisions faced during
development.</p>
      <p>2.4.1 Deployment Platform and Architecture. The Android Operating
System (OS; https://www.android.com/) is our chosen development platform.
Due to the open-source nature of the Android operating system, a number
of computer vision and augmented reality (AR) libraries exist for it.
Moreover, no developer's license is required to prototype or release the
final deliverable. Android applications are written in Java, like ABOD3,
making it possible to reuse its back-end code. Unlike the original ABOD3,
ABOD3-AR is aimed exclusively at embodied-agent transparency. At the time
of writing, Instinct (see Section 2.2) is the only supported
action-selection system.</p>
      <p>Our test configuration, as seen in Figure 1, includes the
tried-and-tested R5 robot. In the R5 robot, the callbacks write textual
data to a TCP/IP stream over a wireless (WiFi) link. A Java-based
Instinct Server receives this information, enriches it by replacing
element IDs with element names, filters out low-level information, and
sends the result to any mobile phone running ABOD3-AR. Clients do not
necessarily need to be on the same network, but staying on one is
recommended to reduce latency. We decided to use this ‘middleman server’
approach to allow multiple phones to be connected at the same time.</p>
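      <p>The sketch below illustrates this ‘middleman server’ pattern: read the robot's textual feed, enrich element IDs into names, filter low-level events, and fan the result out to every connected phone. It is our own illustrative code with an invented message convention, not the actual Instinct Server source.</p>
      <preformat>
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative transparency relay between one robot and many ABOD3-AR clients.
public final class TransparencyRelay {
    private final Map&lt;String, String&gt; idToName = new ConcurrentHashMap&lt;&gt;();
    private final List&lt;PrintWriter&gt; clients = new CopyOnWriteArrayList&lt;&gt;();

    // Accept ABOD3-AR clients in the background and remember their output streams.
    public void acceptClients(final ServerSocket server) {
        new Thread(() -> {
            while (true) {
                try {
                    Socket s = server.accept();
                    clients.add(new PrintWriter(s.getOutputStream(), true));
                } catch (IOException ignored) { }
            }
        }).start();
    }

    // Read the robot's textual feed line by line, enrich and filter, then broadcast.
    public void relay(Socket robot) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(robot.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            if (isLowLevel(line)) continue;        // drop low-level events
            String enriched = enrich(line);        // replace element IDs with names
            for (PrintWriter client : clients) client.println(enriched);
        }
    }

    private boolean isLowLevel(String line) {
        return line.startsWith("SENSE");           // invented filtering rule
    }

    private String enrich(String line) {
        for (Map.Entry&lt;String, String&gt; e : idToName.entrySet()) {
            line = line.replace(e.getKey(), e.getValue());
        }
        return line;
    }
}
      </preformat>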
      <p>2.4.2 Robot tracking. Developing an AR application for a mobile
phone presents two major technical challenges: (1) managing the
limited computational resources available to achieve sufficient
tracking and rendering of the superimposed graphics, and (2)
successfully identifying and continuously tracking the object(s) of
interest.</p>
      <p>2.4.3 Region of Interest. A simple common solution to both
challenges is to restrict object tracking to a region of the video
feed captured by the phone's camera, referred to as the Region of
Interest (ROI). It is faster and easier to extract features for
classification, and subsequently to track, within a limited area rather
than over the full frame. The user registers an area as the ROI by
expanding a yellow rectangle over the robot. Once selected, the yellow
rectangle is replaced by a single pivot located at the centre of the
ROI.</p>
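      <p>A sketch of this registration step, under the assumption that the rectangle is built from a touch-drag gesture and then collapsed to its centre pivot (names are ours):</p>
      <preformat>
import android.graphics.Point;
import android.graphics.Rect;

// Build the ROI from the user's drag gesture and derive the tracking pivot.
public final class RoiSelector {
    // Normalise the drag so the rectangle is valid whichever way the user drags.
    public static Rect roiFromDrag(int startX, int startY, int endX, int endY) {
        return new Rect(Math.min(startX, endX), Math.min(startY, endY),
                        Math.max(startX, endX), Math.max(startY, endY));
    }

    // Once selected, the rectangle is replaced by this single centre pivot.
    public static Point pivotOf(Rect roi) {
        return new Point(roi.centerX(), roi.centerY());
    }
}
      </preformat>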
      <p>2.4.4 Tracker. Various solutions were considered, from the built-in
black-box tracking of ARCore (https://developers.google.com/ar/) to
building and using our own tracker. To speed up development, we decided
to use an existing library, BoofCV (https://boofcv.org/), a widely used
Java library for image processing and object tracking. BoofCV was
selected due to its compatibility with Android and the range of trackers
available for prototyping.</p>
      <p>
        BoofCV receives a real-time feed of camera frames, processes
them, and then returns the required information to the Android
application. A number of trackers, or processors as they are referred
to in BoofCV, are available. We narrowed down the choice to the
Circulant Matrices tracker [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and the Tracking-Learning-Detection (TLD) tracker [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      </p>
      <p>The Tracking-Learning-Detection tracker follows an object from frame
to frame by localising all appearances that have been observed so
far, and corrects the tracker if necessary. The learning component
estimates the detector's errors and updates the detector to avoid
repeating them. The learning process is modelled as a discrete
dynamical system, and the conditions under which learning
guarantees improvement are identified. However, TLD is
computationally intensive. In our testing we found that when TLD was
used the application would crash on older phones, due to its high
memory usage.</p>
      <p>The Circulant Matrices tracker is a fast tracker of local moving
objects. It uses the theory of circulant matrices, the Discrete Fourier
Transform (DFT), and linear classifiers to track a target and learn its
changes in appearance. The target is assumed to be rectangular with a
fixed size. A dense local search, using the DFT, is performed around the
most recent target location. Texture information is used for feature
extraction and object description. As only one description
of the target is saved, the tracker has a low computational cost
and memory footprint. Our informal in-lab testing showed that the
Circulant tracker provides robust tracking.</p>
      <p>The default implementation of the Circulant Matrices tracker
in BoofCV does not work with colour frames. Our solution first
converts the video feed, one frame at a time, to greyscale using a
simple RGB averaging function. The tracker returns only the
coordinates of the centre of the ROI, while the original colour
frame is rendered to the screen. Finally, to increase tracking
performance, the camera is set to record at a constant resolution of 640
by 480 pixels.</p>
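      <p>A sketch of the per-frame conversion, assuming packed ARGB pixels as delivered by an Android bitmap; the simple channel averaging mirrors the RGB averaging function described above:</p>
      <preformat>
// Convert one packed-ARGB frame to a greyscale array for the tracker; the
// original colour frame is still the one rendered to the screen.
public final class GreyscaleConverter {
    public static int[] toGreyscale(int[] argbPixels) {
        int[] grey = new int[argbPixels.length];
        for (int i = 0; i &lt; argbPixels.length; i++) {
            int p = argbPixels[i];
            int r = (p >> 16) &amp; 0xFF;
            int g = (p >> 8) &amp; 0xFF;
            int b = p &amp; 0xFF;
            grey[i] = (r + g + b) / 3;   // simple RGB averaging, as described above
        }
        return grey;
    }
}
      </preformat>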
      <p>2.4.5 User Interface. ABOD3-AR renders the plan directly next to
the robot, as seen in Figure 2. A pivot connects the plan to the centre
of the user-selected ROI. The PC-targeted version of ABOD3 offers
abstraction of information: the full plan is visible by default, but the
user has the ability to hide information. This approach works on
the large screens of laptops and desktops. In contrast, at the time
of writing, phones rarely sport a screen larger than 15 cm. Thus,
to accommodate the smaller screen real estate available on a phone,
ABOD3-AR displays only high-level elements by default. Drives
are annotated with their priority number next to their name and are
listed in ascending order. ABOD3-AR shares the same real-time
transparency methodology as ABOD3: plan elements light up as
they are used, with an opposing thread dimming them down over
time.</p>
      <p>
        Like its ‘sibling’ application, ABOD3-AR is intended for use by
both end users and expert roboticists. A study conducted by Subin
et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] demonstrates that users of AR applications aimed at
developers, providing transparency-related information, require
an AR interface that visualises additional technical content
compared to naive users. These results are in line with good practices
[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] on how different users require different levels of abstraction
and overall amounts of information. We took these results into
consideration by allowing low-level technical data to be displayed
in ABOD3-AR upon user request. A user can tap on elements to
expand their subtree. In order to avoid overcrowding the screen,
plan elements not part of the ‘zoomed-in’ subtree become invisible.
Subin et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] also show that technical users of an AR application
prefer to have access to low-level details. Hence, we added an option to
enable display of the Server data, in string format, as received by
ABOD3-AR.
      </p>
      <p>[Table 1: Godspeed questionnaire items (Dead - Alive, Stagnant - Lively, Mechanical - Organic, Artificial - Lifelike, Inert - Interactive, Dislike - Like, Unfriendly - Friendly, Unpleasant - Pleasant, Unintelligent - Intelligent, Bored - Interested, Anxious - Relaxed), with mean ratings for Group 1 (N = 23) and Group 2 (N = 22) and p-values.]</p>
    </sec>
    <sec id="sec-9">
      <title>USER STUDY</title>
      <p>A user study was carried out to investigate the effectiveness of
ABOD3-AR. It took place at the University of Bath, in an open
public space, and ran over five days. The principal
hypothesis of this experiment is that observers of a robot with access to
ABOD3-AR will be able to create more accurate mental models of it. In
this section, we present our results and discuss how ABOD3-AR
provides an effective alternative to ABOD3 as a means of providing
robot transparency. Moreover, we argue that our results demonstrate
that the implementation of transparency with ABOD3-AR increases
not only trust towards the system, but also its likeability.</p>
      <p>
        The R5 robot is placed in a small pen with a selection of objects,
e.g. a plastic duck. The participants are asked to observe the robot
and then answer our questionnaires. The participants are split into
two groups: Group 1 used the AR app and Group 2 did not use the
app. Participants are asked to observe the robot for at least three
minutes. A total of 45 participants took part in the experiment
(N = 45). The majority of users were aged 36 to 45. Each group had the
same number of females and males. Although they worked regularly
with computers, most of them did not have a STEM background;
this was the main difference from participants in previous research
[
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
      </p>
      <p>
        The Godspeed questionnaire by Bartneck et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] is used to
measure the perception of an artificial embodied agent with and
without access to transparency-related information. These are standard
questions often used in Human-Robot Interaction (HRI)
research, and were also used in similar studies [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. We used
a Likert scale of 1 to 5 as in Bartneck et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
    </sec>
    <sec id="sec-10">
      <title>Results</title>
      <p>Individuals who had access to ABOD3-AR were more likely to
perceive the robot as alive (M = 3.27, SD = 1.202) compared to the ones
without access to the app; t(43) = –0.692, p = 0.01. Moreover,
participants in the no-transparency condition described the robot as
more stagnant (M = 3.30, SD = 0.926) compared to the ones in Group 1
(M = 4.14, SD = 0.710), who described the robot as lively; t(43) =
–3.371, p = 0.002. Finally, participants in the ABOD3-AR condition
perceived the robot to be friendlier (M = 3.77, SE = 0.869) than
participants in the no-transparency condition (M = 3.17, SE = 1.029);
t(43) = –2.104, p = 0.041. No other significant results were reported.
These results are shown in Table 1.</p>
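      <p>As a sanity check on the reported statistics, the snippet below recomputes the pooled two-sample t statistic for the Stagnant/Lively item from the reported means and standard deviations, assuming the group sizes from Table 1 (N = 23 with the app, N = 22 without); it yields roughly –3.4, matching the reported t(43) = –3.371 up to rounding of the inputs.</p>
      <preformat>
// Recompute the Stagnant/Lively t statistic from the reported summary values.
public final class TTestCheck {
    public static void main(String[] args) {
        double m1 = 3.30, sd1 = 0.926; int n1 = 22;  // no-transparency group
        double m2 = 4.14, sd2 = 0.710; int n2 = 23;  // ABOD3-AR group
        double pooledVar = ((n1 - 1) * sd1 * sd1 + (n2 - 1) * sd2 * sd2) / (n1 + n2 - 2);
        double standardError = Math.sqrt(pooledVar * (1.0 / n1 + 1.0 / n2));
        double t = (m1 - m2) / standardError;
        System.out.printf("t(%d) = %.3f%n", n1 + n2 - 2, t);  // prints t(43) = -3.424
    }
}
      </preformat>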
    </sec>
    <sec id="sec-11">
      <title>Discussion</title>
      <p>We found a statistically significant difference (p-value &lt; 0.05) in
three Godspeed questions: Dead/Alive, Stagnant/Lively, and
Unfriendly/Friendly. The R5 has connecting wires and various chipsets
exposed. Yet, participants with access to ABOD3-AR were more
likely to describe the robot as alive, lively, and friendly. All three
dimensions had mean values over the ‘neutral’ score of 3. Although
not significantly higher, there was an indicative increase in the
attribution of the descriptors Interactive and Pleasant, again both with
values over the neutral score. At first glance, these results suggest
an increase in anthropomorphic, or at least biological,
characteristics. However, transparency decreased the perception of the
robot as Humanoid and Organic, both characterisations having
means below the neutral score.</p>
      <p>
        Action selection takes place even when the robot is already
performing a lengthy action, e.g. moving, or when it may appear
‘stuck’, e.g. when it is in the Sleep drive to save battery. These results also
support the view that a sensible implementation of transparency, in line with
the principles set by Theodorou et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], can maintain or even
improve the user experience and engagement.
      </p>
      <p>An explanation for the high levels of Interest (mean of 3.8 for
Group 1 and 4.19 for Group 2) is that embodied agents,
unlike virtual agents, are not widely available. Participants in both
groups may have been intrigued by the idea of encountering a
real robot. Nonetheless, our findings indicate that transparency
does not necessarily reduce the utility or ‘likeability’ of a system.
Instead, the use of a transparency display can increase the utility
and likeability of a system.</p>
      <p>There are several characteristics of augmented reality that make
it a promising platform for providing transparency information for
both industrial and domestic robots. These include the affordability
of AR-enabled devices, its availability on multiple platforms such
as mobile phones and tablets, the rapidly increasing progress in
mobile processors and cameras, and the convenience of not
requiring headsets or other paraphernalia, unlike its competitor,
virtual reality.</p>
    </sec>
    <sec id="sec-12">
      <title>CONCLUSIONS AND FUTURE WORK</title>
      <p>In this paper we presented a new tool, ABOD3-AR, which runs on
modern mobile phones to provide transparency-related information
to end users. Our tool uses a purpose-made user interface with
augmented-reality technologies to display the real-time status of
any robot running the Instinct planner.</p>
      <p>As far as we are aware, this is the first use of mobile augmented
reality focused solely on increasing the transparency of robots and users’
trust towards them. Previous research regarding transparency in
robots relied on screen and audio output, or on non-real-time
transparency. Building upon past research, we provide an affordable,
compact solution which makes use of augmented reality.</p>
      <p>The results from the user study presented in this paper
demonstrate how ABOD3-AR can be successfully used to provide real-time
transparency to end users. Our results demonstrate how naive users
calibrate their mental models and alter their perception of a robot
as its machine nature is made clearer. Moreover, they indicate that
participants with access to ABOD3-AR showed higher interest in the
system, potentially increasing its utility and user engagement.</p>
      <p>The work presented in this paper is part of a research programme
to investigate the effects of transparency on the perceived
expectations, trust, and utility of a system. Initially this is being explored
using the non-humanoid R5 robot; later we plan to expand
the study using the Pepper humanoid robot manufactured by
SoftBank Robotics. We argue that humanoid appearance will always
be deceptive at the implicit level. Hence, we want to see how explicit
understanding of the robot’s machine nature affects its perceived
utility, and whether transparency alters the trust given to the machine
by its human users.</p>
      <p>Planned future work also aims at further improving the usability of
the application. Currently, the robot-tracking mechanism
requires the user to manually select a region of interest (ROI) which contains
the robot. Future versions of ABOD3-AR would skip this step and
replace it with a machine learning (ML) approach. This will enable
the app to detect and recognise the robot by a number of features,
such as colour and shape. The app will also be enhanced to
retrieve the robot type and its plan of execution from a database of
robots.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Ronald T. Azuma. 1997. A Survey of Augmented Reality. Presence: Teleoperators and Virtual Environments 6, 4 (Aug. 1997), 355-385. DOI:http://dx.doi.org/10.1162/pres.1997.6.4.355</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Christoph Bartneck, Dana Kulić, Elizabeth Croft, and Susana Zoghbi. 2009. Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. International Journal of Social Robotics 1, 1 (Jan. 2009), 71-81. DOI:http://dx.doi.org/10.1007/s12369-008-0001-3</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Margaret Boden, Joanna Bryson, Darwin Caldwell, Kerstin Dautenhahn, Lilian Edwards, Sarah Kember, Paul Newman, Vivienne Parry, Geoff Pegman, Tom Rodden, Tom Sorrell, Mick Wallis, Blay Whitby, and Alan Winfield. 2017. Principles of robotics: regulating robots in the real world. Connection Science 29, 2 (2017), 124-129. DOI:http://dx.doi.org/10.1080/09540091.2016.1271400</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] R. A. Brooks. 1991. New Approaches to Robotics. Science 253, 5025 (1991), 1227-1232. DOI:http://dx.doi.org/10.1126/science.253.5025.1227</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Joanna J. Bryson. 2001. Intelligence by Design: Principles of Modularity and Coordination for Engineering Complex Adaptive Agents. Ph.D. Dissertation.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] Joanna J. Bryson. 2003. Action Selection and Individuation in Agent Based Modelling. In Proceedings of Agent 2003: Challenges in Social Simulation, David L. Sallach and Charles Macal (Eds.). Argonne National Laboratory, Argonne, IL, 317-330.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] Joanna J. Bryson and Andreas Theodorou. 2019. How Society Can Maintain Human-Centric Artificial Intelligence. In Human-centered digitalization and services, Marja Toivonen-Noro, Evelina Saari, Helinä Melkas, and Mervi Hasu (Eds.).</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] Mary T. Dzindolet, Scott A. Peterson, Regina A. Pomranky, Linda G. Pierce, and Hall P. Beck. 2003. The role of trust in automation reliance. International Journal of Human-Computer Studies 58, 6 (2003), 697-718. DOI:http://dx.doi.org/10.1016/S1071-5819(03)00038-7</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] Swen Gaudl, Simon Davies, and Joanna J. Bryson. 2013. Behaviour oriented design for real-time-strategy games: An approach on iterative development for STARCRAFT AI. Foundations of Digital Games Conference (2013), 198-205.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] João F. Henriques, Rui Caseiro, Pedro Martins, and Jorge Batista. 2012. Exploiting the circulant structure of tracking-by-detection with kernels. In Lecture Notes in Computer Science, Vol. 7575 LNCS. 702-715. DOI:http://dx.doi.org/10.1007/978-3-642-33765-9_50</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] Zdenek Kalal, Krystian Mikolajczyk, and Jiri Matas. 2011. Tracking-Learning-Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 34, 1 (2011), 1409-1422. DOI:http://dx.doi.org/10.1109/TPAMI.2011.239</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] John D. Lee and Neville Moray. 1994. Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies 40, 1 (Jan. 1994), 153-184. DOI:http://dx.doi.org/10.1006/ijhc.1994.1007</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] Joseph B. Lyons. 2013. Being Transparent about Transparency: A Model for Human-Robot Interaction. Trust and Autonomous Systems: Papers from the 2013 AAAI Spring Symposium (2013), 48-53.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] Maha Salem, Gabriella Lakatos, Farshid Amirabdollahian, and Kerstin Dautenhahn. 2015. Would You Trust a (Faulty) Robot?: Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '15). ACM, New York, NY, USA, 141-148. DOI:http://dx.doi.org/10.1145/2696454.2696497</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] E. K. Subin, Ashik Hameed, and A. P. Sudheer. 2017. Android based augmented reality as a social interface for low cost social robots. In Proceedings of the Advances in Robotics - AIR '17. ACM Press, New York, NY, USA, 1-4. DOI:http://dx.doi.org/10.1145/3132446.3134907</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] Andreas Theodorou. 2017. ABOD3: A graphical visualisation and real-time debugging tool for BOD agents. In CEUR Workshop Proceedings, Vol. 1855. 60-61.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] Andreas Theodorou, Robert H. Wortham, and Joanna J. Bryson. 2017. Designing and implementing transparency for real time inspection of autonomous robots. Connection Science 29, 3 (Jul. 2017), 230-241. DOI:http://dx.doi.org/10.1080/09540091.2017.1310182</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] Robert H. Wortham, Andreas Theodorou, and Joanna J. Bryson. 2017. Improving robot transparency: real-time visualisation of robot AI substantially improves understanding in naive observers. In IEEE RO-MAN 2017: 26th IEEE International Symposium on Robot and Human Interactive Communication, 28-08-2017 through 01-09-2017. http://www.ro-man2017.org/site/</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] Robert H. Wortham, Swen E. Gaudl, and Joanna J. Bryson. 2018. Instinct: A biologically inspired reactive planner for intelligent embedded systems. Cognitive Systems Research (2018). DOI:https://doi.org/10.1016/j.cogsys.2018.10.016</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] Robert H. Wortham, Andreas Theodorou, and Joanna J. Bryson. 2017. Robot transparency: Improving understanding of intelligent behaviour for designers and users. Lecture Notes in Computer Science, Vol. 10454 LNAI (2017), 274-289. DOI:http://dx.doi.org/10.1007/978-3-319-64107-2_22</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>