=Paper=
{{Paper
|id=Vol-2327/IUI-ATEC3
|storemode=property
|title=Robots That Make Sense: Transparent Intelligence Through Augmented Reality
|pdfUrl=https://ceur-ws.org/Vol-2327/IUI19WS-IUIATEC-3.pdf
|volume=Vol-2327
|authors=Alexandros Rotsidis,Andreas Theodorou,Robert H. Wortham
|dblpUrl=https://dblp.org/rec/conf/iui/RotsidisTW19
}}
==Robots That Make Sense: Transparent Intelligence Through Augmented Reality==
Alexandros Rotsidis* (University of Bath, Bath, United Kingdom; A.Rotsidis@bath.ac.uk)
Andreas Theodorou* (Umeå University, Umeå, Sweden; andreas.theodorou@umu.se)
Robert H. Wortham (University of Bath, Bath, United Kingdom; r.h.wortham@bath.ac.uk)

* We thank the EPSRC grant [EP/L016540/1] for funding Rotsidis and Theodorou. Both authors contributed equally to the paper.

IUI Workshops'19, March 20, 2019, Los Angeles, USA. © 2019 Copyright for the individual papers by the papers' authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors.

ABSTRACT

Autonomous robots can be difficult to understand by their developers, let alone by end users. Yet, as they become increasingly integral parts of our societies, the need for affordable, easy-to-use tools to provide transparency grows. The rise of the smartphone and the improvements in mobile computing performance have gradually allowed Augmented Reality (AR) to become more mobile and affordable. In this paper we review relevant robot systems architecture and propose a new software tool to provide robot transparency through the use of AR technology. Our new tool, ABOD3-AR, provides real-time graphical visualisation and debugging of a robot's goals and priorities as a means for both designers and end users to gain a better mental model of the internal state and decision-making processes taking place within a robot. We also report on our on-going research programme and planned studies to further understand the effects of transparency on naive users and experts.

CCS CONCEPTS

• Human-centered computing → Mixed / augmented reality; Systems and tools for interaction design; • Computing methodologies → Artificial intelligence; • Software and its engineering → Software creation and management; Software design engineering; • Social and professional topics → Computing / technology policy.

KEYWORDS

robots, mobile augmented reality, transparency, artificial intelligence

1 INTRODUCTION

The relationship between transparency, trust, and utility is a complex one. By exposing the inner 'smoke and mirrors' of our agents, we risk making them look less interesting. Moreover, the wide range of application domains for AI, and of the different stakeholders interacting with intelligent systems, should not be underestimated. What is effectively transparent therefore varies by who the observer is and what their goals and obligations are. There is, however, a need for design guidelines on how to implement transparent systems, alongside a 'bare minimum' standardised implementation [3]. In the end, the goal of transparency should not be complete comprehension, as that would severely limit the scope of human achievement. Instead, the goal of transparency is to provide sufficient information to ensure at least human accountability [7].

Still, a real-time transparency implementation can help users calibrate their trust in the machine [13, and references therein]. Calibration refers to the correspondence between a person's trust in the system and the system's capabilities [12]. Calibration of trust occurs when the end user has a mental model of the system, relies on the system within the system's capabilities, and is aware of its limitations. If we consider transparency as a mechanism that exposes the decision-making of a system, then it can help users adjust their expectations and forecast certain actions from the system. This position is supported by Dzindolet et al. [8], who conducted a study in which participants decided whether they trusted a particular piece of pattern-recognition software. The users were given only the percentage accuracy of the probabilistic algorithm's prediction for each image. Yet, by having access to this easy-to-implement transparency feature, they were able to calibrate their trust in real time.

Our own studies [discussed in 20] demonstrate how users of various demographic backgrounds had inaccurate mental models of a mobile robot running a BOD-based planner, Instinct [19]. The robot transmits a transparency feed to the real-time debugging software ABOD3 [7, 16]. The transparency display is customised for a high-level end-user view of the robot's goals and its progress towards those goals. Participants without access to the transparency software ascribed unrealistic functionalities to the robot, potentially raising their expectations of its intelligence and safety. When the same robot is used with ABOD3, providing an end-user transparency visualisation, users are able to calibrate their mental models, leading to more realistic expectations but, interestingly, a higher respect for the system's intelligence.

Yet, despite its effectiveness, there is a major disadvantage with the ABOD3 solution: a computer and display are required to run the software. One solution might be to port ABOD3 to run directly on robots with built-in screens, such as SoftBank Robotics' Pepper. Albeit technologically feasible and potentially interesting, this approach requires custom-made versions of ABOD3 for each robotics system. Moreover, it is not a compatible solution for robots without a display.

Nowadays, most people carry a smartphone. Such mobile phones are equipped with powerful multi-core processors, capable of running complex, computationally intensive applications in a compact package. Modern phones also integrate high-resolution cameras, allowing them to capture and display a feed of the real world. That feed can be enhanced with the real-time superimposition of computer-generated graphics to provide Augmented Reality (AR) [1]. Unlike Virtual Reality, which aims for complete immersion, AR focuses on providing additional information about, and means of interaction with, real-world objects, locations, and even other agents.

In this paper we demonstrate new software, ABOD3-AR, which can run on mobile phones. ABOD3-AR, as its name suggests, uses a phone's camera to provide an AR experience by superimposing ABOD3's tree-like display of Instinct plans over a tracked robot.
2 TOOLS AND TECHNOLOGIES FOR TRANSPARENCY

In this section we describe in some detail the tools and technologies used in our transparency experiments.

2.1 Behaviour Oriented Design

Behaviour Oriented Design (BOD) is a cognitive architecture that provides an ontology of required knowledge and a convenient representation for expressing timely actions as the basis for modular decomposition for intelligent systems [5, 6]. It takes inspiration both from the well-established programming paradigm of object-oriented design (OOD) and its associated agile design [9], and from an older but well-known AI systems-engineering strategy, Behaviour-Based AI (BBAI) [4].

BOD helps AI developers as it provides not only an ontology, addressing the challenge of 'how to link the different parts together', but also a development methodology: a solution to 'how do I start building this system'. It includes guidelines for modular decomposition, documentation, refactoring, and code reuse. BOD aims to enforce the good-coding practice 'Don't Repeat Yourself' by splitting the behaviour into multiple modules. Modularisation makes the development of intelligent agents easier, faster, more reusable, and more cost efficient. Behaviour modules also store their own memories, e.g. sensory experiences. Multiple modules grouped together form a behaviour library. This 'library' can be hosted on a separate machine, for example in the cloud. The planner executing within the agent is responsible for exploiting a plan file: stored structures describing the agent's priorities and behaviour. This separation of responsibilities into two major components enforces further code reusability. The same planner, if coded with a generic API to connect to a behaviour library, can be deployed in multiple agents, regardless of their goals or embodiment. For example, the Instinct planner has been successfully used in both robots and agent-based modelling, while POSH-Sharp has been deployed in a variety of computer games [9, 19].
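The planner/behaviour-library split is easiest to see in code. The following is a minimal, hypothetical Java sketch of that separation; the interface and all names are ours for illustration and do not come from the Instinct or POSH code bases.

```java
// Hypothetical sketch of BOD's planner/behaviour-library separation.
// The planner decides *what* to do and when; the behaviour library
// decides *how* it is done on a particular embodiment.
interface BehaviourLibrary {
    boolean senseBoolean(String senseName);   // e.g. "HumanDetected" (invented name)
    void executeAction(String actionName);    // e.g. "LightLEDs" (invented name)
}

final class Planner {
    private final BehaviourLibrary behaviours;

    Planner(BehaviourLibrary behaviours) {
        // Any embodiment works: physical robot, game bot, or simulation agent.
        this.behaviours = behaviours;
    }

    // One decision cycle. A real planner would consult its plan file here;
    // this stub hard-codes a single sense-act pair to show the data flow.
    void tick() {
        if (behaviours.senseBoolean("HumanDetected")) {
            behaviours.executeAction("LightLEDs");
        }
    }
}
```

Because the planner only ever talks to the `BehaviourLibrary` interface, the same planner binary can be reused across agents whose behaviour libraries differ, which is the code-reuse point made above.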
2.2 POSH and Instinct

POSH planning is an action-selection system introduced by Bryson [5]. It is designed as a reactive-planning derivative of BOD to be used in embodied agents. POSH combines fast response times, similar to reactive approaches to BBAI, with goal-directed plans. Its use of hierarchical, fixed representations of priorities makes a plan easy to visualise as a directed graph that non-expert humans can subsequently audit.

Instinct is a lightweight alternative to POSH, incorporating elements from the various variations and modifications of POSH released over the years [19]. The planner was first designed to run on the low resources available on the Arduino micro-controller system, such as the one used by the R5 robot seen in Figure 2.
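To make the 'hierarchical fixed representations of priorities' concrete, the sketch below shows POSH-style drive selection in minimal Java: drives sit in a fixed priority order and, on each cycle, the highest-priority drive whose release condition fires is the one allowed to run. This is an illustration of the idea only, with invented names, not the Instinct implementation [19].

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

// A drive: a named goal with a fixed priority, a release condition,
// and a behaviour (the root of its sub-plan). Names are illustrative.
final class Drive {
    final String name;
    final int priority;              // lower number = higher priority
    final BooleanSupplier trigger;   // release condition
    final Runnable behaviour;        // what runs when the drive wins

    Drive(String name, int priority, BooleanSupplier trigger, Runnable behaviour) {
        this.name = name; this.priority = priority;
        this.trigger = trigger; this.behaviour = behaviour;
    }
}

final class DriveCollection {
    private final List<Drive> drives;   // held sorted by ascending priority

    DriveCollection(List<Drive> drives) {
        this.drives = new ArrayList<>(drives);
        this.drives.sort((a, b) -> Integer.compare(a.priority, b.priority));
    }

    // One action-selection cycle: the first (highest-priority) released
    // drive executes; everything below it is ignored this cycle.
    void tick() {
        for (Drive d : drives) {
            if (d.trigger.getAsBoolean()) {
                d.behaviour.run();
                return;
            }
        }
    }
}
```

This fixed ordering is also what lets ABOD3-AR list drives by priority on screen, as described in Section 2.4.5.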
2.3 ABOD3

ABOD3 is a substantial revision and extension of ABODE (A BOD Environment), originally built by Steve Gray and Simon Jones. ABOD3 directly reads and visualises POSH, Instinct, and UN-POSH plans. Moreover, it reads log files containing the real-time transparency data emanating from the Instinct planner, in order to provide a real-time graphical display of plan execution. Plan elements are highlighted as they are called by the planner and glow based on the number of recent invocations of that element. Plan elements without recent invocations dim down over a user-defined interval, until they return to their initial state. This offers abstracted backtracking of the calls, and supports debugging of a common problem in distributed systems: race conditions, where two or more sub-components constantly trigger and interfere with, or even cancel, each other. ABOD3 is also able to display a video and synchronise it with the debug display. In this way it is possible to explore both runtime debugging and wider issues of AI transparency.

The editor provides a user-customisable user interface (UI), in line with the good practices for transparency introduced by Theodorou et al. [17]. Plan elements, their sub-trees, and debugging-related information can be hidden, to allow different levels of abstraction and to present only the information relevant to the present development or debugging task. The application, as shown in Figure 3, allows the user to override its default layout by moving elements and zooming the display to suit the user's needs and preferences. Layout preferences can be stored in a separate file. We have successfully used ABOD3 both during development and in user studies [20].

Figure 3: The ABOD3 Graphical Transparency Tool displaying a POSH plan in debugging mode. The highlighted elements are the ones recently called by the planner. The intensity of the glow indicates the number of recent calls. ABOD3, used as an IDE for the entire hierarchical cycle, shows everything; ABOD3-AR shows only parts of the plan (two levels only).
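The highlight-and-decay mechanism just described can be summarised in a few lines. The sketch below is our own illustration, not ABOD3's source code: an element's intensity jumps on each invocation reported by the transparency feed, while a periodic task, standing in for ABOD3's dimming thread, decays it back towards its initial state over a user-defined interval. All constants are invented.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// One plan element's glow state: invocation raises intensity, a periodic
// dimming task lowers it, so recently active elements stand out.
final class PlanElementGlow {
    private double intensity = 0.0;     // 0 = idle, 1 = fully lit
    private final double decayPerTick;

    PlanElementGlow(long dimIntervalMillis, long tickMillis) {
        // Full glow fades back to idle over the user-defined dim interval.
        this.decayPerTick = (double) tickMillis / dimIntervalMillis;
    }

    synchronized void onInvocation() {   // driven by the transparency feed
        intensity = Math.min(1.0, intensity + 0.25);
    }

    synchronized void decayTick() {      // driven by the dimming task
        intensity = Math.max(0.0, intensity - decayPerTick);
    }

    synchronized double intensity() { return intensity; }
}

class GlowDemo {
    public static void main(String[] args) {
        PlanElementGlow glow = new PlanElementGlow(5000, 100);
        ScheduledExecutorService dimmer = Executors.newSingleThreadScheduledExecutor();
        // This scheduled task plays the role of the 'opposing thread'.
        dimmer.scheduleAtFixedRate(glow::decayTick, 100, 100, TimeUnit.MILLISECONDS);
        glow.onInvocation();             // element lights up, then fades over ~5 s
    }
}
```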
2.4 ABOD3-AR

ABOD3-AR builds on the good practice and lessons learned through the extended use of ABOD3. It provides a mobile-friendly interface, facilitating transparency for both end users and experts. In this section, we not only present the final system, but also look at the technical challenges and design decisions faced during development.

2.4.1 Deployment Platform and Architecture. The Android Operating System (OS) (https://www.android.com/) is our chosen development platform. Due to the open-source nature of Android, a number of computer vision and augmented reality (AR) libraries exist for it. Moreover, no developer's license is required to prototype or release the final deliverable. Android applications are written in Java, like ABOD3, making it possible to reuse ABOD3's back-end code. Unlike the original ABOD3, ABOD3-AR is aimed exclusively at embodied-agent transparency. At the time of writing, Instinct (see Section 2.2) is the only supported action-selection system.

Our test configuration, as seen in Figure 1, includes the tried-and-tested R5 robot. In the R5 robot, callbacks write textual transparency data to a TCP/IP stream over a wireless (WiFi) link. A Java-based Instinct Server receives this information, enriches it by replacing element IDs with element names, filters out low-level information, and sends the result to any mobile phones running ABOD3-AR. Clients do not necessarily need to be on the same network, but this is recommended to reduce latency. We decided to use this 'middle-man server' approach to allow multiple phones to be connected at the same time.

Figure 1: R5 uses a WiFi connection to send the transparency feed to the Instinct Server for processing. Smartphones, running ABOD3-AR, can remotely connect to the server and receive the processed information.
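The 'middle-man server' pattern can be sketched as follows. This is a hypothetical illustration of the data path in Figure 1, not the actual Instinct Server code; the port numbers, the ID-to-name table, and the filter rule are all invented for the example.

```java
import java.io.*;
import java.net.*;
import java.util.*;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of the relay: one TCP connection from the robot, many phone
// clients, enrichment of element IDs into names before fan-out.
public class TransparencyRelay {
    private static final List<PrintWriter> phones = new CopyOnWriteArrayList<>();
    private static final Map<String, String> names =
            Map.of("17", "DetectHuman", "23", "LightLEDs");  // from the plan file

    public static void main(String[] args) throws IOException {
        // Accept ABOD3-AR clients in the background.
        new Thread(() -> {
            try (ServerSocket phonePort = new ServerSocket(9001)) {
                while (true) {
                    Socket s = phonePort.accept();
                    phones.add(new PrintWriter(s.getOutputStream(), true));
                }
            } catch (IOException ignored) { }
        }).start();

        // Read the robot's textual transparency feed and fan it out.
        try (ServerSocket robotPort = new ServerSocket(9000);
             BufferedReader robot = new BufferedReader(
                     new InputStreamReader(robotPort.accept().getInputStream()))) {
            String line;
            while ((line = robot.readLine()) != null) {
                String enriched = enrich(line);
                if (enriched != null) {                   // low-level lines dropped
                    for (PrintWriter phone : phones) phone.println(enriched);
                }
            }
        }
    }

    // Replace element IDs with human-readable names; filter noise.
    private static String enrich(String line) {
        for (Map.Entry<String, String> e : names.entrySet())
            line = line.replace("#" + e.getKey(), e.getValue());
        return line.contains("SENSE_RAW") ? null : line;  // invented filter rule
    }
}
```

The design choice here mirrors the one described above: because phones connect to the relay rather than to the robot, several clients can observe the same robot at once, and the robot's constrained micro-controller never has to serve more than one stream.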
2.4.2 Robot Tracking. Developing an AR application for a mobile phone presents two major technical challenges: (1) managing the limited computational resources available to achieve sufficient tracking and rendering of the superimposed graphics, and (2) successfully identifying and continuously tracking the object(s) of interest.

2.4.3 Region of Interest. A simple, common solution to both challenges is to focus object tracking only within a region of the video feed captured by the phone's camera, referred to as the Region of Interest (ROI). It is faster and easier to extract features for classification, and subsequently to track, within a limited area rather than over the full frame. The user registers an area as the ROI by expanding a yellow rectangle over the robot. Once selected, the yellow rectangle is replaced by a single pivot located at the centre of the ROI.

2.4.4 Tracker. Various solutions were considered, from the built-in black-box tracking of ARCore (https://developers.google.com/ar/) to building and using our own tracker. To speed up development, we decided to use an existing library, BoofCV (https://boofcv.org/), a widely used Java library for image processing and object tracking. BoofCV was selected due to its compatibility with Android and the range of trackers available for prototyping.

BoofCV receives a real-time feed of camera frames, processes them, and then returns the required information to the Android application. A number of trackers, or processors as they are referred to in BoofCV, are available. We narrowed down the choice to the Circulant Matrices tracker [10] and the Tracking-Learning-Detection (TLD) tracker [11].

The TLD tracker follows an object from frame to frame by localising all appearances that have been observed so far, correcting the tracker if necessary. The learning component estimates the detector's errors and updates the detector to avoid them in the future. The learning process is modelled as a discrete dynamical system, and the conditions under which learning guarantees improvement are identified. However, TLD is computationally intensive. In our testing we found that, when TLD was used, the application would crash on older phones due to high memory usage.

The Circulant Matrices tracker is a fast tracker of local moving objects. It uses the theory of circulant matrices, the Discrete Fourier Transform (DFT), and linear classifiers to track a target and learn its changes in appearance. The target is assumed to be rectangular with a fixed size. A dense local search, using the DFT, is performed around the most recent target location. Texture information is used for feature extraction and object description. As only one description of the target is saved, the tracker has a low computational cost and memory footprint. Our informal in-lab testing showed that the Circulant tracker provides robust tracking.

The default implementation of the Circulant Matrices tracker in BoofCV does not work with colour frames. Our solution first converts the video feed, one frame at a time, to greyscale using a simple RGB averaging function. The tracker returns only the coordinates of the centre of the ROI, while the original colour frame is rendered to the screen. Finally, to increase tracking performance, the camera is set to record at a constant resolution of 640 by 480 pixels.
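A minimal sketch of the resulting pipeline, using BoofCV's public tracker factory, is shown below. Package locations and exact signatures vary between BoofCV versions, so treat the details as indicative rather than as ABOD3-AR's actual code; the structure follows the description above: average the colour bands to greyscale, initialise the circulant tracker on the user's rectangle, and return only the ROI centre each frame.

```java
import boofcv.abst.tracker.TrackerObjectQuad;
import boofcv.core.image.ConvertImage;
import boofcv.factory.tracker.FactoryTrackerObjectQuad;
import boofcv.struct.image.GrayU8;
import boofcv.struct.image.Planar;
import georegression.struct.shapes.Quadrilateral_F64;

// Sketch of the ROI-based tracking pipeline around BoofCV's circulant tracker.
public class RobotTracker {
    // null selects the default configuration, per BoofCV's factory convention.
    private final TrackerObjectQuad<GrayU8> tracker =
            FactoryTrackerObjectQuad.circulant(null, GrayU8.class);
    private final GrayU8 gray = new GrayU8(640, 480);  // constant capture resolution
    private Quadrilateral_F64 roi;
    private boolean initialised = false;

    // The user-drawn yellow rectangle becomes the initial ROI.
    public void selectRoi(Planar<GrayU8> colourFrame, Quadrilateral_F64 userRect) {
        ConvertImage.average(colourFrame, gray);       // RGB -> greyscale by averaging
        roi = userRect;
        initialised = tracker.initialize(gray, roi);
    }

    // Per frame: track in greyscale while the colour frame is what gets rendered;
    // only the ROI centre (the pivot) is handed back to the UI layer.
    public double[] processFrame(Planar<GrayU8> colourFrame) {
        if (!initialised) return null;
        ConvertImage.average(colourFrame, gray);
        if (!tracker.process(gray, roi)) return null;  // target lost this frame
        double cx = (roi.a.x + roi.b.x + roi.c.x + roi.d.x) / 4.0;
        double cy = (roi.a.y + roi.b.y + roi.c.y + roi.d.y) / 4.0;
        return new double[] { cx, cy };                // pivot for anchoring the plan
    }
}
```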
2.4.5 User Interface. ABOD3-AR renders the plan directly next to the robot, as seen in Figure 2. A pivot connects the plan to the centre of the user-selected ROI. The PC-targeted version of ABOD3 offers abstraction of information: the full plan is visible by default, but the user has the ability to hide information. This approach works on the large screens of laptops and desktops. By contrast, at the time of writing, phones rarely sport a screen larger than 15 cm. Thus, to accommodate the smaller screen estate available on a phone, ABOD3-AR displays only high-level elements by default. Drives have their priority number annotated next to their name and are listed in ascending order. ABOD3-AR shares the same real-time transparency methodology as ABOD3: plan elements light up as they are used, with an opposing thread dimming them down over time.

Like its 'sibling' application, ABOD3-AR is aimed at both end users and expert roboticists. A study conducted by Subin et al. [15] demonstrates that developer users of transparency-oriented AR applications require an AR interface that visualises additional technical content compared to naive users. These results are in line with good practices [17] on how different users require different levels of abstraction and different overall amounts of information. We took these results into consideration by allowing low-level technical data to be displayed in ABOD3-AR upon user request. A user can tap on elements to expand their subtree; to avoid overcrowding the screen, plan elements that are not part of the expanded subtree become invisible. As Subin et al. [15] also show that technical users of an AR application prefer to have low-level details, we added an option to enable display of the raw Instinct Server data, in string format, as received by ABOD3-AR.

Figure 2: Screenshot of ABOD3-AR demonstrating its real-time debugging functionality. The plan is rendered next to the robot, with the drives shown in hierarchical order based on their priority. The robot here is executing one of its predefined actions: detecting humans and lighting up its LEDs.
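The tap-to-expand behaviour reduces to a simple rule over the plan tree. The sketch below is illustrative only, with invented class names rather than ABOD3-AR's actual ones: tapping an element hides everything outside its subtree and reveals the subtree to a limited depth, mirroring the two-level display noted in the Figure 3 caption.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative plan-tree node with the tap-to-expand visibility rule.
final class PlanNode {
    final String name;
    final List<PlanNode> children = new ArrayList<>();
    boolean visible;

    PlanNode(String name, boolean visibleByDefault) {
        this.name = name;
        this.visible = visibleByDefault;   // true only for high-level drives
    }

    // Called when the user taps this element.
    void expand(PlanNode root) {
        hideAll(root);                     // elements outside the subtree vanish
        showSubtree(this, 2);              // reveal up to two levels
    }

    private static void hideAll(PlanNode n) {
        n.visible = false;
        for (PlanNode c : n.children) hideAll(c);
    }

    private static void showSubtree(PlanNode n, int depth) {
        if (depth < 0) return;
        n.visible = true;
        for (PlanNode c : n.children) showSubtree(c, depth - 1);
    }
}
```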
3 USER STUDY

A user study was carried out to investigate the effectiveness of ABOD3-AR. It took place at the University of Bath, in an open public space, and ran over five days. The principal hypothesis of this experiment is that observers of a robot with access to ABOD3-AR will be able to create more accurate mental models. In this section, we present our results and discuss how ABOD3-AR provides an effective alternative to ABOD3 as a means of providing robot transparency. Moreover, we argue that our results demonstrate that the implementation of transparency with ABOD3-AR increases not only the trust towards the system, but also its likeability.

The R5 robot is placed in a small pen with a selection of objects, e.g. a plastic duck. Participants are asked to observe the robot for at least three minutes and then answer our questionnaires. Participants are split into two groups: Group 1 observed the robot without the app, while Group 2 used the ABOD3-AR app. A total of 45 participants took part in the experiment (N = 45). The majority of users were aged 36 to 45, and each group had the same number of females and males. Although they worked regularly with computers, most of them did not have a STEM background; this was the main difference from participants in previous research [18].

The Godspeed questionnaire by Bartneck et al. [2] is used to measure the perception of an artificial embodied agent with and without access to transparency-related information. These are standard questions often used in Human-Robot Interaction (HRI) research, and also used in similar research [14]. We used a Likert scale of 1 to 5, as in Bartneck et al. [2].

Question | Group 1 (N = 23) | Group 2 (N = 22) | p-value
Dead - Alive | 2.39 (σ = 0.988) | 3.27 (σ = 1.202) | 0.01
Stagnant - Lively | 3.30 (σ = 0.926) | 4.14 (σ = 0.710) | 0.02
Mechanical - Organic | 1.91 (σ = 1.276) | 1.45 (σ = 0.800) | 0.158
Artificial - Lifelike | 1.96 (σ = 1.065) | 1.95 (σ = 1.214) | 0.995
Inert - Interactive | 3.26 (σ = 1.176) | 3.68 (σ = 1.041) | 0.211
Dislike - Like | 3.57 (σ = 0.728) | 3.77 (σ = 1.020) | 0.435
Unfriendly - Friendly | 3.17 (σ = 1.029) | 3.77 (σ = 0.869) | 0.041
Unpleasant - Pleasant | 3.43 (σ = 0.788) | 3.77 (σ = 1.066) | 0.232
Unintelligent - Intelligent | 3.17 (σ = 0.937) | 3.14 (σ = 1.153) | 0.922
Bored - Interested | 3.80 (σ = 0.834) | 4.19 (σ = 0.680) | 0.110
Anxious - Relaxed | 4.15 (σ = 0.933) | 3.81 (σ = 1.167) | 0.308

Table 1: ABOD3-AR experiment: means (SD) of the ratings given by each group for each question. Participants in Group 2, who used ABOD3-AR, perceived the robot as significantly more alive than participants in Group 1. Moreover, participants in the no-app condition described the robot as more stagnant than those in Group 2. Finally, in the ABOD3-AR condition, participants perceived the robot to be friendlier than participants in Group 1.

3.1 Results

Individuals who had access to ABOD3-AR were more likely to perceive the robot as alive (M = 3.27, SD = 1.202) than the ones without access to the app (M = 2.39, SD = 0.988); t(43) = −2.692, p = 0.01. Moreover, participants in the no-transparency condition described the robot as more stagnant (M = 3.30, SD = 0.926) than the ones in Group 2 (M = 4.14, SD = 0.710), who described the robot as lively; t(43) = −3.371, p = 0.02. Finally, in the ABOD3-AR condition, participants perceived the robot to be friendlier (M = 3.77, SD = 0.869) than participants in Group 1 (M = 3.17, SD = 1.029); t(43) = −2.104, p = 0.041. No other significant differences were found. These results are shown in Table 1.
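As a sanity check, the reported statistics can be reproduced from Table 1. Assuming a pooled-variance independent two-sample t-test with df = 23 + 22 − 2 = 43, the Stagnant/Lively comparison works out as follows; the small discrepancy from the reported t(43) = −3.371 is consistent with rounding of the tabulated means and standard deviations.

```latex
% Pooled variance from the Table 1 standard deviations (n1 = 23, n2 = 22):
\[
s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}
      = \frac{22(0.926)^2 + 21(0.710)^2}{43} \approx 0.685
\]
% Two-sample t statistic for the Stagnant--Lively means:
\[
t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_p^2\left(\tfrac{1}{n_1} + \tfrac{1}{n_2}\right)}}
  = \frac{3.30 - 4.14}{\sqrt{0.685\left(\tfrac{1}{23} + \tfrac{1}{22}\right)}} \approx -3.40
\]
```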
3.2 Discussion

We found a statistically significant difference (p < 0.05) in three Godspeed questions: Dead/Alive, Stagnant/Lively, and Unfriendly/Friendly. The R5 has connecting wires and various chipsets exposed. Yet, participants with access to ABOD3-AR were more likely to describe the robot as alive, lively, and friendly; all three dimensions had mean values over the 'neutral' score of 3. Although not significantly higher, there was also an indicatively increased attribution of the descriptors Interactive and Pleasant, again both with values over the neutral score. At first glance, these results suggest an increase in anthropomorphic, or at least biological, characteristics. However, transparency decreased the perception of the robot as Humanoid and Organic, both characterisations having means below the neutral score.

The transparency display makes visible that action selection takes place even when the robot is already performing a lengthy action, e.g. moving, or when it may appear 'stuck', e.g. when it is in its Sleep drive to save battery. These results also support the view that a sensible implementation of transparency, in line with the principles set by Theodorou et al. [17], can maintain or even improve user experience and engagement.

An explanation for the high levels of Interest (mean 3.80 for Group 1 and 4.19 for Group 2) is that embodied agents, unlike virtual agents, are not widely available. Participants in both groups may have been intrigued by the idea of encountering a real robot. Nonetheless, our findings indicate that transparency does not necessarily reduce the utility or 'likeability' of a system. Instead, the use of a transparency display can increase both.

There are several characteristics of augmented reality that make it a promising platform for providing transparency information for both industrial and domestic robots. These include the affordability of AR-enabled devices; its availability on multiple platforms, such as mobile phones and tablets; the rapid progress in mobile processors and cameras; and the convenience of not requiring headsets or other paraphernalia, unlike its competitor, virtual reality.

4 CONCLUSIONS AND FUTURE WORK

In this paper we presented a new tool, ABOD3-AR, which runs on modern mobile phones to provide transparency-related information to end users. Our tool uses a purpose-made user interface with augmented-reality technologies to display the real-time status of any robot running the Instinct planner.

As far as we are aware, this is the first use of mobile augmented reality focused solely on increasing transparency in robots and users' trust towards them. Previous research regarding transparency in robots relied on screen and audio output, or on non-real-time transparency. Building upon past research, we provide an affordable, compact solution which makes use of augmented reality.

The results from the user study presented in this paper demonstrate how ABOD3-AR can be successfully used to provide real-time transparency to end users. Our results demonstrate how naive users calibrate their mental models and alter their perception of a robot as its machine nature is made clearer. Moreover, they indicate that participants with access to ABOD3-AR showed higher interest in the system, potentially increasing its utility and user engagement.

The work presented in this paper is part of a research programme to investigate the effects of transparency on the perceived expectations, trust, and utility of a system. Initially this is being explored using the non-humanoid R5 robot; later we plan to expand the study using the Pepper humanoid robot manufactured by SoftBank Robotics. We argue that humanoid appearance will always be deceptive at the implicit level. Hence, we want to see how explicit understanding of the robot's machine nature affects its perceived utility, and whether transparency alters the trust given to the machine by its human users.

Planned future work also aims at further improving the usability of the application. Currently, the robot-tracking mechanism requires the user to manually select an ROI which contains the robot. Future versions of ABOD3-AR would skip this step and replace it with a machine learning (ML) approach. This will enable the app to detect and recognise the robot by a number of features, such as colour and shape. The app will also be enhanced to retrieve the robot type and plan of execution from a database of robots.
REFERENCES
[1] Ronald T. Azuma. 1997. A Survey of Augmented Reality. Presence: Teleoperators and Virtual Environments 6, 4 (Aug. 1997), 355–385. DOI:http://dx.doi.org/10.1162/pres.1997.6.4.355
[2] Christoph Bartneck, Dana Kulić, Elizabeth Croft, and Susana Zoghbi. 2009. Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. International Journal of Social Robotics 1, 1 (Jan. 2009), 71–81. DOI:http://dx.doi.org/10.1007/s12369-008-0001-3
[3] Margaret Boden, Joanna Bryson, Darwin Caldwell, Kerstin Dautenhahn, Lilian Edwards, Sarah Kember, Paul Newman, Vivienne Parry, Geoff Pegman, Tom Rodden, Tom Sorrell, Mick Wallis, Blay Whitby, and Alan Winfield. 2017. Principles of robotics: regulating robots in the real world. Connection Science 29, 2 (2017), 124–129. DOI:http://dx.doi.org/10.1080/09540091.2016.1271400
[4] R. A. Brooks. 1991. New Approaches to Robotics. Science 253, 5025 (1991), 1227–1232. DOI:http://dx.doi.org/10.1126/science.253.5025.1227
[5] Joanna J. Bryson. 2001. Intelligence by Design: Principles of Modularity and Coordination for Engineering Complex Adaptive Agents. Ph.D. Dissertation.
[6] Joanna J. Bryson. 2003. Action Selection and Individuation in Agent Based Modelling. In Proceedings of Agent 2003: Challenges in Social Simulation, David L. Sallach and Charles Macal (Eds.). Argonne National Laboratory, Argonne, IL, 317–330.
[7] Joanna J. Bryson and Andreas Theodorou. 2019. How Society Can Maintain Human-Centric Artificial Intelligence. In Human-Centered Digitalization and Services, Marja Toivonen-Noro, Evelina Saari, Helinä Melkas, and Mervin Hasu (Eds.).
[8] Mary T. Dzindolet, Scott A. Peterson, Regina A. Pomranky, Linda G. Pierce, and Hall P. Beck. 2003. The role of trust in automation reliance. International Journal of Human-Computer Studies 58, 6 (2003), 697–718. DOI:http://dx.doi.org/10.1016/S1071-5819(03)00038-7
[9] Swen Gaudl, Simon Davies, and Joanna J. Bryson. 2013. Behaviour oriented design for real-time-strategy games: An approach on iterative development for STARCRAFT AI. In Foundations of Digital Games Conference (2013), 198–205.
[10] João F. Henriques, Rui Caseiro, Pedro Martins, and Jorge Batista. 2012. Exploiting the circulant structure of tracking-by-detection with kernels. In Lecture Notes in Computer Science, Vol. 7575 LNCS. 702–715. DOI:http://dx.doi.org/10.1007/978-3-642-33765-9_50
[11] Zdenek Kalal, Krystian Mikolajczyk, and Jiri Matas. 2011. Tracking-Learning-Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 34, 1 (2011), 1409–1422. DOI:http://dx.doi.org/10.1109/TPAMI.2011.239
[12] John D. Lee and Neville Moray. 1994. Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies 40, 1 (Jan. 1994), 153–184. DOI:http://dx.doi.org/10.1006/ijhc.1994.1007
[13] Joseph B. Lyons. 2013. Being Transparent about Transparency: A Model for Human-Robot Interaction. In Trust and Autonomous Systems: Papers from the 2013 AAAI Spring Symposium, 48–53.
[14] Maha Salem, Gabriella Lakatos, Farshid Amirabdollahian, and Kerstin Dautenhahn. 2015. Would You Trust a (Faulty) Robot?: Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '15). ACM, New York, NY, USA, 141–148. DOI:http://dx.doi.org/10.1145/2696454.2696497
[15] E. K. Subin, Ashik Hameed, and A. P. Sudheer. 2017. Android based augmented reality as a social interface for low cost social robots. In Proceedings of the Advances in Robotics (AIR '17). ACM Press, New York, NY, USA, 1–4. DOI:http://dx.doi.org/10.1145/3132446.3134907
[16] Andreas Theodorou. 2017. ABOD3: A graphical visualisation and real-time debugging tool for BOD agents. In CEUR Workshop Proceedings, Vol. 1855. 60–61.
[17] Andreas Theodorou, Robert H. Wortham, and Joanna J. Bryson. 2017. Designing and implementing transparency for real time inspection of autonomous robots. Connection Science 29, 3 (Jul. 2017), 230–241. DOI:http://dx.doi.org/10.1080/09540091.2017.1310182
[18] Robert H. Wortham, Andreas Theodorou, and Joanna J. Bryson. 2017. Improving robot transparency: real-time visualisation of robot AI substantially improves understanding in naive observers. In IEEE RO-MAN 2017: 26th IEEE International Symposium on Robot and Human Interactive Communication. http://www.ro-man2017.org/site/
[19] Robert H. Wortham, Swen E. Gaudl, and Joanna J. Bryson. 2018. Instinct: A biologically inspired reactive planner for intelligent embedded systems. Cognitive Systems Research (2018). DOI:http://dx.doi.org/10.1016/j.cogsys.2018.10.016
[20] Robert H. Wortham, Andreas Theodorou, and Joanna J. Bryson. 2017. Robot transparency: Improving understanding of intelligent behaviour for designers and users. In Lecture Notes in Computer Science, Vol. 10454 LNAI. 274–289. DOI:http://dx.doi.org/10.1007/978-3-319-64107-2_22