Agent Archetypes for Human-Drone Interaction: Social Robots or Objects with Intent?

Mehmet Aydın Baytaş, Chalmers University of Technology, Gothenburg, Sweden. baytas@chalmers.se
Joseph La Delfa, RMIT University, Melbourne, Australia. joseph@exertiongameslab.org
Sara Ljungblad, University of Gothenburg, Gothenburg, Sweden. sara.ljungblad@gu.se
Morten Fjeld, Chalmers University of Technology, Gothenburg, Sweden. fjeld@chalmers.se

Abstract
Departing from our earlier work on conceptualizing "social drones," we enrich the discussion using notions of "agent archetypes" and "objects with intent" from recent interaction design literature. We briefly unpack these notions, and argue that they are useful in characterizing both design intentions and human perceptions. Thus they have the potential to inform the creation and study of HDI artifacts. Upon these notions, we synthesize relevant implications and directions for design research, in the form of research questions and design challenges. These questions and challenges inform our current and future work. We submit our resources, arguments, aims, and hypotheses to the iHDI 2020 community as a reflective exercise, aiming to refine our work in synergy with other participants.

Author Keywords
Autonomous drones; design philosophy; design theory; drones; human-drone interaction; social drones; unmanned aerial vehicles.

This paper is published under the Creative Commons Attribution 4.0 International (CC-BY 4.0) license. Authors reserve their rights to disseminate the work on their personal and corporate Web sites with the appropriate attribution. Interdisciplinary Workshop on Human-Drone Interaction (iHDI 2020), CHI '20 Extended Abstracts, 26 April 2020, Honolulu, HI, US. © Creative Commons CC-BY 4.0 License.

Introduction
In previous work, we had proposed the term social drones to describe applications where autonomous drones operate in human-populated environments [2, 3]. Here, our choice of the term social was inspired by a particular definition of "social animals" as beings that "regulate each other's nervous systems" [6]. Departing from this definition of sociality, we made the observation that some form of social/regulatory interaction between any two living agents is unavoidable when they occupy the same space and can observe each other. For example, a cat and a human in the same room, given enough time, will regulate the affect and behavior of each other in various ways. Similarly, we argued that an autonomous embodied agent in an inhabited space can be described as social. Thus we meant to imply two things:
1. It is unavoidable that a flying machine in the same space will affect any humans present. Thus, human factors must be foregrounded in the design of autonomous drones operating in human-populated environments.

2. A social drone must have capabilities and present affordances to capture human input. If the drone does not perceive and respond to the human (i.e. be regulated by the human), the design risks being perceived as "antisocial" and undesirable.

Figure 1: We had proposed the term social drones to cover autonomous drones operating in human-populated environments. (Figure from [2].)

This identification of autonomous drones in human-populated environments as a distinct category of HDI has been fruitful in scaffolding our work.¹ However, the term "social" has also turned out to be problematic, in that it evokes mental models and expectations grounded in consciousness and sentience. This can be an issue for users; and also for designers, as it can limit the design space, perhaps unnecessarily.

¹ See: wasp-hs.org/projects/the-rise-of-social-drones-a-constructive-design-research-agenda/

In this position paper, we depart from our previous conceptualization of "social drones" and consider Rozendaal, Boon, and Kaptelinin's (2019) analysis of agent archetypes in HCI [15]. In doing so, we wish to move beyond social drones as a category that is limited to archetypal social robots. Rather, we highlight that much of the work that is relevant to this space (including ours) spans, in addition to social robots, categories like ambient agents and objects with intent (OWI). Further, we make the case that there are numerous situations where it will serve designers to explicitly prefer non-anthropomorphically grounded archetypes to scaffold mental models. These situations may include, but are not limited to, safety-critical and professional applications such as search and rescue, fire response, construction, etc. – where the correctness of both users' and bystanders' mental models might be consequential.

In what follows, we first introduce the "agent archetypes" analysis and the four relevant categories of agents that might figure in scaffolding HDI design work, drawing heavily on Rozendaal et al. [15]. We then discuss the implications of applying such an analysis in HDI research, with a focus on open questions and related hypotheses. Our plan is to explore these implications in our own future work, through creating and studying HDI via constructive design research. We are publishing these discussions as a reflective exercise, in order to explore opportunities for synergy with other participants at the Interdisciplinary Workshop on Human-Drone Interaction (iHDI 2020) [4].
Agent Archetypes
In their 2019 article, in order to unpack the OWI concept used to scaffold their design work, Rozendaal et al. present an analysis where they cluster four "agent archetypes" relevant for computing artifacts [15]. The first category here is that of ambient agents, which appear as part of "ambient intelligent environments" [1]. In principle, ambient agents are components that sense, interpret, and actuate the environment, thus being "experienced collectively as a supportive ambient intelligent presence in the environment" [15]. The second category is conversational agents, which "rely on natural language to interact with humans through written text or speech" [15]. These may be implemented as parts of GUIs, as virtual characters, within physical artifacts, or through instant messaging interfaces. Third, the analysis exposes the category of social robots, which are physical "mechatronic agents." Often, these are designed with humanoid or animal-inspired forms. The authors note that studies with such robots indicate that their "intelligence" may often be "overestimated," since people's expectations may be influenced by their experiences with living beings.

The central topic in Rozendaal et al.'s work is the category of OWI, which describes artifact designs that exploit "the meaning of everyday things as the site for their intelligence and agency" [15]. Thus, the OWI concept can scaffold interaction designs meant to evoke a sense of "collaborative partnership" between the user and the thing, while avoiding issues such as overestimation, uncanniness [12], and over-attachment [13].

Table 1 summarizes the key characteristics of the four agent archetypes explained above. We also add a fifth category of non-agents, which shows – within the same analysis framework – how artifacts meant not to evoke a sense of agency differ from "agent" artifacts.

Archetype               Grounding Metaphor   Interaction: Explicit vs. Implicit   Interaction: Direct vs. Semantic
Ambient Agents          Environment          Implicit                             Direct
Conversational Agents   Human                Explicit                             Semantic
Social Robots           Being                Flexible                             Semantic
Objects with Intent     Thing                Flexible                             Direct
Non-agents              Thing                Explicit                             Direct

Table 1: Comparing agent archetypes. Based on [15], with the addition of the category of non-agents, to show – within the same analysis framework – how artifacts meant not to evoke a sense of agency differ from "agent" artifacts.

Implications and Directions
We would argue that the agent archetypes framework has significant implications in terms of how it might inform the creation and study of HDI designs. Here, we propose a number of topics and directions for HDI research where such analysis may be fruitful. We intend to adopt some of these proposals as research questions and design challenges to direct our own future work. We hope that we will also find other participants at iHDI 2020 who have interest in these topics, and some of our future efforts may ensue in synergy.

What are the places for different agent archetypes in HDI?
Rozendaal et al.'s analysis is useful for characterizing different agent archetypes within the broader contexts of product design, human-agent interaction (HAI), and human-robot interaction (HRI). However, this framework is not prescriptive in the sense that it might tell us where and when each archetype might be useful – particularly in the context of HDI. Relevant open questions include:

• Where and when is it desirable to design to embody particular agent archetypes? What are some specific use cases where each agent archetype might be more appropriate than the others?

• What happens if the agent type is inappropriate for the context or use case? What might be some "modes of failure" that relate to agent archetypes, and how might we trace them back to their cause?

• Would it be sensible to design HDI agents that may 'switch' the archetype they embody, depending on the context?

In response to these questions, for example, we hypothesize: in HDI, the Objects with Intent and Ambient Agents archetypes (and "non-agents") may be more relevant and/or desirable than Social Robots in safety-critical and professional contexts (e.g. search and rescue, fire response, construction) where the correctness of both users' and bystanders' mental models might be consequential.

How might we embody agent archetypes in HDI?
Though the agent archetypes framework itself is not prescriptive, there exists ample literature with theory, tools, and exemplars that can scaffold design work based on any one of the archetypes.² Focusing on HDI, we note an abundance of such resources that could support designing drones as Social Robots,³ but we are not aware of any resources which might inform HDI designs based on the Objects with Intent concept. Thus, for HDI, we might pose the following open questions:

• To what extent is the designer even in control of how the interaction artifact will be perceived? How strongly are designers' intentions and human perceptions correlated, with respect to the agent archetypes analysis, in the context of HDI? How stable are these perceptions, between different populations of users and bystanders, and across time?⁴

• How might we create frameworks, tools, and strategies based on agent archetypes to expedite HDI designs?

² Ambient Agents, Conversational Agents, and Social Robots are now canonical topics in the relevant literatures. For Objects with Intent, see: [14, 15, 16, 17]
³ For reviews, see: [2, 10]
⁴ See: [11]

Critical Discussion
As an additional point for discussion, we believe that the particular design choices and human perceptions related to agent archetypes may in fact remain inconsequential, as long as the artifact is working fine. We thus hypothesize that the relevance of agent archetypes is amplified when the agent is not behaving as we expect it to. Invoking terminology from the literature,⁵ our argument is that design choices and perceptions around agent archetypes are less consequential when the agent-artifact is ready-to-hand, and they become consequential when the agent-artifact becomes present-at-hand.

⁵ Our terminology is based on Dourish's unpacking [5] of Heidegger's phenomenology [7]; an unpacking that draws on earlier work by Winograd and Flores [18].
Furthermore: agent archetypes and OWI are relatively new ideas, and here we have based our thoughts on one particular reading of them. Other interpretations may be possible. Our reading focuses on the comparative classification expressed in Table 1. Specifically, departing from the summarization of OWI as designs that exploit "the meaning of everyday things as the site for their intelligence and agency" [15]: we have understood the phrase "everyday things" not as a synonym for "familiar objects," but as just "objects" as opposed to animate beings. This relates to how the "grounding metaphors" (Table 1) for different agent archetypes compare. However, while our reading emphasizes the "things," we acknowledge that another reading may find value in emphasizing the "everyday." Under that reading, the idea of toys, furniture, and other familiar everyday objects turning into agents – and even being perceived to have intelligence – can become a design resource.

Conclusion
In this position paper, we aimed to capture the notions of "agent archetypes" and Objects with Intent, which came to our attention through work published by Rozendaal et al. [15], and bring these to the attention of the iHDI 2020 community. Our discussion departs from our earlier work in conceptualizing social drones [2, 3], and aims to move beyond this conceptualization towards design resources that might serve a broader variety of contexts and use cases. Building on ideas in the literature, we synthesized a number of implications and directions for design research, in the form of research questions and design challenges. We have already been engaged in design research efforts that, albeit indirectly, relate to these ideas [9, 8]. In future work, we hope to address some of these questions and challenges more directly. We welcome critiques and contributions from the iHDI 2020 community towards this agenda.
REFERENCES
[1] Emile Aarts and Reiner Wichert. 2009. Ambient intelligence. In Technology guide. Springer, 244–249.
[2] Mehmet Aydın Baytaş, Damla Çay, Yuchong Zhang, Mohammad Obaid, Asım Evren Yantaç, and Morten Fjeld. 2019. The Design of Social Drones: A Review of Studies on Autonomous Flyers in Inhabited Environments. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA, Article 250, 13 pages. DOI: http://dx.doi.org/10.1145/3290605.3300480
[3] Mehmet Aydın Baytaş, Mohammad Obaid, Joseph La Delfa, Asım Evren Yantaç, and Morten Fjeld. 2019. Integrated Apparatus for Empirical Studies with Embodied Autonomous Social Drones. In 1st International Workshop on Human-Drone Interaction. Ecole Nationale de l'Aviation Civile [ENAC], Glasgow, United Kingdom. https://hal.archives-ouvertes.fr/hal-02128387
[4] Mehmet Aydın Baytaş, Markus Funk, Sara Ljungblad, Joseph La Delfa, and Florian 'Floyd' Mueller. 2020. iHDI 2020: Interdisciplinary Workshop on Human-Drone Interaction. In Proceedings of the 2020 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '20). ACM, New York, NY, USA.
[5] Paul Dourish. 2004. Where the action is: the foundations of embodied interaction. MIT Press, Chapter 4.
[6] Lex Fridman. 2018. Lisa Feldman Barrett: How the Brain Creates Emotions | MIT Artificial General Intelligence (AGI). https://youtu.be/qwsft6tmvBA. (2018). Accessed: 2019-02-11.
[7] Martin Heidegger, John Macquarrie, and Edward Robinson. 1962. Being and time. (1962).
[8] Joseph La Delfa, Mehmet Aydın Baytaş, Rakesh Patibanda, Hazel Ngari, Rohit Ashok Khot, and Florian "Floyd" Mueller. 2020. Drone Chi: Somaesthetic Human-Drone Interaction. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). ACM, New York, NY, USA.
[9] Joseph La Delfa, Mehmet Aydın Baytaş, Olivia Wichtowski, Rohit Ashok Khot, and Florian Mueller. 2019. Are Drones Meditative? In Proceedings of the 2019 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '19). ACM, New York, NY, USA. DOI: http://dx.doi.org/10.1145/3290607.3313274
[10] Chun Fui Liew and Takehisa Yairi. 2020. Companion Unmanned Aerial Vehicles: A Survey. (2020).
[11] Sara Ljungblad, Jirina Kotrbova, Mattias Jacobsson, Henriette Cramer, and Karol Niechwiadowicz. 2012. Hospital Robot at Work: Something Alien or an Intelligent Colleague? In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (CSCW '12). Association for Computing Machinery, New York, NY, USA, 177–186. DOI: http://dx.doi.org/10.1145/2145204.2145233
[12] Masahiro Mori, Karl F. MacDorman, and Norri Kageki. 2012. The uncanny valley [from the field]. IEEE Robotics & Automation Magazine 19, 2 (2012), 98–100.
[13] Byron Reeves and Clifford Ivar Nass. 1996. The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press.
[14] Marco Rozendaal. 2016. Objects with Intent: A New Paradigm for Interaction Design. Interactions 23, 3 (April 2016), 62–65. DOI: http://dx.doi.org/10.1145/2911330
[15] Marco C. Rozendaal, Boudewijn Boon, and Victor Kaptelinin. 2019. Objects with Intent: Designing Everyday Things as Collaborative Partners. ACM Trans. Comput.-Hum. Interact. 26, 4, Article 26 (June 2019), 33 pages. DOI: http://dx.doi.org/10.1145/3325277
[16] E. van Beek. 2017. What does it have in mind?: Collaborating with guide dogs, backpacks and Objects with Intent. Master's thesis. TU Delft.
[17] F.A.D. van Boheemen. 2016. Diem: An animated house that cares with Intent. Master's thesis. TU Delft.
[18] Terry Winograd and Fernando Flores. 1986. Understanding computers and cognition: A new foundation for design. Intellect Books.