Shared approaches to mentally drive telepresence robots

Gloria Beraldo a,b, Luca Tonin a, Amedeo Cesta b and Emanuele Menegatti a
a Department of Information Engineering, University of Padova
b Institute for Cognitive Science and Technology, National Research Council, ISTC-CNR

Abstract
Recently there has been a growing interest in designing human-in-the-loop applications based on shared approaches that fuse the user's commands with the perception of the context. In this scenario, we focus on user-supervised telepresence robots, designed to improve the quality of life of people suffering from severe physical disabilities or of elderly people who can no longer move. In this regard, we introduce brain-machine interfaces, which enable users to control the robot directly through their brain activity. Given the nature of this interface, characterized by a low bit rate and by noise, herein we present different methodologies to augment the human-robot interaction and to facilitate the research and development of these technologies.

Keywords
Neurorobotics, Brain-Machine Interface, Telerobotics and Teleoperation, Behavior-Based Systems

1. Introduction
In shared approaches the user and the robot cooperate to reach a particular goal together. Specifically, the user interacts by sending high-level commands (e.g., selecting a target or issuing motion commands), while the robot contextualizes them according to its perception of the environment. Since the robot manages the low-level operations, shared approaches are commonly exploited to relieve the human from the burden of fully controlling every detail of the task and to reduce his/her mental workload. For these reasons, they are crucial when the user interacts with the robot through uncertain communication channels. One example is brain-machine interfaces (BMIs), systems that allow the user to interact with the robot directly through brain signals [1]. Owing to their non-muscular nature, over the last decades several studies have demonstrated the possibility of successfully controlling many robotic devices through BMIs, from prostheses and exoskeletons to wheelchairs and telepresence robots [2-5]. This idea has led to the birth of a new research field called "neurorobotics". Despite the proliferation of neurorobotics applications, most of the proposed approaches are currently based on basic implementations of the robotic part [6]. On the one side, they rely on a simple interaction between the user and the robot. On the other, the robot passively implements the user's commands as a mere end-effector, seriously limiting the potential of robotics in current BMI-driven neuroprostheses.

2. Shared approaches: A revisiting of the literature
This section introduces shared approaches to fuse the user's and the robot's inputs. Many methods have been designed in the related literature, even though there is still a lack of uniform terminology, which causes misunderstanding [7]. The most common terms are supervisory control [8], traded control [9], shared control [10-12], shared autonomy [13], adjustable autonomy [14], mixed-initiative interaction [15], and mixed-initiative planning and execution [16,17]. Herein, we briefly summarize the main ideas of the taxonomy resulting from our critical revisiting of the literature [7]. Since in shared approaches the human's and the robot's contributions are combined in the decision-making phase, we have taken inspiration from Donges's theory of decision-making [18,11]. That theory classifies our decisions according to three levels: operational, tactical, and strategic. Therefore, we speculate that the cooperation between user and robot can also happen at different levels, according to the role of the two agents in the decision-making process and the level of detail in managing the task (see Fig. 1).

Figure 1: Our proposed classification of shared approaches. We identify three forms of cooperation between the human and the robot: shared control, shared autonomy, and shared intelligence.

Shared control and shared intelligence are designed to optimize the coupling between the human and the robot, even if this is achieved at different levels of interaction. In the former, the human interacts at the low level (control), for instance by specifying steering commands to the robot. In the latter, the two agents contribute equally to the decision-making process, and therefore the robot may implement behaviors that were not originally considered by the user. Shared autonomy approaches, instead, are focused on reducing the user's workload rather than on creating a mutual interaction with the user. With this aim, the robot autonomously performs specific pre-defined behaviors under user supervision.
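To make the three levels concrete, the following minimal Python sketch contrasts where the user's input enters the decision loop in each form of cooperation. It is only an illustration of the taxonomy: the function names, the blending rule, and the fusion rule are our own simplifications, not the mechanisms of the cited systems.

```python
# A minimal sketch (ours, not from the cited systems) of where the user's
# input enters the loop at each cooperation level. All names are illustrative.
import numpy as np

def shared_control(user_cmd: np.ndarray, robot_cmd: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    # Low-level blending: the user steers continuously, the robot corrects
    # (e.g., adds an obstacle-avoidance component to the velocity command).
    return alpha * user_cmd + (1.0 - alpha) * robot_cmd

def shared_autonomy(user_goal: str, behaviors: dict):
    # Supervision: the user only selects a goal; the robot executes the
    # corresponding pre-defined behavior (e.g., "pass_door") autonomously.
    return behaviors[user_goal]()

def shared_intelligence(p_user: np.ndarray, p_robot: np.ndarray) -> int:
    # Equal contribution: both agents provide a distribution over candidate
    # targets; the executed behavior emerges from their fusion and may
    # differ from what either agent proposed alone.
    fused = p_user * p_robot
    fused = fused / fused.sum()
    return int(np.argmax(fused))
```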
3. A robotic approach for human-in-the-loop applications
To augment the human-robot interaction, our first contribution has been to investigate how to transfer knowledge from classical robotics, mainly oriented toward autonomous solutions, to the case of human-in-the-loop applications [19,20]. This choice is also motivated by the fact that, in shared approaches, the robotic agent maintains some degree of autonomy, allowing the user to focus only on the final goals while ignoring the low-level problems. With this aim, we inspect different methods designed to enrich the perception of the robot, as well as to implement reactive robot behaviors (e.g., shared autonomy based on the behavior-assistance method) according to procedures defined a priori, which are activated when specific conditions are detected. We have examined three conditions (Fig. 2) that are common in telepresence applications:
• The perception of landmarks in the environment. Specifically, we focused on doors (Fig. 2a), which can activate the "passage through the door" procedure according to the door's state (open wide enough or closed) [19];
• The perception of people, which can trigger "social behaviors" of the robot according to their status (targets or obstacles) and their distance (Fig. 2b);
• Advanced "obstacle avoidance behaviors" based on enriched a priori knowledge of the environment provided to the robot and on robust localization techniques (Fig. 2c) [20].

Figure 2: Explored "shared autonomy based on the behavior-assistance method". (a) The door is perceived, detected, and tracked by our system on board the robot, which estimates the door's aperture in real time. When the door is a target, it generates an attractive force on the robot, regulating its motion through the door. Our approach differs significantly from the artificial potential field [21], because we use the attractive/repulsive effects not to directly change the robot's motion, but to determine a target position for the robot by weighing them in a cost function [19] (a sketch of this idea follows below). (b) We investigate an extended version of the artificial potential field (the behavioral potential field [22]) in the context of shared autonomy, to influence the motion of the robot according to the presence of people (both static and dynamic [23]). (c) The robot exploits a global map of the environment and localizes itself in it to compute the best trajectory to follow during navigation, increasing the reliability of a system designed to be used with noisy interfaces such as BMIs. In contrast to classical robotics, two different maps (a more detailed one for navigation, a less detailed one for localization) are used simultaneously, demonstrating an improvement in navigation [20].
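As a concrete illustration of the door-passage behavior in Fig. 2a, the following minimal sketch shows how attractive and repulsive effects can be weighed in a cost function to select a target position, instead of directly altering the robot's velocity as in classical potential fields. All weights, thresholds, and names are illustrative assumptions, not the parameters used in [19].

```python
# A minimal sketch, under our own assumptions, of target selection via a cost
# function combining attraction (toward the open door) and repulsion (from
# obstacles). Weights and the aperture threshold are illustrative.
import numpy as np

def select_door_target(candidates, door_pos, obstacles, aperture,
                       w_att=1.0, w_rep=2.0, min_aperture=0.6):
    # Trigger the behavior only if the door is estimated to be open enough.
    if aperture < min_aperture:
        return None
    best, best_cost = None, float("inf")
    for p in candidates:  # candidate positions sampled around the robot
        attraction = np.linalg.norm(p - door_pos)  # pull toward the door
        repulsion = sum(1.0 / max(np.linalg.norm(p - o), 1e-3) for o in obstacles)
        cost = w_att * attraction + w_rep * repulsion
        if cost < best_cost:
            best, best_cost = p, cost
    return best  # passed to the navigation stack as the next goal
```

The cost function thus decides where the robot should go, while the underlying navigation stack remains responsible for how to get there.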
Our preliminary results in all three scenarios showed that the proposed shared approaches reduce the number of commands, also suggesting a reduction of the workload required of the user with respect to joystick teleoperation (taken as a reference), at the cost of a small increase in time. This aspect is fundamental in demanding situations such as the door passage, where it would be impossible for the user to control every single robot movement, especially through a BMI. Finally, in the case of a remote telepresence application, where the user is asked to interact with a target person, the proposed shared approach demonstrated not only a reduction in the number of delivered commands with respect to joystick teleoperation (in line with our other experiments and the literature) but also a reduction in the number of collisions with objects in the environment. Furthermore, this shared modality was rated by the participants as the best form of interaction to complete the task, in comparison to joystick teleoperation and a completely autonomous modality (i.e., one in which the robot reaches the target person without requiring any user interaction).

4. A novel shared intelligence system based on policies
Although the aforementioned approaches promote the human-robot interaction, they are characterized by a high specificity. They rely on strategies that are strongly dependent on the environment in which the robot is acting, limiting their reproducibility. Furthermore, they pre-set the possible robot behaviors, which are activated upon the occurrence of triggers from the user and from the robot's perception. To overcome this drawback, we propose a new shared intelligence approach for mentally driving telepresence robots, in which the robot's behavior naturally emerges from the fusion of a set of policies (Fig. 3) [24]. Since we make no assumptions on the user input or on the robotic actuators, we speculate that this general approach can also be applied in several other human-in-the-loop applications. We tested the system with 13 healthy participants who mentally drove a telepresence robot in a typical office environment.

Figure 3: An illustrative representation of our novel shared intelligence approach based on policies [24]. The user inputs are combined with the robot's perception according to a set of policies. Each policy computes a probability grid covering the area around the robot.

The result of the fusion of the set of policies is a position in the environment that the robot tries to reach and that is continuously updated. Once that position is known, a navigation module optimizes the motion of the robot by planning the best trajectory the robot should follow. Finally, the robot's base controller computes the corresponding velocity commands.
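A minimal sketch of the fusion step may help to clarify the idea. We assume here, for illustration only, that the policies are fused by an element-wise product of normalized probability grids; the actual policies and combination rule of [24] may differ.

```python
# A minimal sketch of policy fusion over probability grids, assuming (for
# illustration only) an element-wise product as the combination rule.
import numpy as np

def fuse_policies(grids):
    # Each grid is one policy's probability map over the area around the robot.
    fused = np.ones_like(grids[0])
    for g in grids:
        fused *= g            # each policy reweighs the candidate positions
    fused /= fused.sum()      # renormalize into a probability grid
    # The cell with maximal probability is the position the robot tries to
    # reach; it is recomputed continuously as inputs and perception change.
    return np.unravel_index(np.argmax(fused), fused.shape)

# Illustrative usage with two placeholder policies (user input and perception).
user_grid = np.random.rand(20, 20)
obstacle_grid = np.random.rand(20, 20)
target_cell = fuse_policies([user_grid, obstacle_grid])
```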
The system functioned correctly in several circumstances, such as free-space areas, door passages, corridors, crossroads, and areas cluttered with obstacles [24]. Furthermore, after only a few training runs, BMI users achieved performance comparable to continuous teleoperation with a joystick, coherently with the literature and with our previous studies. Moreover, the resulting robot behavior was qualitatively evaluated by the participants through a questionnaire: they reported that it was natural and in line with their intentions. This is a key aspect in developing human-centered technologies and in promoting their acceptance.

5. ROS-Neuro: an open-source framework for neurorobotics applications
With the studies presented in the two previous sections, we demonstrated the potential of augmenting the human-robot interaction by exploiting the advances in robotics and artificial intelligence. In this section, we propose to exploit the tools and standards already available in robotics also to facilitate the integration between the BMI and the robot control. Indeed, in the current neurorobotics scenario, each research group is inclined towards the development of home-made solutions to combine BMI systems with robotic devices, leading to the spread of heterogeneous platforms and a lack of standards. In classical robotics, instead, the Robot Operating System (ROS) is well known: a middleware that has become the de facto worldwide standard in robotics over the last decade [25]. Therefore, we have decided to exploit the advantages of ROS for developing neurorobotics applications by proposing ROS-Neuro (originally ROS-Health [6]). ROS-Neuro is designed to be a common software framework for the implementation of both BMI systems and robotic controllers (Fig. 4) [26,27].

Figure 4: ROS-Neuro architecture [27].

On the one side, we propose a modular architecture matching the requirements of any BMI system, namely a flow of information among different modules. On the other side, our aim is to exploit the standards and tools available in ROS, such as the optimized real-time capabilities, the flexibility in designing BMI systems, and the direct access to state-of-the-art robotic algorithms shared by a growing community. Our first results showed not only the correct functioning of ROS-Neuro, but also a strong stability with respect to our previous software, thanks to the efficient communication infrastructure guaranteed by ROS [27].
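To give a flavor of this flow of information, the following minimal rospy sketch connects a BMI decoder to a robot controller through standard ROS topics. The topic names and the use of Float32MultiArray as the decoder output are illustrative stand-ins, not the actual ROS-Neuro message definitions [26,27].

```python
# A minimal rospy sketch of the modular flow that ROS-Neuro standardizes: a
# BMI decoder publishes its output on a topic and a controller node maps it
# to velocity commands. Topic names and message types are illustrative.
import rospy
from std_msgs.msg import Float32MultiArray
from geometry_msgs.msg import Twist

def on_bmi_output(msg):
    # Map the decoded class probabilities (assumed two classes) to a steering
    # command: constant forward velocity, turning by the probability difference.
    cmd = Twist()
    cmd.linear.x = 0.2
    cmd.angular.z = 0.5 * (msg.data[0] - msg.data[1])
    cmd_pub.publish(cmd)

rospy.init_node("bmi_controller")
cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
rospy.Subscriber("/bmi/output", Float32MultiArray, on_bmi_output)
rospy.spin()  # modules exchange data only through ROS topics
```

In this way, the BMI pipeline and the robot controller remain decoupled modules that communicate only through standard ROS interfaces, which is the property ROS-Neuro builds on.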
6. Conclusions
In this work, we presented different approaches to boost the integration between telepresence robots and brain-machine interfaces, by exploiting the knowledge of classical robotics and artificial intelligence as well as by revisiting the literature. In light of the results, these methods might be a first small step towards a new generation of BMI-driven telepresence robots, focused both on creating a natural interaction between human and robot and on advancing the robot's behaviors.

7. Acknowledgements
This research was partially supported by Fondazione Ing. Aldo Gini, by MIUR (Italian Ministry of Education) under the initiative "Departments of Excellence" (Law 232/2016), and by the SI-Robotics project (Invecchiamento sano e attivo attraverso SocIal ROBOTICS).

8. References
[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, no. 6, pp. 767-791, 2002.
[2] K. Lee, D. Liu, L. Perroud, R. Chavarriaga, and J. del R. Millán, "A brain-controlled exoskeleton with cascaded event-related desynchronization classifiers," Robotics and Autonomous Systems, vol. 90, pp. 15-23, 2017.
[3] Y. He, D. Eguren, J. M. Azorín, R. G. Grossman, T. P. Luu, and J. L. Contreras-Vidal, "Brain-machine interfaces for controlling lower-limb powered robotic systems," Journal of Neural Engineering, vol. 15, no. 2, p. 021004, 2018.
[4] L. Tonin, R. Leeb, M. Tavella, S. Perdikis, and J. del R. Millán, "The role of shared-control in BCI-based telepresence," in 2010 IEEE International Conference on Systems, Man and Cybernetics, pp. 1462-1466, IEEE, 2010.
[5] I. Iturrate, J. M. Antelis, A. Kübler, and J. Minguez, "A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation," IEEE Transactions on Robotics, vol. 25, no. 3, pp. 614-627, 2009.
[6] G. Beraldo, N. Castaman, R. Bortoletto, E. Pagello, J. del R. Millán, L. Tonin, and E. Menegatti, "ROS-Health: An open-source framework for neurorobotics," in 2018 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), pp. 174-179, May 2018, DOI: 10.1109/SIMPAR.2018.8376288.
[7] G. Beraldo, L. Tonin, A. Cesta, and E. Menegatti, "Shared-control, shared-autonomy and shared-intelligence in assistive technologies: Three forms of cooperation between user and robot," in 2020 IEEE International Workshop on Adaptive Behavioral Models of Robotic Systems Based on Brain-Inspired AI Cognitive Architectures, 2020.
[8] W. R. Ferrell and T. B. Sheridan, "Supervisory control of remote manipulation," IEEE Spectrum, vol. 4, no. 10, pp. 81-88, 1967.
[9] T. B. Sheridan, "Telerobotics," Automatica, vol. 25, no. 4, pp. 487-507, 1989.
[10] K. H. Goodrich, P. C. Schutte, F. O. Flemisch, and R. A. Williams, "Application of the H-mode, a design and interaction concept for highly automated vehicles, to aircraft," in 2006 IEEE/AIAA 25th Digital Avionics Systems Conference, pp. 1-13, IEEE, 2006.
[11] F. Flemisch, D. Abbink, M. Itoh, M. P. Pacaux-Lemoine, and G. Weßel, "Shared control is the sharp end of cooperation: Towards a common framework of joint action, shared control and human machine cooperation," IFAC-PapersOnLine, vol. 49, no. 19, pp. 72-77, 2016.
[12] D. Abbink, T. Carlson, M. Mulder, J. C. de Winter, F. Aminravan, T. L. Gibo, and E. R. Boer, "A topology of shared control systems: Finding common ground in diversity," IEEE Transactions on Human-Machine Systems, vol. 48, no. 5, pp. 509-525, 2018.
[13] M. Schilling, S. Kopp, S. Wachsmuth, B. Wrede, H. Ritter, T. Brox, et al., "Towards a multidimensional perspective on shared autonomy," in Proceedings of the AAAI Fall Symposium Series 2016, Stanford, CA, 2016.
[14] J. M. Bradshaw, P. J. Feltovich, H. Jung, S. Kulkarni, W. Taysom, and A. Uszok, "Dimensions of adjustable autonomy and mixed-initiative interaction," in International Workshop on Computational Autonomy, pp. 17-39, Springer, Berlin, Heidelberg, July 2003.
[15] J. E. Allen, C. I. Guinn, and E. Horvitz, "Mixed-initiative interaction," IEEE Intelligent Systems and their Applications, vol. 14, no. 5, pp. 14-23, 1999.
[16] A. Finzi and A. Orlandini, "Human-robot interaction through mixed-initiative planning for rescue and search rovers," in Congress of the Italian Association for Artificial Intelligence, pp. 483-494, Springer, Berlin, Heidelberg, 2005.
[17] G. Bevacqua, J. Cacace, A. Finzi, and V. Lippiello, "Mixed-initiative planning and execution for multiple drones in search and rescue missions," in Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS), pp. 315-323, 2015.
[18] E. Donges, "Aspekte der aktiven Sicherheit bei der Führung von Personenkraftwagen," Automobil-Industrie, vol. 27, no. 2, 1982.
[19] G. Beraldo, E. Termine, and E. Menegatti, "Shared-autonomy navigation for mobile robots driven by a door detection module," in 18th International Conference of the Italian Association for Artificial Intelligence (AIIA 2019), November 2019, DOI: 10.1007/978-3-030-35166-3_36.
[20] G. Beraldo, M. Antonello, A. Cimolato, E. Menegatti, and L. Tonin, "Brain-Computer Interface meets ROS: A robotic approach to mentally drive telepresence robots," in 2018 IEEE International Conference on Robotics and Automation, pp. 1-6, May 2018, DOI: 10.1109/ICRA.2018.8460578.
[21] Y. Koren and J. Borenstein, "Potential field methods and their inherent limitations for mobile robot navigation," in 1991 IEEE International Conference on Robotics and Automation, pp. 1398-1404, 1991.
[22] S. Hoshino and K. Maki, "Safe and efficient motion planning of multiple mobile robots based on artificial potential for human behavior and robot congestion," Advanced Robotics, vol. 29, no. 17, pp. 1095-1109, 2015.
[23] K. Koide and J. Miura, "Convolutional channel features-based person identification for person following robots," in 15th International Conference on Intelligent Autonomous Systems (IAS-15), Baden-Baden, Germany, 2018.
[24] G. Beraldo, L. Tonin, J. del R. Millán, and E. Menegatti, "Shared autonomy for teleoperation of intelligent robots: A novel approach based on policies," IEEE Transactions on Robotics (under review).
[25] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, "ROS: an open-source Robot Operating System," in IEEE International Conference on Robotics and Automation, Workshop on Open Source Software, vol. 3, no. 3.2, Kobe, Japan, 2009.
[26] L. Tonin, G. Beraldo, S. Tortora, L. Tagliapietra, J. del R. Millán, and E. Menegatti, "ROS-Neuro: A common middleware for BMI and robotics. The acquisition and recorder packages," in 2019 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2019), October 2019, DOI: 10.1109/SMC.2019.8914364.
[27] G. Beraldo, S. Tortora, E. Menegatti, and L. Tonin, "ROS-Neuro: implementation of a closed-loop BMI based on motor imagery," in 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2020), October 2020.