A Multimodal Interface for Robot-Children Interaction in Autism Treatment

Giuseppe Palestra
Department of Computer Science, University of Bari
Bari, Italy
giuseppe.palestra@uniba.it

Floriana Esposito
Department of Computer Science, University of Bari
Bari, Italy
floriana.esposito@uniba.it

Berardina De Carolis
Department of Computer Science, University of Bari
Bari, Italy
berardina.decarolis@uniba.it

ABSTRACT
Recent studies suggest that robots can play an important role in coping with Autism Spectrum Disorder (ASD). This paper presents a multimodal interface based on a multilevel treatment protocol customized to improve eye contact, joint attention, and imitation. An evaluation of the system has been performed involving 6 high-functioning children with autism spectrum disorder. The experiments carried out make it possible to evaluate the behavioral response of the children in the eye contact exercise. Thanks to the multimodal interface, the ASD children achieved better results than with traditional therapy.

CCS CONCEPTS
• Human-centered computing → Empirical studies in HCI;

KEYWORDS
Multimodal interface; social robots; autism spectrum disorder; eye contact.

CHItaly'17, September 2017, Cagliari, ITALY
© 2017 Copyright held by the owner/author(s).

INTRODUCTION
Autism is a severe developmental disorder characterized by social interaction and communication difficulties and a tendency to engage in repetitive patterns of behavior. Since a broad range of symptoms and levels of difficulty in functioning can occur in autistic children, the term Autism Spectrum Disorder (ASD) is used. ASD characteristics usually arise in the first years of life and can be associated with motor coordination problems and intellectual disability.

New and innovative technologies, in particular in robotics, suggest that robots can play a promising role in building up interventions that help autistic children cope with their impairments related to eye contact, joint attention, imitation, and emotion recognition and production. Several social robots are able to execute tasks in autism treatment. They differ in physical appearance, targeted eliciting behavior, and level of autonomy [5]. Positive effects of social robots in the treatment of autistic children are already reported in the literature. Social robots are used to develop interventions that help autistic children cope with impairments such as emotion recognition and generation [4], joint attention and triadic interaction [2], and eye contact and social gaze [1]. Palestra et al. [3] present the implementation of an interface for digital PECS therapy that enables people with ASD to overcome imitation and motor skills difficulties. What is missing in these and other studies is a social robot able to cope with several impairments at the same time during the therapy, capable of understanding the social signals of the child and consequently performing adaptive behaviors in a therapy context.

The aim of this paper is to present a multimodal interface for a humanoid social robot based on a therapeutic treatment protocol personalized to improve eye contact, joint attention, imitation, and basic emotion recognition. In this multimodal interface, the robot acts as a social mediator, trying to elicit specific behaviors in the child. The therapeutic behavior-based protocol is organized in exercises of increasing difficulty, each eliciting a different behavior.

The therapeutic behavior-based protocol has been partially used in the SARACEN (Social Assistive Robots for Autistic Children EducatioN) project, aimed at developing innovative methods for early diagnosis of ASD and therapy support for autistic children with socially assistive robots. SARACEN has been partially supported by the Italian Ministry for Education, University and Research (MIUR) and by the European Union in the framework of Smart Cities and Communities and Social Innovation 2007-2013.

This paper is organized as follows. Section 2 reports the state of the art on current social-robot-based systems used in autism therapy. Section 3 presents an overview of the multimodal interface. Section 4 provides a description of the protocol and details each exercise. Section 5 describes the procedure used during the experiments. Section 6 presents the experimental setup and the results. Finally, conclusions and future work are reported.

MULTI-MODAL INTERFACE OVERVIEW
The proposed multi-modal interface for ASD treatment includes three fundamental modules: the sensors (cameras), the workstation, and the social robot. The interface is able to perform a stimulus, to capture the behavioral response of the child, and to provide him/her with a reinforcement. The child's behavior is captured by two cameras. The social robot is the Robokind™ Zeno R25 humanoid robot (http://www.robokindrobots.com/robots/) running on Ubuntu GNU/Linux. It has a realistic flubber (silicone rubber) face that is used to display several reasonably human-like facial expressions in real time. The social robot is able to perform the exercises outlined in the protocol, in order to train social interaction abilities in ASD children and to measure the child-robot interaction in a therapy session with specific metrics. The workstation uses the Zeno R25 API to communicate with the robot bidirectionally, both to grab video frames from the robot camera and to trigger specific robot behaviors. A log file is recorded on the workstation storage during the whole interaction; a minimal sketch of this log format is given below. Each row of this file reports the time stamp, an anonymous identification of the child, the behavior performed by the child, and the exercise performed by Zeno. Figure 1 depicts the overview of the interface.

Figure 1: Multi-modal Interface Overview. The child-robot interaction loop is depicted: the robot acts as a social mediator to elicit specific behaviors in the ASD child, taking into account his/her social signals.
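As an illustration of the interaction log described above, the following is a minimal sketch of how each row could be written. The file name, the CSV layout, the timestamp format, and the log_event helper are assumptions made for illustration; the paper does not specify how the workstation serializes the log.

```python
import csv
import time

LOG_PATH = "session_log.csv"  # hypothetical file name


def log_event(child_id, child_behavior, robot_exercise):
    """Append one interaction row: time stamp, anonymous child ID,
    behavior performed by the child, and exercise performed by Zeno."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow(
            [time.strftime("%Y-%m-%d %H:%M:%S"),
             child_id, child_behavior, robot_exercise])


# Example: the child makes eye contact during the easy-level exercise.
log_event("C1", "eye_contact", "eye_contact_easy")
```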
THE THERAPEUTIC PROTOCOL
The protocol is conceived to elicit, through a social robot, a specific behavior that is difficult for an ASD child to perform [4]. It is based on the traditional ABA treatment of ASD children, which includes three main steps: stimulus presentation, behavioral response, and reinforcement. The novel aspect of this protocol consists of the presence of a robotic partner handling these three steps and of a multilevel exercise structure for a complete and tailored autism treatment for children. The protocol has four exercises with different levels of difficulty. The first exercise focuses on eye contact.

Eye Contact
The eye contact exercise is conceived to elicit the eye contact behavior, which has been shown to be reduced in autistic children. This exercise includes three levels of increasing difficulty that differ with regard to the stimulus and the reinforcement steps, as detailed in the following; a sketch of the resulting stimulus-response-reinforcement loop is given after the level descriptions.

Easy Level. The robot calls the child by name and says "Look at me" (stimulus). It repeats this behavior until the child looks into the eyes of the robot (expected behavioral response). When eye contact happens, the robot says "Good!" and repeats the name of the child. Finally, the robot plays music (reinforcement).

Medium Level. This level differs from the previous one only with regard to the stimulus: the robot calls the child by name but does not say "Look at me".

Hard Level. In this level, the reinforcement is changed: the robot does not play the music.
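To make the three-step structure concrete, here is a minimal sketch of the easy, medium, and hard variants as a single loop. The say, play_music, and detect_eye_contact helpers are placeholders, since the paper does not specify the actual Zeno R25 API calls, and the pause between prompts is an assumption.

```python
import time


def say(text):
    """Placeholder for the robot's text-to-speech call (assumed)."""
    print("Zeno says:", text)


def play_music():
    """Placeholder for the musical reinforcement (assumed)."""
    print("Zeno plays music")


def detect_eye_contact():
    """Placeholder: the real system analyzes camera frames
    grabbed through the Zeno R25 API."""
    return True


def eye_contact_exercise(child_name, level):
    """One run of the eye contact exercise at a given level
    ('easy', 'medium', or 'hard'); returns the elapsed time."""
    start = time.time()
    # Stimulus: repeat the prompt until eye contact is detected.
    while True:
        say(child_name)
        if level == "easy":
            say("Look at me")        # verbal prompt only at the easy level
        if detect_eye_contact():     # expected behavioral response
            break
        time.sleep(2.0)              # assumed pause between prompts
    # Reinforcement: praise and name; music is dropped at the hard level.
    say("Good!")
    say(child_name)
    if level != "hard":
        play_music()
    return time.time() - start       # time needed to obtain eye contact


tEC = eye_contact_exercise("C1", "easy")
```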
EXPERIMENTAL RESULTS
This section provides a first assessment of the multi-modal interface by evaluating the behavioral responses elicited by the robot in ASD children. Six children (C1, C2, C3, C4, C5, and C6) in the age range of 6-13 years and with a diagnosis of high-functioning autism spectrum disorder have been involved in this evaluation. Generally speaking, the multi-modal interface was able to operate well in the real environment for all the children. As reported in the log files, the multi-modal interface recorded all social signals of the user properly and the robot performed all stimuli without stopping. To evaluate the responses of the children when interacting with the system, the focus has been on two measures: the time (ms) needed to achieve eye contact (tEC), and the number of eye contacts correctly performed by the child (nEC). The system considers 1000 seconds as the maximum time (tMAX) to perform the eye contact. With respect to tEC, it has been observed that all the participants increased their tEC with the session and level of difficulty. All participants showed a significantly higher tEC in S3 (the hard level), whereas tEC decreased in S2 and S1.

REFERENCES
[1] Henny Admoni and Brian Scassellati. 2017. Social Eye Gaze in Human-Robot Interaction: A Review. Journal of Human-Robot Interaction 6, 1 (2017), 25–63.
[2] Pauline Chevalier, Jean-Claude Martin, Brice Isableu, Christophe Bazile, David-Octavian Iacob, and Adriana Tapus. 2016. Joint Attention using Human-Robot Interaction: Impact of Sensory Preferences of Children with Autism. In Robot and Human Interactive Communication (RO-MAN), 2016 25th IEEE International Symposium on. IEEE, 849–854.
[3] Giuseppe Palestra, Dario Cazzato, Francesco Adamo, Ilaria Bortone, and Cosimo Distante. 2016. Assistive Robot, RGB-D Sensor and Graphical User Interface to Encourage Communication Skills in ASD Population. Journal of Medical Robotics Research (2016), 1740002.
[4] Giuseppe Palestra, Giovanna Varni, Mohamed Chetouani, and Floriana Esposito. 2016. A Multimodal and Multilevel System for Robotics Treatment of Autism in Children. In Proceedings of the International Workshop on Social Learning and Multimodal Interaction for Designing Artificial Agents. ACM, 3.
[5] Brian Scassellati, Henny Admoni, and Maja Matarić. 2012. Robots for Use in Autism Research. Annual Review of Biomedical Engineering 14 (2012), 275–294.