Would a Robot Trust You? Developmental Robotics Model of Trust and Theory of Mind

Samuele Vinanzi∗, Massimiliano Patacchiola∗, Antonio Chella† and Angelo Cangelosi∗
∗Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom
†RoboticsLab, Università degli Studi di Palermo, Palermo, Italy
Correspondence: Samuele Vinanzi, samuele.vinanzi@plymouth.ac.uk

Abstract—Trust is a critical issue in human-robot interaction: as robotic systems gain complexity, it becomes crucial for them to be able to blend into our society by maximizing their acceptability and reliability. Various studies have examined how people attribute trust to robots, but fewer have investigated the opposite scenario, in which the robot is the trustor and a human is the trustee. The ability of an agent to evaluate the trustworthiness of its sources of information is particularly useful in joint-task situations where people and robots must collaborate to reach shared goals. We propose an artificial cognitive architecture, based on the developmental robotics paradigm, that can estimate the reliability of its human interactors for the purpose of decision making. This is accomplished using Theory of Mind (ToM), the psychological ability to assign to others beliefs and intentions that may differ from one's own. Our work focuses on a humanoid robot cognitive architecture that integrates a probabilistic ToM and trust model supported by an episodic memory system. We tested our architecture on an established developmental psychology experiment, achieving the same results obtained by children, thus demonstrating a new method to enhance the quality of human-robot collaboration.

Keywords—trust, theory of mind, episodic memory, cognitive robotics, developmental robotics, human-robot interaction

Fig. 1. Experimental setup. A Pepper robot (1) and an informant (2) face each other in front of a table where a sticker can be moved between two positions (3).

Trust is a central component of social interactions between both humans and robots. It can be defined as the willingness of a party (the trustor) to rely on the actions of another party (the trustee), with the former having no control over the latter [1]. The fundamental role of trust evaluation is to ensure successful relationships, especially during shared-goal interactions where all the parties must cooperate in a joint task to reach a common objective. The development of trust during childhood is still under debate, but one of the most interesting theories is the "trust vs. mistrust" stage by Erikson [2], which states that the propensity to trust is proportional to the quality of care received during infancy. A psychological trait that relates to the mastery of one's own trustfulness is Theory of Mind (ToM), the ability to attribute to others mental states, such as beliefs and intentions, that can differ from one's own. Vanderbilt et al. [3] have demonstrated that children are not good at identifying misleading sources of information until their fifth year of age, when their ToM fully develops.

Following these psychological results, we designed an artificial cognitive architecture for a SoftBank Pepper humanoid robot that uses a probabilistic approach, first theoretically proposed by Patacchiola and Cangelosi [4], to model trust and ToM in order to estimate the reliability of its informants. In particular, inference is computed on the probability distributions of the nodes of a Bayesian network. We tested this architecture by replicating Vanderbilt's experiment [3], which consists of a sticker-finding game where the child, or in our case the robot, must learn to distinguish helpers from trickers. Our system is able to generate a belief network for each user and to perform decision making and belief estimation. In addition, an episodic memory module enables the robot to build a personal character that depends on how it has been treated in the past, making it more or less inclined to trust someone it has never met.
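The paper itself contains no code; the following is a minimal sketch of how a per-informant reliability estimate of this kind could be realized. It stands in for the full Bayesian network of [4] with a simpler conjugate Beta-Bernoulli update, and all names (InformantModel, follow_advice) and parameter values are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: one belief model per informant, updated after
# each sticker-finding trial. P(reliable) is the posterior mean of a
# Beta distribution over truthful vs. deceptive episodes.

from dataclasses import dataclass


@dataclass
class InformantModel:
    """Simplified stand-in for the per-user belief network."""
    name: str
    alpha: float = 1.0  # pseudo-count of truthful episodes (prior)
    beta: float = 1.0   # pseudo-count of deceptive episodes (prior)

    def update(self, was_truthful: bool) -> None:
        """Bayesian update after observing one trial outcome."""
        if was_truthful:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def p_reliable(self) -> float:
        """Posterior mean of the informant's reliability."""
        return self.alpha / (self.alpha + self.beta)

    def follow_advice(self, threshold: float = 0.5) -> bool:
        """Decision making: follow the hint only if the informant is
        currently estimated to be reliable."""
        return self.p_reliable() > threshold


if __name__ == "__main__":
    helper = InformantModel("helper")
    tricker = InformantModel("tricker")
    for _ in range(3):  # three demonstration trials, as in the game
        helper.update(was_truthful=True)
        tricker.update(was_truthful=False)
    print(helper.p_reliable(), helper.follow_advice())    # high -> trust
    print(tricker.p_reliable(), tricker.follow_advice())  # low -> distrust
```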
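Under the same caveat, the sketch below illustrates one way the episodic memory module could bias the prior trust assigned to a stranger: outcomes of all past interactions are stored as episodes, and their overall tone shapes the Beta prior handed to a newly created informant model. The class and method names are again hypothetical.

```python
# Illustrative sketch: lifetime experience shapes the prior handed to
# the model of a never-before-met informant.

from typing import List, Tuple


class EpisodicMemory:
    """Stores how the robot has been treated across all informants."""

    def __init__(self) -> None:
        self.episodes: List[bool] = []  # True = was treated honestly

    def record(self, was_truthful: bool) -> None:
        self.episodes.append(was_truthful)

    def prior_for_stranger(self, strength: float = 2.0) -> Tuple[float, float]:
        """Return (alpha, beta) biased by past experience: a robot that
        was mostly helped starts out trusting, and vice versa."""
        if not self.episodes:
            return 1.0, 1.0  # uninformative prior
        honesty = sum(self.episodes) / len(self.episodes)
        return 1.0 + strength * honesty, 1.0 + strength * (1.0 - honesty)
```

With these pieces combined, a robot that has mostly met helpers would initialize a newcomer as, e.g., InformantModel("newcomer", *memory.prior_for_stranger()), making it initially more inclined to follow advice, which is the "personal character" effect the architecture aims for.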
The results we obtained are in line with the original experiments, confirming that our architecture correctly models trust and ToM mechanisms in a humanoid robot. In the future, we plan to use this model in a wider scenario where trust estimation and intention reading will generate and modulate collaborative behavior between humans and robots.

ACKNOWLEDGMENT

This material is based upon work supported by the Air Force Office of Scientific Research, Air Force Materiel Command, USAF, under Award No. FA9550-15-1-0025.

REFERENCES

[1] Roger C. Mayer, James H. Davis, and F. David Schoorman. An integrative model of organizational trust. Academy of Management Review, 20(3):709–734, 1995.
[2] Erik H. Erikson. Childhood and Society. W. W. Norton & Company, 1993.
[3] Kimberly E. Vanderbilt, David Liu, and Gail D. Heyman. The development of distrust. Child Development, 82(5):1372–1380, 2011.
[4] Massimiliano Patacchiola and Angelo Cangelosi. A developmental Bayesian model of trust in artificial cognitive systems. In 2016 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), pages 117–123. IEEE, 2016.