Automated Prediction of Extraversion during Human-Robot Interaction

Salvatore M. Anzalone2, Giovanna Varni1, Elisabetta Zibetti3, Serena Ivaldi4, and Mohamed Chetouani1

1 ISIR, CNRS & Université Pierre et Marie Curie, Paris, France
2 Psychiatrie de l'Enfant et de l'Adolescent, GH Pitié-Salpêtrière, Paris, France
3 CHART-Lutin, Université Paris 8, Paris, France
4 Inria, Villers-lès-Nancy, F-54600, France
{anzalone,varni,chetouani}@isir.upmc.fr, serena.ivaldi@inria.fr

Abstract. This paper introduces an automatic system for the prediction of extraversion during the first minutes of interaction between humans and a humanoid robot. In such interactions, the behavioural response of people depends on their personality traits and on their attitude towards robots. A set of non-verbal features is proposed to characterize these behavioural responses. Results obtained using these features on a dataset of adults interacting with the iCub robot show the effectiveness of the approach.

Keywords: Human-robot interaction, personality, non-verbal behaviour

1 Introduction

A crucial challenge for social robots is to adapt their behaviours to the personality of their interacting partners. They should be able to deal with the physical features, preferences, social behaviours and psychological traits that make the personality and the behaviour of each human being unique [14]. To achieve this, social robots should create and maintain a model of known partners, as well as of new acquaintances, updating it according to new experiences and new information [1].

Several studies focus on the automated online estimation of the personality traits of human partners [6], on their influence on the exchange of verbal and non-verbal signals, and on the mechanisms underlying the production of behaviours, emotions and thoughts during interaction with social robots [13][7]. In particular, these issues have been investigated in the project EDHHI [5]^1, focused on studying social interactions between humans and the humanoid robot iCub. Part of the dataset collected in EDHHI has been exploited in this study to investigate the possibility of predicting the personality trait of extraversion from a set of non-verbal features extracted during a first, short interaction with the robot (i.e., the first minutes). The focus on extraversion is motivated by its greater impact on social behaviour during interaction compared to the other traits [15]. The particular context of a social interaction with a robot can also induce novelty and anxiety effects in people. The presented study uses an estimate of the negative attitude towards robots [9] displayed by people to incorporate such effects.

^1 http://www.loria.fr/~sivaldi/edhhi.htm

2 Materials and Methods

Figure 1 shows a sketch of the system designed in this study. Before the interaction, questionnaires were administered to the subjects to estimate their personality traits. During the experiment, data is collected through an RGB-D sensor. From the obtained dataset, in particular from the depth image of the RGB-D sensor, a set of non-verbal features is extracted. Finally, a model of such features is trained in a supervised way, using the ground truth provided by the scores of the questionnaires.

Fig. 1. Overview of the proposed system.
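To make the flow of Fig. 1 concrete, the following minimal Python sketch shows the supervised training step under assumed interfaces; the random data stands in for the real feature matrix and questionnaire-derived labels, and none of this is the authors' actual code.

    # Skeleton of the pipeline in Fig. 1: the RGB-D recordings yield one
    # feature vector per participant (Sec. 3), the questionnaires yield a
    # binary extraversion label (Sec. 4), and a supervised classifier
    # links the two. Random data is used here as a placeholder.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(39, 72))      # 39 participants x 72 non-verbal features
    y = rng.integers(0, 2, size=39)    # binary class derived from questionnaires
    model = LogisticRegression(C=1.0).fit(X, y)   # supervised training step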
Questionnaires: The assessment of the personality traits of the participants in the experiment was done using two questionnaires: the Revised Personality Inventory (NEO-PI-R) [3], assessing the personality traits according to the Big Five model [6], and the Negative Attitude towards Robots Scale (NARS) [9]. From the NEO-PI-R questionnaire, only the 48 questions (Likert scale: 1, totally disagree; 5, totally agree) related to Extraversion were retained. The NARS questionnaire consists of 14 questions (Likert scale: 1, totally disagree; 7, totally agree) divided into 3 subscales: "Negative attitude toward situations of interaction with robots" (NARS-S1), "Negative attitude toward social influence of robots" (NARS-S2) and "Negative attitude toward emotions in interaction with robots" (NARS-S3). A minimal scoring sketch is given at the end of this section.

Experimental Setup: The experiments were carried out using the child-like robot iCub [8], controlled by an operator hidden behind a wall. Through a Wizard-of-Oz GUI, the robot was controlled in velocity when there was no physical interaction; otherwise, the operator was able to adjust the stiffness of the robot to make it compliant [4]. Facial expressions and speech were enabled, although the robot could say only a few sentences, such as "yes", "no" and "thank you".

Experimental Protocol: The experiments of Project EDHHI followed a protocol^2 developed to study the spontaneous behaviour of ordinary people interacting with a robot. Each participant was introduced in front of the robot without being given any specific instruction about how to behave with it. Standing on a support pole, the robot greeted the participant by gently waving its hands, looking at him and holding a coloured toy in its right hand. Standing in front of the robot, the participant was free to act as he liked: talking to the robot, touching it, and so on. As the participant did not receive any indication from the experimenter, he could, if he wanted to, start interacting more actively with iCub, asking questions, picking up and giving back the small toy, and so on (see Fig. 2). Due to this absence of constraints, the design of the experiment is focused on inducing spontaneous reactions in the human partners.

^2 Ivaldi et al., IRB n.20135200001072.

Participants: 39 healthy adults without any prior experience with robots volunteered to participate in the experiments (11 male and 28 female, aged 37.8y±15.2y). Each participant received an ID number to preserve anonymity, signed an informed consent form to partake in the study, and granted us the use of their recorded data and videos.

Fig. 2. iCub interacting with two participants.

Data Collection: The dataset from Project EDHHI includes the video stream collected by a Kinect RGB-D sensor (v.1, 30 fps) placed above the head of the robot in such a way as to capture the body and face of the human interacting with the robot. The dataset used in this work includes 39 videos (one for each participant) covering the first minutes of their interaction with iCub, synchronized with the robot events logged by the Wizard-of-Oz application used to control the robot. The average duration of the videos was 110.1s (SD=63.9s).
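As referenced above, the questionnaire scoring can be sketched as follows; this is illustrative Python, and the item-to-subscale assignments shown are hypothetical placeholders, since the actual NARS and NEO-PI-R item groupings are not reproduced in this paper.

    # Minimal sketch of questionnaire scoring: each (sub)scale score is the
    # sum of the Likert answers of its items. The index lists below are
    # invented for illustration, not the real NARS item assignment.
    def subscale_score(answers, item_indices):
        # answers: per-item Likert answers (1-7 for NARS, 1-5 for NEO-PI-R)
        return sum(answers[i] for i in item_indices)

    nars_answers = [4, 2, 5, 3, 6, 1, 4, 2, 5, 3, 6, 1, 4, 2]   # 14 NARS items
    s1 = subscale_score(nars_answers, [0, 1, 2, 3, 4, 5])       # NARS-S1 (assumed)
    s2 = subscale_score(nars_answers, [6, 7, 8, 9, 10])         # NARS-S2 (assumed)
    s3 = subscale_score(nars_answers, [11, 12, 13])             # NARS-S3 (assumed)
    print(s1, s2, s3)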
3 Non-verbal Features Extraction

According to the state of the art of psychology on personality traits, the extraversion dimension encompasses specific facets such as sociability, energy, assertiveness and excitement-seeking. The features adopted in this work address those facets [12].

F1) Histogram of Quantity of Motion (h-QoM): The quantity of motion is an approximation of the energy of the movement, computed as the area (i.e., the number of pixels) of a Silhouette Motion Image [2] normalised over the area of the silhouette. The histogram includes 64 bins to guarantee consistent dynamics.

F2-3) Histograms of Synchrony and Dominance (h-Sync, h-Dom): Synchrony and dominance are calculated according to the Event Synchronisation technique [11]. The events considered are a subset of the iCub actions and the full-body energy peaks of the participant. Event Synchronisation provides a pair of measures counting: i. the synchrony, as the number of actions occurring quasi-simultaneously with respect to the total number of actions occurring over the whole interaction; ii. the dominance, as how many times an action of one of the two interactants comes before the corresponding action performed by the other one. Histograms of these measures have been calculated over sliding windows (a sketch of the computation is given at the end of Section 4).

F4) Standard deviation of human-robot distance (STD-d): The human-robot distance in each frame is estimated as the average of the pixel values of the silhouette extracted from the depth image of the Kinect; the feature is the standard deviation of this distance over the interaction.

4 Extraversion Prediction

The features extracted from the interaction with iCub are used to create a model of the personality of each participant in the EDHHI project.

The NEO-PI-R questionnaire is a powerful instrument to evaluate the personality traits of people; however, in the studied scenario the model should take into account that people's behaviour can vary according to their attitude towards robots. This "contextual" information depends on the self-beliefs and previous experiences of the participants with robotics, and can strongly affect their behaviour during the experiment. The scores of the NARS questionnaire have therefore been combined with the NEO-PI-R scores. In particular, a Principal Component Analysis (PCA) has been carried out on a score vector including the NEO-PI-R, NARS-S1, NARS-S2 and NARS-S3 scores. The analysis showed that only the first principal component was meaningful, with an eigenvalue greater than 1 (PCA eigenvalues: 2.17, 0.85, 0.56 and 0.32; loadings of the first component: 0.32, -0.56, -0.57 and 0.51), revealing that the personality can be captured by its score. The distribution of the values of this first component was computed, and its median defined the two classes for the machine learning process.

The dataset resulted in a 39 (instances) x 72 (features) matrix. A Logistic Regression Classifier (LRC) [10] with penalty parameter C = 1 and L2 norm was then adopted, using a 10-run, 10-fold, stratified cross-validation (a sketch of the label construction and evaluation procedure is given below). Table 1 shows the performances obtained when different subsets of features are used. Notably, the classification results using dominance and synchrony information exceed chance level. The results obtained are consistent with previous results on the prediction of extraversion in human-human interaction using non-verbal features (e.g., [15]).

Features                        Precision  Recall  F-score
STD-d, h-QoM                       33%       27%     46%
STD-d, h-QoM, h-Dom                59%       62%     61%
STD-d, h-QoM, h-Sync               60%       64%     63%
STD-d, h-QoM, h-Sync, h-Dom        64%       69%     66%

Table 1. Average percentage of Precision, Recall and F-score.
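The following is a simplified sketch of the Event Synchronisation measure [11] used for F2-3, written in Python with a fixed time lag tau; the event definitions, the tau value and the sliding-window histogramming are assumptions introduced here for illustration, not the authors' exact implementation.

    # Simplified Event Synchronisation: given two lists of event timestamps
    # (robot actions, participant energy peaks), count how often an event
    # of one stream follows an event of the other within a lag tau.
    import math

    def event_synchronisation(tx, ty, tau=0.5):
        def count(a, b):
            # Pairs where an event of `a` follows an event of `b` within tau;
            # exactly simultaneous events count 1/2 for each direction.
            c = 0.0
            for ta in a:
                for tb in b:
                    d = ta - tb
                    if 0 < d <= tau:
                        c += 1.0
                    elif d == 0:
                        c += 0.5
            return c

        c_xy, c_yx = count(tx, ty), count(ty, tx)
        norm = math.sqrt(len(tx) * len(ty))
        q_sync = (c_xy + c_yx) / norm   # synchrony: quasi-simultaneous pairs vs. all events
        q_dom = (c_yx - c_xy) / norm    # dominance: which interactant tends to act first
        return q_sync, q_dom

    # Hypothetical usage: robot action times vs. participant energy-peak times (s).
    print(event_synchronisation([1.0, 3.2, 5.1], [1.2, 3.0, 7.4]))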
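Likewise, the label construction and evaluation of Section 4 can be sketched as follows; this assumes a scikit-learn implementation (with standardised scores before the PCA and per-run shuffling of the folds), which is plausible but not stated in the paper.

    # Sketch of Sec. 4: PCA over [NEO-PI-R, NARS-S1, NARS-S2, NARS-S3],
    # median split of the first component into two classes, then a 10-run,
    # 10-fold stratified cross-validation of an L2 logistic regression (C=1).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.preprocessing import StandardScaler

    def labels_from_scores(scores):
        # scores: (n_participants, 4) questionnaire score matrix
        z = StandardScaler().fit_transform(scores)
        pc1 = PCA(n_components=1).fit_transform(z)
        return (pc1[:, 0] > np.median(pc1)).astype(int)   # median split -> 2 classes

    def evaluate(X, y, runs=10):
        clf = LogisticRegression(C=1.0, penalty="l2")
        f1_per_run = [
            cross_val_score(clf, X, y, scoring="f1",
                            cv=StratifiedKFold(10, shuffle=True, random_state=r)).mean()
            for r in range(runs)
        ]
        return float(np.mean(f1_per_run))

    # Toy usage with random data in place of the real 39 x 72 feature matrix:
    rng = np.random.default_rng(0)
    y = labels_from_scores(rng.normal(size=(39, 4)))
    print(evaluate(rng.normal(size=(39, 72)), y))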
References

1. Anzalone, S.M., et al.: Towards partners profiling in human robot interaction contexts. In: SIMPAR, pp. 4–15. Springer (2012)
2. Camurri, A., et al.: Interactive systems design: a kansei-based approach. In: Proc. Conf. on New Interfaces for Musical Expression, pp. 1–8 (2002)
3. Costa, P., McCrae, R., Rolland, J.: NEO-PI-R. Inventaire de Personnalité Révisé. Editions du Centre de Psychologie Appliquée, Paris (1998)
4. Fumagalli, M., Ivaldi, S., et al.: Force feedback exploiting tactile and proximal force/torque sensing. Theory and implementation on the humanoid robot iCub. Autonomous Robots 33(4), 381–398 (2012)
5. Ivaldi, S., et al.: Towards engagement models that consider individual factors in HRI: on the relation of extroversion and negative attitude towards robots to gaze and speech during a human-robot assembly task. arXiv:1508.04603 [cs.RO], pp. 1–24 (2015)
6. McCrae, R.: The five-factor model of personality. In: Corr, P., Matthews, G. (eds.) Handbook of Personality Psychology, pp. 148–161 (2009)
7. Meerbeek, B., Saerbeck, M., Bartneck, C.: Iterative design process for robots with personality. In: Dautenhahn, K. (ed.) AISB2009 Symposium on New Frontiers in Human-Robot Interaction, pp. 94–101 (2009)
8. Natale, L., et al.: The iCub platform: a tool for studying intrinsically motivated learning. In: Baldassarre, G., Mirolli, M. (eds.) Intrinsically Motivated Learning in Natural and Artificial Systems. Springer (2013)
9. Nomura, T., et al.: Experimental investigation into influence of negative attitudes toward robots on human-robot interaction. AI & Society 20(2), 138–150 (2006)
10. Pampel, F.: Logistic Regression: A Primer. Sage Publications (2000)
11. Quiroga, R.Q., et al.: Event synchronization: a simple and fast method to measure synchronicity and time delay patterns. Physical Review E 66(4) (2002)
12. Rahbar, F., Anzalone, S., Varni, G., Zibetti, E., Ivaldi, S., Chetouani, M.: Predicting extraversion from non-verbal features during a face-to-face human-robot interaction. In: International Conference on Social Robotics. Springer (2015)
13. Tapus, A., et al.: User-robot personality matching and assistive robot behavior adaptation for post-stroke rehabilitation therapy. Intelligent Service Robotics 1(2) (2008)
14. Vinciarelli, A., Mohammadi, G.: A survey of personality computing. IEEE Transactions on Affective Computing 5(3), 273–291 (2014)
15. Zen, G., et al.: Space speaks: towards socially and personality aware visual surveillance. In: 1st ACM Int. Workshop on Multimodal Pervasive Video Analysis, pp. 37–42 (2010)