=Paper=
{{Paper
|id=Vol-3261/paper1
|storemode=property
|title=Emotional behavior trees for empathetic human-automation interaction
|pdfUrl=https://ceur-ws.org/Vol-3261/paper1.pdf
|volume=Vol-3261
|authors=Stefania Costantini,Pierangelo Dell'Acqua
|dblpUrl=https://dblp.org/rec/conf/woa/CostantiniD22
}}
==Emotional behavior trees for empathetic human-automation interaction==
Emotional Behavior Trees for Empathetic Human-Automation Interaction

Pierangelo Dell’Acqua (Dept. of Science and Technology, Linköping University, Sweden)
Stefania Costantini (Department of Information Engineering, Computer Science and Mathematics, University of L’Aquila, Italy)

Abstract

For the tasks of improving caregiving in medicine and other sectors (e.g., teaching) and of constructing effective human-AI teams, agents should be endowed with an emotion recognition and management module, capable of empathy and of modelling aspects of the Theory of Mind, in the sense of being able to reconstruct what someone is thinking or feeling. In this paper, we propose an architecture for such a module, based upon an enhanced notion of Behavior Trees. We illustrate the effectiveness of the proposed architecture on a significant example and on a wider case study.

Keywords: Human-centered computing, Human-automation interaction, Affective computing, Behavior trees

1. Introduction

A long-term goal in the field of human-machine interaction and in assistive robotics (e.g., in healthcare applications) is to formalize aspects of the “Theory of Mind” (ToM), which is (cf. the Oxford Handbook of Philosophy and Cognitive Science [1], chapter by Alvin I. Goldman) an important social-cognitive skill: the ability to attribute mental states, including emotions, desires, beliefs, and knowledge, both to oneself and to others, and to reason about the practical consequences of such mental states.
Theory of Mind, developed originally by philosophers and psychologists, is starting to be applied to robotics (and some suitable logic formalizations are being developed [2, 3]), in particular with the arrival of “service robots” devised to support users in their everyday tasks. In eHealth, for example, robots support patients on the one hand, by reminding them to take their medicines and by providing advice and reassurance, and doctors on the other, by constantly monitoring the user’s vital parameters and creating alerts whenever necessary. In order to render these robots acceptable and even appreciated by users, they will have to be programmed so as to mimic basic social skills and behave in a socially acceptable manner, meaning that their behaviour is to some extent predictable by the user and conformant to social standards. Theory of Mind is often linked to so-called “Affective Computing”: a set of techniques able to elicit a human’s emotional condition from physical signs, to enable the system to respond intelligently to human emotional feedback, and to enhance ToM activities by supplying perceptions related to the user’s emotional signs. In this paper we go beyond simple affective computing, as we aim to define a module, to be possibly incorporated into any agent architecture, responsible for the emotional interaction between the agent and the user.

WOA 2022: 23rd Workshop From Objects to Agents, September 1–2, Genova, Italy. pierangelo.dellacqua@liu.se (P. Dell’Acqua); stefania.costantini@univaq.it (S. Costantini). https://dellacqua.se/ (P. Dell’Acqua); http://www.di.univaq.it/stefcost (S. Costantini). ORCID 0000-0003-3780-0389 (P. Dell’Acqua). © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org).
In particular, the envisaged agents are empathetic: they can sense the user’s emotions, coupled with the ability to work out what the user might be thinking or feeling. Therefore, our notion of empathy is strongly linked to ToM. Empathy has generally been seen as a positive quality, and lately formal studies in neuroscience (cf., e.g., [4] and the references therein) have tried to provide a formal perspective on the neurobiological and cognitive mechanisms that underlie the positive role of empathy, in particular in medicine, where empathic providers help patients heal faster and experience fewer symptoms. According to neuroscientists, the notion of empathy encompasses: (1) the capacity to react to the valence and intensity of others’ emotions; (2) conscious awareness of the emotional state of another person; (3) empathic concern, implying the motivation to care for someone’s welfare; (4) cognitive empathy, similar in fact to ToM in the sense of being able to reconstruct what someone is thinking or feeling. There is nowadays growing attention on building intelligent systems where humans and AIs form teams, exploiting the potentially synergistic relationship between human and automation, thus devising Artificial Intelligence (AI) systems that cooperate with humans in order to perform complex tasks, possibly involving a high degree of risk. As a simple example, in an AI-based self-driving vehicle, the AI component is expected to evaluate and “co-manage” situations and risks, with the driver used as a fallback, or even to self-manage the risks should the circumstances require it. Human-automation interaction has been studied extensively, and is one of the main themes of “Human-centered AI”. This issue also falls in the realm of “Trustworthy AI”, whose requirements are respect for human autonomy, prevention of harm, fairness, and explainability.
Trustworthy AI is meant to guarantee compliance, safety, security, reliability, and adaptability. Working together, AI and humans can produce results that exceed what either can achieve alone. For instance, a human driver might train the co-driving automation via a cooperative task shared between the human driver and the AI-based system installed on the vehicle. In this synergistic relationship, humans improve automation efficacy (capabilities and performance), while automation improves human efficiency and compensates for human inadequacies, catching and correcting possible misbehaviours (possibly also due to physically or emotionally impaired states) and providing useful suggestions. So, for the tasks of improving caregiving in medicine and other sectors (e.g., teaching) and of constructing effective human-AI teams, agents should be endowed with an emotion recognition and management module. In this paper, we propose an architecture for such a module, based upon an enhanced notion of Behavior Trees.

The paper is organized as follows. In Section 2 we provide the necessary background on Behavior Trees. In Section 3 we discuss the proposed architecture, and in Section 4 we illustrate the enhanced Behavior Trees that we devised as the core of this architecture. In Sections 5 and 6 we propose a possible example of application of the proposed architecture, and a wider case study. Finally, in Section 7 we conclude.

2. Background: Behavior Trees

Behavior Trees (BTs) were invented as a tool to enable modular AI in computer games [5]. In BTs, the state transition logic is not dispersed across the individual states (as in finite state machines), but organized in a hierarchical tree structure with the states as leaves. This has a significant effect on modularity, which in turn simplifies both synthesis and analysis by humans and algorithms alike. A behavior tree is essentially a mathematical model of plan execution.
BTs describe switchings between a finite set of tasks in a modular fashion. Their strength comes from their ability to compose complex tasks out of simple tasks, without worrying about how the simple tasks are implemented. In the last decade BTs have received an increasing amount of attention in computer science, robotics, control systems and video games. For a comprehensive survey of BTs in Artificial Intelligence and robotic applications, see [6]. Despite the fact that there is no formal definition, behavior trees have been comprehensively described by Champandard [7, 8, 9, 10] and Knafla [11]. Damian Isla has also discussed the implementation of behavior trees in commercial games [5, 12]. In this section, we introduce a definition of behavior trees based on the descriptions of Champandard and Knafla.

A behavior tree is a directed acyclic graph consisting of different types of nodes. Most of the time the behavior tree is tree-shaped, hence the name. However, unlike a traditional tree, a node in a behavior tree can have multiple parents, which allows the reuse of that part of the BT. The traversal of a behavior tree starts at the top node. When a node is executed, it returns one of three states: success, failure or running. The first two are self-explanatory; running signifies that the node has not yet finished executing. A behavior tree consists of the following types of nodes.

2.1. Leaf nodes

Action: An action represents a behavior that the character can perform. The action returns the state success or failure when it completes its execution, depending on the outcome. When an action needs more time to complete, it returns the state running. An action is depicted as a white, rounded rectangle.

Condition: A condition checks an internal or external state. It returns either success or failure. Conditions are similar to actions except that they execute immediately and hence never return running. A condition is represented as a gray, rounded rectangle.

2.2.
Inner nodes

Sequence Selector: A sequence selector is a node that typically has several child nodes, which are executed sequentially. As long as a child node completes its execution successfully, the sequence selector continues executing the next child node in the sequence. If every child node returns success, then the sequence selector returns success. Should one of the child nodes return failure, the sequence selector immediately returns failure. If a child node returns running, the sequence selector also returns running. The sequence selector keeps track of which, if any, of its child nodes is currently running. A sequence selector is depicted as a gray square with an arrow across the links to its child nodes.

Figure 1: A general architecture for emotional empathetic agents.

Priority Selector: A priority selector has a list of child nodes which it tries to execute one at a time, in the specified order, until one of the child nodes returns success. If none of the child nodes executes successfully, the priority selector returns failure. If a child node returns running, the priority selector also returns running. The priority selector keeps track of which of its child nodes, if any, is currently running. A priority selector is represented as a gray circle with a question mark in it.

Parallel Node: A parallel node executes all of its child nodes in parallel. A parallel node can have different ways of determining when to stop executing its child nodes. One may specify the number of child nodes that must execute successfully for the parallel node to succeed, and likewise the number of child nodes that must fail in order for the parallel node to fail. A parallel node is depicted as a gray circle with a P in it.

Decorator: A decorator is a node that acts as a filter, placing certain constraints on the execution of its single child node without affecting the child node itself.
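As a minimal, self-contained sketch of the node semantics described above (the class names and the `tick` method are our own, not notation from the paper), the return-state logic of leaf nodes, sequence selectors and priority selectors can be written as:

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Action:
    """Leaf node wrapping a behavior; may return RUNNING if unfinished."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

class Condition:
    """Leaf node checking a state; executes immediately, never RUNNING."""
    def __init__(self, pred):
        self.pred = pred
    def tick(self):
        return Status.SUCCESS if self.pred() else Status.FAILURE

class Sequence:
    """Runs children in order; fails on the first failure, succeeds if all succeed."""
    def __init__(self, *children):
        self.children = children
        self.current = 0  # remembers which child, if any, is running
    def tick(self):
        while self.current < len(self.children):
            s = self.children[self.current].tick()
            if s == Status.RUNNING:
                return Status.RUNNING
            if s == Status.FAILURE:
                self.current = 0
                return Status.FAILURE
            self.current += 1
        self.current = 0
        return Status.SUCCESS

class PrioritySelector:
    """Tries children in the specified order until one succeeds; fails if none do."""
    def __init__(self, *children):
        self.children = children
        self.current = 0  # remembers which child, if any, is running
    def tick(self):
        while self.current < len(self.children):
            s = self.children[self.current].tick()
            if s == Status.RUNNING:
                return Status.RUNNING
            if s == Status.SUCCESS:
                self.current = 0
                return Status.SUCCESS
            self.current += 1
        self.current = 0
        return Status.FAILURE
```

For instance, `PrioritySelector(Sequence(Condition(enemy_visible), Action(attack)), Action(wander))` attacks when an enemy is visible and otherwise falls back to wandering, with `RUNNING` propagated upward while an action is still executing.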
For instance, a decorator can prevent a child node from executing more often than every five seconds. Decorators are represented as diamonds with descriptive text inside.

3. An Architecture for an Emotional Empathetic Agent

In this section we present the proposed architecture for emotional empathetic agents, depicted in Figure 1, and discuss its main components.

Sensor: Agents act in their environment. The environment contains the user (human) and possibly other agents. An agent perceives its environment through sensors and acts upon it through actuators. Every agent perceives its own actions and may perceive their effects later via its sensory input.

Emotion Recognition: The emotion recognition module (also called affective sensing) receives the raw data from the sensory input and processes it to synthesize the user’s affective state E. The output ⟨P, E⟩ distinguishes the input data P from the sensors about the environment from the synthesized emotional state E. There are three main approaches to observing emotional traits: speech analysis; visual observation of gestures, body posture or facial features; and measuring physiological parameters through sensors with direct body contact. These approaches observe changes in physiological processes related to emotional states. The reader may refer to [13] for an approach to emotion recognition with a physiological background.

Affective Appraisal: Affective appraisal refers to the process in which events from the environment are evaluated in terms of their emotional significance. Appraisal theory is the theory in psychology that emotions are extracted from our evaluations (appraisals or estimates) of events that cause specific reactions in different people. Essentially, our appraisal of a situation causes an emotional, or affective, response that is going to be based on that appraisal.
Appraisal theories of emotion state that emotions result from people’s interpretations and explanations of their circumstances, even in the absence of physiological arousal. There are two basic approaches, the structural approach and the process model; both explain the appraisal of emotions and account, in different ways, for how emotions can develop. In the absence of physiological arousal, we decide how to feel about a situation after we have interpreted and explained the phenomena. Several appraisal theories have been proposed in the literature. Notably, Lazarus [14, 15] proposes a multidimensional appraisal theory of emotion, where an appraisal is an evaluation of an external event. His theory of emotion can be broken down into a sequence: (1) cognitive appraisal, (2) physiological response, and (3) action. Ortony, Clore and Collins’s model [16] (OCC) is a widely used model of emotion stating that the strength of a given emotion primarily depends on the events, agents, or objects in the environment of the agent exhibiting the emotion. For an implementation of the OCC model within an emotional agent architecture for believable characters, the reader may refer to [17].

Affective State: The term “affective state” refers to how an entity is currently feeling, that is, the product of its emotions at a certain moment in time. Within the emotional agent architecture of [18], emotions were represented as signals coming from the Affective Appraisal module; the signals correspond to what in neuroscience is the concentration of certain chemical substances in the human brain, and the signal we use is a simplified representation of the concentration levels. The set of all signals of the same type forms the corresponding emotional state. Each signal has the form of a sigmoid curve and consists of the following phases: delay, attack, sustain and decay. The sigmoid curve is defined as

sigmoid(t) = g / (1 + e^(−(t+h)/s)) + v

where t is the time, g is the gain, h is the horizontal shift, s is the slope steepness and v is the vertical shift. Since the signals are parameterized, it is possible to create fairly diverse types of emotion signals. To address more complex emotional scenarios, in this approach emotions can influence each other through a sophisticated filtering system; refer to [19] for a comprehensive presentation.

Decision Making: This module is responsible for selecting the next action to execute. It receives inputs from the Emotion Recognition module as well as from the Affective State and Agent Memory modules. Note that here with E we indicate the emotional state of the human who interacts with the automation (namely, the agent), and with Ê we indicate the emotional state of the agent (that is, the automation). E and Ê are therefore distinct and assume different values: the emotions elicited from the human may differ from the emotions designed and deployed for the agent. In this paper we focus on employing the technique of emotional behavior trees as the core technology for the decision making module, and discuss it further in the next section.

Agent Memory: The knowledge base represents the memory of the agent. All information from the sensory inputs P from the environment and the user’s emotional state E, as well as the actions A selected for execution, is stored here with a time stamp.

Actuator: The actions A selected by the Decision Making module are passed to the actuator, whose role is to execute them on the environment. Actions come together with an emotion encoding to display the agent’s emotions via verbal or visual communication. The actuator, depending on the type of application, must be equipped with the ability to render the emotional aspect of actions; one example is verbal communication delivered with a smiling face.

4. Emotional Behavior Trees

It is not straightforward to couple behavior trees with emotions to mimic human emotional decision making.
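Before developing this point, the parameterized sigmoid emotion signal from the Affective State module of Section 3 can be sketched in a few lines; the function name and the default parameter values are illustrative assumptions, not values from the paper:

```python
import math

def emotion_signal(t, gain=1.0, h_shift=0.0, slope=1.0, v_shift=0.0):
    """Sigmoid-shaped emotion signal: g / (1 + e^(-(t + h)/s)) + v.

    t: time; gain g scales the amplitude; h shifts the curve in time;
    s controls the steepness of the attack; v shifts the baseline.
    Varying the parameters yields differently shaped signals.
    """
    return gain / (1.0 + math.exp(-(t + h_shift) / slope)) + v_shift
```

With the defaults, the signal rises from near 0 toward 1 around t = 0; increasing `slope` flattens the attack phase, while `h_shift` delays or advances it.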
If we wish to obtain natural and interesting behavior, it is important that the characters behave in an emotional way. It could be claimed that it is possible to incorporate emotions into behavior trees by merely using emotions in the conditions. However, doing so may create large, cumbersome behavior trees that are difficult to manage. For each behavior, a specific set of conditions would have to be placed on emotional states. These conditions would most likely take the form of checking the emotional values against a fixed threshold, which would prevent any subtle emotional effect on decision making. This approach would most likely lead to a large behavior tree with numerous nested conditions, making it difficult to construct and manage. Furthermore, since in this paper we focus on emotion-based interaction between humans and machines, with such an approach the human (the end user) would certainly feel that the system is programmed to react to his/her inputs in a purely rational way, as a machine.

4.1. Emotional Selector

To take emotions into consideration, Johansson and Dell’Acqua [20] extended the definition of behavior trees and introduced a new type of selector, called the emotional selector. They called the resulting model the emotional behavior tree (EmoBT).

Emotional Selector: The emotional selector orders its child nodes according to a number of identified relevant factors (see the method below) and the affective state of the agent. Once the ordering has been established by selecting the child nodes based upon their probabilities, the emotional selector behaves as a priority selector. When the emotional selector has completed its execution and is executed again, the ordering of the nodes must be re-calculated. An emotional selector is represented as a gray circle with the character ’E’ in it. Below we present the methodology to define the ordering of the child nodes of an emotional selector.

METHOD

1. Define the objectives of the given application.

2.
Identify the relevant aspects R = {r1, …, rm} of the human-automation interaction with respect to the objectives.

3. Identify the emotions of the emotional state E = {e1, …, en}.

4. For every aspect r ∈ R define the r-value of every node type of the behavior tree, that is, the r-value of action, condition, sequence selector, priority selector, parallel node and decorator.

5. For every aspect r ∈ R define the emotions that affect r positively or negatively, and define the emotional weight L_r in terms of the emotional state E: L_r = f_r(E).

6. For every aspect r ∈ R define the weight W_{r,i} of every child i of an emotional selector in terms of an equation h whose parameters are L_r and the r-value of the node: W_{r,i} = h(L_r, r(i)).

7. Define the overall weight W_i of every child node i of an emotional selector: W_i = g(W_{r1,i}, …, W_{rm,i}).

8. For every child node i define the probability prob_i of being selected for execution.

4.2. Modelling Believable NPCs via EmoBTs

In [20] the authors discuss the role of emotions in decision making. In a case study aimed at modelling realistic, believable non-player characters (NPCs) in video games, they identify three relevant aspects for that type of application: Risk, Time and Planning. Below we show how to incorporate these three aspects into EmoBTs by following the methodological steps. For simplicity of exposition, we focus on the Risk aspect.

1. Objectives: To model realistic and believable non-player characters.

2. Relevant aspects R = {Risk, Time, Planning}. Here we only motivate the Risk aspect; the reader may refer to [20] for a more detailed account.

Risk perception: The perceived risk of an action is greatly influenced by emotions [21]. Studies by Lerner and Keltner [22, 23] have shown that happy and angry people are willing to accept greater risks, while fearful people are more pessimistic.
Raghunathan and Pham [24] propose that anxiety is connected to risk-avoidance, while sadness allows for greater risks but instead focuses on high rewards. A study by Maner et al. [25] also shows that people with anxiety are more prone to risk-avoidant behavior.

3. E = {fear, fatigue, sadness}. Only three emotions were identified, for simplicity of exposition.

4. Definition of the Risk-value (Risk Assessment): Risk has to do with how dangerous the character believes a situation is. A risk value is between 0 and 1, 0 being no risk at all and 1 being extremely dangerous; the risk value measures the probability of risk. EmoBTs cannot reason about the risk of performing an action, but we allow the designer to add a risk value to each leaf node in the tree, and derive the associated risk for the inner nodes.

Action: An action has a risk value that is set by the designer (0 by default).

Condition: A condition has a risk value that is set by the designer (0 by default).

Sequence Selector: Since a sequence selector performs every child node of the sequence, the risks of all child nodes must be combined. The overall risk value of a sequence selector j is calculated as

Risk_j = 1 − ∏_{i=1}^{N} (1 − Risk_i)

where N is the number of child nodes of j.

Priority Selector: A priority selector j only executes one of its child nodes. Since we cannot determine in advance which node will be executed, we define the risk value of j as the average of the risks of its child nodes:

Risk_j = (∑_{i=1}^{N} Risk_i) / N

Parallel Node: Since all of the child nodes of a parallel node j are executed, the risk is defined as

Risk_j = 1 − ∏_{i=1}^{N} (1 − Risk_i)

where N is the number of child nodes of j.

Decorator: The risk value of a decorator is the same as that of its child node.

5. To mimic how affective states influence decision making, we introduce emotional weights for every relevant factor; below we show the Risk factor. Let e⁺_1, …, e⁺_M (resp. e⁻_1, …, e⁻_N) be the values of the emotions that positively (resp., negatively) affect the perception of risk. We define the emotional weight for risk as

E_Risk = (∑_{i=1}^{M} e⁺_i) / M − (∑_{j=1}^{N} e⁻_j) / N

6. For every aspect r ∈ R we define the weight W_{r,i} of every child node i of any emotional selector. We consider the Risk aspect. The weight for risk for a child node i is calculated as

W_{Risk,i} = (1 − E_Risk × δ) × Risk_i

where Risk_i is the risk value for the child node i. Note that W_{Risk,i} should be clamped to the interval [0, 1] since it represents a probability. The variable δ determines how much emotions affect the weights; its value must be between 0 and 1, where 0 signifies no emotional impact and 1 corresponds to full emotional impact.

7. The overall weight of a child node i is calculated as

W_i = α × W_{Risk,i} + β × W_{Time,i} + γ × W_{Plan,i}

where the constants α, β and γ give importance to their respective factors.

8. For every child node i we define the probability prob_i for the node to be selected for execution. To select which child node to execute, we list the child nodes in ascending order according to the weight value W_i; hence, the lower the value of W_i, the more desirable the node. Since there are only three factors used to calculate the weights, but there may be many child nodes of an emotional selector, we want to make it possible, however unlikely, for the character to choose a less desirable node. To do so, after the child nodes have been ordered according to increasing weights W_i, we associate a probability prob_i to every child node i:

prob_i = a(1 − a)^(i−1), with 0.5 ≤ a < 1

where a is chosen depending on the desired distribution. Assuming a = 0.5, the most desirable child has a probability of 50% of being selected, the second child 25%, and so on. Once the ordering has been established by selecting the child nodes based upon their probabilities, the emotional selector behaves as a priority selector.
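Steps 5–8 above, instantiated for the Risk aspect, can be sketched as follows. This is our own minimal illustration: the function names are assumptions, and the handling of the leftover probability mass (the geometric probabilities do not sum to 1) is left unspecified, as in the paper:

```python
def emotional_weight(pos, neg):
    """Step 5: E_Risk = mean of emotions affecting risk perception positively
    minus mean of those affecting it negatively."""
    return sum(pos) / len(pos) - sum(neg) / len(neg)

def risk_weight(risk_i, e_risk, delta=0.5):
    """Step 6: W_Risk,i = (1 - E_Risk * delta) * Risk_i, clamped to [0, 1].
    delta in [0, 1] controls how strongly emotions affect the weight."""
    return min(1.0, max(0.0, (1.0 - e_risk * delta) * risk_i))

def ordered_probabilities(weights, a=0.5):
    """Step 8: order child indices by ascending weight (lower = more
    desirable) and assign geometric probabilities prob_i = a * (1 - a)^(i-1)."""
    order = sorted(range(len(weights)), key=lambda i: weights[i])
    return [(i, a * (1 - a) ** rank) for rank, i in enumerate(order)]
```

For example, with overall weights [0.8, 0.2, 0.5] and a = 0.5, the second child is most desirable and gets probability 0.5, the third 0.25, and the first 0.125.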
When the emotional selector has completed its execution and is executed again, the ordering of the nodes must be re-calculated.

5. Example: NPC-Fighter Behavior

Here we show an example of a virtual character in a fighting scenario; the example is taken from [20], to which the reader can refer for a more detailed description. A fighting NPC often has several different ways to attack an enemy, each option involving different amounts of risk, planning and time. In the example, the NPC can choose among the following attacks: it can throw grenades, use a musket, use a sword, or perform a fancy knifing maneuver. The emotional behavior tree used for the example is depicted in Figure 2.

Figure 2: The behavior tree for the NPC fighter.

The risk, planning, and time interval values for the respective actions are displayed in Figure 3.

Figure 3: Time, risk and planning values of leaf nodes. Movement actions refer to stepping left and right, and jumping in front.

The fighting example above is simulated under different emotional states. In Figure 4, the weight values for each action are shown under different emotional conditions. It can be seen that the weight values change widely due to emotional impact. For example, when the character is afraid, maneuvering is a good choice because it is not risky. When the character is sad and tired, throwing grenades seems like a good option because it does not involve much planning and it gives fast results.

Figure 4: The weight values for the fighting game example, given different emotional states. When listed, each emotion has the maximum value 1.

In Figure 5 the generated probabilities for each child node are displayed, using a = 0.7. Note that the ordering with respect to weight value is preserved, but there is a greater distinction in which action is more preferable.

6.
Case Study: The Animalistic Project

Some modules of the agent architecture for emotional empathetic agents were first deployed in the Animalistic project at Norrköpings Visualiseringscenter in 2011, as part of the doctoral work of Anja Johansson [18]. The overall project architecture is depicted in Figure 6. The Animalistic project was a half-year-long interactive installation at the center. The project made use of the Kinect device to allow visitors to the center to interact with virtual animals in a virtual world. The installation was developed mainly for children, but any visitor could use it. The animals in the virtual world were extra-terrestrial in nature, inhabiting a vast field of grass. The animals could, depending on their mood, decide to sleep, eat, explore the world or walk up to the screen to interact with the visitors. There were three different species of animals in the installation, each with a different type of behavior, such as a preferred resting place.

Figure 5: Resulting probabilities of the fighting game example with a = 0.7.

Since the animals were not expected to be empathetic with the visitors, the Emotion Recognition module was not used in the architecture. The form of interaction was, due to limited resources, simple: standing in front of the screen attracted animals, unless too many people were present, in which case the animals were frightened.

Figure 6: The architecture for the Animalistic project.

In the installation, the following affective and physiological states were used: hunger, happiness, fear, and socialness (the current need to socialize with a human).

Figure 7: A child interacting with the virtual animals in the installation. Photo courtesy of Norrköpings Visualiseringscenter.

While the animals were unable to display emotions in a sophisticated way (e.g.
through blended animations of different affect types), they were able to portray their current thoughts and feelings to the visitors through thought bubbles appearing over their heads. This addition was made partway through the project, when it had become evident that the reason behind the characters’ actions was not clear to the visitors.

A user study was planned and executed during the Animalistic project. The study was intended to answer the following questions. First, did the animations of the virtual animals convey the desired behavior to the visitors? Secondly, what movements did the users employ when asked to interact with the animals? A school class of pupils around age 11 volunteered for the user study. The first part of the study was conducted using a questionnaire; the second part used video recordings of the children as they interacted with the installation. The first question was answered partly during the study, resulting in a slight change of the animations for the final installation. The user-interaction part yielded few results, as most children did not interact with the animals at all. The Animalistic project gave a much-needed opportunity to try out the agent architecture in a real application.

7. Conclusions and Future Work

In this paper we outlined our line of work on emotional human-automation interaction, with the intention of modeling realistic, believable characters and, more generally, of devising a module for managing emotions in human-AI interaction, to be potentially incorporated into any agent architecture. We focused on presenting how to introduce emotions into behavior trees to enable a dynamic, changing and adapting behavior. In fact, by taking advantage of known psychology theories concerning emotions, the EmoBT approach lets emotions affect decision making in a human-like and intuitive way.
Currently, we are developing a theoretical framework for modeling emotional empathetic interaction in the context of car interfaces. The research goal is to monitor the emotions of drivers and to enable novel driver-car interactions. In the context of driving, the emotional state of drivers plays a critical role in road safety, as it has been shown that negative emotions like anger can significantly increase accidents [26, 27]. Future work will include the deployment of the agent architecture presented above in the context of a companion agent devised to assist and help drivers, by providing emotion-enabled interventions in risky situations that might arise due to external circumstances and/or the driver’s state of mind. Zepf et al. [28] found that events associated with traffic management were the most frequent source of negative states (e.g., frustration, annoyance, stress) for drivers, probably because they counteracted the achievement of the drivers’ main goal (i.e., reaching the destination). If the car automation detects one or several of the highlighted triggers (e.g., red lights, speed signs), it might provide information about any potential time loss, to help correct unnecessary biases or expectations of the driver. As the authors suggested, this information may help appease the negative mood and help overlook potential goal-incongruent events. In [28] another important trigger eliciting negative emotional states was high traffic density, which imposes high cognitive demands. A possible intervention to help the driver relax may be automatically activating the self-driving mode so that the driver can divert their attention. Alternatively, the car could engage more actively with the driver, e.g., by recommending relaxing music on a dedicated radio station.

References

[1] A. Goldman, et al., Theory of mind, in: The Oxford Handbook of Philosophy of Cognitive Science, volume 1, Oxford University Press, 2012.

[2] L. Dissing, T.
Bolander, Implementing theory of mind on a robot using dynamic epistemic logic, in: C. Bessiere (Ed.), Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, ijcai.org, 2020, pp. 1615–1621. doi:10.24963/ijcai. 2020/224 . [3] S. Costantini, A. Formisano, V. Pitoni, An epistemic logic for modular development of multi-agent systems, in: N. Alechina, M. Baldoni, B. Logan (Eds.), Engineering Multi-Agent Systems - 9th International Workshop, EMAS 2021, Virtual Event, May 3-4, 2021, Revised Selected Papers, volume 13190 of Lecture Notes in Computer Science, Springer, 2021, pp. 72–91. doi:10.1007/978- 3- 030- 97457- 2_5 . [4] Why empathy has a beneficial impact on others in medicine: unifying theories, Frontiers in Behavioral Neurosciences (2015). [5] D. Isla, Handling complexity in the Halo 2 AI, in: Game Developers Conference, volume 12, 2005. [6] M. Iovino, E. Scukins, J. Styrud, P. Ögren, C. Smith, A survey of behavior trees in robotics and AI, Robotics and Autonomous Systems 154 (2022). URL: "https://doi.org/10.1016/j. robot.2022.104096". [7] A. Champandard, Getting started with decision making and control systems, Springer, 2008, pp. 257–264. [8] A. Champandard, Popular approaches to behavior tree design, www.aigamedev.com, 2007. [9] A. Champandard, Understanding behavior trees, www.aigamedev.com, 2007. [10] A. Champandard, Behavior trees for next-gen game ai, www.aigamedev.com, 2008. [11] B. Knafla, Introduction to behavior trees, 2011. [12] D. Isla, Building a better battle - the halo 3 ai objectives system, in: Game Developers Conference, (talk), 2008. [13] C. Peter, B. Urban, Emotion in Human-Computer Interaction, 2012, pp. 239–262. doi:10. 1007/978- 1- 4471- 2804- 5\%5F14 . [14] R. S. Lazarus, Emotion and adaptation, Oxford University Press, 1991. [15] A. Ali, Reflection on richard lazarus’ emotion and adaptation, British Journal of Psychiatry 209 (2016) 399–399. doi:10.1192/bjp.bp.115.178285 . [16] A. Ortony, G. 
L. Clore, A. Collins, The Cognitive Structure of Emotions, Cambridge University Press, 1988. doi:10.1017/CBO9780511571299 . [17] P. Grundström, Design and implementation of an appraisal module for virtual characters, 2012. M.Sc. thesis. [18] A. Johansson, Affective decision making in artificial intelligence: Making virtual characters with high believability, Ph.D. thesis, Linköping University Electronic Press, 2012. [19] A. Johansson, P. Dell’Acqua, Realistic Virtual Characters in Treatments for Psychological Disorders An Extensive Agent Architecture, in: A. Hast (Ed.), The Annual SIGRAD Conference, volume Special Theme: Computer Graphics in Healthcare, Uppsala University, Linköping Electronic Conference Proceedings, Uppsala, Sweden, 2007, pp. 46–52. [20] A. Johansson, P. Dell’Acqua, Emotional behavior trees, in: 2012 IEEE Conference on Computational Intelligence and Games (CIG), 2012, pp. 355–362. doi:10.1109/CIG.2012. 6374177 . [21] G. Loewenstein, E. Weber, C. Hsee, N. Welch, Risk as feelings, Psychological bulletin 127 (2001) 267–86. doi:10.1037/0033- 2909.127.2.267 . [22] J. Lerner, D. Keltner, Beyond valence: Toward a model of emotion-specific influences on judgement and choice, Cognition and Emotion 14 (2000) 473–493. doi:10.1080/ 026999300402763 . [23] J. S. Lerner, D. Keltner, Fear, anger, and risk., Journal of personality and social psychology 81 (2001) 146–159. [24] R. Raghunathan, M. T. Pham, All negative moods are not equal: Motivational influences of anxiety and sadness on decision making, Organizational Behavior and Human Deci- sion Processes 79 (1999) 56–77. URL: https://www.sciencedirect.com/science/article/pii/ S0749597899928388. doi:https://doi.org/10.1006/obhd.1999.2838 . [25] J. Maner, J. Richey, K. Cromer, M. Mallott, C. Lejuez, T. Joiner, N. Schmidt, Dispositional anxiety and risk-avoidant decision-making, Personality and Individual Differences 42 (2007) 665–675. doi:10.1016/j.paid.2006.08.016 . [26] G. Underwood, P. Chapman, S. Wright, D. 
Crundall, Anger while driving, Trans- portation Research Part F: Traffic Psychology and Behaviour 2 (1999) 55–68. URL: https://www.sciencedirect.com/science/article/pii/S1369847899000066. doi:https://doi. org/10.1016/S1369- 8478(99)00006- 6 . [27] SafeStates, Exploring key factors for risky driving, 2022. URL: https://www.safestates.org/ page/SRPFKeyFactors, accessed = 09-07-2022. [28] S. Zepf, M. Dittrich, J. Hernandez, A. Schmitt, Towards empathetic car interfaces: Emotional triggers while driving, CHI EA ’19: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (2019) 1–6. doi:10.1145/3290607.3312883 .