<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Stefania Costantini</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pierangelo Dell'Acqua</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Abeer Dyoub</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Monaldini</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Information Engineering</institution>
          ,
          <addr-line>Computer Science and Mathematics</addr-line>
          ,
          <institution>University of L'Aquila</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Dept. of Science and Technology, Linköping University</institution>
          ,
          <country country="SE">Sweden</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Care Robots are the future of healthcare giving for patients with chronic diseases and disabilities. To improve the quality of care-giving and the trustworthiness of these robots, they should be equipped with emotion recognition capabilities and empathetic behavior. In this work, we propose a framework for an empathy module to be incorporated in care robots, and then demonstrate the effectiveness of our proposal by means of an example.</p>
      </abstract>
      <kwd-group>
        <kwd>Human-centered computing</kwd>
        <kwd>human-automation interaction</kwd>
        <kwd>affective computing</kwd>
        <kwd>behavior trees</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Healthcare is shifting into patient-centered healthcare, with the objective to empower patients to become active participants in their care and to ensure better health outcomes. However, the nonclinical needs of patients, mental health and well-being, are frequently overlooked by contemporary patient-centered healthcare models. The COVID-19 pandemic has accelerated the development of robots and virtual assisted living that can help care for persons with disabilities and aging adults, both physically and emotionally. Given the intimate human-machine interaction in the case of care robots, it has become fundamental for these robots to demonstrate empathetic behavior. This would result in more productive and delightful interaction that contributes to the patient's well-being and mental health, and to the trust relation between the patient and the machine.</p>
      <p>Artificial Empathy (AE) refers to the development of AI systems, such as care robots or virtual agents, that are able to detect and respond to human emotions in an empathetic way. Interest in empathetic robots has been growing in academia and industry in recent years.</p>
      <p>The patient and her/his care robot (CR) can be seen as two agents in a multi-agent system (MAS). To achieve better results via cooperation and enhance mutual trust, agents interacting with other agents, and in particular with humans, must be able to reason about what these other agents should and can do, because the robot should support the human in accomplishing her tasks. Several agent-oriented programming languages and systems exist, many of them based upon computational logic (cf., e.g., [1] for a recent survey on such languages). We concentrate on such approaches, as logic-based agents are in principle verifiable and thus trustworthy, and more capable than other approaches of providing the user with explanations, as logical inference can be transposed into natural language (see, e.g., [2], Chapter VII).</p>
      <p>Theory of Mind (ToM) is starting to be applied to robotics [3]. It is often linked to so-called Affective Computing, a set of techniques able to elicit a human's emotional condition from physical signs, to enable the system to respond intelligently to human emotional feedback, and thus to enhance ToM activities by providing them with perceptions related to the user's emotional signs. In order to render these robots acceptable and even appreciated by users, they will have to be programmed so as to mimic basic social skills and behave in a socially acceptable manner. This means that their behaviour should be to some extent predictable by the user and conformant to social and ethical standards [4].</p>
      <p>Virtual Reality (VR) is defined as a technology that creates simulated environments to mimic real-world situations. Since the use of VR can turn threatening and tedious conditions into safe and enjoyable states, in recent years this technology has been considered for the treatment of many mental illnesses, especially anxiety disorders [5]. One approach that can be implemented in VR to treat anxiety is exposure therapy (VRET); it is client-centered, helps clients confront fear-inducing stimuli through guided exposures, and is often paired with cognitive-behavioral therapy. For the sake of improving care giving by care robots, in this work we propose a framework for an empathy management module, based upon an enhanced notion of Behavior Trees.</p>
      <p>Ital-IA 2023: 3rd National Conference on Artificial Intelligence,
organized by CINI, May 29–31, 2023, Pisa, Italy
* Corresponding author.
stefania.costantini@univaq.it (S. Costantini);
pierangelo.dellacqua@liu.se (P. Dell’Acqua);
abeer.dyoub@univaq.it (A. Dyoub);
andrea.monaldini@student.univaq.it (A. Monaldini)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0
International (CC BY 4.0).</p>
      <p>CEUR Workshop Proceedings (CEUR-WS.org)</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background: Behavior Trees</title>
      <p>Behavior Trees (BTs) were invented as a tool to enable
modular AI in computer games. A behavior tree is
essentially a mathematical model of plan execution, where
each element (task, action, etc.) of a plan is associated
with a node in the tree. Their strength comes from their
ability to create complex tasks composed of simple tasks,
without worrying about how the simple tasks are implemented.
In the last decade, BTs have received an increasing amount
of attention in computer science, robotics, control
systems, and video games. For a comprehensive survey of
BTs in Artificial Intelligence and Robotics applications, see
[6].</p>
      <p>In this section, we introduce a definition of BTs based
on the description of Champandard and Knafla [7, 8].</p>
      <p>A BT is a directed acyclic graph consisting of different
types of nodes, each one associated with executable code
(where such code enacts an element composing a plan).
In most cases, a BT is tree-shaped, hence the name.
However, unlike a traditional tree, a node in a BT can have
multiple parents, which allows the reuse of that part of the
BT. The traversal of a behavior tree starts at the top node.
When a node is traversed, the associated code is executed
(we say for short that the node is executed), returning
one of three states: success, failure, or running. In our
case, a BT is composed of the following types of nodes,
where the type denotes the kind of task related to node
execution:</p>
      <sec id="sec-2-1">
        <title>2.1. Leaf Nodes</title>
        <p>Action: An action represents a behavior that the
character can perform. The action returns success, failure, or
running state. An action is depicted as a white, rounded
rectangle.</p>
        <p>Condition: A condition checks an internal or external
state. It returns either success or failure. A condition is
represented as a gray, rounded rectangle.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Inner Nodes</title>
        <p>Sequence Selector: A sequence selector is a node that
typically has several child nodes that are executed
sequentially. If every child node returns success, then this
selector returns success. Should any child fail, the
selector immediately returns failure. If a child returns running,
the selector also returns running. A sequence selector is
depicted as a gray square with an arrow across the links
to its child nodes.</p>
        <p>Priority Selector: A priority selector has a list of child
nodes which it tries to execute one at a time, with respect
to the specified order, until one of them returns success.
If none of the children succeeds, then this selector
returns failure. If a child returns running, it returns running.
A priority selector is represented with a gray circle with
a question mark in it.</p>
        <p>Parallel Node: A parallel node executes all of its child
nodes in parallel. A parallel node can stop executing its
child nodes. One may specify the number of child nodes
that must execute successfully for the parallel node to
succeed, and those that must fail in order for the parallel
node to fail. A parallel node is depicted as a gray circle
with a P in it.</p>
        <p>Decorator: A decorator is a node that acts as a filter
that places certain constraints on the execution of its
single child node, without affecting the child node itself.
Decorators are represented as diamonds with descriptive
text inside.</p>
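        <p>As a concrete rendering of the node semantics above, consider the following minimal Python sketch. The class names and the callable-based Action leaf are our own illustrative choices, not taken from [6, 7, 8]:</p>

```python
from enum import Enum

class Status(Enum):
    SUCCESS = "success"
    FAILURE = "failure"
    RUNNING = "running"

class Action:
    """Leaf node: wraps a behavior as a callable returning a Status."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

class Sequence:
    """Sequence selector: succeeds only if every child succeeds;
    the first FAILURE (or RUNNING) is returned immediately."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Priority:
    """Priority selector: tries children in the specified order until
    one does not fail; fails only if every child fails."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.FAILURE:
                return status
        return Status.FAILURE

class Parallel:
    """Parallel node: ticks all children; succeeds when at least
    `success_threshold` children succeed."""
    def __init__(self, children, success_threshold):
        self.children = children
        self.success_threshold = success_threshold
    def tick(self):
        results = [child.tick() for child in self.children]
        if sum(r == Status.SUCCESS for r in results) >= self.success_threshold:
            return Status.SUCCESS
        if any(r == Status.RUNNING for r in results):
            return Status.RUNNING
        return Status.FAILURE
```

        <p>For instance, a priority selector over a failing sequence and a succeeding action returns success, because the scan falls through to the second child.</p>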
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. A Framework for Empathetic Agents</title>
      <p>In this section, we present the proposed framework for
emotional empathetic agents, depicted in Figure 1, and
discuss its main components.</p>
      <p>Sensors: An agent perceives
its environment through sensors and acts upon it through
actuators.</p>
      <p>Emotion Recognition: this module receives the raw data
from the sensory input and processes it to synthesize the
user’s affective state. The output ⟨p, e⟩
differentiates between the input data p from the sensor about the
environment and the synthesized emotional state e.</p>
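      <p>A minimal sketch of this output pair, assuming dictionary-valued sensor data and emotion intensities (the field names are illustrative, not prescribed by the framework):</p>

```python
from dataclasses import dataclass

@dataclass
class Percept:
    """Output of the Emotion Recognition module: environment data
    paired with the synthesized emotional state."""
    environment: dict  # processed sensor data about the environment
    emotion: dict      # emotion intensities, e.g. {"fear": 0.7, "sadness": 0.1}

percept = Percept(environment={"user_present": True},
                  emotion={"fear": 0.7, "sadness": 0.1})
```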
      <p>Affective Appraisal refers to the process in which
events from the environment are evaluated in terms of
their emotional significance. Appraisal theory is the
theory in psychology that emotions are extracted from our
evaluations (appraisals or estimates) of events that cause
specific reactions in different people.</p>
      <p>Affective State: The term affective state refers to how
an entity is currently feeling, that is, the product of its
emotions at a certain moment in time. Within an
emotional agent architecture [9], emotions were represented
as signals coming from the Affective Appraisal module.
(The signals correspond to what in neuroscience is the concentration
of certain chemical substances in the human brain; the signal we use
is a simplified representation of the concentration levels.)</p>
      <p>To take emotions into consideration, Johansson and
Dell’Acqua [10] extended the definition of behavior trees
and introduced a new type of selector, called the
emotional selector. They called the resulting model the
emotional behavior tree (EmoBT).</p>
      <p>Emotional Selector: reorders its child nodes
according to a number of identified relevant factors and the
affective state of the agent (see Section 5). Once the
ordering has been established based upon the probabilities
of the nodes, the emotional selector behaves as a priority
selector. When it completes its execution and is re-executed,
the ordering of the nodes must be re-calculated. An
emotional selector is represented with a gray circle with the
character ’E’ in it.</p>
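      <p>A minimal sketch of such a selector, assuming a caller-supplied weighting function that stands in for the procedure detailed in Section 5 (lower weight = more desirable node; the class and parameter names are ours, and statuses are plain strings to keep the sketch self-contained):</p>

```python
class EmotionalSelector:
    """Reorders its children by an emotion-dependent weight, then
    behaves like a priority selector. `weight_fn(child)` is assumed
    to implement the weighting of Section 5."""

    def __init__(self, children, weight_fn):
        self.children = children
        self.weight_fn = weight_fn

    def tick(self):
        # The ordering is re-calculated on every (re-)execution.
        for child in sorted(self.children, key=self.weight_fn):
            status = child.tick()
            if status != "failure":
                return status  # "success" or "running" stops the scan
        return "failure"
```

      <p>After the sort, execution is exactly that of a priority selector over the reordered children.</p>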
    </sec>
    <sec id="sec-5">
      <title>4. Emotional Behavior Trees</title>
      <p>It is not straightforward to couple behavior trees with
emotions to mimic human emotional decision making. If
we wish to have natural and interesting agent
behavior, it is important that characters behave in an emotional
way. It could be claimed that it is possible to incorporate
emotions into behavior trees by merely using emotions
in the conditions. However, doing so may create large,
cumbersome behavior trees that are difficult to manage.
For each behavior, a specific set of conditions would
have to be placed on emotional states. These conditions
would most likely take the form of checking the
emotional values against a fixed threshold, which would
disable a subtle emotional effect on decision making. Thus,
this approach would most likely lead to a large behavior
tree with numerous nested conditions, making it
difficult to construct and manage. Furthermore, in this work
we focus on emotion-based interaction between humans
and machines, and a human (the end user), with emotions
modeled as condition-action, would certainly feel that
the system is programmed to react to her/his inputs not
in a genuine emotional way, but rather in a rational way,
as a machine typically does.</p>
    </sec>
    <sec id="sec-6">
      <title>5. Modelling VRET-Companion Behavior via EmoBTs</title>
      <p>We present here a simple VRET-Companion behavior
scenario with an emotional behavior tree. This example aims
to illustrate the usefulness of our model in a VRET
scenario. VRET-Companion is a virtual character that plays
the role of the user's companion in a virtual reality setting.
This character tries to approach the user (represented as
another character) and interact with her/him.</p>
      <p>Similar to [10], we consider three relevant aspects
for our application: Risk, Time, and Planning. Below we
show how to incorporate these three aspects into EmoBTs
by following methodological steps.</p>
      <p>1. Objectives: To model VRET-Companion characters.</p>
      <p>2. Relevant aspects: {Risk, Time, Planning}.</p>
      <p>Risk perception: The perceived risk of an action
is greatly influenced by emotions [11].
Studies by Lerner and Keltner [12] have shown that
happy and angry people are willing to accept
greater risks, while fearful people are more
pessimistic. Maner et al. [13] show that anxiety is
connected to risk-avoidance, while sadness
allows for greater risks, but instead gives a focus
on high rewards.</p>
      <p>3. Emotions: {fear, fatigue, sadness}. Only three
emotions were identified, for simplicity of exposition.</p>
      <p>4. Definition of Risk-value (Risk Assessment): Risk
has to do with how dangerous the character
believes a situation is. A risk value is between
0 and 1; 0 being no risk at all, and 1 being
extremely dangerous. The risk value measures
the probability of risk. EmoBTs cannot reason
about the risk of performing an action, but we
allow the designer to add a risk value to each leaf
node in the tree, and derive the associated risk
for the inner nodes.</p>
      <sec id="sec-7-1">
        <title>Action</title>
        <p>An action has a risk value that is set by the designer (0 by default).</p>
      </sec>
      <sec id="sec-7-2">
        <title>Condition</title>
        <p>A condition has a risk value that is set by the designer (0 by default).</p>
      </sec>
      <sec id="sec-7-3">
        <title>Sequence Selector</title>
        <p>Since a sequence selector performs every child node of the sequence, the risks of all child nodes must be combined. The overall risk value of a sequence selector s with n child nodes c₁, …, cₙ is calculated as:</p>
        <p>Risk(s) = 1 − ∏ᵢ₌₁ⁿ (1 − Risk(cᵢ))</p>
      </sec>
      <sec id="sec-7-4">
        <title>Priority Selector</title>
        <p>A priority selector p only executes one of its child nodes. Since we cannot determine in advance which node will be executed, we define the risk value of p as the average of the risks of its n child nodes c₁, …, cₙ:</p>
        <p>Risk(p) = (∑ᵢ₌₁ⁿ Risk(cᵢ)) / n</p>
      </sec>
      <sec id="sec-7-5">
        <title>Parallel Node</title>
        <p>Since all of the child nodes c₁, …, cₙ of a parallel node p are executed, the risk is defined as:</p>
        <p>Risk(p) = 1 − ∏ᵢ₌₁ⁿ (1 − Risk(cᵢ))</p>
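        <p>The risk-propagation rules above can be written compactly as follows (the function names are ours, and the priority-selector rule follows the averaging reading of the definition):</p>

```python
from math import prod

def sequence_risk(child_risks):
    # A sequence runs every child: combine risks as complementary probabilities.
    return 1 - prod(1 - r for r in child_risks)

def priority_risk(child_risks):
    # Only one child will run, but which one is unknown in advance: average.
    return sum(child_risks) / len(child_risks)

def parallel_risk(child_risks):
    # All children run in parallel: same combination as a sequence.
    return 1 - prod(1 - r for r in child_risks)

def decorator_risk(child_risk):
    # A decorator inherits the risk of its single child.
    return child_risk
```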
      </sec>
      <sec id="sec-7-6">
        <title>Decorator</title>
        <p>The risk value of a decorator is the same as the one of its child node.</p>
        <p>5. To mimic how affective states influence decision making, we introduce emotional weights for every relevant factor. Below we show the Risk factor. Let e₁⁺, …, eₙ⁺ (resp. e₁⁻, …, eₘ⁻) be the values of the emotions that positively (resp., negatively) affect the perception of risk. We define the emotional weight for risk as:</p>
        <p>ω_Risk = (∑ᵢ₌₁ⁿ eᵢ⁺) / n − (∑ⱼ₌₁ᵐ eⱼ⁻) / m</p>
        <p>6. For every relevant aspect we define the weight of every child node c of any emotional selector. We consider the Risk aspect. The weight for risk for a child node c is calculated as:</p>
        <p>w_Risk,c = (1 − ω_Risk × γ) × Risk(c)</p>
        <p>where Risk(c) is the risk value for the child node c. Note that w_Risk,c should be clamped to the interval [0, 1] since it represents a probability. The variable γ determines how much emotions affect the weights. Its value must be between 0 and 1, where 0 signifies no emotional impact and 1 corresponds to full emotional impact.</p>
        <p>The weight for time is calculated in the following way:</p>
        <p>w_time,c = 1 − 1 / (1 + a × max((1 − γ) + γ × t_time, 0))</p>
        <p>where a is set to a value that fits the time span used in the simulation, and t_time is the emotional effect delay time, calculated as:</p>
        <p>t_time = t_c + ((T − t_c) / 2) × (1 − γ × ω_opt)</p>
        <p>where ω_opt is the emotional impact on optimism. The weight for planning is calculated as:</p>
        <p>w_plan,c = 1 − 1 / (1 + b × max((1 − γ) + γ × plan(c), 0))</p>
        <p>where b is set to fit the planning amount of the simulation.</p>
        <p>7. The overall weight of a child node c is calculated as:</p>
        <p>w_c = k_R × w_Risk,c + k_T × w_time,c + k_P × w_plan,c</p>
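        <p>The risk-related part of this weighting can be sketched as follows. The symbol names (ω, γ, and the k constants) are our reading of the degraded originals, and the weight functions are illustrative:</p>

```python
def emotional_weight(positive, negative):
    """omega for a factor: average positive emotional value minus
    average negative one (empty lists contribute zero)."""
    pos = sum(positive) / len(positive) if positive else 0.0
    neg = sum(negative) / len(negative) if negative else 0.0
    return pos - neg

def risk_weight(risk_value, omega_risk, gamma):
    """w_Risk,c = (1 - omega_Risk * gamma) * Risk(c), clamped to [0, 1]."""
    w = (1.0 - omega_risk * gamma) * risk_value
    return min(max(w, 0.0), 1.0)

def overall_weight(w_risk, w_time, w_plan, k_r=1.0, k_t=1.0, k_p=1.0):
    """w_c: linear combination; the k constants set each factor's importance."""
    return k_r * w_risk + k_t * w_time + k_p * w_plan
```

        <p>Note how a negative ω_Risk (e.g. a fearful state) inflates the weight of risky nodes, making them less desirable, in line with the psychological theories cited above.</p>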
      </sec>
      <sec id="sec-7-7">
        <title>Child Node Selection</title>
        <p>The constants k_R, k_T and k_P give importance to their respective factors.</p>
        <p>To select which child node to execute, we list
them in ascending order according to the weight
value w. Hence, the lower the value of w, the more
desirable the node.</p>
        <sec id="sec-7-7-1">
          <title>5.1. VRET-Companion Behavior</title>
          <p>There are different ways in which the VRET-Companion character
might interact with the user character. Here we would
like to show how the emotional selector can be used to
let the VRET-Companion choose the behavior which is
most suitable under the current emotional circumstances.</p>
          <p>We design a simple interaction scenario where the
character has the following simple interaction choices: it can
simply greet the user (’say hi’), then it can go away; it can
check the weather outside and, if there is sun, comment on the
weather; it can decide to start a conversation, or even
play music and start to dance, encouraging the user to
mimic the dance movements. The character should lie
down to rest when its energy is low; and it should maybe
go around looking for users. The emotional behavior
tree used for the example is depicted in Figure 2. In the
tree there is one emotional selector with four child nodes
(two sequence selectors s1, s2, and two simple action
nodes n3 and n4). This emotional selector contains the
set of interaction options the character has when it
decides to approach the user. The first child node contains
a sequence of two actions: to say hi, then go away.</p>
          <p>Figure 3 shows the amount of risk, planning, and time
intervals for every child node of the emotional selector.</p>
          <p>For this scenario we consider the emotions: fear,
sadness, and fatigue. The three factor-importance constants are all set to 1.0,
and the remaining constants are set to 0.8, 0.9, 0.5,
0.6, and 0.6 respectively. We use fear as the negative
emotional impact for risk, with a static value of 1.0 as balance,
since we do not include any positive emotional impact.
For planning, we use fatigue as a positive emotional impact.
For time, we use sadness as a positive emotional impact.
For this example, we let one further constant be zero. The emotions
mentioned here are derived from the psychological theories
presented in Section 5.</p>
          <p>Figure 7: The weight values for the VRET companion example,
given different emotional states. When listed, each emotion
has the maximum value 1.</p>
          <p>We let the specified emotions take different values to
illustrate the effect this has on the action selection. In
Figures 4, 5, and 6 we list the overall weights for the different
factors given different emotional states. It can be easily
seen that emotions greatly affect the factor weights in
different ways, resulting in different overall weights for
the child nodes. The VRET companion example above
is simulated under different emotional states. In Figure
7, the weight values for each action are shown under
different emotional conditions. It can be seen that the
weight values change widely due to emotional impact.</p>
          <p>For example, when the character is afraid, s2 is the most
desirable choice because it is not risky. When the
character is sad, s1 is the most suitable one because it takes
much shorter time to execute. Finally, when the character
is tired, then s2 is selected since it consists of one action
that needs little planning. It is possible to manipulate
the order of the execution of the child nodes to force
the character to choose a less desirable node (action) by
assigning probabilities to child nodes.</p>
          <p>6. Related Work, Conclusions and Future Work</p>
          <p>In this work we outlined our line of work on emotional
human-automation interaction, with the intention of
modeling realistic, believable characters and, more
generally, of devising a module for managing emotions in
human-AI interaction, to be potentially incorporated in any agent
architecture.</p>
          <p>A relevant context of empathetic interaction is within
synthetic character applications. Several synthetic
characters have been developed where empathy and the
development of empathic relations played a significant role.
These include theatre, storytelling, and personal, social
and health education (cf., for a survey, [14]).</p>
          <p>Research on computational modeling of empathy has
shown that empathic capacity in interactive agents leads
to more trust, helps coping with stress and frustration,
and increases engagement [15]. Equipping artificial social
agents with empathic capabilities is, therefore, a crucial
and yet challenging problem.</p>
          <p>Previous existing proposals concerning empathy in
agents are discussed in the survey [14] (cf. also the
references therein). The approaches discussed in the survey
fall within one of two classes and, typically, are
tailored to a certain application domain. The novelty and
relevance of our approach is that: (i) it is fully general
(and thus can be exploited in any kind of application);
(ii) it encompasses both aspects, as an agent can be an
observer that empathizes with other agents, particularly
with human partners, and at the same time it can lead
the user to choose the right course of action.</p>
          <p>Currently, we are developing a theoretical framework
for modeling emotional empathetic interaction in the
context of car interfaces. The research goal is to monitor
the emotions of drivers and to enable novel driver-car
interactions.</p>
          <p>In future perspective, we aim to further generalize
our approach and test its applicability in a wider range
of contexts, with particular attention to healthcare and
teaching, where we intend to deploy solutions and
perform practical experiments.</p>
          <p>References</p>
          <p>[5] P. L. Anderson, M. Price, S. M. Edwards, M. A. Obasaju, S. K. Schmertz, E. Zimand, M. R. Calamaras, Virtual reality exposure therapy for social anxiety disorder: a randomized controlled trial, Journal of Consulting and Clinical Psychology 81 (2013) 751.</p>
          <p>[6] M. Iovino, E. Scukins, J. Styrud, P. Ögren, C. Smith, A survey of behavior trees in robotics and AI, Robotics and Autonomous Systems 154 (2022) 104096.</p>
          <p>[7] A. J. Champandard, Getting started with decision making and control systems, AI Game Programming Wisdom 4 (2008) 257–264.</p>
          <p>[8] B. Knafla, Introduction to behavior trees, Retrieved September 15 (2011) 2011.</p>
          <p>[9] A. Johansson, Affective decision making in artificial intelligence: Making virtual characters with high believability, Ph.D. thesis, Linköping University Electronic Press, 2012.</p>
          <p>[10] A. Johansson, P. Dell’Acqua, Emotional behavior trees, in: 2012 IEEE Conference on Computational Intelligence and Games (CIG), IEEE, 2012, pp. 355–362.</p>
          <p>[11] G. F. Loewenstein, E. U. Weber, C. K. Hsee, N. Welch, Risk as feelings, Psychological Bulletin 127 (2001) 267.</p>
          <p>[12] J. Lerner, D. Keltner, Fear, anger, and risk, Journal of Personality &amp; Social Psychology 81 (2001).</p>
          <p>[13] J. K. Maner, J. A. Richey, K. Cromer, M. Mallott, C. W. Lejuez, T. E. Joiner, N. B. Schmidt, Dispositional anxiety and risk-avoidant decision-making, Personality and Individual Differences 42 (2007) 665–675.</p>
          <p>[14] A. Paiva, I. Leite, H. Boukricha, I. Wachsmuth, Empathy in virtual agents and robots: A survey, ACM Transactions on Interactive Intelligent Systems 7 (2017) 1–40. doi:10.1145/2912150.</p>
          <p>[15] A. Paiva, F. Correia, R. Oliveira, F. P. Santos, P. Arriaga, Empathy and prosociality in social agents, in: B. Lugrin, C. Pelachaud, D. R. Traum (Eds.), The Handbook on Socially Interactive Agents: 20 Years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics, Volume 1: Methods, Behavior, Cognition, ACM / Morgan &amp; Claypool, 2021, pp. 385–432. doi:10.1145/3477322.3477334.</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Calegari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Ciatto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Mascardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Omicini</surname>
          </string-name>
          ,
          <article-title>Logic-based technologies for multi-agent systems: a systematic literature review</article-title>
          ,
          <source>Autonomous Agents and Multi-Agent Systems</source>
          <volume>35</volume>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Martinich</surname>
          </string-name>
          ,
          <article-title>Philosophy of language: Foundational articles</article-title>
          (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Costantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Formisano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pitoni</surname>
          </string-name>
          ,
          <article-title>An epistemic logic for modular development of multi-agent systems</article-title>
          , in:
          <source>Engineering Multi-Agent Systems: 9th International Workshop, EMAS 2021, Virtual Event, May 3-4, 2021, Revised Selected Papers</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>72</fpage>
          -
          <lpage>91</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Dyoub</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Costantini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Letteri</surname>
          </string-name>
          ,
          <article-title>Care robots learning rules of ethical behavior under the supervision of an ethical teacher (short paper)</article-title>
          ,
          in:
          <source>Joint Proceedings of the HYDRA 2022 Workshop and the 29th RCRA 2022 Workshop, co-located with the 16th International Conference on Logic Programming and Non-monotonic Reasoning (LPNMR 2022), Genova Nervi, Italy, September 5, 2022</source>
          , volume
          <volume>3281</volume>
          of CEUR Workshop Proceedings, CEUR-WS.org, 2022, pp. 1-8. URL: https://ceur-ws.org/Vol-3281/paper1.pdf.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>