<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Handling robot sociality: a goal-based normative approach</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Patrizia Ribino</string-name>
          <email>patrizia.ribino@icar.cnr.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Carmelo Lodato</string-name>
          <email>carmelo.lodato@icar.cnr.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ignazio Infantino</string-name>
          <email>ignazio.infantino@icar.cnr.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Istituto di CAlcolo e Reti ad Alte prestazioni (ICAR) Consiglio Nazionale delle Ricerche</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The increasing deployment of service robots devoted to various functions has given rise to the need for robots to demonstrate additional social capabilities beyond their primary functionality. To improve robot sociality, among other abilities, robots need the capability to interact with humans following social norms, using the same principles as humans do. In this work, we propose an extension of a goal-based normative framework to cover new abstractions, such as qualitative goals, social norms, and expectations, which constitute essential elements for handling robot sociality. Moreover, we have integrated this extended normative framework into a Nao robot platform. An implementation of the proposed framework is described and tested in a simulated environment.</p>
      </abstract>
      <kwd-group>
        <kwd>social robots</kwd>
        <kwd>social norms</kwd>
        <kwd>normative reasoning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        As a result of the increasing progress in the field of Artificial Intelligence,
robots are expected to become more and more available in everyday
environments. Among several issues, the integration of robots into society depends
on their capability to demonstrate socially acceptable behaviours, so as to be perceived
by humans as suitable partners in collaborations. To define socially acceptable
actions, we refer to the branch of socio-cognitive theory that has documented
the existence of two orthogonal dimensions in social judgement [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ].
      </p>
      <p>
        The social judgement of an individual can be represented through two
components: social utility and social desirability. Social utility refers to individuals'
capacity to satisfy the functional requirements of a given social environment.
It varies along an incompetence-to-competence horizontal axis that corresponds
to the perceived ability of the social target to reach social success. It pertains
to adaptive traits like skilled/unskilled, proactive/passive. For example,
selfsu ciency [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and being focused on goal achievement [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] are perceived as socially
useful behaviours. On the other hand, social desirability refers to the degree of
likeableness of a person in his/her relationships with others in a given social
environment. It varies along an unlikability-to-likability vertical axis that
corresponds to the perceived ability of the social target to gain social approval. It
concerns aspects such as politeness, honesty, and respect, to cite only a few.
      </p>
      <p>
        According to Sommet et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], socially useful behaviours are typically those
described as focused on the self, sustaining the practical fulfilment of one's
goals. Conversely, socially desirable behaviours are those defined as directed towards
others, involving benevolent interaction styles. From the human perspective,
showing the social utility of a robot can be easier than having it perceived as socially
desirable, because the latter requires demonstrating additional
social capabilities beyond the robot's primary functionality. Indeed, the social utility
of service robots deployed for various functions in public spaces such as airports,
hospitals, and logistic warehouses is readily perceived by humans. Conversely, to be
socially desirable, robots [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] need to show not only "human social" features (such as
the expression of emotions, the ability to conduct high-level dialogue, and the ability to develop
personality and social competencies), but also the capability to interact
using the same principles as humans do. As has emerged from cognitive and social
science, human interactions are fundamentally based on normative principles.
For example, many forms of interaction are institutionalised and pertain to the
political and economic structures of society, which are defined by rules and
prescribed by laws that enforce behaviour [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Other types of interaction are
based on conventions, such as on what side of the road people should drive [
        ].
Finally, most human interactions are often influenced by more profound social
and cultural standards, so-called social norms [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. A social norm is commonly
seen as [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]: a rule of behaviour such that individuals prefer to conform to it on
condition that they believe that (a) most people in their relevant network
conform to it, and (b) most people in their relevant network believe they ought to
conform to it and may sanction deviations. Norms directly identify possible
actions as desirable or undesirable in a given community and a particular context,
involving social expectations and guiding the choice of people's actions [
        <xref ref-type="bibr" rid="ref8 ref9">9, 8</xref>
        ].
Social expectations are people's beliefs about other people's behaviours and beliefs
in certain situations. Alongside social norms, expectations play an important role
in regulating social behaviours. Indeed, an individual may comply with social
norms in the presence of relevant expectations, but (s)he may not follow them
in their absence [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        A current challenge is how to incorporate norm processing into robotic
architectures, because it requires addressing several issues, such as the specification of
social norms, how they can be activated, how social plans can be generated for
expressing social behaviours, the resolution of conflicts, and the acquisition of new
norms [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The contribution of this paper is to address social desirability as
the ability of a robot to show itself as conforming to social norms. In so doing,
we extend the approach we presented in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] by introducing the concept
of qualitative goals and revising the concept of achievement goals for modelling
different objectives of social robots. Then, we extend the definition of norms
to cover the peculiarities of social norms by introducing desirability operators and
expectations. Finally, we have integrated our normative framework into a robotic
platform. An implementation of our approach is tested on a simulated humanoid
Nao robot by using Choregraphe [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], a user-friendly application for controlling
robots, creating behaviours and accessing data acquired by the sensors.
      </p>
      <p>The rest of the paper is organised as follows. Section 2 presents an overview of
related works. Section 3 presents the theoretical foundations of the proposed
approach. Then, in Section 4, a case study on robot sociality is presented.
Finally, in Section 5, conclusions are drawn.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Works</title>
      <p>
        Soon, social robots will play an ever more significant role, working for and in
cooperation with humans. In so doing, they should show social capabilities [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]
such as interacting with humans naturally. An emerging challenge is to provide
a robot with normative reasoning so that it behaves in compliance with the same
social norms as humans do. To the best of our knowledge, only a few recent
works address this issue in an explicit and general way, and much work must
still be done to incorporate sophisticated norm processing into robotic
architectures. In [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], the authors present a framework for planning and executing social
plans, in which social norms are explicitly represented in a domain- and
language-independent form. In [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], Brinck et al. discuss the role of social norms in the
design of human-robot interactions, focusing on the dynamic information that
a robot needs to comply with social norms. In particular, they pay attention
to three elements (gaze and face, place in space, and orientation, posture and
movement) that are important sources of social information. An initial step
toward a cognitive-computational model of norms, delineating core properties
of the human norm system, contrasting two models of a computational norm
system, and deriving implications for how robotic architectures could implement
such a norm system, is described in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In that work, the authors focus on modelling
norms as directly and indirectly connected networks, discussing mechanisms of
co-activation of rules that are connected to other norms. Finally, in [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], an
approach is presented for creating a computational model of social norms based on identifying
values that are considered relevant in a given culture. Appropriate metrics
quantify such values, and social norms are treated as requirements for maximising
such metrics. In so doing, the authors introduce a model of the concrete beliefs of the
actors that are relevant to the social scene.
      </p>
      <p>In this work, we propose a normative approach that exploits the
advantages of goal modelling to make social robots able to reason about dynamic
situations pro-actively. In so doing, we introduce the concept of qualitative goals for
modelling the pursuit of social values by a robot. Then we define social norms by
introducing desirability operators for representing preferences about acceptable
behaviours. Finally, we formally define expectations as a new mental concept
that a robot uses as a motivator for pursuing social values by following social norms.</p>
    </sec>
    <sec id="sec-4">
      <title>Goal-based normative framework for social robots</title>
      <p>
        A widely accepted approach for developing intelligent agents (both robots and
bodiless agents) is the cognitive approach, where agents are modelled using
mental concepts such as beliefs, goals, plans, and rules. Among them, to
develop agents able to reason about dynamic contexts pro-actively, a fundamental
abstraction is the concept of goal. A rich literature addresses issues of goal
modelling for intelligent agents, defining a great variety of types of goals [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. The
achievement goal is the most used kind of goal. It models the most recurring
functional requirements of this kind of system. The well-known cognitive
definition is [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]: an achievement goal represents a desired state of the world that
an agent wants to reach.
      </p>
      <p>On the other side, to provide intelligent agents with normative reasoning,
a fundamental abstraction to be modelled is the concept of the social norm.
Social norms are behavioural rules considered acceptable in determined contexts,
which refer to the standard of desirability in a community. Thus, social norms
are behavioural expressions of abstract social values (such as politeness, dignity,
hospitality, honesty, etc.) that underlie the preferences of a group in various
situations. For example, the norm "everyone should queue at the ticket office"
involves the social values of equality, efficiency and respect for orderliness. In
other words, widely accepted social values provide the grounds for complying with or
rejecting certain behavioural norms.</p>
      <p>Among several functions that social norms serve in the society, they mainly
provide guidelines for expected modes of social behaviour. Thus, for keeping
society functioning, an important role is played not only by the direct rules but
also by the expectations about the conduct of the members of the society. If
few members of the group follow the norm (e.g., do not use a cellphone during
class), then the norm is weakened, and it may no longer be treated as binding.
If few members of the group expect others to follow the instruction, it becomes
optional and loses its character as a norm. These peculiarities distinguish norms
from goals because the latter can hold even when individuals disregard entirely
other community expectations.</p>
      <p>
        A cognitive definition of social norm [
        <xref ref-type="bibr" rid="ref16 ref9">16, 9</xref>
        ] states that:
- An agent represents the instruction to [not] perform a specific action or
general class of actions.
- An agent believes that a number of individuals in the group in fact (do not)
follow the norm.
- An agent believes that a sufficient number of individuals in the group
expect others in the group to (not) follow the norm.
      </p>
      <p>In this work, we add a further condition to the previous ones. For
a social robot to conform its behaviour to the social norms of a group as humans do,
it must also share the same social values as the members of that community.
Thus, we extend the previous definition with the following condition:
- An agent wants to pursue a social value.</p>
      <p>
        To incorporate norm processing into social robots, we propose a goal-based
normative approach that extends our previous work [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] to cover both
functional and non-functional aspects of a social robot. In particular, to
implement the feature of sociality, we extended the definition of norm to cover
the peculiarities of social norms by introducing the concept of
expectation. Moreover, we have proposed the notion of qualitative goals for modelling
the pursuit of social values.
      </p>
      <p>
        Before defining norms and goals, we need to introduce the definition of the
state of the world [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], which is fundamental for what follows. The state of the
world represents a set of declarative information about events occurring within
the environment, and relations among events, at a specific time. An event can be
defined as the occurrence of some fact that can be perceived by or communicated
to an intelligent agent. Events can be used to represent any information
that can characterise the situation of an interacting user, as well as the set of
circumstances in which the intelligent agent operates at a specific time.
      </p>
      <p>Definition 1 (State of the world).</p>
      <p>Let D be the set of concepts defining a domain. Let L be a first-order logic defined on D,
with ⊤ a tautology and ⊥ a logical contradiction, where an atomic formula p(t1, t2, ..., tn) ∈ L
is represented by a predicate applied to a tuple of terms (t1, t2, ..., tn) ∈ D, and the predicate
is a property of or relation between such terms that can be true or false.</p>
      <p>A state of the world at a given time t (Wt) is a subset of atomic formulae whose values
are true at time t:</p>
      <p>Wt = {p1(t1, t2, ..., th), ..., pn(t1, t2, ..., tm)}</p>
      <p>Definition 1 is based on the closed-world hypothesis, which assumes that all facts that
are not in the state of the world are false. In the next sections, we
introduce the elements of the proposed approach.</p>
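      <p>To make Definition 1 concrete, a state of the world can be sketched as a set of ground atoms evaluated under the closed-world hypothesis. The following Python snippet is an illustrative sketch; the class and atom names are ours, not part of the framework:</p>
      <p>
```python
# Illustrative sketch of Definition 1: a state of the world W_t as a set
# of ground atoms. Under the closed-world hypothesis, any atom that is
# not in the set is considered false.
class StateOfTheWorld:
    def __init__(self, atoms=()):
        # each atom is a tuple: (predicate, term_1, ..., term_n)
        self.atoms = set(atoms)

    def holds(self, *atom):
        # closed world: absence means falsity
        return atom in self.atoms

    def add(self, *atom):
        # record that a new fact has become true
        self.atoms.add(atom)

w_t = StateOfTheWorld({("want", "be_social"), ("want", "send", "packet")})
assert w_t.holds("want", "be_social")
assert not w_t.holds("is", "person")  # not asserted, hence false
```
      </p>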
      <sec id="sec-4-1">
        <title>Types of Goals</title>
        <p>
          An achievement goal represents a desired state that has to be achieved. It
expresses an objective which is not currently fulfilled and which the agent,
performing the appropriate actions, acts to reach. To define an achievement goal,
we extended the general definition of goal proposed in [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
        <p>Definition 2 (Achievement Goal). Let D, L and p(t1, t2, ..., tn) ∈ L be as
previously introduced in Definition 1. Let tc ∈ L, fs ∈ L and fc ∈ L be formulae that may be
composed of atomic formulae by means of the logic connectives AND (∧), OR (∨) and NOT
(¬). An Achievement Goal is a triple ⟨tc, fs, fc⟩ where tc (trigger condition) is a condition
to evaluate over a state of the world Wt, indicating when the goal may be actively pursued; fs (final
state) is a condition to evaluate over a state of the world Wt+Δt, indicating when the goal is eventually
addressed; and fc (failure condition) is a condition to evaluate over a state of the world Wt+Δt,
indicating when the goal is no longer applicable. An achievement goal is:
i) active if tc(Wt) ∧ ¬fs(Wt) = true
ii) addressed if fs(Wt+Δt) = true
iii) dropped if fc(Wt+Δt) = true</p>
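        <p>As an illustration, Definition 2 can be encoded directly, with tc, fs and fc as Boolean predicates over a state of the world (here, a set of strings). This is a sketch under our own naming, not the framework's implementation:</p>
        <p>
```python
# Sketch of Definition 2: an achievement goal as the triple (tc, fs, fc),
# each component a predicate over a state of the world (a set of atoms).
class AchievementGoal:
    def __init__(self, tc, fs, fc):
        self.tc, self.fs, self.fc = tc, fs, fc

    def is_active(self, w):     # i)  tc(Wt) and not fs(Wt)
        return self.tc(w) and not self.fs(w)

    def is_addressed(self, w):  # ii) fs(Wt+dt)
        return self.fs(w)

    def is_dropped(self, w):    # iii) fc(Wt+dt)
        return self.fc(w)

# hypothetical "send a packet" goal
goal = AchievementGoal(
    tc=lambda w: "want(send_packet)" in w,
    fs=lambda w: "sent(packet)" in w,
    fc=lambda w: "office_closed" in w,
)
assert goal.is_active({"want(send_packet)"})
assert goal.is_addressed({"sent(packet)"})
assert goal.is_dropped({"office_closed"})
```
        </p>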
        <p>On the contrary, a qualitative goal is a kind of goal that is perceived rather
than fulfilled. It is a goal for which the satisfaction criteria are not defined in a
clear-cut way.</p>
        <p>Definition 3 (Qualitative Goal). Let D, L and p(t1, t2, ..., tn) ∈ L be as previously
introduced in Definition 1. Let tc ∈ L, sc ∈ L and fc ∈ L be formulae that may be composed of
atomic formulae by means of the logic connectives AND (∧), OR (∨) and NOT (¬).</p>
        <p>A qualitative goal is a tuple ⟨tc, qs, sc, fc⟩ where tc (trigger condition) is a condition
to evaluate over a state of the world Wt, indicating when the qualitative goal may be actively pursued;
qs (qualitative state) is the state to head toward; sc (suspending condition) is a condition
to evaluate over a state of the world Wt+Δt, indicating when the qualitative goal has to be suspended;
and fc (failure condition) is a condition to evaluate over a state of the world Wt+Δt, indicating when
the goal is no longer applicable. A qualitative goal is:
i) active if tc(Wt) ∧ ¬sc(Wt) ∧ ¬fc(Wt) = true
ii) suspended if sc(Wt+Δt) ∧ ¬fc(Wt+Δt) = true
iii) dropped if fc(Wt+Δt) = true</p>
        <p>A qualitative goal never quite reaches the state it is heading toward, but instead
gets closer and closer. Thus, once activated, a qualitative goal is continuously
pursued until it is suspended or dropped.</p>
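        <p>Analogously, the qualitative goal of Definition 3 can be sketched; note that it carries no final state, only a qualitative state qs to head toward. Again, this is an illustrative encoding of ours:</p>
        <p>
```python
# Sketch of Definition 3: a qualitative goal (tc, qs, sc, fc) has no
# final state; once active it is pursued until suspended or dropped.
class QualitativeGoal:
    def __init__(self, tc, qs, sc, fc):
        self.tc, self.qs, self.sc, self.fc = tc, qs, sc, fc

    def is_active(self, w):     # i)  tc(Wt) and not sc(Wt) and not fc(Wt)
        return self.tc(w) and not self.sc(w) and not self.fc(w)

    def is_suspended(self, w):  # ii) sc(Wt+dt) and not fc(Wt+dt)
        return self.sc(w) and not self.fc(w)

    def is_dropped(self, w):    # iii) fc(Wt+dt)
        return self.fc(w)

# hypothetical "to be social" goal
be_social = QualitativeGoal(
    tc=lambda w: "want(be_social)" in w,
    qs="be_social",
    sc=lambda w: "nobody_around" in w,
    fc=lambda w: "dropped(be_social)" in w,
)
assert be_social.is_active({"want(be_social)"})
assert be_social.is_suspended({"nobody_around"})
```
        </p>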
        <p>In the context of social robots, the concept of achievement goals allows us to
represent functional requirements that a social robot has to be able to satisfy.
Conversely, a qualitative goal enables us to describe the pursuit of a social value
that cannot be described by means of a clear condition to be reached: the agent has
to continuously perform actions that contribute positively to sustaining a
quality state.</p>
      </sec>
      <sec id="sec-4-2">
        <title>Social Norms</title>
        <p>
          In the following, we provide an explicit representation of social norms and of the robot's
expectations. In particular, we adapt the definition of norm presented in [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]
for representing social norms, introducing the desirability operators Desirable,
Undesirable and Indifferent [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. Desirability operators represent preference in
a wide sense. The following principles underpin the desirability operators:
        </p>
        <p>if φ ↔ ψ then Des φ ↔ Des ψ (1)
Des φ → ¬Des ¬φ (2)
Des φ → ¬Undes φ (3)</p>
        <p>In our context, φ designates a proposition asserting that an act or a state
of affairs is done or reached. Thus, Des φ is read as "it is desirable that the
situation described by the descriptive sentence φ is realised". In particular, (3)
expresses that something cannot be desirable and undesirable at the same time.
Definition 4 (Social Norm). Let D, L, and p(t1, t2, ..., tn) ∈ L be as previously
introduced in Definition 1. Let α ∈ L and β ∈ L be formulae that may be composed of
atomic formulae by means of the logic connectives AND (∧), OR (∨) and NOT (¬). Moreover,
let Desop = {Desirable, Undesirable, Indifferent} be the set of desirability operators.
A Social Norm is defined by the elements of the following tuple:
n = ⟨p, qs, α, β, dα⟩</p>
        <p>where
- p is the Position the norm refers to. A Position indicates the status of an individual in a
society. The symbol ∗ means that the norm refers to anyone.
- qs is a quality state. It represents the social value the norm underlies.
- α ∈ L is a formula expressing the set of actions and/or states of affairs that the norm
disciplines.
- β ∈ L is a logic condition (to evaluate over a state of the world Wt) under which the norm
is applicable.
- dα ∈ Desop is the desirability operator applied to α that the norm prescribes for sustaining the
quality state qs in a state of the world Wt+Δt:</p>
        <p>dα(α):
α(Wt+Δt) = true if dα = Desirable
¬α(Wt+Δt) = true if dα = Undesirable
α(Wt+Δt) ∨ ¬α(Wt+Δt) = true if dα = Indifferent
(4)</p>
        <p>Let us consider a society where politeness is considered a shared social
value. A social norm such as "It is desirable that a guy gives up
his seat if an elderly person is standing up" prescribes an acceptable behaviour.
According to Definition 4, the previous norm applies to a guy (i.e., the
position) and it prescribes that, in a given state of the world where an elderly
person is standing up (i.e., β(Wt) = true), the action "give up own seat" is
expected to be true in a consecutive state of the world (i.e., α(Wt+Δt) = true).</p>
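        <p>The norm of the example can be sketched as data, with α and β modelled as predicates over the state of the world and the desirability operator as a tag. This is a minimal sketch of Definition 4; all names are ours:</p>
        <p>
```python
# Sketch of Definition 4: a social norm (p, qs, alpha, beta, d_alpha).
class SocialNorm:
    def __init__(self, position, qs, alpha, beta, d):
        self.position, self.qs = position, qs
        self.alpha, self.beta = alpha, beta
        self.d = d  # "Desirable" | "Undesirable" | "Indifferent"

    def is_applicable(self, w):
        # the norm applies in Wt when its condition beta holds
        return self.beta(w)

# "It is desirable that a guy gives up his seat if an elderly person
# is standing up"; the position restricts who the norm binds.
give_up_seat = SocialNorm(
    position="guy",
    qs="be_polite",
    alpha=lambda w: "gave_up(seat)" in w,
    beta=lambda w: "standing(elderly)" in w,
    d="Desirable",
)
assert give_up_seat.is_applicable({"standing(elderly)"})
assert give_up_seat.alpha({"gave_up(seat)"})  # state prescribed by (4)
```
        </p>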
        <p>As said before, a further important role is played by expectations. By
definition, the preference to conform to a social norm is conditional: one may comply
with a social norm in the presence of the relevant expectations, but not obey
the norm in their absence. We initially
consider such expectations in a broad sense as a motivator for pursuing a
social value, thus leading an agent to follow the related social norms. Conversely,
repeated negative feedback about its expectations causes the loss of interest in that
social value, and the related social norms are then ignored 1. In this work, an expectation is
generated in certain circumstances, and it is satisfied when the expected state is true
in a consecutive state of the world, before its time to fulfil (if any).
Definition 5 (Expectations). Let D, L, and p(t1, t2, ..., tn) ∈ L be as previously
introduced in Definition 1. Let n = ⟨p, qs, α, β, dα⟩ be a Social Norm. Let
es ∈ L be a formula that may be composed of atomic formulae by means of the logic
connectives AND (∧), OR (∨) and NOT (¬). An Expectation is a couple ⟨n, es⟩ where n =
⟨p, qs, α, β, dα⟩ is the social norm generating the expectation and es (expected state) is
a condition to evaluate over a state of the world Wt+Δt, indicating when the expectation is eventually
satisfied. Moreover, an expectation may have a time to fulfil (ttf), that is, the time within
which the expected state must occur so that the expectation can be considered satisfied.
An Expectation is:</p>
        <p>- generated if β(Wt) = true
- satisfied if es(Wt+Δt) = true
1 In this paper, we give a simple role to expectations, but we conceived them to
be employed in more complex reasoning.</p>
      </sec>
      <sec id="sec-4-3">
        <title>Reasoning on Social Norms</title>
        <p>The following algorithms provide the reasoner with which a social robot decides to
comply with social norms according to its expectations. Algorithm 1 is the core
of the reasoner. It works on a triple of elements: the state of the world Wt, the
set of social values the robot wants to pursue, represented by a set of qualitative
goals (QG), and a set of social norms N. The state of the world Wt may change
during system execution because the robot may perform some actions, it may
perceive environmental changes, or it may capture events deriving from human
interactions. For each active qualitative goal, the set of related active social
norms is considered. The simplest case is the presence of a single norm
(Step A). In this case, a desired final state is created according to the desirability
operator. Analogously, Step B creates a desired final state by
merging the different states of affairs the norms discipline. Thus, an achievement goal
is generated starting from the qualitative goal, for reaching the new desired final
state. After pursuing this goal, the state of the world is updated.</p>
        <sec id="sec-4-3-1">
          <title>Algorithm 1: Follow Social Norms</title>
          <p>Data: Wt, QG, N
foreach QualitGoal_i ∈ QG do
    QualitGoal_i ← ⟨tc_i, qs_i, sc_i, fc_i⟩;
    if QualitGoal_i is active then
        N_i ← {n ∈ N : n = ⟨p, qs_i, α, β, dα⟩ ∧ β(Wt) = true};
        (A) if card{N_i} = 1 then
            n ← ⟨p, qs_i, α, β, dα⟩;
            if dα = Des then fs ← α;
            if dα = Undes then fs ← ¬α;
            if dα = Indiff then fs ← α ∨ ¬α;
        (B) if card{N_i} &gt; 1 then
            fs ← ⊤;
            foreach n_h ∈ N_i do
                n_h ← ⟨p, qs_i, α_h, β_h, dα_h⟩;
                if dα_h = Des then fs ← fs ∧ α_h;
                if dα_h = Undes then fs ← fs ∧ ¬α_h;
                if dα_h = Indiff then fs ← fs ∧ (α_h ∨ ¬α_h);
        AchievGoal ← ⟨tc_i, fs, fc_i⟩;
        pursue(AchievGoal);
        update(Wt, fs);</p>
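          <p>Algorithm 1 can be prototyped as follows. Formulae are Boolean predicates over a set of atoms, and the merged final state fs is built as a conjunction, so the single-norm and multi-norm cases (Steps A and B) collapse into one loop. This is an illustrative Python transcription, with goals and norms as plain records of our own naming:</p>
          <p>
```python
# Illustrative transcription of Algorithm 1. Goals and norms are plain
# dicts; pursue() is a stand-in for the robot's plan execution.
def follow_social_norms(w, qualitative_goals, norms, pursue):
    for qg in qualitative_goals:
        if not (qg["tc"](w) and not qg["sc"](w) and not qg["fc"](w)):
            continue  # qualitative goal not active
        # active norms underlying this goal's quality state (beta holds)
        active = [n for n in norms if n["qs"] == qg["qs"] and n["beta"](w)]
        if not active:
            continue
        clauses = []
        for n in active:
            if n["d"] == "Desirable":
                clauses.append(n["alpha"])
            elif n["d"] == "Undesirable":
                clauses.append(lambda s, a=n["alpha"]: not a(s))
            else:  # Indifferent: trivially satisfied
                clauses.append(lambda s: True)
        # desired final state: conjunction of what each norm prescribes
        fs = lambda s, cs=tuple(clauses): all(c(s) for c in cs)
        # achievement goal (tc_i, fs, fc_i) generated from the qualitative goal
        pursue({"tc": qg["tc"], "fs": fs, "fc": qg["fc"]})

# usage: one active "be social" goal and one applicable greeting norm
pursued = []
follow_social_norms(
    {"want(be_social)", "is(person)"},
    [{"tc": lambda s: "want(be_social)" in s, "qs": "be_social",
      "sc": lambda s: False, "fc": lambda s: False}],
    [{"qs": "be_social", "d": "Desirable",
      "alpha": lambda s: "greeted" in s,
      "beta": lambda s: "is(person)" in s}],
    pursued.append,
)
assert len(pursued) == 1 and pursued[0]["fs"]({"greeted"})
```
          </p>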
          <p>Algorithm 2 evaluates the expectations that following a given
norm may raise. In this first implementation, we consider that a robot has a
satisfaction threshold that is decreased each time one of its expectations has not been
satisfied within the time to fulfil. When its satisfaction reaches its lowest value,
the related qualitative goal is dropped.</p>
        </sec>
        <sec id="sec-4-3-2">
          <title>Algorithm 2: Evaluate Expectations</title>
          <p>Data: Wt, EXP
foreach exp_k ∈ EXP do
    ⟨n, es_k⟩ ← exp_k;
    n ← ⟨p, qs, α, β, dα⟩;
    QG ← ⟨tc, qs, sc, fc⟩;
    if exp_k is generated then
        initTimeQs ← getCurrentTime();
    update(Wt, SensorData);
    satisfied ← evaluate(es_k, Wt);
    if (currentTime − initTimeQs) &gt; ttf ∧ satisfied = false then
        ThresholdQs ← ThresholdQs − 1;
        if ThresholdQs = 0 then
            update(Wt, fc);</p>
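          <p>The expectation loop of Algorithm 2 can be sketched likewise. Here the satisfaction threshold and the time to fulfil are plain numbers, and dropping the qualitative goal is modelled by returning the quality states whose failure condition should be asserted. This is an illustrative transcription under our own naming:</p>
          <p>
```python
import time

# Illustrative transcription of Algorithm 2: each unsatisfied expectation
# past its time to fulfil decreases the satisfaction threshold of the
# underlying quality state; at zero the related qualitative goal is dropped.
def evaluate_expectations(w, expectations, thresholds, now=None):
    now = time.time() if now is None else now
    dropped = []
    for exp in expectations:
        if exp["generated_at"] is None:
            continue  # expectation not (yet) generated
        satisfied = exp["es"](w)  # evaluate expected state over Wt
        if (now - exp["generated_at"]) > exp["ttf"] and not satisfied:
            qs = exp["qs"]
            thresholds[qs] -= 1
            if thresholds[qs] == 0:
                dropped.append(qs)  # fc of the related goal is asserted
    return dropped

thresholds = {"be_polite": 1}
exps = [{"qs": "be_polite", "es": lambda s: "greeted_back" in s,
         "ttf": 5.0, "generated_at": 0.0}]
# 10 s later, nobody greeted back: the threshold hits 0, goal is dropped
assert evaluate_expectations(set(), exps, thresholds, now=10.0) == ["be_polite"]
```
          </p>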
          <p>We want to highlight that, in this case, the robot no longer follows any of the norms
underlying the dropped goal. For example, if a robot wants to be polite,
it tries to follow the social norms related to politeness, such as saying hello, thank
you, sorry, etc. When the robot says hello, it expects that people greet it back.
Analogously, if the robot helps someone, it expects that the other person says thank
you. If its expectations are continuously unsatisfied, it will likewise not say
sorry when bumping into someone: it loses the motivation to follow the
correlated norms because it comes to think that politeness is not a social value
for the community of people it is interacting with.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Case Study: Modelling robot sociality</title>
      <p>
        In this section, we provide a simple case study for describing how a social robot
may behave in some situations that involve social norms. To model robot and
human objectives, we use the goal model diagram [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] where goals can be
analysed, from the perspective of an actor, by Boolean decomposition, contribution
analysis, and means-end analysis. Decomposition is a ternary relationship which
defines a generic Boolean decomposition of a root goal into sub-goals, which can
be an AND or an OR decomposition. Contribution analysis identifies goals that
can contribute positively or negatively towards the fulfilment of other goals.
Finally, the means-end relationship is also a ternary relationship, defined among
an actor, a goal and a task, the task representing the means to satisfy
that goal. Practically, it provides the operationalisation of the goal.
      </p>
      <p>The goal diagram shown in Fig. 1 represents the goal model of the robot, the
human, and their relations for the proposed case study. In the scenario
under study, a robot has to go to a post office to send something on behalf
of its owner. Thus, the robot has to reach the office, wait its turn (sitting if there
is a free chair), and then talk with the postal employee to send its item. In this
scenario, some social norms regarding public behaviour have been made known
to the robot, such as: i) it is desirable to kindly greet when you meet someone;
ii) it is desirable to say "I'm sorry" if you hit or bump into someone by accident;
iii) it is desirable to be kind to the elderly, giving up your seat.</p>
      <p>[Fig. 1: Goal model diagram. Human goals: "Human wants to interact with Robot" OR "Human does not want to interact with Robot", and "To be compliant with Social Norms" OR "To be uncompliant with Social Norms", contributing positively (+) or negatively (-) to the robot's qualitative goal "To be social", which is operationalised by the tasks "Follow Social Norms" and "Evaluate Expectations". Robot goals: "Send a Packet", AND-decomposed into "Go to the Postal office" (task: "Move to target"), "Wait for turn" (tasks: "Sit Down", "Wait", "Stand Up") and "Talk with Employee".]</p>
      <p>Thus, besides its functional objectives related to the physical tasks the robot is involved in, a
qualitative goal represents the robot's interest in being social. A goal that can
contribute positively to reaching this qualitative goal is to be compliant with social
norms. This goal is reached by accomplishing two tasks: following social norms 2
and validating its expectations. Conversely, from the perspective of an individual,
(s)he may or may not be interested in interacting with the robot. In the latter case,
the lack of interactions contributes negatively to the fulfilment of the robot's
qualitative goal, because the fundamental requirement for being social,
namely interaction, is missing. Conversely, an individual may be interested in
interacting with the robot but choose whether or not to comply with social norms,
possibly violating the robot's expectations. An individual who behaves in
conformance with the social norms favours the robot's sociality, because (s)he
satisfies the expectations generated by the robot. From the robot's perspective,
the satisfaction of its expectations is a measure of the appropriateness of its
behaviour. Conversely, individuals who do not follow social norms, thus not
satisfying the expectations of the robot, could weaken the robot's belief about
the appropriateness of the adopted social behaviour and cause it to change its
attitude. In the initial implementation presented in this work, the robot suspends
its interest in pursuing the social value underlying the disregarded social norms.
According to our formalisation, the goal "To be social" and the above norms
can be represented as follows 3.</p>
      <p>G1: qualitative goal(condition(want(be social)), state(be social), condition(), condition(¬want(be social)))
N1: norm(position( ), state(be social), state(greet), condition(is(person)), type(desirable))
N2: norm(position( ), state(be social), state(sorry), condition(bumped(person)), type(desirable))
N3: norm(position( ), state(be social), state(standup), condition( ... ), type(desirable))</p>
      <p>(2) This is a generic task: according to the social norms, the appropriate concrete task will be performed. In our case study, for example, the robot has to be able to greet, apologise, and stand up to give its seat.
(3) For space concerns, we avoid representing the achievement goals that are not relevant for understanding the case study.</p>
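      <p>To make the norm representation above concrete, the following minimal Python sketch (not the authors' implementation; the names `NORMS` and `applicable_norms` are illustrative, and the `is(older_adult)` condition for N3 is an assumption inferred from the scenarios below) encodes N1-N3 as plain data and selects the norms whose triggering condition holds in the robot's current beliefs:</p>

```python
# Illustrative encoding of the norms N1-N3 as plain data (assumed names;
# the triggering condition of N3 is inferred from the case-study scenarios).
NORMS = [
    {"id": "N1", "value": "be social", "task": "greet",
     "condition": "is(person)", "type": "desirable"},
    {"id": "N2", "value": "be social", "task": "sorry",
     "condition": "bumped(person)", "type": "desirable"},
    {"id": "N3", "value": "be social", "task": "standup",
     "condition": "is(older_adult)", "type": "desirable"},
]

def applicable_norms(norms, beliefs, pursued_values):
    """Return the norms whose triggering condition is believed to hold and
    whose underlying social value is still pursued by the robot."""
    return [n for n in norms
            if n["condition"] in beliefs and n["value"] in pursued_values]

# The robot bumps into a person while still pursuing the value "be social":
active = applicable_norms(NORMS, {"bumped(person)"}, {"be social"})
# → only N2 applies
```

      <p>Note that a norm ceases to be applicable once its social value is suspended, which is how the initial implementation reacts to disregarded expectations.</p>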
      <p>The proposed case study has been tested in Choregraphe by using a simulated Nao robot. As we can see in Fig. 2, the behaviour of the robot is not described using a predefined workflow, as is commonly done in the Choregraphe environment. All the possible concrete tasks a robot may perform are linked to a normative reasoning component. This component defines the behaviour of the robot according to its goals, deciding which tasks are to be performed according to the specific circumstances the robot is working in. As we can see, we implemented not only a set of concrete tasks the robot may use to satisfy its achievement goals, but also a set of tasks the robot may perform to be compliant with the previous set of social norms. Moreover, we developed a simple graphical interface to simulate some events, such as meeting or bumping into someone, and some conditions of the environment, such as "there is a free chair". Such elements are perceived as beliefs by the robot, which updates its knowledge about the state of the world. Each simulated scenario starts under the same conditions: the robot wants to send a packet and it wants to be social, W0 = {want(be social), want(send packet)}. The expectations of the robot are met by the gratitude and politeness shown by the human. Each scenario is presented in three parts: a brief description of the scenario, the initial behaviour (the robot's plan for reaching its achievement goals), and a description of the dynamic execution of the scenario.
SCENARIO 1
Description - In this scenario the robot arrives at the office, sees a free chair, sits down, and waits for its turn; after that, it talks to the clerk. This scenario shows the simplest situation, in which there are no applicable norms. Thus, no change occurs in the normal behaviour of the robot. The robot pursues its triggered achievement goals by following its initially planned behaviour.</p>
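      <p>The interplay between the planned tasks and the normative reasoning component can be sketched as follows (a simplified approximation, not the actual Choregraphe integration; the `run` function, the event schedule, and the norm dictionaries are assumptions). Before each planned task, incoming events are checked against the norms, and a norm-prescribed task is interleaved when one applies:</p>

```python
def run(plan, events_by_step, norms):
    """Execute the planned tasks in order; before each step, any event that
    triggers a norm causes the norm-prescribed task to be performed first."""
    trace = []
    for step, task in enumerate(plan):
        for event in events_by_step.get(step, ()):
            for norm in norms:
                if norm["condition"] == event:
                    trace.append(norm["task"])  # task prescribed by the norm
        trace.append(task)                      # planned task
    return trace

PLAN = ["Move to postal office", "Sit Down", "Wait", "Stand Up", "Dialog"]
NORMS = [{"condition": "bumped(person)", "task": "Say sorry"}]

# Scenario 1: no events occur, so the executed trace equals the initial plan.
assert run(PLAN, {}, NORMS) == PLAN
```

      <p>With no applicable norms the reasoner is transparent, which is exactly the behaviour shown in Scenario 1.</p>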
      <p>Initial Behaviour
Start
Execution
Task: Move to postal office
Task: Sit Down
Task: Wait
Expected Event: is(MyTurn)
Task: Stand up
Task: Dialog
SCENARIO 2
Description - In this scenario, the robot, while moving to the postal office, bumps into a person. This event triggers norm N2, so the robot changes its planned behaviour by adding the task of apologising. Then the behaviour continues as in the previous scenario.</p>
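      <p>The plan revision in Scenario 2 can be illustrated with a small sketch (the name `update_plan` is an assumption, not part of the actual implementation): when a norm fires, its prescribed task is spliced into the remaining plan before execution resumes.</p>

```python
def update_plan(plan, resume_index, norm_task):
    """Insert the norm-prescribed task before the remaining planned tasks."""
    return plan[:resume_index] + [norm_task] + plan[resume_index:]

plan = ["Move to postal office", "Sit Down", "Wait", "Stand Up", "Dialog"]
# bumped(person) is perceived during task 0, so the apology prescribed by
# N2 is inserted before the plan resumes at index 1:
updated = update_plan(plan, 1, "Say sorry")
# → ["Move to postal office", "Say sorry", "Sit Down", "Wait", "Stand Up", "Dialog"]
```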
      <p>Initial Behaviour: Start → Move To → Sit Down → Wait → Stand Up → Dialog → End
Execution
Task: Move to postal office
Unexpected Event: bumped(person)
Applicable norm: norm(position( ), state(be social), state(sorry), condition(bumped(person)), type(desirable))
Updated Behaviour: Start → Move To → Say Sorry → Sit Down → Wait → Stand Up → Dialog → End</p>
      <p>SCENARIO 3
Description - In this scenario, the robot arrives at the postal office, sees a free chair, and sits down. An older adult arrives at the postal office. The robot changes its plan by following norm N3: it stands up and waits for its turn.</p>
      <p>SCENARIO 4
Description - In this scenario, the robot, while moving to the postal office, bumps into a person. This event triggers norm N2, so the robot changes its planned behaviour by adding the task of apologising. Then it sees a free chair and sits down. An older adult arrives at the postal office. The robot changes its plan again by following norm N3: it stands up and waits for its turn.
Execution (final part):
Unexpected Event: said(person, "Thanks")
Task: Wait
Expected Event: is(MyTurn)
Task: Dialog</p>
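      <p>The expectation handling described above, where the robot suspends the social value underlying a disregarded norm, can be sketched as follows; `evaluate_expectation` and the event strings are illustrative assumptions rather than the actual implementation:</p>

```python
def evaluate_expectation(expected_event, observed_events, pursued_values, value):
    """If the expected reaction is observed, the expectation is met and the
    pursued values are unchanged; otherwise the robot suspends its interest
    in the underlying social value, as in the initial implementation."""
    if expected_event in observed_events:
        return True, pursued_values
    return False, pursued_values - {value}

# After standing up for the older adult, the robot expects gratitude:
met, values = evaluate_expectation('said(person,"Thanks")',
                                   {'said(person,"Thanks")'},
                                   {"be social", "send packet"},
                                   "be social")
# met → True, and "be social" remains among the pursued values
```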
      <p>In this section, we presented a simple case study to show the proposed approach in operation. In particular, we want to highlight the flexibility of the approach. As we have seen, it is not necessary to define all the possible plans the robot may perform to manage all the possible situations. Indeed, we do not implement if-then rules; instead, we provide the robot with the ability to reason about mental concepts such as norms and expectations. Thus, it can manage unexpected events that were not considered in its initial plan.</p>
    </sec>
    <sec id="sec-6">
      <title>Conclusions</title>
      <p>Social robots interact with humans to perform specific tasks. Implementing social capabilities, such as behaving according to the social norms prescribed by the community, improves the social desirability of the robot. In this work, we propose a normative approach that exploits the advantages of goal modelling to make social robots able to reason pro-actively about dynamic situations. In particular, we defined social norms by introducing desirability operators for representing preferences about acceptable behaviours, and we introduced expectations as a new mental concept that a robot uses as a motivator for pursuing the social values we model as quality goals. Moreover, we have illustrated some scenarios of how the robot behaves in situations that involve social norms, showing the flexibility of the approach in managing unexpected events.</p>
      <p>As a next step, we are working on allowing the robot to perform more complex evaluations of its expectations, and on how the robot may adaptively change its behaviour according to those expectations.</p>
    </sec>
  </body>
  <back>
  </back>
</article>