<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards Enhancing Social Navigation through Contextual and Human-related Knowledge</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Phani Teja Singamaneni</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessandro Umbrico</string-name>
          <email>alessandro.umbrico@istc.cnr.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Orlandini</string-name>
          <email>andrea.orlandini@istc.cnr.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rachid Alami</string-name>
          <email>rachid.alami@laas.fr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>CNR - Institute of Cognitive Sciences and Technologies (ISTC-CNR)</institution>
          ,
          <addr-line>Rome</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>LAAS-CNRS, Université de Toulouse</institution>
          ,
          <addr-line>CNRS, Toulouse</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Robots acting in real-world environments usually need to interact with humans. Interactions may occur at different levels of abstraction (e.g., process, task, physical), entailing different research challenges (e.g., task allocation, human-robot joint actions, robot navigation). For social navigation, we propose a conceptual integration of task and motion planning to contextualize robot behaviors. The main idea is to leverage the contextual knowledge of a task planner to dynamically contextualize the navigation skills of a robot. More specifically, we propose a holistic model of tasks and human features and a mapping from task-level knowledge to motion-level knowledge to constrain the generation of robot trajectories.</p>
      </abstract>
      <kwd-group>
        <kwd>Task and Motion Planning</kwd>
        <kwd>Social Navigation</kwd>
        <kwd>Knowledge Representation and Reasoning</kwd>
        <kwd>Cognitive Robotics</kwd>
        <kwd>Assistive Robotics</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Robots acting in real-world and social environments usually face situations requiring tight, continuous interactions with humans. The presence of humans entails several research challenges from the control perspective of a robotic system. A human indeed represents a source of uncertainty that a robot controller should deal with in order to synthesize and execute behaviors that are valid, safe, and acceptable.</p>
      <p>Humans are usually not controllable and only partially predictable, thus representing a source of uncertainty. With respect to Human-Robot Interaction (HRI), uncertainty about the behavior of a human concerns goals, beliefs, intentions, and expectations. A robot controller should be capable of reasoning about who is the human it interacts with, what the objectives of the interaction are, how to achieve them, and when to execute the needed actions. According to the different contexts and objectives, some assumptions can be made to reduce this uncertainty. In general, it is necessary to find suitable interaction strategies to carry out tasks in a reliable (safe) and effective way.</p>
      <p>In addition, there is a social perspective to consider in order to meet the social expectations of a human in a given context and, thus, realize behaviors that are acceptable also from a social perspective. In (social) human-robot interaction scenarios, it is particularly important to reason about how tasks are carried out by a robot in order to comply with so-called social norms and, as a consequence, behave in a way that is acceptable to the human user.</p>
      <p>The 2nd workshop on social robots for personalized, continuous and adaptive assistance (ALTRUIST), December 16, 2022.</p>
      <p>
        The need for implementing different “intelligent behaviors” requires investigating several research directions that lead to the integration of Robotics and Artificial Intelligence (AI) [
        <xref ref-type="bibr" rid="ref1">1, 2</xref>
        ]. This integration is especially crucial to support personalized and adaptive social and assistive interactions with humans. General interaction capabilities of robotic platforms should be customized according to the specific features of the scenario as well as the preferences and needs of users [3, 4, 5]. It is thus fundamental to endow the robot with an “expressive” and well-structured user model. On the one hand, such a model allows robots to personalize their general interaction/assistive capabilities (i.e., behaviors) to the specific needs of a user. On the other hand, it allows robots to adapt their behavior execution over time according to the changing or evolving states of users (e.g., worsening of impairments, changing health-related needs, or changing interaction preferences).
      </p>
      <p>In this work, we propose a holistic model to be integrated within a task and motion planning approach to enhance the awareness of the social navigation skills of robots. The proposed approach relies on a motion planner, called CoHAN [6, 7], which implements human-aware navigation skills and provides a number of parameters to tailor the implemented motion behaviors. Within an integrated task and motion planning framework, the idea is to leverage human-related and contextual knowledge available at the task planning level to set CoHAN motion parameters and dynamically contextualize navigation behaviors. To this aim, this paper proposes a holistic model to represent domain needs and a mapping from human-aware domain knowledge to CoHAN’s navigation primitives.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Why We Need a Holistic Model</title>
      <p>Endowing a robot with a well-structured model of humans is crucial to synthesize effective interaction strategies. There are several human and social-related variables that would affect the motions and interaction style of a robot in a given social context [8, 9]. Works in social navigation usually focus on the motion task alone, without considering the context in which it is executed. In our opinion, it is important to consider human-related knowledge correlated to the execution of a particular motion [6] as well as contextual knowledge concerning the domain-level tasks being executed [10]. Depending on the specific domain/application needs, tasks requiring (social) navigation skills may entail different priorities, safety requirements, and different performance constraints. All this information impacts the interaction style of a robot and the way motions and navigation behaviors are actually implemented.</p>
      <p>We propose a holistic model to characterize social navigation tasks from different synergetic perspectives. We aim at integrating this knowledge into a novel task and motion planning approach to enrich the navigation skills of a robot when performing tasks. Usually, task and motion planners work at two different levels of abstraction. At a higher level, a task planner focuses on the goal-oriented behavior of a robot and plans tasks to achieve high-level goals. At a lower level, a motion planner acts closer to the execution layer by concretely implementing the requested motions. In particular, a motion planner should take into account the perspectives, intentions, and physical motions of involved humans [6].</p>
      <p>With respect to the adaptation of the physical motion of a robot, contextual knowledge about the task being executed and the qualities of involved humans could enhance the awareness of the robot. The idea is thus to leverage the contextual and abstract perspective of a task planner to provide a motion planner with contextual knowledge about performed tasks and involved humans in order to enhance the awareness of navigation skills. At the same time, a motion planner exposes a set of interaction parameters that a task planner would use to tailor the physical behavior of the robot to the known context when dispatching navigation actions.</p>
      <table-wrap id="tab1">
        <label>Table 1</label>
        <caption>
          <p>Description of the synergetic perspectives considered to define a holistic model for integrated task and motion planning in human-aware social navigation.</p>
        </caption>
        <table>
          <thead>
            <tr><th>Perspective</th><th>Level</th><th>Description</th></tr>
          </thead>
          <tbody>
            <tr><td>Domain</td><td>Task Planning</td><td>To characterize the features of the tasks a robot should perform in a certain (social) environment to achieve desired goals. Tasks may have different priorities, performance requirements as well as safety constraints. This information would affect the navigation style of a robot and the way it actually moves within the environment and in relationship with humans.</td></tr>
            <tr><td>Human</td><td>Task Planning</td><td>To characterize the features of the humans involved in the execution of tasks and related motions. Humans have different interaction skills, intentions, goals, and preferences that may affect the behavior of a robot at different levels. Furthermore, there can be different expectations with respect to the reliability of their behavior. This information would thus elicit different interaction/navigation styles of a robot with respect to the known features of involved humans.</td></tr>
            <tr><td>Robot</td><td>Motion Planning</td><td>To characterize the types and qualities of the interaction skills of a robot as well as performance and execution requirements. This information would in particular define the interaction parameters that would be used to tailor the interaction to the different contexts and expected behaviors of involved humans.</td></tr>
            <tr><td>(Social) Environment</td><td>Motion Planning</td><td>To characterize features of the environment in which a robot should act. This information is suitable to describe objects/obstacles (and their features) that are part of the environment as well as the structure of the environment (e.g., its topology). At this abstraction level, humans can be considered as “dynamic obstacles” of the environment. With respect to the definition of a holistic model, it is necessary to characterize geometric-related information, e.g., motion intentions, perspectives, and acceleration, that would affect the implemented physical motions of the robot [6].</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <sec id="sec-2-1">
        <title>2.1. Task-level Knowledge</title>
        <p>Task-level knowledge should characterize the motivations and objectives that lead a robot to act
in a (social) environment. As shown in Table 1, the domain and human perspectives contribute
to this level of abstraction. These perspectives characterize socially-relevant information about
the tasks to be performed (i.e., the domain perspective) and the interacting features of involved
humans (i.e., the human perspective).</p>
        <p>From the domain perspective, it is also useful to define a number of variables characterizing the tasks a robot should perform with respect to the expected direct/indirect interactions with humans. According to these variables, it would be possible to dynamically “configure” the motion planner and constrain the resulting navigation behavior of the robot.</p>
        <p>Table 2 describes the variables defined to characterize the tasks requiring a robot to act in a particular social environment. The rationale behind these variables is to estimate to what extent a task is critical with respect to the social dimension of the resulting interaction.</p>
        <p>Each variable is associated with a score (min 0, max 3) assessing the task from a social perspective. The sum of the scores estimates the level of necessary social awareness.</p>
        <p>The higher the cumulative value, the lower the need to take human-related constraints into account when executing a task. For instance, let us consider a task to be executed in a robotic social context, with critical priority, low risk, and strict performance requirements. The motion planner would mainly focus on the technical constraints of the needed motions and execute the task in the most efficient way possible. Let us instead consider a task to be executed in a crowded social context, with low priority, critical risk, and no performance requirements. The motion planner would mainly focus on the social constraints of the needed motions and execute the task in the safest way possible. Considering these examples, we define thresholds to categorize the social relevance of tasks.</p>
        <p>• Technical-critical tasks have a cumulative score within the interval [9, 12] and represent tasks whose execution can mainly focus on the technical constraints. The execution of these tasks would thus have a low impact on humans. The motion planner can thus “relax” underlying social constraints in order to be as efficient as possible.
• Interaction-critical tasks have a cumulative score within the interval [5, 8] and represent tasks whose execution should find a trade-off between technical and social constraints. Namely, the execution of these tasks is expected to affect human behaviors and the motion planner should take human behaviors into account when executing the needed motions.
• Social-critical tasks have a cumulative score within the interval [0, 4] and represent tasks whose execution should mainly focus on the social constraints. The execution of these tasks is expected to strongly affect humans. The motion planner should therefore mainly focus on the social constraints in order to be as safe and reliable as possible.</p>
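        <p>As a minimal sketch of the scoring scheme above, the following assumes four illustrative task variables (their names are placeholders, not the paper's Table 2), each scored in [0, 3], and maps the cumulative score to the three intervals just defined:</p>

```python
# Illustrative sketch (not the paper's implementation): classifying the
# social relevance of a task from the cumulative score of its domain
# variables, each scored in [0, 3]. Variable names are assumptions.

def social_relevance(scores: dict[str, int]) -> str:
    """Map the cumulative score of the task variables to a task category."""
    for name, value in scores.items():
        if not 0 <= value <= 3:
            raise ValueError(f"variable {name!r} must be scored in [0, 3]")
    total = sum(scores.values())
    if total >= 9:        # interval [9, 12]: focus on technical constraints
        return "technical-critical"
    if total >= 5:        # interval [5, 8]: trade-off between constraints
        return "interaction-critical"
    return "social-critical"  # interval [0, 4]: focus on social constraints

# A task with critical priority, low risk, and strict performance requirements:
print(social_relevance({"context": 3, "priority": 3, "risk": 2, "performance": 3}))
# technical-critical
```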
        <p>Furthermore, it is necessary to estimate the (physical) interaction abilities of the humans that are directly (or indirectly) involved in the execution of a robot task. We rely on the International Classification of Functioning, Disability, and Health (ICF) proposed by the World Health Organization (WHO). The ICF theoretical framework describes the level of functioning of a person from different points of view. Ontological models of the ICF have been proposed and integrated into robot architectures to personalize and adapt the assistance [11, 10, 12, 13].</p>
        <table-wrap id="tab3">
          <label>Table 3</label>
          <caption>
            <p>Description of the variables defined to characterize task-level human knowledge used to constrain the physical behaviors of a robot.</p>
          </caption>
          <table>
            <thead>
              <tr><th>ICF Area</th><th>ICF Variable</th><th>Value Range</th></tr>
            </thead>
            <tbody>
              <tr><td>Mental Functioning</td><td>Attention</td><td>[0, 4]</td></tr>
              <tr><td>Mental Functioning</td><td>Memory</td><td>[0, 4]</td></tr>
              <tr><td>Mental Functioning</td><td>Orientation</td><td>[0, 4]</td></tr>
              <tr><td>Mental Functioning</td><td>Perception</td><td>[0, 4]</td></tr>
              <tr><td>Sensory</td><td>Hearing</td><td>[0, 4]</td></tr>
              <tr><td>Sensory</td><td>Seeing</td><td>[0, 4]</td></tr>
              <tr><td>Sensory</td><td>Vision</td><td>[0, 4]</td></tr>
              <tr><td>Mobility</td><td>Body Position</td><td>[0, 4]</td></tr>
              <tr><td>Mobility</td><td>Movement Control</td><td>[0, 4]</td></tr>
              <tr><td>Mobility</td><td>Muscle Tone</td><td>[0, 4]</td></tr>
              <tr><td>Mobility</td><td>Walking</td><td>[0, 4]</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>These variables assess the reliability and uncertainty of the physical interactions that may occur between a human and a robot. Depending on the cumulative scores of the variables, we define three categories of humans.</p>
        <p>• Fragile. Humans falling into this category have a cumulative score within the interval [25, 44]. This category represents humans with limited interaction skills (e.g., low hearing or seeing functioning) and unstable motions (e.g., unstable walking, equilibrium issues, or low attention). This category should in general entail conservative/prudent robot behaviors since no assumptions can be made on the actual physical state/motions of the human (maximum uncertainty).
• Average. Humans falling into this category have a cumulative score within the interval [13, 25]. This category represents average humans with good interaction skills and sufficiently stable motions. This category allows the robot to make some assumptions about the expected behaviors of the interacting humans and thus perform some level of optimization and planning of motions (average uncertainty).
• Reliable. Humans falling into this category have a cumulative score within the interval [0, 12]. This category represents “efficient” humans able to reliably interact with robots and perform mutual adaptation to robot motions. In this case, the robot may achieve a higher level of optimization since the behavior of the human is predictable to some extent (minimum uncertainty).</p>
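        <p>The human categorization above can be sketched as follows; the variable identifiers are assumed encodings of the ICF variables in Table 3 (11 variables, each scored in [0, 4]), and the boundary value 25, which the text assigns to both intervals, is resolved to Fragile here as the more conservative choice:</p>

```python
# Illustrative sketch (assumed names): mapping the cumulative ICF score of
# a person to one of the three human categories defined above.

ICF_VARIABLES = (
    "attention", "memory", "orientation", "perception",             # Mental Functioning
    "hearing", "seeing", "vision",                                  # Sensory
    "body_position", "movement_control", "muscle_tone", "walking",  # Mobility
)

def human_category(icf_scores: dict[str, int]) -> str:
    """Return the human category; variables not reported default to 0."""
    total = sum(icf_scores.get(v, 0) for v in ICF_VARIABLES)
    if total >= 25:   # interval [25, 44]: maximum uncertainty
        return "fragile"
    if total >= 13:   # interval [13, 25): average uncertainty
        return "average"
    return "reliable"  # interval [0, 12]: minimum uncertainty
```

A robot could thus plan conservatively whenever perception reports, e.g., low walking and attention functioning that pushes the cumulative score above 24.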
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Motion-level Knowledge</title>
        <p>Table 3 and Table 2 generally characterize categories of humans that could be involved in the execution of robotic tasks. To reliably and safely interact with humans, it is also necessary to characterize the behaviors and interaction qualities of humans from a motion (physical) perspective. Several works in the literature addressed the social navigation problem by taking into account, e.g., emotional states and proxemics to adapt motions to humans [14, 15, 16].</p>
        <p>The framework CoHAN generates flexible motion trajectories by taking into account observed
intentions and perspectives of humans [6]. The primary objective of CoHAN is to support
a higher level of human awareness by observing and evaluating human perspectives. This
section describes the sets of motion parameters that could be used to constrain the generation of
robot trajectories. These parameters are at the basis of the proposed task and motion planning
integrated approach.</p>
        <p>Table 4 shows the parameters of the motion planner determining the desired “qualities” of the
implemented robot behaviors. These variables set the desired limits of velocity and acceleration
of the robot. An interesting parameter is plan (Planning horizon). It determines the look-ahead
of planned trajectories and can be set according to the expected uncertainty of involved humans.</p>
        <p>Similarly, the parameter band (Band tightness) can be set according to the expected level of
collaboration of humans (e.g., conflict resolution when moving in narrow spaces).</p>
        <p>Table 5 shows the set of motion parameters modeling the physical motions and states of humans. Parameters about velocity are useful to infer the motion intentions of a human. The parameter hrad (Radius) specifies proxemics constraints. The parameter hfield (Field of vision) allows the motion planner to know whether the robot is visible to the human or not.</p>
        <p>In addition to robot and human parameters, CoHAN supports social variables that can further
contextualize robot behavior. The variables of Table 6 like st2c (Time to collision), svis (Visibility),
sband (Hidden humans) are especially interesting to realize robot behaviors that are acceptable
and close to human expectations.</p>
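        <p>To fix ideas, a parameter bundle that a task planner could attach to a dispatched navigation action might look as follows. Only the parameter names mentioned in the text (plan, band, hrad, hfield, st2c, svis, sband) come from CoHAN; the container, types, units, and values are assumptions made for illustration:</p>

```python
# Illustrative sketch of a navigation-parameter bundle. Names follow the
# text above; concrete types, units, and values are assumptions.

from dataclasses import dataclass

@dataclass
class NavigationParams:
    max_vel: float   # desired robot velocity limit (m/s)
    max_acc: float   # desired robot acceleration limit (m/s^2)
    plan: float      # planning horizon: look-ahead of planned trajectories (s)
    band: float      # band tightness: expected level of human collaboration
    hrad: float      # human radius, encoding proxemics constraints (m)
    hfield: float    # human field of vision (rad)
    st2c: float      # time-to-collision threshold (s)
    svis: bool       # reason about robot visibility to the human
    sband: bool      # reason about possibly hidden humans

# A conservative configuration, e.g., for a highly uncertain (fragile) human:
conservative = NavigationParams(
    max_vel=0.4, max_acc=0.3, plan=4.0, band=0.2,
    hrad=1.2, hfield=2.0, st2c=5.0, svis=True, sband=True,
)
```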
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Mapping Knowledge to Contextualize Navigation</title>
        <p>We now propose patterns mapping the defined categories of humans and social tasks to the
motion variables of CoHAN. This mapping allows a task planner to enrich dispatched motion
tasks with information about human and task categories.</p>
        <p>Table 7 shows how the defined human and task categories could be mapped to the motion variables characterizing the behaviors of the human and the robot. Non-reliable/fragile humans, for example, entail a motion model of the human characterized by a higher level of uncertainty about intentions and beliefs, limiting the assumptions of the robot when implementing its motions. Similarly, social-critical tasks are mapped to motion variables entailing a more conservative and prudent behavior of the robot.</p>
        <p>Vice versa, efficient and highly reliable humans entail a motion model of the human characterized by less uncertainty, allowing the robot to “optimize” its motions to some extent. Technical-critical tasks would, for example, push trajectory optimization in order to execute tasks as efficiently as possible.</p>
        <p>To set the social motion variables of CoHAN, we consider the synergetic combination of the human and task categories. In this case, it is indeed crucial to jointly reason about the task a robot is supposed to perform and the involved humans in order to find a suitable trade-off between safety, reliability, and efficiency. Table 8 shows the defined social variable patterns.</p>
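        <p>The joint lookup described above can be sketched as a small mapping; the entries and concrete values are illustrative placeholders, not the patterns of Tables 7 and 8:</p>

```python
# Illustrative sketch of the category-to-parameter mapping idea. The keys
# use the categories defined in the text; the settings are placeholders.

MOTION_PATTERNS = {
    # (human category, task category) -> social motion settings
    ("fragile",  "social-critical"):      {"plan": 5.0, "band": 0.1, "st2c": 6.0},
    ("average",  "interaction-critical"): {"plan": 3.0, "band": 0.5, "st2c": 4.0},
    ("reliable", "technical-critical"):   {"plan": 1.5, "band": 0.9, "st2c": 2.0},
}

def contextualize(human_cat: str, task_cat: str) -> dict:
    """Return motion-level settings for a dispatched navigation task;
    unknown combinations fall back to the most conservative pattern."""
    return MOTION_PATTERNS.get(
        (human_cat, task_cat),
        MOTION_PATTERNS[("fragile", "social-critical")],
    )
```

The conservative fallback reflects the design intent above: when the task planner cannot categorize the human or the task, the motion planner should err on the side of safety.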
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Examples from Assistive Scenarios</title>
      <p>We plan to develop an integrated task and motion planning framework and evaluate the desired contextualization capabilities in an assistive-inspired simulated environment. In particular, we consider an in-hospital scenario where a socially interacting robot is deployed to support patients and healthcare personnel. In such a scenario, a robot would perform different types of tasks (e.g., drug delivery, patient monitoring, technical support to healthcare professionals), each with different priorities, and should interact with different categories of humans (e.g., fragile patients and more reliable healthcare professionals). Figure 1 shows the designed environment. We will consider different scenarios by varying the human/task categories and the resulting social constraint patterns.</p>
      <p>On the one hand, the robot may perform tasks with low priority (e.g., monitoring patients in different rooms and general non-critical assistance) and implement social trajectories when encountering humans. In the case of patients, robot tasks would be executed considering Fragile humans and realize navigation behaviors as cautiously as possible. In the case of healthcare professionals, robot tasks would be executed considering Reliable humans to realize more efficient navigation behaviors.</p>
      <p>On the other hand, the robot may perform tasks with high priority (e.g., emergency assistance and drug delivery) and thus implement trajectories as efficiently as possible, finding a suitable trade-off between optimization and safety when encountering humans. In the case of patients, robot tasks would be executed considering Fragile humans, but the high priority of the task would require “relaxing” social constraints in the generation of the trajectories. In the case of healthcare professionals, robot tasks would be executed considering Reliable humans and implementing efficient navigation behaviors.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions and Future Work</title>
      <p>This paper proposes a conceptual integration of task and motion planning aimed at contextualizing the navigation behaviors of robots. We leverage the domain-level knowledge of a task planner to dynamically contextualize navigation skills according to the needs/preferences of humans. This proposal relies on CoHAN, a motion planning framework exposing several motion parameters that can be used by a task planner to constrain trajectory generation. Future work concerns the development of the integrated task and motion planning framework and its evaluation in an assistive-inspired simulated environment. We then aim at carrying out real HRI experiments to evaluate the envisaged capabilities with real human users, as well as integrating perception capabilities to dynamically infer human categories.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>Author Alessandro Umbrico from ISTC-CNR was supported by the Short-Term Mobility (STM) 2022 Program of the National Research Council of Italy (CNR).</p>
    </sec>
    <sec id="sec-6">
      <title>References</title>
      <p>[2] F. Ingrand, M. Ghallab, Deliberation for autonomous robots: A survey, Artificial
Intelligence 247 (2017) 10–44.
[3] B. Bruno, C. T. Recchiuto, I. Papadopoulos, A. Saffiotti, C. Koulouglioti, R. Menicatti,
F. Mastrogiovanni, R. Zaccaria, A. Sgorbissa, Knowledge representation for culturally
competent personal robots: Requirements, design principles, implementation, and assessment,
International Journal of Social Robotics 11 (2019) 515–538.
[4] C. Moro, G. Nejat, A. Mihailidis, Learning and personalizing socially assistive robot
behaviors to aid with activities of daily living, ACM Trans. Hum.-Robot Interact. 7 (2018)
15:1–15:25.
[5] I. Awaad, G. K. Kraetzschmar, J. Hertzberg, The Role of Functional Affordances in
Socializing Robots, International Journal of Social Robotics 7 (2015) 421–438.
[6] P. T. Singamaneni, A. Favier, R. Alami, Human-aware navigation planner for diverse
human-robot interaction contexts, in: 2021 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS), 2021, pp. 5817–5824.
[7] P. T. Singamaneni, A. Favier, R. Alami, Watch out! there may be a human. addressing
invisible humans in social navigation, in: 2022 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS), IEEE, 2022.
[8] R. D. Benedictis, A. Umbrico, F. Fracasso, G. Cortellessa, A. Orlandini, A. Cesta, A
dichotomic approach to adaptive interaction for socially assistive robots, User Modeling
and User-Adapted Interaction (2022).
[9] S. Rossi, F. Ferland, A. Tapus, User profiling and behavioral adaptation for HRI: A survey,</p>
      <p>Pattern Recognition Letters 99 (2017) 3 – 12.
[10] A. Umbrico, A. Cesta, G. Cortellessa, A. Orlandini, A holistic approach to behavior
adaptation for socially assistive robots, International Journal of Social Robotics 12 (2020)
617–637.
[11] A. Sorrentino, L. Fiorini, G. Mancioppi, F. Cavallo, A. Umbrico, A. Cesta, A. Orlandini,
Personalizing care through robotic assistance and clinical supervision, Frontiers in Robotics
and AI 9 (2022).
[12] I. Kostavelis, M. Vasileiadis, E. Skartados, A. Kargakos, D. Giakoumis, C.-S. Bouganis,
D. Tzovaras, Understanding of human behavior with a robotic agent through daily activity
analysis, International Journal of Social Robotics 11 (2019) 437–462.
[13] R. I. García-Betances, M. F. Cabrera-Umpiérrez, M. Ottaviano, M. Pastorino, M. T.</p>
      <p>Arredondo, Parametric cognitive modeling of information and computer technology
usage by people with aging- and disability-derived functional impairments, Sensors 16
(2016).
[14] G. Ferrer, A. G. Zulueta, F. H. Cotarelo, A. Sanfeliu, Robot social-aware navigation
framework to accompany people walking side-by-side, Autonomous Robots 41 (2017)
775–793.
[15] F. Cavallo, F. Semeraro, L. Fiorini, G. Magyar, P. Sinčák, P. Dario, Emotion modelling for
social robotics applications: A review, Journal of Bionic Engineering 15 (2018) 185–203.
[16] J. Rios-Martinez, A. Spalanzani, C. Laugier, From proxemics theory to socially-aware
navigation: A survey, International Journal of Social Robotics 7 (2015) 137–153.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Lemaignan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Warnier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. A.</given-names>
            <surname>Sisbot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Clodic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Alami</surname>
          </string-name>
          ,
          <article-title>Artificial cognition for social human-robot interaction: An implementation</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>247</volume>
          (
          <year>2017</year>
          )
          <fpage>45</fpage>
          -
          <lpage>69</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>