<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Towards Enhancing Social Navigation through Contextual and Human-related Knowledge</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Phani</forename><forename type="middle">Teja</forename><surname>Singamaneni</surname></persName>
							<email>ptsingaman@laas.fr</email>
							<affiliation key="aff0">
								<orgName type="laboratory">LAAS-CNRS</orgName>
<orgName type="institution" key="instit1">Université de Toulouse</orgName>
								<orgName type="institution" key="instit2">CNRS</orgName>
								<address>
									<settlement>Toulouse</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Alessandro</forename><surname>Umbrico</surname></persName>
							<email>alessandro.umbrico@istc.cnr.it</email>
							<affiliation key="aff1">
<orgName type="institution">CNR - Institute of Cognitive Sciences and Technologies (ISTC-CNR)</orgName>
								<address>
									<settlement>Rome</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Andrea</forename><surname>Orlandini</surname></persName>
							<email>andrea.orlandini@istc.cnr.it</email>
							<affiliation key="aff1">
<orgName type="institution">CNR - Institute of Cognitive Sciences and Technologies (ISTC-CNR)</orgName>
								<address>
									<settlement>Rome</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Rachid</forename><surname>Alami</surname></persName>
							<email>rachid.alami@laas.fr</email>
							<affiliation key="aff0">
								<orgName type="laboratory">LAAS-CNRS</orgName>
<orgName type="institution" key="instit1">Université de Toulouse</orgName>
								<orgName type="institution" key="instit2">CNRS</orgName>
								<address>
									<settlement>Toulouse</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff2">
								<address>
									<addrLine>December 16</addrLine>
									<postCode>2022</postCode>
									<settlement>Florence</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Towards Enhancing Social Navigation through Contextual and Human-related Knowledge</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">A3F38D900D499926CE9B3169E7A93382</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T05:07+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Task and Motion Planning</term>
					<term>Social Navigation</term>
					<term>Knowledge Representation and Reasoning</term>
					<term>Cognitive Robotics</term>
					<term>Assistive Robotics</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Robots acting in real-world environments usually need to interact with humans. Interactions may occur at different levels of abstraction (e.g., process, task, physical), entailing different research challenges (e.g., task allocation, human-robot joint actions, robot navigation). For social navigation, we propose a conceptual integration of task and motion planning to contextualize robot behaviors. The main idea is to leverage the contextual knowledge of a task planner to dynamically contextualize the navigation skills of a robot. More specifically, we propose a holistic model of tasks and human features and a mapping from task-level knowledge to motion-level knowledge to constrain the generation of robot trajectories.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Robots acting in real-world and social environments usually face situations requiring tight, continuous interactions with humans. The presence of humans entails several research challenges from the control perspective of a robotic system. A human is indeed a source of uncertainty that a robot controller should deal with in order to synthesize and execute behaviors that are valid, safe, and acceptable.</p><p>Humans are usually not controllable and only partially predictable. With respect to Human-Robot Interaction (HRI), uncertainty about the behavior of a human concerns goals, beliefs, intentions, and expectations. A robot controller should be capable of reasoning about who the human it interacts with is, what the objectives of the interaction are, how to achieve them, and when to execute the needed actions. Depending on the context and objectives, some assumptions can be made to reduce this uncertainty. In general, it is necessary to find suitable interaction strategies to carry out tasks in a reliable (safe) and effective way.</p><p>In addition, there is a social perspective to consider in order to meet the social expectations of a human in a given context and, thus, realize behaviors that are acceptable also from a social point of view. In (social) human-robot interaction scenarios, it is particularly important to reason about how tasks are carried out by a robot in order to comply with so-called social norms and, as a consequence, behave in a way that is acceptable to the human user.</p><p>The need for implementing different "intelligent behaviors" requires investigating several research directions that lead to the integration of Robotics and Artificial Intelligence (AI) <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. 
This integration is especially crucial to support personalized and adaptive social and assistive interactions with humans. General interaction capabilities of robotic platforms should be customized according to the specific features of the scenario as well as the preferences and needs of users <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5]</ref>. It is fundamental to endow the robot with an "expressive" and well-structured user model. On the one hand, such a model allows robots to personalize their general interaction/assistive capabilities (i.e., behaviors) to the specific needs of a user. On the other hand, it allows robots to adapt their behavior execution over time according to the changing or evolving states of users (e.g., worsening of impairments, changing health-related needs, or changing interaction preferences).</p><p>In this work, we propose a holistic model to be integrated within a task and motion planning approach to enhance the awareness of the social navigation skills of robots. The proposed approach relies on a motion planner, called CoHAN <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>, which implements human-aware navigation skills and provides a number of parameters to tailor the implemented motion behaviors. Within an integrated task and motion planning framework, the idea is to leverage human-related and contextual knowledge available at the task planning level to set CoHAN motion parameters and dynamically contextualize navigation behaviors. To this aim, this paper proposes a holistic model to represent domain needs and a mapping from human-aware domain knowledge to CoHAN's navigation primitives.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Why We Need a Holistic Model</head><p>Endowing a robot with a well-structured model of humans is crucial to synthesize effective interaction strategies. There are several human and social-related variables that would affect the motions and interaction style of a robot in a given social context <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9]</ref>. Works in social navigation usually focus on the motion task alone, without considering the context in which it is executed. In our opinion, it is important to consider human-related knowledge correlated to the execution of a particular motion <ref type="bibr" target="#b5">[6]</ref> as well as contextual knowledge concerning the domain-level tasks being executed <ref type="bibr" target="#b9">[10]</ref>. Depending on the specific domain/application needs, tasks requiring (social) navigation skills may entail different priorities, safety requirements, and different performance constraints. All this information impacts the interaction style of a robot and the way motions and navigation behaviors are actually implemented.</p><p>We propose a holistic model to characterize social navigation tasks from different synergetic perspectives. We aim at integrating this knowledge into a novel task and motion planning approach to enrich the navigation skills of a robot when performing tasks. Usually, task and motion planners work at two different levels of abstraction. At a higher level, a task planner focuses on the goal-oriented behavior of a robot and plans tasks to achieve high-level goals. At a lower level, a motion planner acts closer to the execution layer by concretely implementing the requested motions. 
In particular, a motion planner should take into account the perspectives, intentions, and physical motions of involved humans <ref type="bibr" target="#b5">[6]</ref>.</p><p>With respect to the adaptation of the physical motion of a robot, contextual knowledge about the task being executed and the qualities of involved humans could enhance the awareness of the robot. The idea is thus to leverage the contextual and abstract perspective of a task planner to provide a motion planner with contextual knowledge about performed tasks and involved humans in order to enhance the awareness of navigation skills. At the same time, a motion planner exposes a set of interaction parameters that a task planner would use to tailor the physical behavior of the robot to the known context when dispatching navigation actions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>Description of the synergetic perspectives considered to define a holistic model for integrated task and motion planning in human-aware social navigation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Perspective Abstraction Level Description Domain Task Planning</head><p>To characterize the features of the tasks a robot should perform in a certain (social) environment to achieve desired goals. Tasks may have different priorities, performance requirements as well as safety constraints. This information would affect the navigation style of a robot and the way it actually moves within the environment and in relationship with humans.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Human Task Planning</head><p>To characterize the features of the humans involved in the execution of tasks and related motions. Humans have different interaction skills, intentions, goals, and preferences that may affect the behavior of a robot at different levels. Furthermore, there can be different expectations with respect to the reliability of their behavior. This information would thus elicit different interaction/navigation styles of a robot with respect to the known features of involved humans.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Robot Motion Planning</head><p>To characterize the types and qualities of the interaction skills of a robot as well as performance, and execution requirements. This information would in particular define the interaction parameters that would be used to tailor the interaction to the different contexts and expected behaviors of involved humans.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>(Social) Environment Motion Planning</head><p>To characterize features of the environment in which a robot should act. This information describes objects/obstacles (and their features) that are part of the environment as well as the structure of the environment (e.g., its topology). At this abstraction level, humans can be considered "dynamic obstacles" of the environment. With respect to the definition of a holistic model, it is necessary to characterize geometry-related information (e.g., motion intentions, perspectives, and acceleration) that would affect the implemented physical motions of the robot <ref type="bibr" target="#b5">[6]</ref>.</p><p>Table <ref type="table">1</ref> describes the four perspectives contributing to the holistic model. For each perspective, the table shows the level of abstraction at which the related knowledge affects the behavior of a robot. The next sections discuss in more detail how these perspectives contribute to the contextualization of human-aware navigation behaviors.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Task-level Knowledge</head><p>Task-level knowledge should characterize the motivations and objectives that lead a robot to act in a (social) environment. As shown in Table <ref type="table">1</ref>, the domain and human perspectives contribute to this level of abstraction. These perspectives characterize socially-relevant information about the tasks to be performed (i.e., the domain perspective) and the interaction features of involved humans (i.e., the human perspective).</p><p>From the domain perspective, it is useful also to define a number of variables characterizing the tasks a robot should perform with respect to the expected direct/indirect interactions with humans. According to these variables, it would be possible to dynamically "configure" the motion planner and constrain the resulting navigation behavior of the robot. Table <ref type="table" target="#tab_0">2</ref> describes the variables defined to characterize the tasks requiring a robot to act in a particular social environment. The rationale behind these variables is to estimate to what extent a task is critical with respect to the social dimension of the resulting interaction. Each variable is associated with a score (min 0, max 3) assessing the task from a social perspective. The sum of the scores would estimate the level of the necessary social awareness.</p><p>The higher the cumulative value, the lower the need to take human-related constraints into account when executing a task. For instance, let us consider a task to be executed in a robotic social context, with critical priority, low risk, and strict performance requirements. The motion planner would mainly focus on the technical constraints of the needed motions and execute the task in the most efficient way possible. Let us consider instead a task to be executed in a crowded social context, with low priority, critical risk, and no performance requirements. 
The motion planner would mainly focus on the social constraints of the needed motions and execute the task in the safest way possible. Considering these examples, we define some thresholds to categorize the social relevance of tasks.</p><p>• Technical-critical tasks have a cumulative score within the interval [9, 12] and represent tasks whose execution can mainly focus on the technical constraints. The execution of these tasks would thus have a low impact on humans. The motion planner can thus "relax" the underlying social constraints in order to be as efficient as possible. • Interaction-critical tasks have a cumulative score within the interval [5, 8] and represent tasks whose execution should find a trade-off between technical and social constraints. Namely, the execution of these tasks is expected to affect human behaviors, and the motion planner should take human behaviors into account when executing the needed motions.</p><p>• Social-critical tasks have a cumulative score within the interval [0, 4] and represent tasks whose execution should mainly focus on the social constraints. The execution of these tasks is expected to strongly affect humans. The motion planner should therefore mainly focus on the social constraints in order to be as safe and reliable as possible.</p><p>Furthermore, it is necessary to estimate the (physical) interaction abilities of the humans that are directly (or indirectly) involved in the execution of a robot task. We rely on the International Classification of Functioning, Disability, and Health (ICF<ref type="foot" target="#foot_0">1</ref> ) proposed by the World Health Organization (WHO). The ICF theoretical framework describes the level of functioning of a person from different points of view. 
Ontological models of the ICF have been proposed and integrated into robot architectures to personalize and adapt the assistance <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b12">13]</ref>.</p><p>Table <ref type="table" target="#tab_1">3</ref> shows the areas and variables modeling the interaction skills and qualities of humans. The rationale behind the considered ICF variables is to estimate the physical reliability and the uncertainty of the physical interactions that may occur between a human and a robot. Depending on the cumulative scores of the variables, we define three categories of humans.</p><p>• Fragile. Humans falling into this category have a cumulative score within the interval [25, 44]. This category represents humans with limited interaction skills (e.g., low hearing or seeing functioning) and unstable motions (e.g., unstable walking, equilibrium issues, or low attention). This category should in general entail conservative/prudent robot behaviors since no assumptions can be made on the actual physical state/motions of the human (maximum uncertainty). • Average. Humans falling into this category have a cumulative score within the interval [13, 25]. This category represents average humans with good interaction skills and sufficiently stable motions. This category allows the robot to make some assumptions about the expected behaviors of the interacting humans and thus perform some level of optimization and planning of motions (average uncertainty). • Reliable. Humans falling into this category have a cumulative score within the interval [0, 12]. 
This category represents "efficient" humans able to reliably interact with robots and perform mutual adaptation to robot motions. In this case, the robot may achieve a higher level of optimization since the behavior of the human is predictable to some extent (minimum uncertainty).</p></div>
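The two scoring schemes above can be made concrete in code. The following is an illustrative Python sketch, not part of the paper: the function names and the dictionary-based interface are assumptions, while the thresholds come from the intervals defined in this section (the Average and Fragile intervals share the endpoint 25; the sketch assigns it to Fragile).

```python
def task_category(social_context, priority, risk, performance):
    """Categorize a task from the four domain variables of Table 2
    (each scored in [0, 3]) using the thresholds of Section 2.1."""
    score = social_context + priority + risk + performance
    if score >= 9:                 # [9, 12]: technical constraints dominate
        return "technical-critical"
    if score >= 5:                 # [5, 8]: trade-off technical/social
        return "interaction-critical"
    return "social-critical"       # [0, 4]: social constraints dominate


def human_category(icf_scores):
    """Categorize a human from the eleven ICF variables of Table 3
    (each scored in [0, 4]); icf_scores maps variable name -> score."""
    score = sum(icf_scores.values())
    if score >= 25:                # [25, 44]: maximum uncertainty
        return "fragile"
    if score >= 13:                # [13, 24]: average uncertainty
        return "average"
    return "reliable"              # [0, 12]: minimum uncertainty
```

For instance, a task in a robotic context (3) with critical priority (3), low risk (3), and strict performance requirements (3) scores 12 and is technical-critical, matching the first example discussed above.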
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Motion-level Knowledge</head><p>Table <ref type="table" target="#tab_0">2</ref> and Table <ref type="table" target="#tab_1">3</ref> generally characterize the tasks a robot may perform and the categories of humans that could be involved in their execution. To reliably and safely interact with humans, it is also necessary to characterize the behaviors and interaction qualities of humans from a motion (physical) perspective. Several works in the literature addressed the social navigation problem by taking into account, e.g., emotional states and proxemics, to adapt motions to humans <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15,</ref><ref type="bibr" target="#b15">16]</ref>. The CoHAN framework generates flexible motion trajectories by taking into account the observed intentions and perspectives of humans <ref type="bibr" target="#b5">[6]</ref>. The primary objective of CoHAN is to support a higher level of human awareness by observing and evaluating human perspectives. This section describes the sets of motion parameters that could be used to constrain the generation of robot trajectories. These parameters are the basis of the proposed integrated task and motion planning approach.</p><p>Table <ref type="table" target="#tab_2">4</ref> shows the parameters of the motion planner determining the desired "qualities" of the implemented robot behaviors. These variables set the desired limits of velocity and acceleration of the robot. An interesting parameter is plan (Planning horizon): it determines the look-ahead of planned trajectories and can be set according to the expected uncertainty of involved humans.</p></div>
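One possible way a task planner could translate a human category into the robot motion parameters of Table 4 is sketched below. The concrete value assignments and the `dispatch_navigation` interface are illustrative assumptions, not CoHAN's actual configuration.

```python
# Illustrative mapping (assumption): human category -> Table 4 parameters.
# Fragile humans elicit prudent motions with a short look-ahead; reliable
# humans allow faster, more optimized motions with a longer horizon.
ROBOT_MOTION_PARAMS = {
    "fragile":  {"vel": "min",     "avel": "min",     "acc": "min",
                 "plan": "min",    "band": "loose"},
    "average":  {"vel": "nominal", "avel": "nominal", "acc": "nominal",
                 "plan": "normal", "band": "medium"},
    "reliable": {"vel": "max",     "avel": "max",     "acc": "max",
                 "plan": "max",    "band": "tight"},
}


def dispatch_navigation(human_cat):
    """Return the parameter set a task planner could attach to a
    dispatched navigation action (hypothetical interface)."""
    return ROBOT_MOTION_PARAMS[human_cat]
```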
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusions and Future Work</head><p>This paper proposes a conceptual integration of task and motion planning aimed at contextualizing the navigation behaviors of robots. We leverage the domain-level knowledge of a task planner to dynamically contextualize navigation skills according to the needs/preferences of humans. This proposal relies on CoHAN, a motion planning framework exposing several motion parameters that can be used by a task planner to constrain trajectory generation. Future work concerns the development of the integrated task and motion planning framework and its evaluation in an assistive-inspired simulated environment. We then aim to evaluate the envisaged capabilities in real HRI experiments with human users, and to integrate perception capabilities so that human categories can be inferred dynamically.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Hospital simulated environment to define the social navigation scenario</figDesc><graphic coords="9,154.15,84.19,137.50,116.79" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 2</head><label>2</label><figDesc>Description of the variables defined to characterize domain-level tasks with respect to a social context.</figDesc><table><row><cell>Parameter Value Set</cell><cell cols="2">Value Range Description</cell></row><row><cell cols="2">Social Context {crowded, public, private, robotic} [0, 3]</cell><cell>Describe the environmental context in which a</cell></row><row><cell></cell><cell></cell><cell>task is supposed to be performed. Higher values</cell></row><row><cell></cell><cell></cell><cell>correspond to a lower predominance of humans,</cell></row><row><cell></cell><cell></cell><cell>and consequently higher availability of space to</cell></row><row><cell></cell><cell></cell><cell>plan robot motions.</cell></row><row><cell>Priority {low, average, high, critical}</cell><cell>[0, 3]</cell><cell>Describe the priority of the execution of a task</cell></row><row><cell></cell><cell></cell><cell>with respect to domain/application needs. The</cell></row><row><cell></cell><cell></cell><cell>level of priority reflects the needed efficiency and</cell></row><row><cell></cell><cell></cell><cell>need for optimization of the trajectories of robot</cell></row><row><cell></cell><cell></cell><cell>motions.</cell></row><row><cell>Risk {critical, high, average, low}</cell><cell>[0, 3]</cell><cell>Describe the risk of the execution of a task with</cell></row><row><cell></cell><cell></cell><cell>respect to the safety of humans. 
Tasks with low</cell></row><row><cell></cell><cell></cell><cell>risk for example would allow the execution of</cell></row><row><cell></cell><cell></cell><cell>optimal trajectories that are not necessarily social.</cell></row><row><cell></cell><cell></cell><cell>Vice versa, tasks with high risk would imply the</cell></row><row><cell></cell><cell></cell><cell>execution of social (and non-optimal) trajectories.</cell></row><row><cell>Performance {none, flexible, regular, strict}</cell><cell>[0, 3]</cell><cell>Describe the required level of performance during</cell></row><row><cell></cell><cell></cell><cell>the execution of the motions. Higher values imply</cell></row><row><cell></cell><cell></cell><cell>stricter adherence to performance optimization</cell></row><row><cell></cell><cell></cell><cell>when planning robot motions .</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 3</head><label>3</label><figDesc>Description of the variables defined to characterize task-level human knowledge used to constrain the physical behaviors of a robot.</figDesc><table><row><cell>ICF Area ICF variable</cell><cell cols="2">Value Range Description</cell></row><row><cell>Mental Functioning Attention</cell><cell>[0, 4]</cell><cell>Specific mental functions of focusing on an external</cell></row><row><cell></cell><cell></cell><cell>stimulus or internal experience for the required period of</cell></row><row><cell></cell><cell></cell><cell>time.</cell></row><row><cell>Mental Functioning Memory</cell><cell>[0, 4]</cell><cell>Specific mental functions of registering and storing</cell></row><row><cell></cell><cell></cell><cell>information and retrieving it as needed.</cell></row><row><cell>Mental Functioning Orientation</cell><cell>[0, 4]</cell><cell>General mental functions of knowing and ascertaining</cell></row><row><cell></cell><cell></cell><cell>one's relation to time to place, to self, to others, to objects,</cell></row><row><cell></cell><cell></cell><cell>and to space.</cell></row><row><cell>Mental Functioning Perception</cell><cell>[0, 4]</cell><cell>Specific mental functions of recognizing and interpreting</cell></row><row><cell></cell><cell></cell><cell>sensory stimuli.</cell></row><row><cell>Sensory Hearing</cell><cell>[0, 4]</cell><cell>Sensory functions relating to sensing the presence of</cell></row><row><cell></cell><cell></cell><cell>sounds and discriminating the location, pitch, loudness and</cell></row><row><cell></cell><cell></cell><cell>quality of sounds.</cell></row><row><cell>Sensory Seeing</cell><cell>[0, 4]</cell><cell>Sensory functions relating to sensing the presence of light</cell></row><row><cell></cell><cell></cell><cell>and sensing the form, size, shape, and color of the 
visual</cell></row><row><cell></cell><cell></cell><cell>stimuli.</cell></row><row><cell>Sensory Vision</cell><cell>[0, 4]</cell><cell>Mental functions involved in discriminating shape, size,</cell></row><row><cell></cell><cell></cell><cell>color, and other ocular stimuli.</cell></row><row><cell>Mobility Body Position</cell><cell>[0, 4]</cell><cell>Staying in the same body position as required, such as</cell></row><row><cell></cell><cell></cell><cell>remaining seated or remaining standing for carrying out a</cell></row><row><cell></cell><cell></cell><cell>task, in play, work, or school.</cell></row><row><cell cols="2">Mobility Movement Control [0, 4]</cell><cell>Functions associated with control over and coordination of</cell></row><row><cell></cell><cell></cell><cell>voluntary movements.</cell></row><row><cell>Mobility Muscle Tone</cell><cell>[0, 4]</cell><cell>Functions related to the tension present in the resting</cell></row><row><cell></cell><cell></cell><cell>muscles and the resistance offered when trying to move the</cell></row><row><cell></cell><cell></cell><cell>muscles passively.</cell></row><row><cell>Mobility Walking</cell><cell>[0, 4]</cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 4</head><label>4</label><figDesc>Description of the variables characterizing the qualities of robot motions.</figDesc><table><row><cell cols="3">Parameter Label Value Set</cell><cell>Description</cell></row><row><cell>Velocity limits</cell><cell>vel</cell><cell>{min, nominal, max}</cell><cell>Set the velocity limits of the implemented motions of the</cell></row><row><cell></cell><cell></cell><cell></cell><cell>robot.</cell></row><row><cell>Angular velocity limits</cell><cell>avel</cell><cell>{min, nominal, max}</cell><cell>Set the angular velocity of the implemented motions of the</cell></row><row><cell></cell><cell></cell><cell></cell><cell>robot.</cell></row><row><cell>Acceleration limits</cell><cell>acc</cell><cell>{min, nominal, max}</cell><cell>Limit the maximum acceleration of the motions of the robot.</cell></row><row><cell>Planning horizon</cell><cell>plan</cell><cell>{min, normal, max}</cell><cell>Set the "look ahead" of the planned motion trajectories of</cell></row><row><cell></cell><cell></cell><cell></cell><cell>the robot.</cell></row><row><cell>Band tightness</cell><cell>band</cell><cell cols="2">{loose, medium, tight} Set the collaborative level of the implemented behavior of the</cell></row><row><cell></cell><cell></cell><cell></cell><cell>robot.</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://icd.who.int/dev11/l-icf/en</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>Author Alessandro Umbrico from ISTC-CNR was supported by the Short-Term Mobility (STM) 2022 Program of the National Research Council of Italy (CNR).</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Similarly, the parameter band (Band tightness) can be set according to the expected level of collaboration of humans (e.g., conflict resolution when moving in narrow spaces). Table <ref type="table">5</ref> shows the set of motion parameters modeling the physical motions and states of humans. Parameters about velocity are useful to infer the motion intentions of a human. The parameter hrad (Radius) specifies proxemics constraints. The parameter hfield (Field of vision) allows the motion planner to know whether the robot is visible to the human or not.</p><p>In addition to robot and human parameters, CoHAN supports social variables that can further contextualize robot behavior. The variables of Table <ref type="table">6</ref>, like st2c (Time to collision), svis (Visibility), and sband (Hidden humans), are especially interesting for realizing robot behaviors that are acceptable and close to human expectations.</p></div>
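As a concrete illustration of how these human parameters could be represented, the sketch below defines a minimal record for a tracked human. The field names echo the parameters named in the text (hrad, hfield), but the record layout and the two helper checks are assumptions for illustration, not CoHAN's actual data structures.

```python
from dataclasses import dataclass


@dataclass
class HumanMotionState:
    velocity: float   # linear velocity [m/s], used to infer motion intentions
    hrad: float       # proxemics radius around the human [m]
    hfield: float     # half-angle of the field of vision [rad]

    def robot_visible(self, bearing):
        """True if a robot at the given bearing (angle from the human's
        gaze direction, in radians) falls inside the field of vision."""
        return abs(bearing) <= self.hfield

    def violates_proxemics(self, distance):
        """True if the robot is closer than the human's proxemics radius."""
        return distance < self.hrad
```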
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Mapping Knowledge to Contextualize Navigation</head><p>We now propose patterns mapping the defined categories of humans and social tasks to the motion variables of CoHAN. This mapping allows a task planner to enrich dispatched motion tasks with information about human and task categories.</p><p>Table <ref type="table">7</ref> shows how the defined human and task categories could be mapped to the motion variables characterizing the behaviors of the human and the robot. Non-reliable/fragile humans, for example, entail a motion model of the human characterized by a higher level of uncertainty about intentions and beliefs, limiting the assumptions of the robot when implementing its motions. Similarly, social-critical tasks are mapped to motion variables entailing a more conservative robot behavior. Vice versa, efficient and highly reliable humans entail a motion model of the human characterized by less uncertainty, allowing the robot to "optimize" its motions to some extent. Technical-critical tasks would, for example, push trajectory optimization in order to execute tasks as efficiently as possible. To set the social motion variables of CoHAN we consider the synergetic combination of the human and task categories. In this case, indeed, it is crucial to jointly reason about the task a robot is supposed to perform and the involved humans in order to find a suitable trade-off between safety, reliability, and efficiency. Table <ref type="table">8</ref> shows the defined social variable patterns.</p></div>
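The joint reasoning over task and human categories described above can be sketched as a lookup in which the safer requirement dominates. The three-level "social caution" output and the dominance rule are illustrative assumptions, not the actual patterns of Table 8 (which is not reproduced here).

```python
# Illustrative joint mapping (assumption): whichever of the two categories
# demands more social caution determines the overall setting.
TASK_RANK = {"technical-critical": 0, "interaction-critical": 1, "social-critical": 2}
HUMAN_RANK = {"reliable": 0, "average": 1, "fragile": 2}
CAUTION_LEVELS = ["relaxed", "balanced", "strict"]


def social_caution(task_cat, human_cat):
    """Combine task and human categories: the safer requirement dominates."""
    return CAUTION_LEVELS[max(TASK_RANK[task_cat], HUMAN_RANK[human_cat])]
```

For example, a technical-critical task performed near a fragile human still yields strict social caution, reflecting the trade-off between safety, reliability, and efficiency discussed above.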
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Examples from Assistive Scenarios</head><p>We plan to develop an integrated task and motion planning framework and evaluate the desired contextualization capabilities in an assistive-inspired simulated environment. In particular, we consider an in-hospital scenario where a socially interacting robot is deployed to support patients and healthcare personnel. In such a scenario, a robot would perform different types of tasks (e.g., drug delivery, patient monitoring, technical support to healthcare professionals), each with a different priority, and should interact with different categories of humans (e.g., fragile patients and more reliable healthcare professionals). Figure <ref type="figure">1</ref> shows the designed environment. We will consider different scenarios by varying the human/task categories and the resulting social constraint patterns.</p><p>On the one hand, the robot may perform tasks with low priority (e.g., monitoring patients in different rooms and general non-critical assistance) and implement social trajectories when encountering humans. In the case of patients, robot tasks would be executed considering fragile humans, realizing navigation behaviors as cautiously as possible. In the case of healthcare</p></div>			</div>
		</back>
	</text>
</TEI>
