<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Collaborative-AI: Social Robots Accompanying and Approaching people 1</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Alberto</forename><surname>Sanfeliu</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Ely</forename><surname>Repiso</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Anaís</forename><surname>Garrell</surname></persName>
						</author>
						<title level="a" type="main">Collaborative-AI: Social Robots Accompanying and Approaching people 1</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">F73887B342BC5C11B1A6096B4D0547F7</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T19:28+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Collaborative AI to approach or to accompany people using social robots will be a fundamental robotics field in the near future. If we desire to share and to collaborate with social robots during the development of our daily task, social robots should be able to develop Collaborative AI task, such us accompanying or approaching people. In this article, we will present the robot-people accompaniment and approaching missions through the four levels of abstractions of Collaborative AI systems and describe the main Collaborative AI functionalities that are needed for these missions. We will also show the system that we have developed for accompany one or two pedestrians and approaching on person by a robot in an urban space.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Researchers stated robots will share humans' environments, will perform tasks together with humans, and will assist and help humans in their daily tasks. Robots should behave in a social way and have to be accepted by people, and for these reasons, robots must understand the spatio-temporal situation, must understand humans' behaviors and their intentions, and must take into account the goal that both pursue in a collaborative task. There are many tasks, where robot and people share a collaborative task, but in this article we will focus in the well-known "companion robot", which is defined as a robot moving in a human crowd environment while accompanying one or more pedestrians (e.g. for assisting them <ref type="bibr" target="#b12">[13]</ref>, guiding them <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b13">14,</ref><ref type="bibr" target="#b16">17]</ref>, or following them).</p><p>We will focus in the collaborative task of accompany one or two persons and approaching a person by a robot in a dynamic environment which have static obstacles (for example, walls, buildings, urban furniture, etc.) and dynamic obstacles, for example, moving pedestrians, bicycles, etc., see Fig. <ref type="figure" target="#fig_0">1</ref>. When humans walk in these environments, they follow specific formations, for example two people groups typically walk in a side-by-side formation; three people groups usually walk in a V-formation <ref type="bibr" target="#b30">[31]</ref>; etc.; and in any of these situations, they do the accompany in a social manner. The "companion robot" will have to behave in a similar way as the humans accompany other people, and also they have to navigate human-aware, and adapt to people in the different type of formations. Fig. 
<ref type="figure" target="#fig_0">1</ref> shows examples of accompany people by a robot.</p><p>Collaborative task while robots walk in groups is not at all a trivial problem, as it involves different collaborative levels of abstraction -for example reactive sensori-motor, spatio-temporal situational, Left: Our robot, named Tibi, accompanies one volunteer using the adaptive side-by-side <ref type="bibr" target="#b25">[26]</ref>. Center: Tibi accompanies the volunteers at the lateral of the side-by-side <ref type="bibr" target="#b26">[27]</ref>. Right: Tibi accompanies the volunteers at the middle of the Side-by-side formation <ref type="bibr" target="#b26">[27]</ref>.</p><p>operational (task oriented), cognitive (knowledge oriented) collaborational levels-; diverse functionalities working on-line (multimodal perception, multimodal actions, decision making, etc.); and complex computations at real time. In a typical accompany task the robot has to infer the final destination and the best path to go through; to take into account the orientation of the movement of the group; to adapt their desired velocity to the changes of people's velocity (accelerating, decelerating and even stopping when necessary); to maintain the formation and to be able to change their position in the group if people change their positions; to always detect their companions or at least include a behavior to deal with people's occlusions by other members of the group; and, finally, to anticipate the behavior of all pedestrians to avoid collisions in advance. In this work, we explain the human-robot accompaniment task through Collaborative AI issues, however we are not explaining the details of the methods, neither the experiments done due to the lack of space. 
These methods and experiments can be found in <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b25">26,</ref><ref type="bibr" target="#b27">28]</ref>.</p><p>In the remainder of the paper, we will explain briefly the following issues: in Section 3, the four levels of abstraction of Collaborative AI applied to the accompany task; in Section 4, the Social Force Model and other techniques used for the human-robot accompany and approaching of people; in Section 5, the functionalities involved in these missions; and in Section 6, the conclusions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related work</head><p>Researchers have developed techniques for robot guiding and following people. A context-aware following behaviour was developed by <ref type="bibr" target="#b29">[30]</ref>. Hybrid approaches combined following, guiding and accompany behaviours have been developed by <ref type="bibr" target="#b23">[24]</ref> and <ref type="bibr" target="#b24">[25]</ref>. A new technique of following behaviour that could be perceived by a non-expert as merely following someone, or as a guiding companion, has been developed by <ref type="bibr" target="#b15">[16]</ref>.</p><p>Recently, researchers have developed more complex strategies in their work on social robots <ref type="bibr" target="#b21">[22]</ref>. Morales et al. <ref type="bibr" target="#b19">[20]</ref> proposed a model of people walking side-by-side which could predict the partner's future position, and subsequently generate a plan for the robot's next position. Furthermore, <ref type="bibr" target="#b20">[21]</ref> and more recently <ref type="bibr" target="#b17">[18]</ref> did a side-byside method inferring the final goal of the person and also recorded a database of people walking in a fixed side-by-side formation that is different from our database, included in <ref type="bibr" target="#b25">[26]</ref>, which includes also situations of an adaptive side-by-side companion behaviour.</p><p>While previous studies only discussed the challenge of navigating around the person in a fixed side-by-side formation, our algorithm allows a more dynamic positioning around the human partner. This is, the method allows the robot to position itself at front, at lateral and at back of the person who accompanies depending on the situation. 
Then, if no obstacle interferes with the group's path, the robot accompanies the person in a side-by-side, but if any obstacle interferes with the group's path, the robot changes its position around the person to avoid it.</p><p>Another innovation that sets our approach apart from others is that our method is able to render a real-time prediction of the dynamic movements of the partner, as well as that of other people, in a horizon time. This kind of prediction performed within a determined time window allows the robot to anticipate people's navigation and react accordingly.</p><p>The Human-Robot approach is an important collaborative task that takes place between humans and robots in order to generate interaction; central to this task is the ability to recognise and predict the intentions of the other party's movements. In the past, researchers have used different control strategies for approaching a moving target by either pursuing it, or trying to intercept it <ref type="bibr" target="#b1">[2]</ref>.</p><p>Fajen et al. <ref type="bibr" target="#b2">[3]</ref> presented different control strategies for approaching a moving target. Narayanan et al. <ref type="bibr" target="#b22">[23]</ref> used a task-based control law to enable the robot to meet two standing persons and interact with them, by carefully considering their respective positions and orientations, and use that knowledge to calculate an optimal meeting point. Other researchers <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b18">19]</ref> studied human social behaviours in order to yield better results. <ref type="bibr" target="#b0">[1]</ref> recorded the different trajectories that people took when approaching other persons. 
In <ref type="bibr" target="#b10">[11]</ref>, the authors used proxemics rules to define the approaching distance to the target person for teaching robots proactive behaviour.</p><p>In contrast to the previous approaches, our work employs a prediction module based on the social force model, which includes humanlike behaviours for navigating within dynamic environments, and for mapping out the best path for the robot to take towards a moving destination. We are also able to compute the best meeting point between parties by considering the status of the group, the state of the target person, and the target person's future position.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Multiple levels of collaboration for the human-robot accompany mission</head><p>We can describe the collaboration through four levels <ref type="bibr" target="#b28">[29]</ref>:</p><p>• Reactive sensori-motor collaboration: This level involves all the perception sensors and actuators required in a collaborative task to react in the environment. • Spatio-temporal situational collaboration: This level includes the spatio-temporal situation assessment in a collaborative task. • Operational (task-oriented) collaboration: This level includes the collaboration from the point of view of the task to be developed.</p><p>• Cognitive (knowledge-oriented) collaboration: This level is oriented to the all collaborative cognitive issues required to the reach the goal.</p><p>Anyway, these levels are interconnected and they share information among them.</p><p>In order to detail these levels for the accompany and approaching missions, we will first explain the mission of accompany people side-by-side by a robot. The robot has to accompany one or two persons in an urban environment, where there are buildings, walls, urban furniture, etc. that are static obstacles; and people or bicycles moving, that are dynamic objects. The goal is to reach from an origin a destination, without colliding with the static or dynamic obstacles, following some criteria (for example, the minimum time) and taking into account that the robot has to behave human-aware (e.g. the robot navigation and planning has to be socially acceptable and minimizing the people paths disturbances). In the present system (accompaniment mission), we assume that the robot always follows the people, that means that the robot does not know where the people wants to go, and also we assume that the robots knows the actual urban map. It is clear that the behaviour of the system will be different if the person has to follow the robot (guiding mission). 
For the approaching mission we also assume that the robot knows the actual map.</p><p>Let us going to explain these levels for the case of robot-people accompany and approaching missions.</p><p>Reactive sensori-motor collaboration: The robot uses the perception systems to detect the person or people that are accompanied (or the person that has to approach) and the static and dynamic obstacles, and also uses this system to localize the robot and the accompanied persons in the urban map. The perception system includes several range-laser (Lidar) and one stereo-vision camera. The Lidar is also used to detect the velocity and acceleration of any moving object in the scene. Moreover, the system uses microphones to listen the voice of the persons. The system uses as actuators, the motors of the mobile platform to navigate, and the speaker to have a dialogue with the people. Although in the present work we do not use the people gaze people tracking, this information is important to know where the are looking to infer where they want to go. Moreover, the system is not detecting, neither identifying sidewalk signals, restaurants, bars, shop, etc.</p><p>Spatio-temporal situational collaboration: The robot monitors its poses (position and orientation) and velocity, the poses of all the static obstacles and the poses and velocity of all the moving persons and objects. Using this information the robot always knows the sideby-side position and orientation of the accompanied people, which is used to know how well the robot accompanies the people. Moreover, using the paths followed by the nearby pedestrians and other moving objects, the system is able to predict where they will be after some time and if it is going to be a collision. Then the robot creates several plans, select the best one and sends commands to the Sensori-motor collaboration to adapt the robot poses to the people, to maintain the best accompaniment formation and to avoid collisions. 
Because our path plans are human-aware, our system always adapts to path people modifications and in this way maintain an implicit agreement with pedestrians to not bother them.</p><p>Operational (task-oriented) collaboration: The robot helps in route planning and navigation, providing the best route to the final destination, and the alternatives when the route is closed or there is a too narrow route or the route is too busy. In our system, although this route is always computed, the local path always depends on the accompanied person decision, since the robot follows the person path.</p><p>The robot can also help in providing information of the upcoming restaurants, shops and other services in this level, but again, this functionality has not been implemented in the present system.</p><p>Cognitive (knowledge-oriented) collaboration: The robot can share with the person the goal destination and the alternatives routes to reach the destination in the shortest time. However, in our system this has not been implemented due what we mentioned before, the robot always follows the people path. Moreover, in our system, the robot generates a dialogue with the two persons (in two persons sideby-side accompaniment), in order to maintain them together side-byside while they are navigating.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Collaborative-AI Accompaniment models of people by social robots</head><p>In any robot accompaniment mission, the first thing that the robot has to do is inferring the final destination of the accompanied people in order to do it efficiently. Also, if the robot has to approach to a person, it needs to know the final person's destination to meet him/her at some point. Finally, for the rest of people of the environment, the robot needs to know to which destination they are going to, in order to make the navigation human-aware and avoid collisions. To know all people's destinations, we use the Bayesian Human Motion Intentionally Predictor (BHMIP) method <ref type="bibr" target="#b4">[5]</ref>. The BH-MIP uses a set of predefined known destinations of the environment, D = {D1, D2, ..., Dn, ..., Dm}, and a geometric-based long term prediction method that uses a Bayesian classifier to selects the best destination of the person. These predefined destinations are locations where people usually go, like entrances, exits or work places of the environment.</p><p>Once the robot knows all the final accompaniment destination and the rest of the pedestrian and other moving objects destinations, the robot computes the best path to reach the destination and avoid collision with the pedestrians and the static objects. Our navigation system is based on the the Social Force Model <ref type="bibr" target="#b14">[15]</ref>, and we have extended this model (ESFM -Extended Social Force Model) to include repulsion of static objects and of the robot itself <ref type="bibr" target="#b3">[4]</ref>. Moreover, we have developed a dynamic path planner, that computes the best path to be followed by the robot, that computes all the paths to go to the final destination, taken into account the pedestrian paths. 
This model is denominated Anticipative Kinno-dynamic Planner (AKP) <ref type="bibr" target="#b5">[6]</ref>.</p><p>Once the robot knows all people's behaviours, the robot has to plan its collaborative behavior with respect to the people it accompanies or with respect to the people it will approach. To plan the accompaniment of the robot with respect one accompanied person, we use the Adaptive Side-by-side Accompaniment of One Person (ASSAOP) <ref type="bibr" target="#b25">[26]</ref>. Also, this method was combined with an anticipate robot approaching behaviour that infers in advance the best encounter point and do an engagement with an accompanied person and one approached person, by using a triangle formation. In addition, to plan the accompaniment of the robot with respect a group of two accompanied people, we use the Adaptive Side-by-Side Accompaniment of Groups of People (ASSAGP) <ref type="bibr" target="#b26">[27]</ref>, which allows the robot to accompany the group in the central an lateral position of the group. Further, to do a robot's approaching to a person, we use the G 2 -Spline and ESFM to approach people <ref type="bibr" target="#b8">[9]</ref>. All these methods have in common that uses the ESFM to plan the tree of paths for the robot to be able to fulfill all the tasks. The next equation includes all the attractive and repulsive forces necessary to carry out all these collaborative navigation's with humans:</p><formula xml:id="formula_0">Fr = α f goal r,d (D d n ) + i∈Pc i f goal r,pc i (Dp c i goal ) + γ (F ped r + i∈Pc i F ped pc i ) + δ (F obs r + i∈Pc i F obs pc i ), where f goal r,d (D d n )</formula><p>is the attractive force until the final destination. In the accompaniment case this final destination is inferred using the direction of movement of the accompanied people. 
Also, this final destination can be a physical static destination inside the environment,D d n (a door, street, passageway, etc), or other person in the environment in the case of the approaching, D dg n .</p><p>i∈Pc i f goal r,pc i (Dp c i goal ) are the attractive forces to maintain the side-by-side formation with each i companion of the robot. F ped r and F obs r are the repulsive forces respect to other people and obstacles. i∈Pc i F ped pc i and i∈Pc i F obs pc i are the repulsive forces that the accompanied people feel from all the other people and obstacles applied to the robot, to be able to do a more effective accompaniment. For better explanation of what forces are used for each method, the reader is directed to the cited papers of accompaniment and approaching in the current section.</p><p>Once the robot has computed all the paths to accompany the group or approach to one person, the robot has to select the best one. The evaluation of these paths is done using a multi-cost function that considers several sub-cost related to some characteristics of the paths, Eq. 1. These sub-costs evaluate: the distance between the robot and the final dynamic destination of the group (J d ); the orientation of the robot respect to the orientation to arrive to the final destination (Jor); the attractive force to control the robot (Jr); and the repulsive interaction forces respect to people (Jp) and obstacles (Jo), and the accompaniment cost (Jc), respectively. The first five costs were introduced in <ref type="bibr" target="#b6">[7]</ref> and the companion cost was introduced in <ref type="bibr" target="#b25">[26]</ref>.</p><formula xml:id="formula_1">J(S, s goal , U ) = [J d , Jor, Jr, Jp, Jo, Jc]<label>(1)</label></formula><p>Finally, the computation of the cost needs three steps. First, the robot computes each individual cost in each step of the path. 
Second, to avoid the scaling effect of the weighted sum method, each cost function is normalized between (−1, 1) using the mean and variance of an erf function, that are calculated after the computation of all the paths. Third, a projection via weighted sum J : R n → R is obtained giving the weighted cost formula <ref type="bibr" target="#b5">[6]</ref>. Where n is the number of costs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Functionalities in Collaborative AI Systems to accompany people</head><p>In this section, we include the main functionalities that center the research efforts in the Collaborative AI systems to accompany and/or approach people. The functionalities of the Collaborative AI systems to accompany and/or approach people are listed in Fig. <ref type="figure" target="#fig_1">2</ref>, as well as the relations among them.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Multimodal Perception</head><p>To interact in dynamic urban environments, robots must detect all pedestrians and objects of the environment. In our case we use three types of perception systems: a 360 • range-laser range sensor (Lidar); a video camera system; and a sterovision camera. The video camera system <ref type="bibr" target="#b11">[12]</ref> is used for identifying specific people that we want to search for or track. The stereovision is used for tracking people in the accompaniment and approaching missions. The 360 • range-laser range sensor allows to compute person position with high accuracy, high frequency and in large areas. These are important characteristics to do interactions in a real time. However, the Lidar does not allow to identify a specific person, and for this purpose, it is used the video camera. The Lidar is also used for the adaptation of the robot in the accompaniment and approaching missions. It allows to keep the person in all the accompaniment process and to detect the person in the approaching process. Moreover, the Lidar is also used to detect the pedestrians position and orientation, and predict their paths.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Communication</head><p>Communication between the robot and the human is a main functionality to allow an efficient accompaniment. Communication is needed to reach common understanding about the environment that surrounds the group (1-robot and 1 person or 2-people), to agree on shared final destination, to share the perception of the accompanied people or other people in the environment, to agree on common plans to arrive until the destination and synchronize the execution of these plans or more concretely paths until the destination. The humans and the robot must communicate and coordinate among themselves to fulfill a effective and efficient accompaniment or to avoid collide among each other. During the accompaniment task this communication can be verbal and non-verbal or low level (implicit or explicit) <ref type="bibr" target="#b7">[8]</ref>.</p><p>Regarding the explicit verbal communication, the robot uses it for interacting with the accompanied person, the approached person or with other people using voice (robot speakers). In our accompaniment, this communication was done by speech dialogue between the robot and the human. For example, the robot communicates if it loses the target of the accompanied person. In the case of accompaniment of two persons, the robots makes an interactive dialogue with the persons, using a child game to create engagement between the persons and the robot (in our case we use the child game of discovering the name of an environment object), while walking towards a destination in the environment.</p><p>Regarding the implicit non-verbal communication of the accompaniment task, the communication is done through the range-laser, which gives information of the person with respect the robot. In any of the accompaniment missions, the robot knows in real time the position and orientation of the accompanied persons, and also the position, orientation and velocity of the pedestrians. 
The implicit communication is only in one direction, from person to robot, and when the robot needs to inform the person, from robot to person, it uses the explicit verbal communication.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3">Intentionality</head><p>To have an effective Collaborative-AI interaction between the robot and the people, it is mandatory that the robot infers the people intentionality. Then, in the case of accompaniment or approaching tasks, the robot needs to predict the walking behaviour of all the people in the environment. In our case, we use the the Bayesian Human Motion Intentionality Predictor (BHMIP) <ref type="bibr" target="#b4">[5]</ref> to predict all the people walking behavior.</p><p>Specifically, for the accompaniment task, the robot needs to predict the accompanied people behaviour, to anticipate their movements and improve the accompany task. In our case, we use it for maintaining a specific formation and inferring the final destination and the best path to arrive to the people destination. The intentionality is computed using the previous person path and the position of the goal.</p><p>For the approaching task, the robot has to predict where will be the position of the person that has to be approached. If the person stops in a specific location, then the prediction is simplified to a known destination. If the person is moving, then the robot using the BHMIP algorithm, computes where the person will be, and then modify its path to reach him or her.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.4">Adaptation</head><p>In the accompaniment task there are mutual adaptation between the human and the robot. The robot is continuously adapting its path in order to fulfill a side-by-side accompaniment, and usually the person is also doing something similar. However, there are cases where for example the person stops without telling anything, in this case the robot modifies its trajectory to stop or to approach the person. When there are obstacles that have to be avoided, then the robot modifies its formation to allow the person to go ahead or behind the robot. If an obstacle implies that the side-by-side formation is broken, then the robot recovers the side-by-side formation after overcoming the obstacle.</p><p>In our experiments with inexpert people, we start explaining to the users the minimum information necessary to interact with the robot. This information includes: the destination where they will go together; the required time that needs the robot to start moving; and that the persons have to walk slowly in order that the robot can maintain the side-by-side formation. In addition, we explain that the robot has a safe distance, so they can not walk very close to it. Finally, for the accompaniment of two people, we also explain the child game that we be used.</p><p>To fulfill this accompaniment adaptation, the robot needs to detect, track and predict the behaviour of the accompanied people and also of other people or obstacles of the environment, to facilitate the group's navigation in the dynamic environment.</p><p>In the case of the approaching mission, there is a mutual adaptation between the person and the robot. If both are moving, there is an adaptation between the speeds of both to approach and to stop in front of each other.</p><p>Furthermore, the accompaniment group must adapt to the dynamic environment. 
This means that by detecting and predicting the people and obstacles in the environment, the robot must avoid them in an anticipatory way, while accompanying a person or a group of people. In our case, the robot facilitates the navigation behaviour of the group that accompanies, while at the same time, facilitates the navigation behaviour of other people in the environment <ref type="bibr" target="#b25">[26]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.5">Interaction</head><p>Interaction among robot and humans plays an important role in Collaborative-AI, where in our case, these interactions will be Master-Slave for the robot and Peer-to-peer for inexpert people that interact with the robot.</p><p>In our accompaniment case, we have two types of interactions. First, the robot interacts using the position of the person or persons being accompanied. The robot interacts with the accompanied people by approaching or moving away, depending in the type of formation, for example side-by-side or V formation. In the case of two persons being accompanied, the robot will interact in a different way if the robot is in between both persons, or if the robot is the lateral position. Moreover, in case that the robot has to break the formation, due for example to an obstacle, the robot will interact again with the persons to recover the previous side-by-side formation.</p><p>Second, the robot and humans can use direct communication among them. The direct communication is done through the robot speaker, for example by telling to the people that the robot can not move because there are too many people blocking its path or for maintaining the group formation using a child game. This game establishes a dialogue of questions-answers, where the accompanied people have to be near the robot and to follow side-by-side formation to maintain the dialogue.</p><p>In the case of Collaborative approaching, we use only the interactions regarding the position among the humans of the environment. First, the robot and the approached human can interact using position in two different situations, where only the robot approaches the human or where both approach each other. Second, the robot interact with other people of the environment, by avoiding them.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.6">Agreement</head><p>There are always agreements between robots and humans who collaborate in accompaniment or approaching tasks. These agreements concern the shared goals, the shared plans of action and the action execution. The participants can negotiate these shared behaviours verbally or, in some cases, implicitly, for example through the distance between them. The negotiation exists, and both parties have to agree on what the next action should be. In most cases, the robot has to anticipate what the human will do in order to facilitate the accompaniment or the approach.</p><p>In the accompaniment tasks there are several agreements between the robot and the human. First, the group must agree on the final destination. In our case, the person decides which of all the possible environment destinations he/she prefers, and the robot infers this destination from the person's navigation behavior. In the case of two people, the robot infers the most likely destination for the group taking into account the behavior of both people; if they separate, it takes into account the behavior of the closest person. Furthermore, the final destination can be static (an environment destination: a door, stairs, a passageway, etc.) or dynamic, for example the position of another person in the environment; in the latter case the group also needs to agree on which person they want to reach. Second, they must agree on the path to follow to reach the final destination. In our case, the robot takes the behavior of humans into account by evaluating different costs over the possible computed paths and selects the best one. Third, in our case they must agree on the adaptive formation when they pass people or obstacles. 
To avoid other people in the environment, the robot changes its position around the person so that the group can easily avoid other people or static obstacles. As the robot is usually slower than the person, for safety reasons it was decided that the robot always goes behind. And since the robot changes its position in the group in advance to avoid static obstacles and other people, the people in the group can adapt and understand that the robot prefers to go behind them to overcome obstacles. Fourth, for the accompaniment of groups, the members must decide which central or lateral position each will occupy within the formation, and these positions can change dynamically for reasons of comfort and/or the environment.</p><p>For a robot that approaches a person without accompanying anyone, the robot and the approached person may have to agree on: whether both will approach at the same time; whether it is only the robot that approaches; in which way the robot has to approach the person; and whether the person really wants the robot to approach him/her.</p></div>
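The destination inference described above, where the robot picks the most likely goal from the person's navigation behavior, can be sketched as a simple Bayesian update that re-weights each candidate destination by how well the person's heading points at it. This is a minimal illustration under assumed names and parameters (the destination set, the concentration parameter `kappa`, and the von-Mises-style likelihood are illustrative, not the authors' exact model):

```python
import math

def update_destination_belief(belief, person_pos, person_vel,
                              destinations, kappa=2.0):
    """One Bayesian update of the belief over candidate destinations.

    belief: prior probability per destination; person_pos/person_vel:
    observed 2D position and velocity; destinations: list of (x, y).
    kappa controls how sharply the heading discriminates (assumed value).
    """
    heading = math.atan2(person_vel[1], person_vel[0])
    new_belief = []
    for prior, (dx, dy) in zip(belief, destinations):
        bearing = math.atan2(dy - person_pos[1], dx - person_pos[0])
        # Likelihood is high when the bearing toward the destination
        # matches the person's observed walking direction.
        likelihood = math.exp(kappa * math.cos(heading - bearing))
        new_belief.append(prior * likelihood)
    total = sum(new_belief)
    return [b / total for b in new_belief]
```

Run at every perception cycle, this concentrates probability mass on the destination the person keeps walking toward; for a group, the same update could be applied to the aggregated motion of its members.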
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.7">Decision Making, Reasoning &amp; Planning</head><p>When a human and a robot collaborate on a specific task, they need to share some decision making, reasoning and planning through direct or indirect communication using nonverbal cues. In the accompaniment and approaching cases, we address these issues using social human-aware navigation <ref type="bibr">[6] [7]</ref>. This navigation builds on the extended social force model (ESFM), which is based on the relative positions between the robot and the people. The ESFM includes several interactions between the robot, the accompanied people, the approached person and the other people in the environment. Using these interactions and the intentionality prediction of all people, our robot can plan a behaviour that allows it to accompany or approach people in a socially accepted way.</p></div>
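At its core, a social-force formulation combines an attractive force that relaxes the agent's velocity toward its goal with repulsive forces from nearby people and obstacles (Helbing and Molnar [15]). The sketch below shows only that basic force computation; the parameter values (`A`, `B`, `tau`, `v_desired`) are illustrative assumptions, and the full ESFM used in the paper adds further interaction terms for the accompanied and approached people:

```python
import numpy as np

def social_force(pos, vel, goal, others,
                 v_desired=1.0, tau=0.5, A=2.0, B=0.3):
    """Illustrative social-force step for one agent.

    pos, vel, goal: 2D arrays for the agent's position, velocity and
    current goal; others: positions of nearby people/obstacles.
    A, B: repulsion strength and range (assumed values).
    """
    # Attractive term: relax the current velocity toward the desired
    # speed pointing at the goal, with relaxation time tau.
    direction = goal - pos
    direction = direction / np.linalg.norm(direction)
    f_goal = (v_desired * direction - vel) / tau

    # Repulsive terms: exponential decay with distance, pushing the
    # agent away from each nearby person or obstacle.
    f_rep = np.zeros(2)
    for other in others:
        diff = pos - other
        dist = np.linalg.norm(diff)
        f_rep += A * np.exp(-dist / B) * diff / dist

    return f_goal + f_rep
```

Integrating this force over time yields a velocity command; replacing the static goal with a moving one (the accompanied person's side, or the approached person's predicted position) is what turns it into an accompaniment or approaching behaviour.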
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Experiments</head><p>We have carried out a number of real-life experiments on accompanying one person, accompanying two persons and approaching a person. In all the experiments, we used different groups of people who had no prior experience with robots. We set the parameters of the models through experiments with people only, without robots, and further experiments with people and a tele-operated robot. With these parameters we completed our models and then tested them with people and autonomous robots. Fig. <ref type="figure" target="#fig_0">1</ref> shows examples of the robot accompanying two people. Fig. <ref type="figure" target="#fig_2">3</ref> shows the robot approaching a person. Due to lack of space we have not included the experiments in this paper, but they can be found in <ref type="bibr" target="#b25">[26]</ref> [28] <ref type="bibr" target="#b8">[9]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">Conclusions</head><p>We have described in this article the basic Collaborative-AI levels required for robots to accompany and approach people. We have explained each of the Collaborative-AI functionalities needed for these two missions and shown some illustrative images of the experiments. Finally, we have shown that robot accompaniment involves complex Collaborative-AI issues. </p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 .</head><label>1</label><figDesc>Figure 1. Real-life experiments in the Barcelona Robot Lab. Left: Our robot, named Tibi, accompanies one volunteer using the adaptive side-by-side model<ref type="bibr" target="#b25">[26]</ref>. Center: Tibi accompanies the volunteers at the lateral of the side-by-side formation<ref type="bibr" target="#b26">[27]</ref>. Right: Tibi accompanies the volunteers in the middle of the side-by-side formation<ref type="bibr" target="#b26">[27]</ref>.</figDesc><graphic coords="1,313.36,179.76,228.60,67.88" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 .</head><label>2</label><figDesc>Figure 2. Graph of Accompaniment and Approach Issues and relations among them.</figDesc><graphic coords="4,38.44,71.41,252.21,179.65" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 .</head><label>3</label><figDesc>Figure 3. The robot uses the implemented method to approach a static and a moving person, while avoiding several static obstacles of the environment.</figDesc><graphic coords="6,50.37,71.18,228.49,164.76" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0)</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">Institut de Robòtica i Informàtica Industrial (CSIC-UPC). Llorens Artigas 4-6, 08028 Barcelona, Spain. erepiso@iri.upc.edu, agarrell@iri.upc.edu, sanfeliu@iri.upc.edu</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGEMENTS</head><p>Work supported by the Spanish Ministry of Science project ROCO-TRANSP (PID2019-106702RB-C21-RAEI/FEDER EU) by Ministerio de Ciencia e Innovación, the EU AI4EU project (H2020-ICT-2018-2-825619), and by the Spanish State Research Agency through the María de Maeztu Seal of Excellence to IRI (MDM-2016-0656). Ely Repiso is supported by the FPI grant BES-2014-067713.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Using human approach paths to improve social navigation</title>
		<author>
			<persName><forename type="first">Eleanor</forename><surname>Avrunin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Reid</forename><surname>Simmons</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">8th ACM/IEEE International Conference on Human-Robot Interaction</title>
				<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="73" to="74" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Line of sight robot navigation toward a moving goal</title>
		<author>
			<persName><forename type="first">Fethi</forename><surname>Belkhouche</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Boumediene</forename><surname>Belkhouche</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Parviz</forename><surname>Rastgoufard</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="255" to="267" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Behavioral dynamics of intercepting a moving target</title>
		<author>
			<persName><forename type="first">Brett</forename><forename type="middle">R</forename><surname>Fajen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">William</forename><forename type="middle">H</forename><surname>Warren</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Experimental Brain Research</title>
		<imprint>
			<biblScope unit="volume">180</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="303" to="319" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Robot companion: A social-force based approach with human awareness-navigation in crowded environments</title>
		<author>
			<persName><forename type="first">Gonzalo</forename><surname>Ferrer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anais</forename><surname>Garrell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alberto</forename><surname>Sanfeliu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE/RSJ international conference on Intelligent robots and systems</title>
				<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="1688" to="1694" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Bayesian human motion intentionality prediction in urban environments</title>
		<author>
			<persName><forename type="first">Gonzalo</forename><surname>Ferrer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alberto</forename><surname>Sanfeliu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition Letters</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="page" from="134" to="140" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Proactive kinodynamic planning using the extended social force model and human motion prediction in urban environments</title>
		<author>
			<persName><forename type="first">Gonzalo</forename><surname>Ferrer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alberto</forename><surname>Sanfeliu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE/RSJ international conference on Intelligent robots and systems</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="1730" to="1735" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Anticipative kinodynamic planning: multi-objective robot navigation in urban and dynamic environments</title>
		<author>
			<persName><forename type="first">Gonzalo</forename><surname>Ferrer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alberto</forename><surname>Sanfeliu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Autonomous Robots</title>
		<imprint>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="1473" to="1488" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A survey of socially interactive robots</title>
		<author>
			<persName><forename type="first">Terrence</forename><surname>Fong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Illah</forename><surname>Nourbakhsh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kerstin</forename><surname>Dautenhahn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Robotics and autonomous systems</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="issue">3-4</biblScope>
			<biblScope unit="page" from="143" to="166" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Robot navigation to approach people using G2-spline path planning and extended social force model</title>
		<author>
			<persName><forename type="first">Marta</forename><surname>Galvan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ely</forename><surname>Repiso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alberto</forename><surname>Sanfeliu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Iberian Robotics conference</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="15" to="27" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Cooperative social robots to accompany groups of people</title>
		<author>
			<persName><forename type="first">Anais</forename><surname>Garrell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alberto</forename><surname>Sanfeliu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The International Journal of Robotics Research</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="issue">13</biblScope>
			<biblScope unit="page" from="1675" to="1701" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Teaching robot&apos;s proactive behavior using human assistance</title>
		<author>
			<persName><forename type="first">Anais</forename><surname>Garrell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><surname>Villamizar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francesc</forename><surname>Moreno-Noguer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alberto</forename><surname>Sanfeliu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Social Robotics</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="231" to="249" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Searching and tracking people with cooperative mobile robots</title>
		<author>
			<persName><forename type="first">A</forename><surname>Goldhoorn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Garrell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Alquezar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sanfeliu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Autonomous Robots</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="739" to="759" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Flexible path planning for nonholonomic mobile robots</title>
		<author>
			<persName><forename type="first">Birgit</forename><surname>Graf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christoph</forename><surname>Hostalet Wandosell</surname></persName>
		</author>
		<author>
			<persName><surname>Schaeffer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. 4th European workshop on advanced Mobile Robots (EUROBOT&apos;01)</title>
		<title level="s">Fraunhofer Inst. Manufact. Eng</title>
		<editor>
			<persName><surname>Automat</surname></persName>
		</editor>
		<meeting>4th European workshop on advanced Mobile Robots (EUROBOT&apos;01)<address><addrLine>Lund, Sweden</addrLine></address></meeting>
		<imprint>
			<publisher>IPS</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="199" to="206" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Toomas: interactive shopping guide robots in everyday use-final implementation and experiences from long-term field trials</title>
		<author>
			<persName><forename type="first">H-M</forename><surname>Gross</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Boehme</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ch</forename><surname>Schroeter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Steffen</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alexander</forename><surname>König</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Erik</forename><surname>Einhorn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ch</forename><surname>Martin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Matthias</forename><surname>Merten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andreas</forename><surname>Bley</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE/RSJ International Conference on Intelligent Robots and Systems</title>
				<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="2005" to="2012" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Social force model for pedestrian dynamics</title>
		<author>
			<persName><forename type="first">Dirk</forename><surname>Helbing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>Molnar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Physical review E</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page">4282</biblScope>
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Design of sensing system and anticipative behavior for human following of mobile robots</title>
		<author>
			<persName><forename type="first">Jwu-Sheng</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jyun-Ji</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Daniel</forename><forename type="middle">Minare</forename><surname>Ho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Industrial Electronics</title>
		<imprint>
			<biblScope unit="volume">61</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="1916" to="1927" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">An affective guide robot in a shopping mall</title>
		<author>
			<persName><forename type="first">Takayuki</forename><surname>Kanda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Masahiro</forename><surname>Shiomi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zenta</forename><surname>Miyashita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hiroshi</forename><surname>Ishiguro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Norihiro</forename><surname>Hagita</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 4th ACM/IEEE international conference on Human robot interaction</title>
				<meeting>the 4th ACM/IEEE international conference on Human robot interaction</meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="173" to="180" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Model of side-by-side walking without the robot knowing the goal</title>
		<author>
			<persName><forename type="first">Deneth</forename><surname>Karunarathne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yoichi</forename><surname>Morales</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Takayuki</forename><surname>Kanda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hiroshi</forename><surname>Ishiguro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Social Robotics</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="401" to="420" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">&apos;May i help you?: Design of human-like polite approaching behavior</title>
		<author>
			<persName><forename type="first">Yusuke</forename><surname>Kato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Takayuki</forename><surname>Kanda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hiroshi</forename><surname>Ishiguro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction</title>
				<meeting>the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="35" to="42" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Walking together: side by side walking model for an interacting robot</title>
		<author>
			<persName><forename type="first">Yoichi</forename><surname>Morales</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Takayuki</forename><surname>Kanda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Norihiro</forename><surname>Hagita</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Human-Robot Interaction</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="51" to="73" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Destination unknown: walking side-by-side without knowing the goal</title>
		<author>
			<persName><forename type="first">Ryo</forename><surname>Murakami</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Luis</forename><forename type="middle">Yoichi</forename><surname>Morales Saiki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Satoru</forename><surname>Satake</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Takayuki</forename><surname>Kanda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hiroshi</forename><surname>Ishiguro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ACM/IEEE international conference on Human-robot interaction</title>
				<meeting>the ACM/IEEE international conference on Human-robot interaction</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="471" to="478" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Movement control of accompanying robot based on artificial potential field adapted to dynamic environments</title>
		<author>
			<persName><forename type="first">Kazushi</forename><surname>Nakazawa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Keita</forename><surname>Takahashi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Masahide</forename><surname>Kaneko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Electrical Engineering in Japan</title>
		<imprint>
			<biblScope unit="volume">192</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="25" to="35" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">On equitably approaching and joining a group of interacting humans</title>
		<author>
			<persName><forename type="first">Vishnu</forename><forename type="middle">K</forename><surname>Narayanan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anne</forename><surname>Spalanzani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">François</forename><surname>Pasteau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marie</forename><surname>Babel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE/RSJ International Conference on Intelligent Robots and Systems</title>
				<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="4071" to="4077" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Human robot interaction in mobile robot applications</title>
		<author>
			<persName><forename type="first">Akihisa</forename><surname>Ohya</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication</title>
				<meeting>11th IEEE International Workshop on Robot and Human Interactive Communication</meeting>
		<imprint>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page" from="5" to="10" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">A multimodal personfollowing system for telepresence applications</title>
		<author>
			<persName><forename type="first">Wee</forename><forename type="middle">Ching</forename><surname>Pang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gerald</forename><surname>Seet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiling</forename><surname>Yao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 19th ACM Symposium on Virtual Reality Software and Technology</title>
				<meeting>the 19th ACM Symposium on Virtual Reality Software and Technology</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="157" to="164" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Adaptive side-by-side social robot navigation to approach and interact with people</title>
		<author>
			<persName><forename type="first">Ely</forename><surname>Repiso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anaís</forename><surname>Garrell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alberto</forename><surname>Sanfeliu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Social Robotics</title>
		<imprint>
			<biblScope unit="page" from="1" to="22" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">People&apos;s adaptive side-by-side model evolved to accompany groups of people by social robots</title>
		<author>
			<persName><forename type="first">Ely</forename><surname>Repiso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anaís</forename><surname>Garrell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alberto</forename><surname>Sanfeliu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Robotics and Automation Letters IEEE/RSJ and International Conference on Robotics and Automation</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">People&apos;s v-formation and side-by-side model adapted to accompany groups of people by social robots</title>
		<author>
			<persName><forename type="first">Ely</forename><surname>Repiso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francesco</forename><surname>Zanlungo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Takayuki</forename><surname>Kanda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anaís</forename><surname>Garrell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alberto</forename><surname>Sanfeliu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE/RSJ international conference on Intelligent robots and systems</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">State of the art in collaborative ai</title>
		<author>
			<persName><forename type="first">Alberto</forename><surname>Sanfeliu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">James</forename><surname>Crowley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Javier</forename><surname>Vazquez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Luca</forename><surname>Iocchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Angulo</forename><surname>Cecilio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Antony</forename><forename type="middle">G</forename><surname>Cohn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Antoni</forename><surname>Grau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Geza</forename><surname>Nemeth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anais</forename><surname>Garrell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Edmundo</forename><surname>Guerra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Daniele</forename><surname>Nardi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rene</forename><surname>Alquezar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alessandro</forename><surname>Saffiotti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">release 3.0</title>
				<imprint>
			<publisher>AI4EU internal delivery</publisher>
			<date type="published" when="2020-06">June 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Spatial contextaware person-following for a domestic robot</title>
		<author>
			<persName><forename type="first">Fang</forename><surname>Yuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marc</forename><surname>Hanheide</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gerhard</forename><surname>Sagerer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Workshop on Cognition for Technical Systems</title>
				<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Potential for the dynamics of pedestrians in a socially interacting group</title>
		<author>
			<persName><forename type="first">Francesco</forename><surname>Zanlungo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tetsushi</forename><surname>Ikeda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Takayuki</forename><surname>Kanda</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Physical Review E</title>
		<imprint>
			<biblScope unit="volume">89</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page">12811</biblScope>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
