=Paper=
{{Paper
|id=Vol-2659/sanfeliu
|storemode=property
|title=Collaborative-AI: social robots accompanying and
approaching people
|pdfUrl=https://ceur-ws.org/Vol-2659/sanfeliu.pdf
|volume=Vol-2659
|authors=Alberto Sanfeliu,Anais Garrell,Ely Repiso
|dblpUrl=https://dblp.org/rec/conf/ecai/SanfeliuGR20
}}
==Collaborative-AI: social robots accompanying and approaching people==
Alberto Sanfeliu, Ely Repiso and Anaís Garrell
Abstract.
Collaborative AI to approach or to accompany people using social robots will be a fundamental robotics field in the near future. If we want to share our daily tasks with social robots and collaborate with them, social robots must be able to carry out Collaborative AI tasks such as accompanying or approaching people. In this article, we present the robot-people accompaniment and approaching missions through the four levels of abstraction of Collaborative AI systems and describe the main Collaborative AI functionalities that these missions require. We also describe the system that we have developed for a robot to accompany one or two pedestrians and to approach one person in an urban space.

Figure 1. Real-life experiments in the Barcelona Robot Lab. Left: our robot, named Tibi, accompanies one volunteer using the adaptive side-by-side [26]. Center: Tibi accompanies the volunteers at the lateral of the side-by-side formation [27]. Right: Tibi accompanies the volunteers at the middle of the side-by-side formation [27].
1 Introduction

Researchers have stated that robots will share humans' environments, will perform tasks together with humans, and will assist and help humans in their daily tasks. Robots should behave in a social way and be accepted by people; for these reasons, robots must understand the spatio-temporal situation, understand humans' behaviors and intentions, and take into account the goal that both pursue in a collaborative task. There are many tasks in which a robot and people collaborate, but in this article we focus on the well-known "companion robot", defined as a robot moving in a human crowd environment while accompanying one or more pedestrians (e.g. assisting them [13], guiding them [10,14,17], or following them).

We focus on the collaborative task of a robot accompanying one or two persons and approaching a person in a dynamic environment that contains static obstacles (for example, walls, buildings, urban furniture, etc.) and dynamic obstacles (for example, moving pedestrians, bicycles, etc.); see Fig. 1. When humans walk in these environments, they follow specific formations: two-people groups typically walk side-by-side, three-people groups usually walk in a V-formation [31], etc.; and in any of these situations, they carry out the accompaniment in a social manner. The "companion robot" will have to behave in a similar way to how humans accompany other people; it also has to navigate in a human-aware way and adapt to people in the different types of formations. Fig. 1 shows examples of a robot accompanying people.

A collaborative task in which a robot walks in a group is not at all a trivial problem, as it involves different collaborative levels of abstraction (for example, the reactive sensori-motor, spatio-temporal situational, operational (task-oriented) and cognitive (knowledge-oriented) collaboration levels); diverse functionalities working on-line (multimodal perception, multimodal actions, decision making, etc.); and complex computations in real time. In a typical accompaniment task the robot has to infer the final destination and the best path to go through; take into account the orientation of the movement of the group; adapt its desired velocity to the changes of people's velocity (accelerating, decelerating and even stopping when necessary); maintain the formation and be able to change its position in the group if people change their positions; always detect its companions, or at least include a behavior to deal with people's occlusions by other members of the group; and, finally, anticipate the behavior of all pedestrians to avoid collisions in advance. In this work, we explain the human-robot accompaniment task through Collaborative AI issues; however, we do not explain the details of the methods or of the experiments, due to lack of space. These methods and experiments can be found in [9, 26, 28].

In the remainder of the paper, we briefly explain the following issues: in Section 3, the four levels of abstraction of Collaborative AI applied to the accompaniment task; in Section 4, the Social Force Model and other techniques used for the human-robot accompaniment and approaching of people; in Section 5, the functionalities involved in these missions; in Section 6, the experiments; and in Section 7, the conclusions.

Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Institut de Robòtica i Informàtica Industrial (CSIC-UPC). Llorens Artigas 4-6, 08028 Barcelona, Spain. erepiso@iri.upc.edu, agarrell@iri.upc.edu, sanfeliu@iri.upc.edu

2 Related work

Researchers have developed techniques for robot guiding and following of people. A context-aware following behaviour was developed by [30]. Hybrid approaches that combine following, guiding and accompanying behaviours have been developed by [24] and [25]. A new following behaviour that could be perceived by a non-expert as merely following someone, or as a guiding companion, was developed by [16].
Recently, researchers have developed more complex strategies in their work on social robots [22]. Morales et al. [20] proposed a model of people walking side-by-side that could predict the partner's future position and subsequently generate a plan for the robot's next position. Furthermore, [21] and, more recently, [18] developed a side-by-side method that infers the final goal of the person; they also recorded a database of people walking in a fixed side-by-side formation, which is different from our database, included in [26], which also covers situations of an adaptive side-by-side companion behaviour.

While previous studies only discussed the challenge of navigating around the person in a fixed side-by-side formation, our algorithm allows a more dynamic positioning around the human partner. That is, the method allows the robot to position itself in front of, beside, or behind the person it accompanies, depending on the situation. If no obstacle interferes with the group's path, the robot accompanies the person side-by-side, but if any obstacle interferes with the group's path, the robot changes its position around the person to avoid it.

Another innovation that sets our approach apart is that our method is able to render a real-time prediction of the dynamic movements of the partner, as well as those of other people, over a time horizon. This kind of prediction, performed within a determined time window, allows the robot to anticipate people's navigation and react accordingly.

The human-robot approach is an important collaborative task that takes place between humans and robots in order to generate interaction; central to this task is the ability to recognise and predict the intentions behind the other party's movements. In the past, researchers have used different control strategies for approaching a moving target, either by pursuing it or by trying to intercept it [2].

Fajen et al. [3] presented different control strategies for approaching a moving target. Narayanan et al. [23] used a task-based control law to enable the robot to meet two standing persons and interact with them, by carefully considering their respective positions and orientations, and used that knowledge to calculate an optimal meeting point. Other researchers [1,19] studied human social behaviours in order to yield better results: [1] recorded the different trajectories that people took when approaching other persons, and in [11] the authors used proxemics rules to define the approaching distance to the target person for teaching robots proactive behaviour.

In contrast to the previous approaches, our work employs a prediction module based on the social force model, which includes human-like behaviours for navigating within dynamic environments and for mapping out the best path for the robot to take towards a moving destination. We are also able to compute the best meeting point between parties by considering the status of the group, the state of the target person, and the target person's future position.

3 Multiple levels of collaboration for the human-robot accompaniment mission

We can describe the collaboration through four levels [29]:

• Reactive sensori-motor collaboration: This level involves all the perception sensors and actuators required in a collaborative task to react in the environment.
• Spatio-temporal situational collaboration: This level includes the spatio-temporal situation assessment in a collaborative task.
• Operational (task-oriented) collaboration: This level includes the collaboration from the point of view of the task to be developed.
• Cognitive (knowledge-oriented) collaboration: This level covers all the collaborative cognitive issues required to reach the goal.

In any case, these levels are interconnected and share information among them.

In order to detail these levels for the accompaniment and approaching missions, we first explain the mission of accompanying people side-by-side by a robot. The robot has to accompany one or two persons in an urban environment containing buildings, walls, urban furniture, etc., which are static obstacles, and moving people or bicycles, which are dynamic objects. The goal is to go from an origin to a destination without colliding with the static or dynamic obstacles, following some criteria (for example, minimum time) and taking into account that the robot has to behave in a human-aware way (e.g. the robot navigation and planning have to be socially acceptable and minimize the disturbance to people's paths). In the present system (accompaniment mission), we assume that the robot always follows the people, that is, the robot does not know where the people want to go, and we also assume that the robot knows the current urban map. Clearly, the behaviour of the system would be different if the person had to follow the robot (guiding mission). For the approaching mission, we also assume that the robot knows the current map.

Let us now explain these levels for the case of the robot-people accompaniment and approaching missions.

Reactive sensori-motor collaboration: The robot uses the perception systems to detect the person or people being accompanied (or the person to be approached) and the static and dynamic obstacles, and also uses this system to localize the robot and the accompanied persons in the urban map. The perception system includes several range lasers (Lidar) and one stereo-vision camera. The Lidar is also used to detect the velocity and acceleration of any moving object in the scene. Moreover, the system uses microphones to listen to the voices of the persons. As actuators, the system uses the motors of the mobile platform to navigate, and the speaker to hold a dialogue with the people. Although in the present work we do not use people's gaze tracking, this information is important to know where people are looking and thus infer where they want to go. Moreover, the system does not detect or identify sidewalk signals, restaurants, bars, shops, etc.

Spatio-temporal situational collaboration: The robot monitors its pose (position and orientation) and velocity, the poses of all the static obstacles, and the poses and velocities of all the moving persons and objects. Using this information, the robot always knows the side-by-side position and orientation of the accompanied people, which is used to know how well the robot is accompanying them. Moreover, using the paths followed by the nearby pedestrians and other moving objects, the system is able to predict where they will be after some time and whether there is going to be a collision. Then the robot creates several plans, selects the best one, and sends commands to the sensori-motor collaboration to adapt the robot's pose to the people, to maintain the best accompaniment formation and to avoid collisions. Because our path plans are human-aware, our system always adapts to people's path modifications and in this way maintains an implicit agreement with pedestrians not to bother them.

Operational (task-oriented) collaboration: The robot helps in route planning and navigation, providing the best route to the final destination, and the alternatives when the route is closed, too narrow or too busy. In our system, although this route is always computed, the local path always depends on the accompanied person's decision, since the robot follows the person's path.
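As an illustration of the monitor, predict and plan cycle of the spatio-temporal level, the sketch below uses a constant-velocity prediction of nearby pedestrians and a safety-radius collision check. This is only a minimal sketch: the function names and the 0.6 m safety radius are our own illustrative choices, not part of the authors' system, which uses richer predictors and planners.

```python
import math

def predict_positions(pos, vel, horizon, dt):
    """Constant-velocity prediction of an agent's future positions
    over a time horizon, sampled every dt seconds."""
    steps = int(horizon / dt)
    return [(pos[0] + vel[0] * t * dt, pos[1] + vel[1] * t * dt)
            for t in range(1, steps + 1)]

def collision_expected(robot_path, pedestrian_path, safety_radius=0.6):
    """Check whether two time-synchronized position sequences ever
    come closer than the safety radius (illustrative value)."""
    for (rx, ry), (px, py) in zip(robot_path, pedestrian_path):
        if math.hypot(rx - px, ry - py) < safety_radius:
            return True
    return False
```

A planner in this spirit would generate several candidate robot paths, run `collision_expected` against the predicted pedestrian paths, and keep only the collision-free candidates for cost evaluation.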
The robot could also help at this level by providing information about upcoming restaurants, shops and other services, but again, this functionality has not been implemented in the present system.

Cognitive (knowledge-oriented) collaboration: The robot can share with the person the goal destination and the alternative routes to reach the destination in the shortest time. However, in our system this has not been implemented because, as mentioned before, the robot always follows the people's path. Moreover, in our system, the robot generates a dialogue with the two persons (in two-person side-by-side accompaniment) in order to keep them together side-by-side while they are navigating.

4 Collaborative-AI accompaniment models of people by social robots

In any robot accompaniment mission, the first thing that the robot has to do is infer the final destination of the accompanied people, in order to do the task efficiently. Also, if the robot has to approach a person, it needs to know that person's final destination to meet him/her at some point. Finally, for the rest of the people in the environment, the robot needs to know which destination they are going to, in order to make the navigation human-aware and avoid collisions. To know all people's destinations, we use the Bayesian Human Motion Intentionality Predictor (BHMIP) method [5]. The BHMIP uses a set of predefined known destinations of the environment, D = {D_1, D_2, ..., D_n, ..., D_m}, and a geometric-based long-term prediction method that uses a Bayesian classifier to select the best destination of the person. These predefined destinations are locations where people usually go, like entrances, exits or workplaces of the environment.

Once the robot knows the final accompaniment destination and the destinations of the rest of the pedestrians and other moving objects, it computes the best path to reach the destination while avoiding collisions with the pedestrians and the static objects. Our navigation system is based on the Social Force Model [15], and we have extended this model (ESFM, Extended Social Force Model) to include the repulsion of static objects and of the robot itself [4]. Moreover, we have developed a dynamic path planner that computes all the paths to the final destination, taking into account the pedestrians' paths, and selects the best path for the robot to follow. This model is called the Anticipative Kino-dynamic Planner (AKP) [6].

Once the robot knows all people's behaviours, it has to plan its collaborative behavior with respect to the people it accompanies or the people it will approach. To plan the accompaniment of the robot with respect to one accompanied person, we use the Adaptive Side-by-side Accompaniment of One Person (ASSAOP) [26]. This method was also combined with an anticipative robot approaching behaviour that infers in advance the best encounter point and performs an engagement between an accompanied person and an approached person, using a triangle formation. In addition, to plan the accompaniment of the robot with respect to a group of two accompanied people, we use the Adaptive Side-by-Side Accompaniment of Groups of People (ASSAGP) [27], which allows the robot to accompany the group in the central and lateral positions of the group. Further, for the robot's approach to a person, we use the G2-Spline and the ESFM to approach people [9]. All these methods have in common that they use the ESFM to plan the tree of paths that the robot needs to fulfill all the tasks. The following equation includes all the attractive and repulsive forces necessary to carry out all these collaborative navigations with humans:

F_r = α f^goal_{r,d}(D^d_n) + Σ_{i∈P_c} f^goal_{r,p_ci}(D^goal_{p_ci}) + γ (F^ped_r + F^obs_r) + δ (Σ_{i∈P_c} F^ped_{p_ci} + Σ_{i∈P_c} F^obs_{p_ci}),

where f^goal_{r,d}(D^d_n) is the attractive force towards the final destination. In the accompaniment case this final destination is inferred using the direction of movement of the accompanied people. This final destination can also be a physical static destination inside the environment, D^d_n (a door, street, passageway, etc.), or another person in the environment in the case of approaching, D^dg_n. Σ_{i∈P_c} f^goal_{r,p_ci}(D^goal_{p_ci}) are the attractive forces that maintain the side-by-side formation with each companion i of the robot. F^ped_r and F^obs_r are the repulsive forces with respect to other people and obstacles. Σ_{i∈P_c} F^ped_{p_ci} and Σ_{i∈P_c} F^obs_{p_ci} are the repulsive forces that the accompanied people feel from all the other people and obstacles, applied to the robot in order to achieve a more effective accompaniment. For a better explanation of which forces are used in each method, the reader is directed to the papers on accompaniment and approaching cited in this section.

Once the robot has computed all the paths to accompany the group or to approach a person, it has to select the best one. The evaluation of these paths is done using a multi-cost function that considers several sub-costs related to characteristics of the paths, Eq. 1. These sub-costs evaluate: the distance between the robot and the final dynamic destination of the group (J_d); the orientation of the robot with respect to the orientation required to arrive at the final destination (J_or); the attractive force to control the robot (J_r); the repulsive interaction forces with respect to people (J_p) and obstacles (J_o); and the accompaniment cost (J_c). The first five costs were introduced in [7] and the companion cost was introduced in [26].

J(S, s_goal, U) = [J_d, J_or, J_r, J_p, J_o, J_c]   (1)

Finally, the computation of the cost takes three steps. First, the robot computes each individual cost at each step of the path. Second, to avoid the scaling effect of the weighted-sum method, each cost function is normalized to (−1, 1) using an erf function whose mean and variance are calculated after the computation of all the paths. Third, a projection via weighted sum J : R^n → R gives the weighted cost formula [6], where n is the number of costs.

5 Functionalities in Collaborative AI Systems to accompany people

In this section, we present the main functionalities on which the research efforts in Collaborative AI systems to accompany and/or approach people are centered. These functionalities, and the relations among them, are listed in Fig. 2.

5.1 Multimodal Perception

To interact in dynamic urban environments, robots must detect all pedestrians and objects of the environment. In our case we use three types of perception systems: a 360° range-laser sensor (Lidar); a video camera system; and a stereo-vision camera. The video camera system [12] is used for identifying specific people that we want to search for or track. The stereo-vision camera is used for tracking people in the accompaniment and approaching missions.

The 360° range-laser sensor allows us to compute a person's position with high accuracy, at high frequency and in large areas. These are important characteristics for real-time interaction. However, the Lidar does not allow us to identify a specific person, and for this purpose the video camera is used.
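To make the structure of such a force model concrete, the following is a reduced sketch of how attractive and repulsive terms can be combined into a resultant steering force. The exponential-decay repulsion follows the classic Social Force Model form, but the gains, the decay constant and all function names here are our own illustrative assumptions, not the parameters or the full F_r formulation used by the authors (which adds companion-felt forces and separate obstacle terms).

```python
import math

def attractive_force(pos, goal, k=2.0):
    """Unit-direction attractive force pulling the agent towards a goal
    (gain k is illustrative)."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return (0.0, 0.0)
    return (k * dx / dist, k * dy / dist)

def repulsive_force(pos, others, amplitude=2.0, decay=0.8):
    """Sum of exponentially decaying repulsive forces from nearby
    pedestrians/obstacles (amplitude and decay are illustrative)."""
    fx = fy = 0.0
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        dist = math.hypot(dx, dy)
        if dist == 0.0:
            continue  # skip coincident points to avoid division by zero
        magnitude = amplitude * math.exp(-dist / decay)
        fx += magnitude * dx / dist
        fy += magnitude * dy / dist
    return (fx, fy)

def resultant_force(pos, goal, companions, pedestrians, alpha=1.0, gamma=1.0):
    """Weighted combination of goal attraction, companion attraction and
    pedestrian repulsion (a reduced analogue of the paper's F_r)."""
    ax, ay = attractive_force(pos, goal)
    cx = cy = 0.0
    for comp in companions:
        f = attractive_force(pos, comp)
        cx += f[0]
        cy += f[1]
    rx, ry = repulsive_force(pos, pedestrians)
    return (alpha * ax + cx + gamma * rx, alpha * ay + cy + gamma * ry)
```

Integrating this force over time yields a candidate trajectory; generating a tree of such trajectories and scoring them with a multi-cost function is the role the paper assigns to the AKP planner.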
explicit verbal communication.
5.3 Intentionality
To have an effective Collaborative-AI interaction between the robot
and the people, it is mandatory that the robot infers the people inten-
tionality. Then, in the case of accompaniment or approaching tasks,
the robot needs to predict the walking behaviour of all the people in
the environment. In our case, we use the the Bayesian Human Mo-
tion Intentionality Predictor (BHMIP) [5] to predict all the people
walking behavior.
Specifically, for the accompaniment task, the robot needs to pre-
dict the accompanied people behaviour, to anticipate their move-
ments and improve the accompany task. In our case, we use it for
maintaining a specific formation and inferring the final destination
and the best path to arrive to the people destination. The intention-
ality is computed using the previous person path and the position of
Figure 2. Graph of Accompaniment and Approach Issues and relations
among them.
the goal.
For the approaching task, the robot has to predict where will be the
position of the person that has to be approached. If the person stops
The Lidar is also used for the adaptation of the robot in the ac- in a specific location, then the prediction is simplified to a known
companiment and approaching missions. It allows to keep the per- destination. If the person is moving, then the robot using the BHMIP
son in all the accompaniment process and to detect the person in the algorithm, computes where the person will be, and then modify its
approaching process. Moreover, the Lidar is also used to detect the path to reach him or her.
pedestrians position and orientation, and predict their paths.
5.4 Adaptation
5.2 Communication
In the accompaniment task there are mutual adaptation between the
Communication between the robot and the human is a main func- human and the robot. The robot is continuously adapting its path in
tionality to allow an efficient accompaniment. Communication is order to fulfill a side-by-side accompaniment, and usually the person
needed to reach common understanding about the environment that is also doing something similar. However, there are cases where for
surrounds the group (1-robot and 1 person or 2-people), to agree example the person stops without telling anything, in this case the
on shared final destination, to share the perception of the accompa- robot modifies its trajectory to stop or to approach the person. When
nied people or other people in the environment, to agree on common there are obstacles that have to be avoided, then the robot modifies
plans to arrive until the destination and synchronize the execution of its formation to allow the person to go ahead or behind the robot. If
these plans or more concretely paths until the destination. The hu- an obstacle implies that the side-by-side formation is broken, then
mans and the robot must communicate and coordinate among them- the robot recovers the side-by-side formation after overcoming the
selves to fulfill a effective and efficient accompaniment or to avoid obstacle.
collide among each other. During the accompaniment task this com- In our experiments with inexpert people, we start explaining to
munication can be verbal and non-verbal or low level (implicit or the users the minimum information necessary to interact with the
explicit) [8]. robot. This information includes: the destination where they will go
Regarding the explicit verbal communication, the robot uses it for together; the required time that needs the robot to start moving; and
interacting with the accompanied person, the approached person or that the persons have to walk slowly in order that the robot can main-
with other people using voice (robot speakers). In our accompani- tain the side-by-side formation. In addition, we explain that the robot
ment, this communication was done by speech dialogue between the has a safe distance, so they can not walk very close to it. Finally, for
robot and the human. For example, the robot communicates if it loses the accompaniment of two people, we also explain the child game
the target of the accompanied person. In the case of accompaniment that we be used.
of two persons, the robots makes an interactive dialogue with the per- To fulfill this accompaniment adaptation, the robot needs to de-
sons, using a child game to create engagement between the persons tect, track and predict the behaviour of the accompanied people and
and the robot (in our case we use the child game of discovering the also of other people or obstacles of the environment, to facilitate the
name of an environment object), while walking towards a destination group’s navigation in the dynamic environment.
in the environment. In the case of the approaching mission, there is a mutual adaptation
Regarding the implicit non-verbal communication of the accom- between the person and the robot. If both are moving, there is an
paniment task, the communication is done through the range-laser, adaptation between the speeds of both to approach and to stop in
which gives information of the person with respect the robot. In any front of each other.
of the accompaniment missions, the robot knows in real time the po- Furthermore, the accompaniment group must adapt to the dynamic
sition and orientation of the accompanied persons, and also the po- environment. This means that by detecting and predicting the people
sition, orientation and velocity of the pedestrians. The implicit com- and obstacles in the environment, the robot must avoid them in an
munication is only in one direction, from person to robot, and when anticipatory way, while accompanying a person or a group of people.
the robot needs to inform the person, from robot to person, it uses the In our case, the robot facilitates the navigation behaviour of the group
that accompanies, while at the same time, facilitates the navigation destination. Third, in our case they must agree in the adaptive for-
behaviour of other people in the environment [26]. mation when they overpass people or obstacles. Then, to avoid other
people in the environment, the robot changes its position around the
person to allow the group to avoid easily other people or static ob-
5.5 Interaction stacles. As the robot is usually slower than the person, for security
Interaction among robot and humans plays an important role in reasons, it has been decided that the robot goes always behind. And
Collaborative-AI, where in our case, these interactions will be as the robot changes its group’s position in advance to avoid static
Master-Slave for the robot and Peer-to-peer for inexpert people that obstacles and other people, the people in the group can adapt and un-
interact with the robot. derstand that the robot prefers to go behind of him/her, to overcome
In our accompaniment case, we have two types of interactions. obstacles. Fourth, for the accompaniment of groups, the members of
First, the robot interacts using the position of the person or persons the group must decide in which central or lateral position they will be
being accompanied. The robot interacts with the accompanied people within this group formation, and that position within the group can
by approaching or moving away, depending in the type of formation, change dynamically for reasons of comfort and / or the environment.
for example side-by-side or V formation. In the case of two persons For a robot approaching to a person, but without accompanying
being accompanied, the robot will interact in a different way if the any one, the robot and the approached person may have to agree in: if
robot is in between both persons, or if the robot is the lateral position. both will approach at the same time; if it is the robot that approaches
Moreover, in case that the robot has to break the formation, due for the person; in which way the robot has to approach the person; or
example to an obstacle, the robot will interact again with the persons whether the person really wants that the robot approaches him/her.
to recover the previous side-by-side formation.
Second, the robot and humans can use direct communication 5.7 Decision Making, Reasoning & Planning
among them. The direct communication is done through the robot
speaker, for example by telling to the people that the robot can not When human and robot collaborate doing a specific task, they need to
move because there are too many people blocking its path or for share some decision making, reasoning and planning through direct
maintaining the group formation using a child game. This game es- or indirect communication by using nonverbal cues. In the compan-
tablishes a dialogue of questions-answers, where the accompanied ion and approaching cases, we achieve these issues using a social
people have to be near the robot and to follow side-by-side forma- human-aware navigation [6] [7]. In addition, this navigation is ac-
tion to maintain the dialogue. cessed using the extended social force model (ESFM) based on the
In the case of Collaborative approaching, we use only the interac- relative position between humans and people. The ESFM includes
tions regarding the position among the humans of the environment. several interactions between the robot, the accompanied people, the
First, the robot and the approached human can interact using posi- approached people and other people in the environment. Using these
tion in two different situations, where only the robot approaches the interactions and the intentionality prediction of all people, our robot
human or where both approach each other. Second, the robot interact is able to infer a planning behaviour that allows the robot to accom-
with other people of the environment, by avoiding them. pany people or approach to people through a social accepted way.
5.6 Agreement 6 Experiments
There are always agreement between robots and humans who collab- We have done a number of real-life experiments, for accompaniment
orate to do accompaniment or approaching tasks. These agreements of one person, two persons and approaching a person. In all the ex-
are in the shared goals, shared plans of actions and action execu- periments, we have used different groups of people that they did not
tion. They can negotiate verbally these shared behaviours or in some know robots before. We have set up the parameters of the models
cases, they can negotiate implicitly, for example using the distance doing experiments only with people, without robots, and other ex-
between them. The negotiation exist and both of them have to agree periments with people and a tele-operated robot. With these param-
on what has to be the next action. In most of the cases, the robot eters, we have complete our models and then tested the models with
has to anticipate what the human will do, in order to facilitate the people and robots. Fig. 1 shows examples of accompany two people
accompaniment or the approaching. by a robot. Fig. 3 shows approaching robot a person. We have not
In the accompaniment tasks there are several agreements between the robot and the human. First, the group must agree on the final destination. In our case, the person decides which destination, out of all the possible environment destinations, he or she prefers, and the robot infers this destination from the person's navigation behavior. In the case of two people, the robot infers the most likely destination for the group by taking into account the behavior of both people; if they separate, it takes into account the behavior of the closest person. Furthermore, the final destination can be static (an environment destination: a door, stairs, a passageway, etc.) or dynamic, for example the position of another person in the environment; in that case, the group also needs to agree on which person to reach. Second, they must agree on which path to follow to reach the final destination. In our case, the robot takes into account the behavior of the humans by evaluating different costs for the possible computed paths and selects the best of them to reach the final destination. In this way, the robot is able to infer a planning behaviour that allows it to accompany or approach people in a socially accepted way.
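The two agreements just described, inferring the person's intended destination and choosing among candidate paths by cost, can be sketched as follows. This is an illustrative simplification, not the actual planner: the destination scoring uses plain heading alignment instead of the Bayesian prediction of [5], and the destination names, cost terms, and weights are made-up values.

```python
# Illustrative sketch (hypothetical, not the authors' planner) of the two
# agreements: (1) infer the most likely destination from a person's heading,
# (2) select among candidate paths by comparing weighted costs.

import math

def infer_destination(position, velocity, destinations):
    """Score each candidate destination by how well the person's current
    heading points towards it, and return the best match."""
    px, py = position
    speed = math.hypot(*velocity)
    best, best_score = None, -2.0
    for name, (dx, dy) in destinations.items():
        to_dest = (dx - px, dy - py)
        dist = math.hypot(*to_dest)
        if speed == 0 or dist == 0:
            continue
        # Cosine similarity between heading and direction to the destination.
        score = (velocity[0] * to_dest[0] + velocity[1] * to_dest[1]) / (speed * dist)
        if score > best_score:
            best, best_score = name, score
    return best

def select_path(paths, weights=(1.0, 2.0)):
    """Pick the path with the lowest weighted cost. Each candidate carries a
    (length, social discomfort) cost pair; the weights here are made up."""
    w_len, w_soc = weights
    return min(paths, key=lambda p: w_len * p["length"] + w_soc * p["discomfort"])

destinations = {"door": (10.0, 0.0), "stairs": (0.0, 10.0)}
goal = infer_destination((0.0, 0.0), (1.0, 0.1), destinations)
print(goal)  # heading lies mostly along x, so "door" scores highest

paths = [
    {"name": "direct", "length": 10.0, "discomfort": 3.0},
    {"name": "detour", "length": 12.0, "discomfort": 0.5},
]
print(select_path(paths)["name"])  # the longer detour wins once discomfort is weighted
```

In the real system the cost evaluation would include social-force terms around nearby pedestrians rather than a single scalar discomfort value.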
6 Experiments

We have carried out a number of real-life experiments on accompanying one person, accompanying two persons, and approaching a person. In all the experiments we used different groups of people who did not know the robots beforehand. We set up the parameters of the models through experiments with people only, without robots, and through further experiments with people and a tele-operated robot. With these parameters we completed our models, and then tested the models with people and robots. Fig. 1 shows examples of a robot accompanying two people. Fig. 3 shows a robot approaching a person. We have not included the experiments in this paper due to the lack of space, but they can be found in [26], [28] and [9].

Figure 3. The robot uses the implemented method to approach a static and a moving person, while avoiding several static obstacles of the environment.

7 Conclusions

We have described in this article the basic Collaborative AI levels required for robots to accompany and approach people. We have explained each of the Collaborative AI functionalities needed for these two missions and shown some illustrative images of the experiments. Finally, we showed that robot accompaniment involves complex Collaborative AI issues.

ACKNOWLEDGEMENTS

Work supported by the Spanish Ministry of Science project ROCOTRANSP (PID2019-106702RB-C21-RAEI/FEDER EU) of the Ministerio de Ciencia e Innovación, by the EU AI4EU project (H2020-ICT-2018-2-825619), and by the Spanish State Research Agency through the María de Maeztu Seal of Excellence to IRI (MDM-2016-0656). Ely Repiso is supported by the FPI grant BES-2014-067713.

REFERENCES

[1] Eleanor Avrunin and Reid Simmons, 'Using human approach paths to improve social navigation', in 8th ACM/IEEE International Conference on Human-Robot Interaction, pp. 73–74, (2013).
[2] Fethi Belkhouche, Boumediene Belkhouche, and Parviz Rastgoufard, 'Line of sight robot navigation toward a moving goal', IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 36(2), 255–267, (2006).
[3] Brett R. Fajen and William H. Warren, 'Behavioral dynamics of intercepting a moving target', Experimental Brain Research, 180(2), 303–319, (2007).
[4] Gonzalo Ferrer, Anais Garrell, and Alberto Sanfeliu, 'Robot companion: A social-force based approach with human awareness-navigation in crowded environments', in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1688–1694, (2013).
[5] Gonzalo Ferrer and Alberto Sanfeliu, 'Bayesian human motion intentionality prediction in urban environments', Pattern Recognition Letters, 44, 134–140, (2014).
[6] Gonzalo Ferrer and Alberto Sanfeliu, 'Proactive kinodynamic planning using the extended social force model and human motion prediction in urban environments', in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1730–1735. IEEE, (2014).
[7] Gonzalo Ferrer and Alberto Sanfeliu, 'Anticipative kinodynamic planning: multi-objective robot navigation in urban and dynamic environments', Autonomous Robots, 43(6), 1473–1488, (2019).
[8] Terrence Fong, Illah Nourbakhsh, and Kerstin Dautenhahn, 'A survey of socially interactive robots', Robotics and Autonomous Systems, 42(3-4), 143–166, (2003).
[9] Marta Galvan, Ely Repiso, and Alberto Sanfeliu, 'Robot navigation to approach people using G2-spline path planning and extended social force model', in Iberian Robotics Conference, pp. 15–27. Springer, (2019).
[10] Anais Garrell and Alberto Sanfeliu, 'Cooperative social robots to accompany groups of people', The International Journal of Robotics Research, 31(13), 1675–1701, (2012).
[11] Anais Garrell, Michael Villamizar, Francesc Moreno-Noguer, and Alberto Sanfeliu, 'Teaching robot's proactive behavior using human assistance', International Journal of Social Robotics, 9(2), 231–249, (2017).
[12] A. Goldhoorn, A. Garrell, R. Alquezar, and A. Sanfeliu, 'Searching and tracking people with cooperative mobile robots', Autonomous Robots, 739–759, (2018).
[13] Birgit Graf, J.M. Hostalet Wandosell, and Christoph Schaeffer, 'Flexible path planning for nonholonomic mobile robots', in Proc. 4th European Workshop on Advanced Mobile Robots (EUROBOT'01), Fraunhofer Inst. Manufact. Eng. Automat. (IPS), Lund, Sweden, pp. 199–206, (2001).
[14] H.-M. Gross, H. Boehme, Ch. Schroeter, Steffen Müller, Alexander König, Erik Einhorn, Ch. Martin, Matthias Merten, and Andreas Bley, 'Toomas: interactive shopping guide robots in everyday use - final implementation and experiences from long-term field trials', in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2005–2012, (2009).
[15] Dirk Helbing and Peter Molnar, 'Social force model for pedestrian dynamics', Physical Review E, 51(5), 4282, (1995).
[16] Jwu-Sheng Hu, Jyun-Ji Wang, and Daniel Minare Ho, 'Design of sensing system and anticipative behavior for human following of mobile robots', IEEE Transactions on Industrial Electronics, 61(4), 1916–1927, (2014).
[17] Takayuki Kanda, Masahiro Shiomi, Zenta Miyashita, Hiroshi Ishiguro, and Norihiro Hagita, 'An affective guide robot in a shopping mall', in Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction, pp. 173–180, (2009).
[18] Deneth Karunarathne, Yoichi Morales, Takayuki Kanda, and Hiroshi Ishiguro, 'Model of side-by-side walking without the robot knowing the goal', International Journal of Social Robotics, 10(4), 401–420, (2018).
[19] Yusuke Kato, Takayuki Kanda, and Hiroshi Ishiguro, 'May I help you?: Design of human-like polite approaching behavior', in Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, pp. 35–42. ACM, (2015).
[20] Yoichi Morales, Takayuki Kanda, and Norihiro Hagita, 'Walking together: side by side walking model for an interacting robot', Journal of Human-Robot Interaction, 3(2), 51–73, (2014).
[21] Ryo Murakami, Luis Yoichi Morales Saiki, Satoru Satake, Takayuki Kanda, and Hiroshi Ishiguro, 'Destination unknown: walking side-by-side without knowing the goal', in Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, pp. 471–478. ACM, (2014).
[22] Kazushi Nakazawa, Keita Takahashi, and Masahide Kaneko, 'Movement control of accompanying robot based on artificial potential field adapted to dynamic environments', Electrical Engineering in Japan, 192(1), 25–35, (2015).
[23] Vishnu K. Narayanan, Anne Spalanzani, François Pasteau, and Marie Babel, 'On equitably approaching and joining a group of interacting humans', in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4071–4077, (2015).
[24] Akihisa Ohya, 'Human robot interaction in mobile robot applications', in Proceedings of the 11th IEEE International Workshop on Robot and Human Interactive Communication, pp. 5–10, (2002).
[25] Wee Ching Pang, Gerald Seet, and Xiling Yao, 'A multimodal person-following system for telepresence applications', in Proceedings of the 19th ACM Symposium on Virtual Reality Software and Technology, pp. 157–164. ACM, (2013).
[26] Ely Repiso, Anaís Garrell, and Alberto Sanfeliu, 'Adaptive side-by-side social robot navigation to approach and interact with people', International Journal of Social Robotics, 1–22, (2019).
[27] Ely Repiso, Anaís Garrell, and Alberto Sanfeliu, 'People's adaptive side-by-side model evolved to accompany groups of people by social robots', IEEE Robotics and Automation Letters, IEEE, (2020).
[28] Ely Repiso, Francesco Zanlungo, Takayuki Kanda, Anaís Garrell, and Alberto Sanfeliu, 'People's v-formation and side-by-side model adapted to accompany groups of people by social robots', in IEEE/RSJ International Conference on Intelligent Robots and Systems, (2019).
[29] Alberto Sanfeliu, James Crowley, Javier Vazquez, Luca Iocchi, Cecilio Angulo, Antony G. Cohn, Antoni Grau, Geza Nemeth, Anais Garrell, Edmundo Guerra, Daniele Nardi, Rene Alquezar, and Alessandro Saffiotti, 'State of the art in collaborative AI, release 3.0, June 2020', AI4EU internal delivery, (2020).
[30] Fang Yuan, Marc Hanheide, Gerhard Sagerer, et al., 'Spatial context-aware person-following for a domestic robot', International Workshop on Cognition for Technical Systems, (2008).
[31] Francesco Zanlungo, Tetsushi Ikeda, and Takayuki Kanda, 'Potential for the dynamics of pedestrians in a socially interacting group', Physical Review E, 89(1), 012811, (2014).