=Paper= {{Paper |id=Vol-2978/casa-paper1 |storemode=property |title=Towards Novel and Intentional Cooperation of Diverse Autonomous Robots: An Architectural Approach |pdfUrl=https://ceur-ws.org/Vol-2978/casa-paper1.pdf |volume=Vol-2978 |authors=Niko Mäkitalo,Simo Linkola,Tomi Laurinen,Tomi Männistö |dblpUrl=https://dblp.org/rec/conf/ecsa/MakitaloLLM21 }} ==Towards Novel and Intentional Cooperation of Diverse Autonomous Robots: An Architectural Approach== https://ceur-ws.org/Vol-2978/casa-paper1.pdf
Towards Novel and Intentional Cooperation of Diverse
Autonomous Robots: An Architectural Approach
Niko Mäkitalo1 , Simo Linkola1 , Tomi Laurinen1 and Tomi Männistö1
1 Department of Computer Science, University of Helsinki


Abstract
In most autonomous robot approaches, the individual robot's goals and cooperation behavior are fixed during the design. Moreover, the robot's design may limit its ability to perform other than initially planned tasks. This leaves little room for novel dynamic cooperation where new (joint) actions could be formed or goals adjusted after deployment. In this paper, we address how situational context augmented with peer modeling can foster cooperation opportunity identification and cooperation planning. As a practical contribution, we introduce our new software architecture that enables developing, training, testing, and deploying dynamic cooperation solutions for diverse autonomous robots. The presented architecture operates in three different worlds: in the Real World with real robots, in the 3D Virtual World by emulating the real environments and robots, and in an abstract 2D Block World that fosters developing and studying large-scale cooperation scenarios. Feedback loops among these three worlds bring data from one world to another and provide valuable information to improve cooperation solutions.

Keywords
robot software architecture, robot cooperation, ontology-based reasoning, peer modeling, autonomous robots

1. Introduction

In autonomous robot cooperation, understanding the robots' context plays a key role. Situational context is a term used to describe why some phenomenon occurs in a specific situation and what actions can be associated with this situation [1]. This paper presents an architecture that fosters the robots' situational awareness in their present context. Central in our approach is the information that is relevant for the cooperation planning: a robot must be able to form an understanding of the other robots and their resources, and an understanding of the environment where the cooperation is intended to take place. Hence, our architectural approach does not aim to provide a solution for forming a complete or joint contextual understanding between the robots. Instead, the architecture enables each robot to form its own view of the situation. The robots then use their situational context model and understanding as a basis for forming joint action plans for meeting their own personal goals.

In most autonomous robot approaches, the goal of the individual robot and its cooperation behavior is fixed during the design. However, in heterogeneous encounters with diverse peers and other computational actors, this leaves little room for novel dynamic cooperation where new (joint) actions could be formed or goals adjusted after deployment. Nonetheless, this kind of creative use of complementary capabilities could highly benefit the whole robot population, especially when the population is sparse and consists of low-end consumer robots built for singular tasks, e.g., cleaning, with ample idle time to allocate to other goals.

To optimize the use of context and to train the robots to understand their situation and cooperation possibilities, we propose a novel three-world development approach. The approach involves the Real World, a 3D Virtual World, and a 2D Block World, together with an associated software architecture and frameworks that can operate in all three worlds, allowing developers to focus on different aspects of the development.

The 2D Block World works as a platform and test bed for developing the ontology-based understanding, as it allows simulating a large number of diverse robots in different cooperation scenarios. Ontological reasoning and planning provide robots with a shared understanding of "how the world works" and are thus crucial in our approach for multi-robot cooperation.

As our starting ontology, we adopt the DUL (DOLCE+DnS Ultralite) ontology¹, which is well suited for autonomous robot reasoning (see, e.g., KnowRob 2.0 [2]). It serves as a top-level ontology, which applications are supposed to extend with their own ontological concepts. For this work, we have made a minimal extension to DUL to showcase the applicability of our approach.

In the Real World and 3D Virtual World implementation, we have focused on robots based on the Robot Operating System (ROS). Briefly put, ROS is an open-source

Context-aware, Autonomous and Smart Architecture Workshop (CASA@ECSA2021), co-located with the 15th European Conference on Software Architecture, 13-17 September, 2021, Virtual Event
niko.makitalo@helsinki.fi (N. Mäkitalo); simo.linkola@helsinki.fi (S. Linkola); tlaurinen@gmail.com (T. Laurinen); tomi.mannisto@helsinki.fi (T. Männistö)
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073

¹ http://ontologydesignpatterns.org/wiki/Ontology:DOLCE+DnS_Ultralite
robot development framework where different nodes, or programs, communicate asynchronously by subscribing and publishing to topics shared over a network. A single ROS node acts as the server, or master, to which other ROS-enabled devices can connect as clients, forming the network. Being a leading open-source project in robotics, ROS has an active development community, and a newer version, ROS2, is also seeing increasing use and development. In this work, we use ROS2 in our implementation.

Our approach aims to support ad hoc encounters of heterogeneous autonomous robots, each of which has its own individual goals, which can be used to define various plans that include different types of tasks. Typically, cooperation tasks can be categorized into loosely and tightly coupled cooperation tasks [3]: tightly coupled tasks cannot be performed by one robot but require multiple robots working cooperatively; loosely coupled tasks, on the other hand, can be performed by a single robot, but the task can be performed more efficiently in cooperation.

The proposed software architecture enables cooperation in both tightly coupled and loosely coupled tasks mainly through peer modeling, which has been argued to be a requirement for cooperation [4]. The robots exchange, learn, use, and evaluate models of themselves and their peers to identify and exploit cooperation opportunities. Although the architecture proposes means for coordination and communication, implementing tightly coupled tasks requires more work from the developer.

The rest of this paper is structured as follows. In Section 2, we introduce concepts related to our architecture. In Section 3, we describe our solution – a software architecture that enables developing, training, and testing the cooperation of autonomous robots. In Section 4, we explain the current status of the architecture and what kinds of experiments are currently possible with it. In Section 5, we cover work related to our approach. In Section 6, we discuss how we plan to improve the solution in the future and what we are currently focusing on implementing. Finally, in Section 7, we draw some conclusions for this work.

2. Cooperation Concepts

To understand our architecture, we first introduce the ontological concepts we use to enable cooperation. The basic concepts introduced here are part of the DUL ontology, but we extend them in our work to provide concrete solutions and a more fine-grained understanding of the situation at hand.

The robots' essential operation revolves around goals describing desirable situations, which we model as states of the environment that the robot should find itself in. A goal can be, e.g., to keep a room clean or deliver a package to a specific place. A robot may have multiple or even conflicting goals.

To achieve its goals (either by itself or in cooperation), a robot forms plans which consist of tasks. A plan describes how a certain goal is achieved, i.e., which tasks should be done and their (partial) order. To make a plan concrete, each task needs to be assigned to a robot (or a set of robots). This concrete plan is called a workflow.

Tasks are the individual elements from which plans and workflows are composed. Each task includes individual objectives to be achieved, e.g., open a particular door, move to a specific place, etc. Tasks can be hierarchically nested in two ways. First, there can be general tasks (open a door) and refinements of those tasks (open a door by pulling the handle). Second, lower-level tasks may be combined to compose higher-level tasks, e.g., moving, opening a door, and moving again can be seen as one higher-level moving task. These task structures are used when generating and communicating workflows.

Tasks have defined start and end conditions. However, the actions (see below) can be partly responsible for checking these conditions. The start condition is checked before the task can be attempted, e.g., to open a door manually, the robot must be next to it. The end conditions are checked to see if the task was completed successfully, e.g., if the door is open. The task end conditions can be thought of as individual, low-level goals.

To achieve tasks, each robot has actions by which the tasks can be completed. The robot may have multiple (sets of) actions that achieve the same task, and an action may be utilized in multiple tasks. Where goals, plans, workflows, and tasks are platform-independent, actions must be implemented on each platform (and in each world) separately.

To allow cooperation, robots communicate their goals, suggested workflows, and tasks to develop workflows that include multiple robots. To make this communication more fluent, robots maintain a model of themselves and of each of their peers. In general, these models may hold any important information about the robot in question, such as its physical properties, capabilities, i.e., which tasks it can perform, the robot's goals, and the history of the workflows it has been included in and their success.

3. Software Architecture for Autonomous Robot Cooperation

At the core of our research is the CACDAR architecture. The architecture, with its components and the leveraged services, is depicted in Figure 1. The architecture can
[Figure 1 depicts the CACDAR architecture. Inside each robot, the Cooperative Brain Service encloses the Planner (with Goal Model and Workflow Model), the Knowledge Manager (Self Configuration Model, Peer Models, Situational Context Model, Environment Model, Workflow History Model, Task Hierarchy Models, and Task to Action Mapping Model), and the Task Runtime with its Scheduler. Task Models carry a start condition, an end condition (implicit goal), and a resource estimate, and map to platform-dependent Actions (e.g., ROS services) for the block, virtual, and real worlds. Robots sense and actuate through their services (e.g., Service A, Analysis Service A, Service B) and exchange Coop Messages, following the FIPA Communicative Act Library specification, via the Coop Communication Service (Socket.IO), alongside platform-specific communication (e.g., DDS).]
Figure 1: CACDAR architecture.
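The cooperation concepts of Section 2, as they appear in the Task Model of Figure 1, can be sketched as minimal data structures. The following Python sketch is ours, not part of the CACDAR implementation; all names, the boolean world-state representation, and the runnability check are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

# Hypothetical sketch of the paper's cooperation concepts: tasks with
# start/end conditions, hierarchical nesting, and a workflow that assigns
# tasks to robots. All names here are illustrative, not from CACDAR.

State = Dict[str, bool]  # e.g., {"door_open": False, "at_door": True}

@dataclass
class Task:
    name: str
    start_condition: Callable[[State], bool]   # checked before the task is attempted
    end_condition: Callable[[State], bool]     # implicit low-level goal
    subtasks: List["Task"] = field(default_factory=list)  # hierarchical nesting

@dataclass
class Workflow:
    goal: str
    assignments: List[Tuple[str, Task]]  # partially ordered (robot_id, task) pairs

# A task in the spirit of "open a door": the robot must be next to the
# door to attempt it; the task ends once the door is open.
open_door = Task(
    name="open_door",
    start_condition=lambda s: s.get("at_door", False),
    end_condition=lambda s: s.get("door_open", False),
)

wf = Workflow(goal="deliver_package", assignments=[("robot_a", open_door)])

# A scheduler-like check: which assigned tasks are runnable in this state?
state = {"at_door": True, "door_open": False}
runnable = [t.name for _, t in wf.assignments if t.start_condition(state)]
print(runnable)  # ['open_door']
```

Modeling the start and end conditions as predicates over a world state mirrors how the Scheduler described in Section 3.6 uses these conditions to decide whether the situation is correct for running a task.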



operate in all three worlds, the 2D Block World, the 3D Virtual World, and the Real World, and it also provides feedback loops between these three different worlds, allowing us to manually and automatically incorporate the insights in order to advance the situational context awareness that fosters the robot cooperation (see Figure 2).

Figure 2: Lessons learned feedback loops between three development and evaluation stages.

3.1. Cooperative Brain Service

The critical enabling service for the novel and valuable cooperation is the platform-agnostic Cooperative Brain Service, which encloses several components. The service is responsible for the high-level functionality of the robot, such as planning of future tasks and cooperation (see Planner) and scheduling of tasks to be executed (see Scheduler) by the Task Runtime, and it gathers information from sensors, its operation, and communication with other robots into the Knowledge Manager, which it uses in its reasoning. For cooperation, the service needs to be able to detect if there is a robot that has requested help, and then try to reason whether it would a) have the missing resources, or b) have free resources or less important tasks so that it could free up resources for the cooperation. The availability of such resources (e.g., time and battery)
are estimated in collaboration with the Scheduler and Task Runtime components.

However, the most crucial responsibility of the service is to estimate whether the robot will meet its own goals. It constantly keeps track of its resources and what resources other robots have allocated for helping it to meet its goals. Hence, it leverages the Knowledge Manager and Task Runtime components by observing changes in the models that represent the other robots and the environment, and then notifies the Planner, which can alter its workflow and tasks (e.g., by replanning tasks with missing resources or reorganizing tasks in its workflow).

3.2. Knowledge Manager

Knowledge Manager takes care of maintaining the robot's understanding of the world and the information associated with the cooperation. The main input source for the component is the robot's (platform-dependent) service components that the robot uses for observing and sensing. Knowledge Manager may also exchange information with other robots' Knowledge Manager components via the respective Cooperative Brain Services with Coop Messages. Knowledge Manager maintains the following models that enable novel and valuable cooperation as well as the robot's individual goal-oriented behavior:

Situational Context Model captures information considering the robot's current situation, e.g., where it and other robots currently are, what is the state of the environment objects near it, and other dynamic properties. The model's contents can be updated using feedback from sensors, Environment Model (e.g., by making queries of possible state changes in the physical objects represented in the ontology if they are not directly perceived), Self and Peer Models, and direct communication with other actors, such as robots, through Coop Messages. To this extent, Situational Context Model operates in tandem with the environment and peer models to provide a unified view of the most current understanding of the situation. This model can be used directly in Planner, whereas the other models provide a more fractured view of the situation.

Environment Model connects actions in the operating environment, e.g., moving or object manipulation, into state changes in ontological objects. The model should represent the environment and its objects in sufficient detail so that it can be used to derive a reasonable Situational Context Model and to reason about the possible effects of certain actions in particular situations. It can be updated using feedback from the environment (either perceived or received through communication). The level of detail in Environment Model varies across the different world types. In 2D Block World, it is sufficient for the model to possess simple logical states, e.g., is the door open or closed, while in the Virtual and Real World the model may be more elaborate, e.g., a door can be partially closed
and currently opening. However, to keep the "backward functionality" intact from the Real World back to the 2D Block World, individual object states and actions that manipulate them in the Real World model should be mappable into the 2D Block World model.

Self Model and Peer Models contain information about the robot itself and its peers. In general, each peer has its own model, but aggregate models, e.g., considering certain classes of robots, are possible. Robots exchange basic information about themselves (drawn from their Self Model and other knowledge sources) when they first meet their peers and update and replace this information through communication and observations. Where Situational Context Model offers current information on the state of the world, and Environment Model offers an understanding of how the world works, these models provide knowledge of what each robot's goals are, which tasks are possible for the robot, and what restrictions the robot may have for performing specific tasks, e.g., if the robot can only open specific types of doors. From the cooperation perspective, these models are highly relevant, as their information is needed in Planner when determining who can perform a particular Task.

Task to Action Mapping Model contains knowledge about mapping the task realizations to actions. This knowledge is mainly about the robot's own tasks, but peers' task to action mapping information can also be partially stored. This applies especially to cases where the robots are of the same type. Additionally, other peers may provide some information about their action mapping for a particular task, e.g., resource estimates, timing information, or constraints that can be used in planning.

Workflow History Model contains the information on earlier cooperation situations, such as performed task hierarchies, their configurations, and execution results. The information is used for improving the quality of the cooperation by analyzing which workflows and roles have previously worked well and which ones have failed.

Task Hierarchy Models are used as configuration models for creating task hierarchies (consisting of task-goal-plan nodes), e.g., options for decomposing tasks or goals and constraints for valid hierarchy configurations. They can be used to determine whether a particular task hierarchy configuration is valid, and the hierarchies can then be used by Planner or other components in Knowledge Manager, e.g., to represent aggregated high-level capabilities of the peers.

3.3. Planner

Planner is responsible for constructing Workflows, which are then, e.g., passed to Scheduler for execution or stored for later use. As input, Planner is given some starting situation, e.g., the current Situational Context, a desired end condition, e.g., the current Goal, and other related parameters, e.g., restrictions for the workflow. Planner leverages the information maintained by Knowledge Manager in its attempts to select the robot and its peers to specific roles and to assign them Tasks. For actually assigning Tasks to its peer robots, Planner negotiates with the different robots' Planner components. The purpose is to ensure that the robot has a correct understanding of its peer's capabilities (i.e., Tasks it can perform) and that the peer has sufficient resources, e.g., time and battery power, to participate in the workflow.

Goal Model defines a single mission that is expected to be carried out by a single robot or a set of robots. However, it does not define how the actual plan and the mission are expected to be performed. Instead, a Goal Model can set some ground rules for the robot behavior, like time constraints or quality attributes. A Goal Model is used for deriving start and end conditions for specific tasks. It may also affect what types of robots get selected into the roles of the cooperation.

Workflow Model consists of a Goal Model and a partially ordered list of Task Models where each task is assigned to a (set of) robots. By default, Planner tries to put together a Workflow Model where the robot itself is in the primary role, and its peers are assigned only if the robot cannot meet the Goal. However, the Goal Model can affect how the workflow is put together: as the Goal Model contains information regarding a single robot's mission, it can define the mission to be highly cooperative or to make the robot act as a leader. For example, consider that one robot is expected to act as a supervisor for the other robots – its mission is then defined to coordinate the others and their cooperation.

3.4. Task Runtime

Different types of robots can feature very different underlying platforms for development and interfacing in general. Therefore, the platform is essentially what dictates how actions have to be implemented. The Task Runtime is accordingly designed so that support for new platforms can be added at will, in the form of platform modules. Currently supported platforms are the older and newer versions of the Robot Operating System, ROS1 and ROS2, introduced in more detail later. However, as a particular measure stemming from the similarity of these platforms, actions are shared between both by abstracting the implementation differences of subscribers and publishers using the respective platform modules.

Task. The self-adaptive aspects of the architecture come into play when the autonomous operation or cooperation requires certain resources. Each robot describes its capabilities by communicating to others what kinds of tasks it can execute. A task may consist of sub-level tasks, that is, a task may group other tasks into a higher-level behavior. As an example, consider that a robot can perform a task Guide. Such a task then consists of other tasks, like Move, Turn, Navigate, etc.

Action is the mapping from the behavior modeled with tasks to the actual implementation of a specific task. Actions are generally platform-specific, but there can be alternative versions of actions for different robots even within the same platform. Similar to tasks, actions can also consist of other sub-level actions. For instance, the corresponding Guide action may leverage various other action implementations.

3.5. Robot's Services

For actuating and sensing the events coming from the world, the architecture enables leveraging various services and communication between them. In Figure 1, such services are illustrated: an imaginary actuating Service A is used, for example, for controlling the robot, and at the same time, it sends data to Analysis Service A. While we have mainly used ROS2-based services in our current implementation, the Cooperative Brain is not tied to any specific robot technology. Hence, the services may also be realized as ROS1 services or with any other type of service technology (e.g., as a Docker-based microservice).

3.6. Scheduler

The Scheduler component is part of the Task Runtime component. Scheduler fetches the Tasks from the Planner component's Workflow and delivers the runnable Tasks to the Task Runtime. Scheduler's primary duty is to maintain a Task list for execution in the robot, considering the priorities and constraints of active goals. For this purpose, the Scheduler uses each task's start and end conditions to ensure that the situation is correct for running the task. The Scheduler also uses the resource estimates to ensure that the robot has the promised resources for performing the task.

3.7. Coop Communication Service

In order to cooperate effectively in varying situations and environments, the robots require a communication platform that can relay messages between the components deployed on various nodes. The base technology for inter-robot communication is Socket.IO. It provides a relatively reliable and fast enough communication channel for negotiating about cooperation-related activities, like tasks and roles in workflows, and for providing feedback.

In our present research, we mainly leverage ROS2-based robots. ROS2, in turn, leverages DDS technology for communication between ROS2 services. Hence, in the future, our implementation may change to using DDS also for the cooperation communication to make the architecture more streamlined. The
downside, however, is that setting up a DDS-based com-           There are also a number of other robots with different
munication infrastructure can be challenging for robots          capabilities executing their tasks, such as cleaning the
that lack the required resources, and as there are sev-          place, who have appropriate knowledge related to their
eral different DDS implementations, incompatibility is-          responsibilities, such as the layout of the cleaning areas.
sues may emerge and issues with licensing. For this              As their responsibilities, e.g., cleaning, may leave time
reason, the implementation yet relies on our service and         for other tasks, they can help the delivery robot.
Socket.IO technology. Additionally, to support also non-            While the above scenario seems very simple, there are
ROS2 based robots, we have been discussing implement-            almost unlimited possibilities to advance creative coop-
ing a communication bridge that would allow ROS and              eration. The scenario requires the robots to a) become
other types of robots and smart objects and resources (e.g.,     aware of each other skills, objectives, and knowledge; b)
sensors, existing facility service systems, smart home           be able to define their joint problem: delivering the pack-
systems, etc.) in the environments to participate and            age; and c) together form and execute good enough plans
enhance the cooperation.                                         for solving the joint problem. Hence, despite the limited
   Coop Message is the base unit of the communication            domain, we believe that the above scenario can serve as
in the CACDAR architecture. Two other base message               a basis for numerous other cooperation applications for
types – BroadcastMessage and Direct Message – are in-            diverse autonomous robots.
herited from the base, and the idea is that the communi-
cation language is extended by inheriting new subtypes.          4.2. Introducing ROS and Real-World
The only requirement is that each message has a sender.
The actual communication messages are based on FIPA
                                                                      Robots
Communicative Act Library Specification [5] from which           To get started with implementing the package delivery
we use a subset.                                                 scenario and prototyping the Task Runtime, we initially
   Broadcast Messages are sent publicly to all robots and        focused on physical robots of the Real World. We had
services connected to the Coop Communication Service.            an already available supply of small-sized research use
Typical use cases for these messages are when a new              robots, and with Real World being the most intricate of
robot arrives at a specific venue and then gets connected        the three-world approach, it was deemed beneficial to get
to the Coop Communication Service located at this venue.         familiar with the particularities of physical robot devel-
The robot may then greet the other connected ones by             opment from early on. The robots chosen for these early
broadcasting its name and the tasks it considers capable         implementation efforts were Rosbot 2.0 and TurtleBot3,
of performing. The robot may also request help from              both which used Robot Operating System, ROS, as their
other robots by trying to describe its goal to other robots.     development platform. As ROS was to become the first
   Direct messages, on the other hand, are sent directly         platform the Task Runtime would support, familiarizing
from one robot to a set of recipients. These messages            ourselves with it was necessary to get started.
are mainly used for negotiating a cooperation plan and               Being small, economical robots for education and re-
communicating during the execution of the plan.                  search use, Rosbot 2.0 and TurtleBot3 were at the time
                                                                 equipped simply with wheels for movement and LIDARs
                                                                 for scanning surroundings, with the Rosbot 2.0 also fea-
4. Current Status                                                turing a camera. The fundamental premises of the project
In this section, we present the current implementation           also meant further restrictions: we could not have the
status of the architecture. We start by describing an exam-      robots share any common understanding of the world,
ple use case and then continue presenting some proof of          not a common map or even coordinate system. Inspired
concept implementations for the use case. Additionally,          by these limitations, the very first mutually coordinated
we report our experiences so far about the three-world           Action made was that of Rosbot 2.0 following Turtlebot3
approach and its benefits.                                       with the help of QR codes. Due to both robots using
                                                                 ROS, the development of Task Runtime began ROS sup-
                                                                 port first, but care was taken in making it possible to add
4.1. Example Use Case: Package Delivery                          support for other robot platforms also.
Throughout our implementation work and experiments,                  The QR code method is straightforward in principle:
we have used the following as a base use case and sce-           A QR code stand is propped up on the Turtlebot3. When
nario which has been adjusted and changed to different           Rosbot 2.0 sees the QR code on Turtlebot with its camera,
environments and worlds in our three-world approach:             it tries to move itself so that the QR code is centered on
A delivery robot with a heavy package, e.g., a tool rented       the image at a direct angle and a certain distance. The
online, comes to a construction site previously unknown          movement direction is based on distance, derived by com-
to it. It has a goal: deliver the tool to a specific location.   paring the QR’s width in the image to a predetermined
                                                                 expected width, and rotation, derived from the homogra-
phy matrix between the image and the QR code within             due to major design differences between the two, but
it.                                                             we deemed it worthwhile for a number of reasons: ROS1
    However, this method alone does not make for an             uses a client/server architecture, where all machines have
adequate following logic. When the guide, Turtlebot3,           to be registered on the server machine. ROS2 is instead
moves, it is easy for the follower, Rosbot 2.0, to lose sight   an entirely peer-to-peer solution based on the DDS mid-
of the QR code. As a solution, we implemented a com-            dleware, which would quite intuitively appear a better
munication protocol specific to this Follow action: If          fit for cooperation of autonomous robots. Our project
the guide goes out of view, the follower asks the guide to      was also at an early stage at the time, and ROS2 exhibited
stop. The follower then moves to where it last saw the          more promising prospects for the future. In contrast to
guide, and begins a search process that uses rotation data      ROS, which was designed back in 2010 for research and
exchanged between the follower and the guide. This ap-          educational purposes, improvements in aspects such as
proach utilizes the fact that we can calibrate and compare      real-time programming claim to bring ROS2 closer to
the rotations of the robots, even when the coordinates          applicability even for industry use. For us, this heavily
cannot be shared (as both have their own map). As an            implied that any interesting future developments would
additional measure, QR codes are added on all sides of          most likely focus on ROS2. Therefore, we are now using
the guide. If the follower sees a code other than the one       the newly released stable release of ROS2, Foxy Fitzroy,
behind the guide, the guide will attempt to rotate so that      as the platform for Turtlebots and Gazebo simulation.
the follower is lined right behind the guide again.             The support for ROS2 on Rosbot 2.0 has been more lim-
    As a result, the Follow and Guide actions were cre-         ited so far, so have instead kept it at ROS1 to demonstrate
ated successfully on the physical robots alone. Yet, many       how the Task Runtime is made to support both ROS1 and
realities of the Real World hampering robot development         ROS2.
became apparent: Lighting conditions would affect the
detection of QR codes greatly, even the slightest of obsta-     4.4. Bring Cooperation from Real World
cles such as cables were insurmountable for the Turtle-
Bot3, and having to reset the positions of the robots
                                                                     to 3D Virtual World
manually every attempt was also rather inconvenient       As the intermediate world of the three-world approach,
in the long run. We also did not have the equipment or    we chose Gazebo2 for our first simulation environment.
means for complicated feats such as having the robots to  Gazebo’s close integration with ROS matches well with
carry objects. To top it all off, the worsening COVID-19  our work on real-world ROS robots up to that point, and
situation meant that work would remain remote for the     it being the de facto standard for robot simulation on
foreseeable future, so we started to look into the robot  ROS also means that ample support is available from the
simulation environments next.                             open source community.
                                                             In many aspects, making the jump from Real World to
                                                          Gazebo was quite straightforward. A model of Turtlebot3
                                                          was already available for Gazebo, and it controlled effec-
                                                          tively the same as in the Real World. Case in point, the
                                                          QR code following method implemented in Real World
                                                          worked as-is in the simulated world too. What proved
                                                          difficult instead was having multiple robots in the same
                                                          Gazebo simulation. In a Real World environment, differ-
                                                          ent robots can be assigned different domain IDs to avoid
                                                          topic overlaps in messaging. In Gazebo, however, all sim-
                                                          ulated robots belong to the same domain by design, so
                                                          so-called namespaces have to be used differentiate the
                                                          topics. Unfortunately, as namespaces are no longer the
Figure 3: Rosbot 2.0 following a QR code-equipped Turtle-
                                                          preferred solution like they were in ROS1, many ROS2
Bot3.
                                                          components do not work very smoothly with them, re-
                                                          sulting in some hack-like approaches required. Yet in
                                                          the end, we have managed to get multiple Turtlebots
                                                          running in the simulation, each complete with their own
4.3. Migrating from ROS to ROS2
                                                          namespace and navigation stack.
Before moving from the Real World to the 3D Virtual          For all that, running multiple robots in Gazebo presents
World, we also changed the primary development plat- us with a certain reality specific to the simulated world.
form from the original ROS, also known as ROS1, to ROS2.
Switching to the newer platform was not entirely trivial       2
                                                                 http://gazebosim.org/
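The role the namespaces play can be illustrated with a minimal standalone sketch (plain Python, no ROS2 required; the registry class and all robot/topic names here are hypothetical): when several robots share one domain, identical topic names collide unless each is prefixed with a robot-specific namespace.

```python
# Why per-robot namespaces are needed when simulated robots share one
# domain: identical topic names collide unless each is prefixed with
# the robot's namespace. (Illustrative sketch; names are hypothetical.)

class TopicRegistry:
    """Minimal stand-in for a shared messaging domain."""

    def __init__(self):
        self._topics = {}

    def advertise(self, topic: str, owner: str) -> str:
        # A second robot advertising the same topic name in the shared
        # domain would clash with the first robot's publisher.
        if topic in self._topics and self._topics[topic] != owner:
            raise ValueError(f"topic collision on {topic!r}")
        self._topics[topic] = owner
        return topic


def namespaced(ns: str, topic: str) -> str:
    """Prefix a topic with a robot-specific namespace, ROS2-style."""
    return f"/{ns.strip('/')}{topic}"


if __name__ == "__main__":
    domain = TopicRegistry()
    # Without namespaces, all three robots would fight over "/cmd_vel";
    # with namespaces, each gets a distinct topic in the shared domain.
    for ns in ("tb3_0", "tb3_1", "tb3_2"):
        print(domain.advertise(namespaced(ns, "/cmd_vel"), owner=ns))
```

In an actual ROS2 launch file the same effect is achieved by passing a per-robot namespace to each spawned node, so that, e.g., each TurtleBot's velocity topic becomes unique within the shared domain.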
Figure 4: Cooperation running in the 3D Virtual World: Gazebo.
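One such simulation-specific reality is the gap between simulated time and wall-clock time when several robots run at once. It can be quantified as a real-time factor, the ratio of simulated-clock progress to wall-clock progress over a sampling window; the following sketch (plain Python; in Gazebo the simulated clock would come from the simulator rather than being passed in directly) shows the calculation.

```python
# Rough sketch of a simulator's real-time factor (RTF): simulated
# seconds advanced per wall-clock second over a sampling window.
# An RTF of 1.0 means the simulation keeps up with real time; values
# below 1.0 mean it runs slower. (Illustrative; in Gazebo the elapsed
# simulated time would be sampled from the simulator's clock.)

def real_time_factor(sim_elapsed: float, wall_elapsed: float) -> float:
    """RTF = simulated seconds advanced per wall-clock second."""
    if wall_elapsed <= 0:
        raise ValueError("wall-clock window must be positive")
    return sim_elapsed / wall_elapsed


if __name__ == "__main__":
    # E.g., 6.0 simulated seconds advancing over a 10.0 s wall-clock
    # window corresponds to the 0.4-0.7x slowdowns discussed below.
    print(real_time_factor(sim_elapsed=6.0, wall_elapsed=10.0))
```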



Increasing the number of robots simulated also increases the processing power required quite significantly. So far, we have been running Gazebo in a virtualized Ubuntu 20.04 on work-use laptops. With just three robots present in the simulation, the simulation runs at anything between 0.4–0.7 times the ideal normal speed. Evidently, this indicates a need for a more powerful, possibly distributed solution, which we are looking into.

   Though it can now be said that robot simulation, too, certainly has its own set of challenges, getting the Gazebo setup working has been quite beneficial in the end. The simulation environment can be edited at will, and resetting the simulation is naturally much simpler, although, for reasons yet unclear, we are experiencing bugs with both, to much inconvenience. Particular to our scenario, lighting conditions are no longer an issue when it comes to QR code detection, and item delivery actions can be simulated simply by spawning and despawning items. Currently, Gazebo is our main platform for development, as seen with our newest demo.

4.5. Ontology Status

In the 2D Block World, we have currently implemented a minimal extension to DUL with concepts related to cooperation and planning, i.e., goals, plans, workflows, tasks, and actions, accompanied by a few physical object imitations residing in the environment, such as doors, which can be manipulated by completing the tasks (using particular actions).

   Although the first simulations in the 2D Block World seem promising, it is still unclear how much work one needs to do to provide similar functionality in the 3D Virtual World or the Real World for the physical objects. Sensing the states of the objects, e.g., whether a door is open or closed, may become a problem especially in the Real World if the environment does not provide any support for it. However, this problem is not unique to our approach, as any autonomous robot encounters similar hardships in understanding its current situation.

5. Related work

In this section, we discuss related research on architectures enabling autonomous robot cooperation, on leveraging ontologies for forming an understanding of cooperation possibilities and situations, and on task planning and decision making in the context of autonomous robot cooperation.

5.1. Architectures for Autonomous Robot Cooperation

Autonomous robots cooperating in uncertain and constantly changing environments have been studied for many years. The general interest in the overall topic has spawned several research subfields, e.g., swarm robotics [6], collaborative robotics (cf. [7]), and unmanned autonomous vehicles (UAV) (cf. [8]).

   We find that the works closest to ours from the architectural perspective relate to tightly coupled multi-robot cooperation. For example, Chaimowicz et al. [9] have studied an architecture whose key feature is flexibility, enabling changes in leadership and assignment of roles during the execution of a task. While the approach allows dynamic behavior, the cooperation is still tightly coupled. In our approach, each robot is expected to individually execute its tasks and then ask for help when needed; hence the cooperation is less tightly coupled. In addition, the aim is not to jointly execute predefined tasks but instead to enable the robots to learn from their environment and their peers so that they could independently form new plans and meet their personal goals.

   While Chaimowicz et al. also use the transportation of objects as an example, the same use case has been studied many times over the years. Recently, Zhang et al. [10] as well as Manko et al. [11] have studied control architectures that use deep reinforcement learning for the transportation of large or heavy objects, with a particular focus on decentralized decision making. While these approaches have similarities to our work, ours aims more at enabling individual robots to fulfill their personal goals instead of the group's goal. Hence our architecture would likely not be well suited for such tightly coupled cooperation. However, we can learn from their experiences in using deep learning technologies and Q-learning-based algorithms for training the robots to execute a tightly coupled task, and in the future, we could try a similar approach in our 2D Block World.

5.2. Ontologies for Cooperation

Ontologies have been widely used to make agents and robots understand the structures of the physical and social world around them (see, e.g., Olivares-Alarcos et al. [12], Beetz et al. [2]), and initiatives considering their usage to build robot collectives that can communicate and cooperate have been suggested before, e.g., RoboEarth [13]. In contrast to RoboEarth, cooperation understanding and planning take place inside the individual robots in our architecture. The robots do not share their world views in general, as they are assumed to also hold information that should not be shared with others, such as maps of restricted areas or passwords. Instead, they only exchange information relevant to the current situation and goals directly with each other. That said, cloud-based solutions, such as RoboEarth, could be integrated into the architecture as optional components.

   Mainly due to the advent of IoT, ontologies prove to be an exciting starting point for robots to understand the world, as built environments are getting populated with intelligent devices capable of communicating with other computational actors. This means that, e.g., a door can be opened using software communication alone, without relying on physical door manipulation, and that sensors and other IoT devices may send information about their physical composition, purpose, and capabilities using ontological representations. This aids cooperation, especially on low-end robots, as the robot does not need to perceive these attributes from its raw sensor outputs such as camera streams.

5.3. Planning for Agents and Robots

Single-robot planning may be approached from multiple perspectives. Two often used ones are heuristic shortest-path search, such as the famous A* algorithm and its dynamic counterparts, and solutions used for logical optimization problems, e.g., (weighted) maximum satisfiability solvers. Shortest-path search provides (estimates of) paths for moving from one node to another in a graph and aims to find the path of nodes with the shortest length, while logical optimization aims to find a (maximal or minimal) set of clauses that satisfy certain conditions. Dynamic shortest-path algorithms fit well in environments where the robot may not fully understand its situation, e.g., lack a complete map, while logical optimization excels in cases where it is crucial to ensure the correctness of the solution beforehand.

   Our goal, however, is to provide a planner that uses both logical verification of the workflows, through the fulfillment of each task's start and end conditions, and a heuristic estimate of their execution resources, through peer models and communication. Our approach differs from typical multi-robot task planning (see, e.g., Yan et al. [14]) in that one robot initiates the planning of the workflow phase (task decomposition), and it communicates, based on its peer models, with other robots to find suitable members to execute the tasks (task allocation).

6. Discussion

The implementation work has brought us numerous insights into the realities of cooperation in both real and simulated worlds. We have not yet faced any truly insurmountable issues, but many aspects make working with these environments not entirely straightforward.

   In the Real World, there are innumerable factors that can potentially affect the robots' ability to perform, such as the aforementioned lighting conditions and cables on the floor. Of course, for our project's purposes, this would not seem a significant issue, as we can perform our tests in a carefully designed, controlled environment (in fact, we have long had plans to set up such an environment on the university campus, but the ongoing COVID-19 situation means these plans are still postponed). However, this does not remove the fundamental issue of unexpected factors. How would this uncertainty be dealt with in a hypothetical practical environment? One possible approach would be introducing some degree of "self-healing" properties into the design, both in terms of the robots' performance and the cooperation context. Currently, however, extensive work on this aspect is beyond the scope of this project.

   In contrast to the unpredictable Real World, the simulated 3D environments are inherently about control and thus easier to work with. Nevertheless, considerable effort can still be needed to set up a simulation the desired way, as seen with the difficulties in simulating multiple robots simultaneously in Gazebo. It also became apparent that multi-robot simulation can involve substantial hardware requirements. Still, we have found that the Gazebo 3D simulation fulfills its purpose satisfactorily as a platform where cooperative actions can be developed for Real World (ROS2) robots in a more controlled manner.

   However, it should also be noted that while the usage of 3D simulation does simplify some aspects, designing Actions for the Task Runtime remains an endeavor that relies on detailed knowledge of a particular robot's inner workings. In contrast to the primary interests of this project, which lie in the dynamic and creative aspects of robot cooperation, a nontrivial effort remains necessary in creating the actual units of implementation, the Actions. Future work could explore how to design the Actions more efficiently.

7. Conclusions

In this paper, we presented a new software architecture and development approach for diverse multi-robot cooperation. The core idea of the new approach is improving the situational context by developing and training peer models and an ontology that improves understanding of the world. The peer models enable the robots to take their peers' capabilities and goals into account in their reasoning, and the ontology can be used as a shared basis for communication and for forming cooperation plans. The presented work is still at an early stage, but we have already obtained encouraging results and will continue the work.

Acknowledgments

The work was supported by the Academy of Finland (project 328729).

References

[1] J. Berrocal, J. Garcia-Alonso, J. Galán-Jiménez, J. M. Murillo, N. Mäkitalo, T. Mikkonen, C. Canal, Situational context in the programmable world, in: 2017 IEEE SmartWorld, 2017, pp. 1–8.
[2] M. Beetz, D. Beßler, A. Haidu, M. Pomarlan, A. K. Bozcuoğlu, G. Bartels, KnowRob 2.0 — a 2nd generation knowledge processing framework for cognition-enabled robotic agents, in: 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 512–519.
[3] L. Chaimowicz, M. Campos, V. Kumar, Simulating loosely and tightly coupled multi-robot cooperation (2001).
[4] C. Castelfranchi, Modelling social action for AI agents, Artificial Intelligence 103 (1998) 157–182.
[5] A. Edwardes, D. Burghardt, M. Neun, FIPA Communicative Act Library Specification. Foundation for Intelligent Physical Agents, in: University of Maine: Orono, John Wiley & Sons, 2000, pp. 377–387.
[6] L. Bayındır, A review of swarm robotics tasks, Neurocomputing 172 (2016) 292–321.
[7] S. El Zaatari, M. Marei, W. Li, Z. Usman, Cobot programming for collaborative industrial tasks: An overview, Robotics and Autonomous Systems 116 (2019) 162–180.
[8] N. Mathew, S. L. Smith, S. L. Waslander, Planning paths for package delivery in heterogeneous multirobot teams, IEEE Transactions on Automation Science and Engineering 12 (2015) 1298–1308.
[9] L. Chaimowicz, T. Sugar, V. Kumar, M. Campos, An architecture for tightly coupled multi-robot cooperation, in: Proceedings 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No.01CH37164), volume 3, 2001, pp. 2992–2997.
[10] T. Zhang, G. Liu, Design of formation control architecture based on leader-following approach, in: 2015 IEEE International Conference on Mechatronics and Automation (ICMA), 2015, pp. 893–898.
[11] S. V. Manko, S. A. K. Diane, A. E. Krivoshatskiy, I. D. Margolin, E. A. Slepynina, Adaptive control of a multi-robot system for transportation of large-sized objects based on reinforcement learning, in: 2018 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus), 2018, pp. 923–927.
[12] A. Olivares-Alarcos, D. Beßler, A. Khamis, P. Goncalves, M. K. Habib, J. Bermejo-Alonso, M. Barreto, M. Diab, J. Rosell, J. Quintas, et al., A review and comparison of ontology-based approaches to robot autonomy, The Knowledge Engineering Review 34 (2019) e29.
[13] M. Beetz, J. Civera, R. D'Andrea, J. Elfring, D. Galvez-Lopez, RoboEarth: a world wide web for robots, IEEE Transactions on Robotics and Automation 6 (2011) 69–82.
[14] Z. Yan, N. Jouandeau, A. A. Cherif, A survey and analysis of multi-robot coordination, International Journal of Advanced Robotic Systems 10 (2013) 399.