<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A User-friendly Interface to Control ROS Robotic Platforms</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ilaria Tiddi</string-name>
          <email>ilaria.tiddi@vu.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Emanuele Bastianelli</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gianluca Bardaro</string-name>
          <email>gianluca.bardaro@polimi.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Enrico Motta</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, VU University Amsterdam</institution>
          ,
          <addr-line>NL</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Dip. di Elettronica, Informazione e Bioingegneria</institution>
          ,
          <addr-line>Politecnico di Milano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Knowledge Media Institute, The Open University</institution>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this work we present a user interface that assists non-expert users in designing complex robot behaviours, hence facilitating the deployment of robot-integrated applications. Due to the increasing number of robotic platforms available for commercial use, robots are being approached by users with different backgrounds, whose interests lie in the high-level capabilities of the platforms rather than in their technical architecture. Our interface allows non-experts to use a robot as a development platform, i.e. to give it high-level commands (e.g. autonomous navigation, vision, natural language generation) by relying on a basic ontology of high-level capabilities mapped onto the robot's low-level capabilities (e.g. communication, synchronisation, drivers) exposed by ROS, the most common robotic middleware. To show how our work can significantly reduce the effort required to have robots achieve basic tasks, we propose a live demonstration in which the ISWC audience will remotely program a robot to achieve different tasks without any previous knowledge of ROS.</p>
      </abstract>
      <kwd-group>
        <kwd>Knowledge Representation</kwd>
        <kwd>Robots</kwd>
        <kwd>Ontology Engineering</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>The goal of our work is to make robotic platforms usable by non-expert users, hence simplifying the time-consuming process of configuring and programming robots for integration into real-world applications. In this demo, we present a user-friendly interface that we designed and implemented to allow users to make robots of different types and capabilities achieve basic tasks without any previous knowledge of robot programming.</p>
      <p>Our motivation stems from the growth of robotic platforms available for commercial use, which parallels the increasing number of non-technical users that approach robots to exploit them as development platforms for their own purposes. For example, we are looking at integrating robots in a number of smart city applications, such as parking monitoring, building surveillance and garbage collection. Without the technical knowledge or the expertise of dedicated robot developers, we are limited to exploiting the capabilities offered out of the box by commercial platforms (e.g. using a remote-controlled drone to record photos or videos), while more advanced usages supported by those systems remain out of reach (e.g. programming the drone to autonomously survey an area).</p>
      <p>These limitations could be overcome by placing a middleware between the robots and an application, such as the ROS framework (Robot Operating System, http://www.ros.org/), promoted by the robotics community with the aim of relieving developers from the management of low-level components. From an end-user perspective, however, ROS remains a low-level technical platform sitting directly above a robot's hardware components. This means that programming a robot to achieve high-level tasks is still a time-consuming process, requiring advanced knowledge of robot-specific ROS components, the ROS framework, and robotics in general.</p>
      <p>
        Ontologies, on the other hand, have proven successful in facilitating the use of complex systems, e.g. sensor networks, smart products and smart homes [1-3]. Starting from the idea that an ontological representation of robot capabilities can help abstract away from the low-level implementation of the robot, we developed in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] an ontology-based system to derive high-level capabilities (such as autonomous navigation, vision, natural language understanding) from the ROS components (managing low-level capabilities such as communication, synchronisation, driver control). This paper focuses on the presentation of the user interface and the live demonstration of the tool. During the demonstration, users without previous expertise will remotely program robots to achieve specific, purpose-made tasks in our department (the Knowledge Media Institute). We will show how using an ontological abstraction is beneficial not only for reducing learning times and effort, but also for increasing the interoperability of robotic platforms.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Exposing Robot Capabilities</title>
      <p>The user interface, presented in Figure 1, is composed of three blocks: the Capability Panel (left block), the Construct Panel (right block) and the main Program Panel (middle block). All demo material, i.e. a video of the process, the ontology and the code to perform the tasks, is available online at https://tinyurl.com/ybqg7om5.</p>
      <p>
        Capability Panel. This panel offers the user the list of capabilities abstracted from the robotic platform to which the system is connected. We refer the reader to [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] for an in-depth description of the ontology-based system behind the interface. Here, we limit ourselves to mentioning that the system is based on a formal representation of the ROS architecture (in short, :Nodes that exchange :Messages routed via specific :Topics; see http://wiki.ros.org/ROS/Concepts), and that such communication exchanges are mapped onto specific robot :Capability(ies), organised into a taxonomy whose highest classes are :Sensing, :Movement and :RobotKnowledge. Based on the ontology and the robot running on ROS, an Analyser module scans all active ROS components to determine the capabilities offered by the robot, using the high-to-low-level capability mappings described in the knowledge base. The retrieved list of capabilities is then shown to the user, who can use it to create an ad-hoc program. Note that capabilities come in two modalities: read capabilities, which give information about the robot (e.g. the Vision capability offering data from a camera), and write capabilities, which expect some input (e.g. Navigation expects a position in space).
      </p>
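      <p>The Analyser step described above can be sketched as follows. This is a minimal illustration, not the actual tool: the mapping table stands in for the ontology's high-to-low-level mappings, and the topic and message-type names follow common ROS conventions but are assumptions about a particular robot.</p>

```python
# Illustrative stand-in for the ontology's capability mappings:
# each ROS message type is associated with a high-level capability
# and its modality (read or write).
CAPABILITY_MAP = {
    "sensor_msgs/Image": ("Vision", "read"),
    "geometry_msgs/PoseStamped": ("Navigation", "write"),
    "std_msgs/String": ("Speech", "write"),
}

def analyse(active_topics):
    """Map the active ROS topics (name -> message type) to the
    high-level capabilities the robot exposes."""
    capabilities = {}
    for topic, msg_type in active_topics.items():
        if msg_type in CAPABILITY_MAP:
            name, modality = CAPABILITY_MAP[msg_type]
            capabilities[name] = {"modality": modality, "topic": topic}
    return capabilities

# A robot publishing camera images and accepting navigation goals:
caps = analyse({
    "/camera/rgb/image_raw": "sensor_msgs/Image",
    "/move_base_simple/goal": "geometry_msgs/PoseStamped",
})
print(sorted(caps))  # → ['Navigation', 'Vision']
```

      <p>In the real system the topic list is obtained by querying the running ROS graph rather than being hard-coded, but the lookup against the knowledge base proceeds in the same spirit.</p>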
      <p>Construct Panel. The Construct Panel shows the user the set of programming constructs that he or she can choose from to create the program that instructs the robot. A program is expressed in a simple imperative language, in which the atomic blocks are invocations of the robot capabilities available in the left panel (using the add button), combined through a set of control operators, such as if-then-else, while-do and the repeat statement. An additional no-operation statement can be employed to perform empty operations. The parameters of a capability can be used in the conditions, so as to exploit any robot output to drive the program flow (e.g. moving forward until an object is detected).</p>
      <p>Program Panel. The Program Panel in the middle of Figure 1 shows the user the program he or she has created through the constructs and capabilities of the respective panels. Once the user is satisfied with the set of commands, they are sent to the robot through a Dynamic module. If a read capability is requested, the Dynamic module relays to the interface the messages received from the robot. If a write capability is requested, the Dynamic module reads the parameters from the interface, structures them in a message and sends them to the robot, which then performs the operations.</p>
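      <p>A minimal sketch of how such a program could be interpreted against the read/write capability model described above. The Robot stub, the capability names and the dictionary encoding of the constructs are illustrative assumptions, not the tool's actual implementation:</p>

```python
class Robot:
    """Toy stand-in for the robot behind the Dynamic module."""
    def __init__(self):
        self.position = 0
        self.object_at = 3  # pretend the camera detects an object here

    def read(self, capability):
        # Read capabilities return information about the robot.
        if capability == "Vision":
            return self.position == self.object_at
        raise ValueError(f"unknown read capability: {capability}")

    def write(self, capability, **params):
        # Write capabilities expect inputs and trigger an action.
        if capability == "Navigation":
            self.position += params.get("step", 1)
        else:
            raise ValueError(f"unknown write capability: {capability}")

def holds(robot, cond):
    """Evaluate a condition built from a read capability."""
    value = robot.read(cond["capability"])
    return not value if cond.get("negate") else value

def run(program, robot):
    """Interpret a list of statements (constructs) against a robot."""
    for stmt in program:
        op = stmt["op"]
        if op == "invoke":
            robot.write(stmt["capability"], **stmt.get("params", {}))
        elif op == "if":
            branch = stmt["then"] if holds(robot, stmt["cond"]) else stmt.get("else", [])
            run(branch, robot)
        elif op == "while":
            while holds(robot, stmt["cond"]):
                run(stmt["body"], robot)
        elif op == "repeat":
            for _ in range(stmt["times"]):
                run(stmt["body"], robot)
        elif op == "noop":
            pass  # the no-operation statement

# "Move forward until an object is detected":
robot = Robot()
run([{"op": "while",
      "cond": {"capability": "Vision", "negate": True},
      "body": [{"op": "invoke", "capability": "Navigation", "params": {"step": 1}}]}],
    robot)
print(robot.position)  # → 3 (stopped where the object was detected)
```

      <p>The example mirrors the division of labour in the interface: the conditions draw only on read capabilities, while the atomic statements trigger write capabilities, which the Dynamic module would translate into ROS messages.</p>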
    </sec>
    <sec id="sec-3">
      <title>Planned Demonstration</title>
      <p>We plan a live demonstration where the audience will be able to remotely instruct a wheeled ground robot operating in an office environment (the Knowledge Media Institute) to solve a number of high-level tasks using the presented interface, i.e. creating a program that allows the robot to achieve a specific task. The tasks are of increasing difficulty, so that the audience can progressively become familiar with the interface.</p>
      <p>The goal of the demonstration is to show users without previous robotics expertise that the effort and time required to program robots can be significantly reduced by relying on ontological knowledge, hence making a step towards facilitating the development of applications in which robots are integrated. If time and conditions allow, we also plan to collect quantitative and qualitative data about the audience's performance (e.g. time taken to perform an action, or robot precision) in view of a larger and more thorough evaluation of our tool.</p>
      <p>In the future, we will offer a set of fine-grained APIs for a more high-level communication with the robot, following the structure of the user interface presented in this paper.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nugent</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mulvenna</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Finlay</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hong</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          :
          <article-title>Semantic smart homes: towards knowledge rich assisted living environments</article-title>
          .
          <source>Intelligent Patient Management</source>
          , pp.
          <fpage>279</fpage>
          -
          <lpage>296</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Corcho</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>García-Castro</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Five challenges for the semantic sensor web</article-title>
          .
          <source>Semantic Web</source>
          <volume>1</volume>
          (
          <issue>1-2</issue>
          ),
          <fpage>121</fpage>
          -
          <lpage>125</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Sabou</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kantorovitch</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nikolov</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tokmakoff</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Motta</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Position paper on realizing smart products: Challenges for semantic web technologies</article-title>
          .
          <source>In: Proceedings of the 2nd International Conference on Semantic Sensor Networks, Volume 522</source>
          . pp.
          <fpage>135</fpage>
          -
          <lpage>147</lpage>
          . CEUR-WS.org (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Tiddi</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bastianelli</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bardaro</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>d'Aquin</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Motta</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>An ontology-based approach to improve the accessibility of ROS-based robotic systems</article-title>
          .
          <source>In: Proceedings of the Knowledge Capture Conference</source>
          . p.
          <fpage>13</fpage>
          . ACM (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>