<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Connecting natural language to task demonstrations and low-level control of industrial robots</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Maj Stenmark</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jacek Malec</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dept. of Computer Science, Lund University</institution>
          ,
          <country country="SE">Sweden</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2015</year>
      </pub-date>
      <fpage>25</fpage>
      <lpage>29</lpage>
      <abstract>
        <p>Industrial robotics is a complex domain, not easily amenable to formalization using semantic technologies. It involves such disparate aspects of the real world as geometry, dynamics, constraint satisfaction, planning and scheduling, real-time control, robot-robot and human-robot communication and, finally, the intentions of the robot user. Representing such different kinds of knowledge is a challenge, and research on combining these topics is only in its infancy. This paper describes our attempts to combine descriptions of robot tasks in natural language with their realizations on robot hardware involving force sensing, ultimately leading to the potential of learning new robot skills for force-based assembly. We believe this is a novel approach that opens possibilities of semantic anchoring for learning from demonstration.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>Recent developments in robotics, artificial intelligence and
cognitive science have led to bold predictions about the
soon-to-come robotization of all aspects of human life. Robots
will help the elderly, perform mundane jobs no one wants,
drive our cars, fill our refrigerators when needed, tirelessly
rehabilitate patients in need of physical exercise, fight our
wars, become our sex partners, and so on. Some even draw
the conclusion that robots will take over Earth and turn
humans into obsolete pets.</p>
      <p>However, when observing the development of the robotics
field, we realize that this perspective is still far, far
away. Service robots are clumsy and unskilled, no one
trusts a robotized car, and production still relies on simple
manipulators programmed in a classical manner by skilled
engineers. Any attempt to instruct a robot to perform a
concrete manufacturing task costs person-weeks of
work by skilled system-integrator engineers, who take into
account the geometrical layout of the workcell and all the objects
involved, including their geometry, physical properties and,
last but not least, the purpose of the task. It is this implicit
knowledge, which needs to be transferred into robot code, that
makes the task so complex.</p>
      <p>The use of semantic technologies has been advocated for at
least a decade. Unfortunately, industrial robotics is a complex
domain, not easily amenable to formalization. It involves
such disparate aspects of the real world as geometry,
dynamics (including forceful interaction with the work
objects), constraints, planning, scheduling, optimization,
real-time control, robot-robot and human-robot communication
and, finally, the intentions of the robot user. Representing such
different kinds of knowledge is a challenge, and research
on combining those topics is only in its infancy.</p>
      <p>In particular, we have devoted recent years to understanding and
describing robotic assembly, including force-based operations
(snap, drill, press, etc.), using a machine-readable formalism
expressing the semantics of possible robot actions. Without
it, there is no way to create meaningful reasoning
leading from the task specification (what needs to be
manufactured) to task synthesis (how this can be achieved
using the available robot skills) and robust execution of
the synthesized code on a particular architecture; not to
mention swift error handling in case of unexpected problems,
portability of a robot skill from one robot to another, and
learning of new skills.</p>
      <p>
        In previous research, we have focused our attention on two
areas: interaction between the user and the robotic system,
preferably on the user’s terms, e.g., using natural
language [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], and representation of force-controlled
assembly operations, particularly problematic due to the inherent
mix of continuous and discrete aspects [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Besides being
able to talk with the robot about a force-controlled assembly
operation, we would like it to be learnt automatically from a
demonstration and be represented semantically in a manner
enabling portability among different robots.
      </p>
      <p>
        So far, such systems have been developed only in
research laboratories. Our own research is done in the context
of several EU projects, in particular ROSETTA, PRACE and
SMEROBOTICS, aiming at developing intelligent interactive
systems suitable for inexperienced users, such as those at SMEs.
Before these systems reach the factory floor, though, they need to
be filled with sufficient production knowledge to
become useful. Knowledge acquisition is a bottleneck in
developing practical systems, as it can only happen while
the system is used, but the system won’t be useful before it is done:
a classical chicken-and-egg problem. Therefore the only
viable solution is a learning system, capable of sharing
its experiences by storing them in a (possibly cloud-based)
knowledge base [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and using experiences of other robots by
importing and adapting their skills. However, such a solution
requires a common understanding of the contents of this
knowledge base, thus, a commonly agreed-upon semantics.
      </p>
      <p>
        The work on standardization of the robotics domain is already
quite well advanced. There exist ontologies for specific
domains, such as service robotics and surgical robotics, and a
core ontology for robotics and automation (CORA) recently
standardized by IEEE [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. However, these ontologies introduce concepts
in symbolic form without properly connecting them to all their
denotations, e.g., the robot programs instantiating the skills named
in these ontologies. Our work addresses this problem by
providing concrete denotations belonging to several modalities.
As mentioned before, we describe robot actions
using natural language, assembly graphs,
transition systems, the iTaSC formalism, and the actual
robot code. These multiple modalities coexist in one
system, letting the reasoner switch between representations
when the need arises.
      </p>
      <p>Learning from demonstration leads to new problems in
semantic anchoring of robot actions, as there is no obvious,
apparent meaning in robot movements. Semantics may be
guessed, derived by inductive reasoning, or attributed
post factum by humans via some form of annotation. In
particular, force-based assembly is problematic, as quite often
the difference between success and failure depends on a
particular profile of the force signal. So far, this issue has
been approached using sensor fusion techniques, without
direct support from semantics. Our work attempts to remedy
this situation by introducing natural language into the picture,
letting assembly be not only detected via sensor
readings but also simultaneously described in words.</p>
    </sec>
    <sec id="sec-2">
      <title>II. RELATED WORK</title>
      <p>
        In the domain of service robotics, there are some
interesting frameworks for representation of household tasks
and environments. KnowRob [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] is a knowledge processing
system that combines declarative and procedural knowledge
from multiple sources, e.g., the RoboEarth [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] database and
web sites. A similar project is RoboHow [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], which
developed a knowledge-based reasoning service OpenEASE [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
and attempts at bridging the gap from symbolic planning
to constraint-based control [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Ontologies for kit building
applications for industrial robots have been developed by
Balakirsky et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and Carbonera et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] developed
an ontology for positions. We have already mentioned the
standardization work of IEEE Working Group ORA [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        We are interested in integrating low-level statistical task
representations taken from demonstrations. Such tasks can
be represented by a trajectory or force profile. The
trajectories can be extracted from the demonstration by first
applying segmentation algorithms and then parameterizing
each segment as a trajectory. Niekum et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] use
Beta Process Autoregressive Hidden Markov Models from
Fox et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] to automatically segment demonstrations and
dynamic movement primitives (DMPs) [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] to represent the
trajectories. Since the statistical properties of semantically
different sub-tasks can be similar, they use predecessor states
to refine the classification and determine the transitions
in a finite state machine. Other learning methods include
reinforcement learning, used by Metzen et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]
to learn skill templates, and Iterative Learning Control, used
by Nemec et al. [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] to follow demonstrated force profiles.
      </p>
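<p>As an illustration of the trajectory representation discussed above, a one-degree-of-freedom discrete DMP can be sketched as follows. This is a minimal sketch of the standard formulation, not the implementation used by the cited systems; the gains and the basis-function layout are common illustrative defaults.</p>

```python
import numpy as np

def dmp_rollout(y0, goal, weights, tau=1.0, dt=0.01, alpha=25.0, alpha_x=3.0):
    """Integrate a 1-DOF discrete DMP and return the trajectory.

    weights: weights of the Gaussian basis functions of the forcing term.
    Gains follow common defaults (beta = alpha/4 for critical damping).
    """
    beta = alpha / 4.0
    n_basis = len(weights)
    # Basis centers spread along the decaying phase variable x in (0, 1].
    centers = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
    widths = n_basis ** 1.5 / centers / alpha_x   # heuristic overlap

    y, dy, x = y0, 0.0, 1.0
    traj = [y]
    for _ in range(int(tau / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)
        forcing = x * (goal - y0) * psi.dot(weights) / (psi.sum() + 1e-10)
        ddy = (alpha * (beta * (goal - y) - tau * dy) + forcing) / tau ** 2
        dy += ddy * dt
        y += dy * dt
        x += (-alpha_x * x / tau) * dt   # canonical system: dx = -a_x * x / tau
        traj.append(y)
    return np.array(traj)

# With all forcing weights at zero, the primitive reduces to a
# critically damped attractor toward the goal.
path = dmp_rollout(y0=0.0, goal=1.0, weights=np.zeros(10))
```

Learned weights reshape the path between start and goal while the attractor dynamics preserve convergence, which is what makes the representation suitable for parameterizing demonstrated segments.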
      <p>
        One way to annotate objects and actions is to describe
them using natural language. Matuszek et al. [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], Kollar et
al. [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] and Landsiedel [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] use natural language to describe
routes and Walter et al. [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] use language descriptions to
semantically annotate maps. She et al. [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] study a dialogue
system, while Cakmak [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] evaluates methods for teaching
operators how to interact with a robot using kinesthetic
teaching and dialogue.
      </p>
      <p>
        Please note that our understanding of the term multimodal
semantics differs from the one quite commonly encountered
in the literature, see e.g. [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], where the authors aim at finding
the meaning of a particular text fragment using a statistical
approach grounded in both text and image corpora. There,
however, no attempt is made to use this semantics in the reverse
direction, to generate new utterances (which our robot programs
would correspond to).
      </p>
    </sec>
    <sec id="sec-3">
      <title>III. CURRENT WORK</title>
      <p>The focus of our current work is to semantically annotate task
demonstrations to enable reuse and reasoning. This involves
annotating log data with quantities, units, and task states. The
logs can then be used to identify force/torque and position
constraints as well as application-specific parameter values
(positions, velocity, stiffness, etc.). The demonstrations are used to
segment the task into different sub-skills and to extract parameters
for each skill. One approach is to describe the trajectory
of each sub-skill using DMPs and then parameterize the
primitives and describe them with, for example, a skill type,
preconditions, and postconditions.</p>
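<p>A minimal sketch of how such annotated segments might be represented and extracted follows, assuming a toy log format and an invented schema (the paper does not fix one). A real system would segment on statistical trajectory models rather than on gripper events alone.</p>

```python
from dataclasses import dataclass, field

@dataclass
class SkillSegment:
    """One annotated sub-skill extracted from a demonstration log.

    The field names are illustrative, not a schema from the paper.
    """
    skill_type: str                    # e.g. "move", "transport", "search"
    start: float                       # segment start time in the log [s]
    end: float                         # segment end time in the log [s]
    parameters: dict = field(default_factory=dict)   # velocity, stiffness, ...
    preconditions: list = field(default_factory=list)
    postconditions: list = field(default_factory=list)

def segment_by_gripper_events(log):
    """Toy segmentation: cut the log wherever the gripper state changes.

    `log` is a list of (time, gripper_open) samples.
    """
    segments, start = [], log[0][0]
    for (t0, g0), (t1, g1) in zip(log, log[1:]):
        if g0 != g1:
            # Open gripper -> free motion; closed gripper -> transporting.
            segments.append(SkillSegment("move" if g0 else "transport", start, t1))
            start = t1
    segments.append(SkillSegment("move", start, log[-1][0]))
    return segments
```

Each resulting segment would then be filled in with its trajectory parameters (e.g. fitted DMP weights) and the pre- and postconditions discussed above.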
      <p>As an example, when demonstrating picking and placing
of an object, the task can be segmented into different
sub-skills. First the robot approaches the object, opens the
gripper, moves into the pick position, closes the gripper, retracts
from the surface, moves to the place position (perhaps using
via positions as well), positions the object correctly, releases
it and retracts. Each segment can be described using a
trajectory (e.g., a DMP) in some reference frame together
with a gripper state. Multiple demonstrations can be used
for each sub-skill in order to detect, for example, the
relevant reference frame and the allowed gripping poses. To
enable reuse, we are working on annotating the segments
with the allowed start positions and gripper state, a skill
type and postconditions. This will allow the planner to add
required actions before or after the skill and to add
error-handling procedures to the task (e.g., if the robot drops the
object when transporting it, the object should be localized
and picked up again). The skill also has to be parameterized
so that it can be initialized correctly, for example, by specifying
the controller, reference frames and velocity values.</p>
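<p>The planner behavior described above, checking a skill's postcondition and inserting error-handling actions such as re-localizing and re-picking a dropped object, can be sketched as a simple execution loop. The skill interface (run, postcondition, recovery) is a hypothetical illustration, not the system's actual API.</p>

```python
def execute_with_recovery(skills, world, max_retries=3):
    """Run a skill sequence, prepending recovery actions and retrying
    whenever a skill's postcondition fails (e.g. the object was dropped
    during transport).

    `skills` is a list of objects with .run(world), .postcondition(world)
    and .recovery() -> list of skills; all names are illustrative.
    """
    queue = list(skills)
    retries = 0
    while queue:
        skill = queue.pop(0)
        skill.run(world)
        if skill.postcondition(world):
            retries = 0
            continue
        if retries >= max_retries:
            raise RuntimeError(f"skill failed after {max_retries} retries")
        retries += 1
        # Prepend recovery actions (e.g. localize + pick) and retry the skill.
        queue = skill.recovery() + [skill] + queue
    return world
```

The point of the annotations is precisely to make such a loop possible: without machine-readable postconditions, the planner cannot tell that a recovery action is needed, let alone which one.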
      <p>
        Another example is force-controlled assembly. Here, the force
data is not used for sensor fusion; it is used to control
the motions of the robot and to signal failure or success
of the assembly. In a snap-fit assembly skill, where two
plastic pieces, a switch and a box shown in Fig. 1, are
”snapped” together, the force signature indicates whether the
snap occurred or not. In previous work [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] such a task could
be expressed using force constraints directly in guarded
motions. Using a graphical user interface, primitive actions
and skills could be combined into a sequence. An example
is shown in Fig. 2. In the sequence, the box is first picked
and placed on a fixture using three search motions. The first
motion moves the robot down until it senses contact forces
in the z-direction; then, while pressing down, it searches in
the y-direction until contact with the wall; finally, it searches
in the x-direction while simultaneously pressing down and
towards the wall. In the sequence, pickbox, movetofixt,
pickswitch and retract are position-based motions
running on the native robot controller. The snapFitSkill
is a reused skill, which in turn contains multiple guarded
searches. From the graphical representation, the skill
specification can be exported to XML format (see excerpt in Fig. 3)
and to a runnable format, see Fig. 4. The skills are semantically
annotated with sensor and controller type, and the parameters
are described with units.
      </p>
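<p>The guarded search motions in the snap-fit sequence can be sketched as a single primitive: move along a direction until the sensed force along a given axis exceeds a threshold. The robot interface below is a stand-in for illustration, not a real controller API.</p>

```python
import numpy as np

def guarded_search(robot, direction, force_axis, threshold,
                   step=0.001, max_steps=500):
    """Move in small steps along `direction` until the sensed force along
    `force_axis` exceeds `threshold` (a guarded motion, as in the three
    search motions of the snap-fit sequence). `robot` is assumed to offer
    .step(delta) and .force() -> np.ndarray.
    """
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    for _ in range(max_steps):
        if abs(robot.force()[force_axis]) >= threshold:
            return True          # contact detected: guard condition met
        robot.step(step * direction)
    return False                 # no contact within the allowed travel

# The snap-fit placement would chain three such calls:
#   guarded_search(r, [0, 0, -1], force_axis=2, threshold=5.0)  # down to surface
#   guarded_search(r, [0, -1, 0], force_axis=1, threshold=5.0)  # to the wall in y
#   guarded_search(r, [-1, 0, 0], force_axis=0, threshold=5.0)  # to the corner in x
```

The Boolean result is exactly the kind of signal that the semantic annotations attach meaning to: a guard that never fires distinguishes a failed snap from a successful one.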
      <p>In another assembly, a rectangular metal plate (a shield
can) is inserted onto a printed circuit board (PCB). The PCB
is attached to a fixture, which is attached to a force sensor.
The assembly starts by tilting the shield can above the PCB
(see Fig. 5) and moving down until a corner touches the board.
Then, the robot attaches one corner of the plate to a corner
on the PCB and rotates the plate into place. The rotation is
first carried out about a vector in the xy-plane of the PCB until either
the long or the short side of the rectangle touches the PCB;
then the last side has to be rotated into place. That is, if the
longer side of the rectangle is parallel with the x-axis and
the rotation about an xy-vector from the initial tilted position
will align it with the PCB, the execution will branch into a
rotation about the x-axis until the short side is aligned with
the PCB. Otherwise, the rotation will align the short side
first, as seen in Fig. 6.</p>
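<p>The branch in the rotation sequence can be sketched schematically; the step names below are invented labels for the motions described above, not identifiers from the system.</p>

```python
def plan_rotation_order(long_side_aligned_first):
    """Return the rotation sequence for seating the shield can, following
    the branching described above: whichever side of the rectangle the
    initial rotation aligns, the remaining side is rotated into place last.
    This is a schematic of the branch logic only, not a motion controller.
    """
    if long_side_aligned_first:
        # Long side already on the PCB: rotate about the x-axis (parallel
        # to the long side) until the short side is aligned as well.
        return ["tilt_and_touch_corner", "rotate_about_xy_vector",
                "rotate_about_x"]
    # Otherwise the short side lands first and the long side follows.
    return ["tilt_and_touch_corner", "rotate_about_xy_vector",
            "rotate_about_y"]
```

Which branch is taken is decided at run time from the force signature indicating which side made contact first.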
      <p>
        To lower the threshold for the user, we want to use natural
language dialogues to describe the demonstration and extend
the task. Together with the parameterized demonstrations,
this will allow the user to use high-level structures such as
loops and if-then-else statements, which are easily described
using language but tedious or difficult to describe using
demonstrations only. In our current system, the user can
instruct the robot using unstructured text or dictate the task
using Google dictation tools. An example instruction is
displayed in Fig. 7. All parameters have default values, which
makes the high-level nominal task easy and fast to generate
from text. The programming interface uses language-specific
statistical tools to extract the semantics of the sentences,
followed by a rule-based mapping to robot skills and world objects.
At the moment, English and Swedish [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] are supported as
instruction languages.
      </p>
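<p>The rule-based mapping from parsed sentences to skills with default parameter values might look as follows; the lexicon, skill names and defaults are invented for illustration and are not those of our system.</p>

```python
# Hypothetical skill lexicon: verb -> (skill name, default parameters).
SKILL_LEXICON = {
    "pick":  ("PickSkill",  {"speed": 0.1, "grasp_force": 10.0}),
    "place": ("PlaceSkill", {"speed": 0.1}),
    "snap":  ("SnapFitSkill", {"force_threshold": 5.0}),
}

def map_instruction(parsed_verbs_and_objects, world_objects):
    """Rule-based mapping from (verb, object-phrase) pairs, as produced by
    a statistical parser, to robot skills bound to known world objects.
    Default parameter values make the nominal task runnable immediately.
    """
    task = []
    for verb, obj_phrase in parsed_verbs_and_objects:
        if verb not in SKILL_LEXICON:
            raise ValueError(f"no skill known for verb {verb!r}")
        skill, defaults = SKILL_LEXICON[verb]
        # Resolve the object phrase against the world model by name match.
        matches = [o for o in world_objects if o in obj_phrase]
        task.append({"skill": skill,
                     "object": matches[0] if matches else None,
                     "params": dict(defaults)})
    return task
```

Because every parameter has a default, the nominal task is generated directly from text and only refined later, e.g. from demonstrations.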
      <p>The immediate future work involves investigating how to
teach pre- and postconditions for skills learned from
demonstration, to enable online reasoning. These conditions need
to be anchored in sensor readings. Inductive inference is one
possibility; another is to use mixed-initiative dialogue with
the user, asking for guidance or confirmation, yet another is
to introduce some annotation tool to be used simultaneously
with the learning procedure.</p>
      <p>It is desirable to have natural language support on all
levels in the system. At the moment, we only support task
instruction, but we also want to be able to describe the world
and connect the perceived objects and situations to (new)
semantic symbols. E.g., saying ”This is a nut” after teaching
the camera system to recognize an object, or describing a
pallet as ”empty”. At the moment, the robot is a passive
participant in the dialogue, only reacting to commands from
the human. When interacting with non-expert users, the robot
should ask questions and offer suggestions on what to
do.</p>
      <p>The next step is to introduce the possibility of extending
the robot knowledge by adding new concepts to the semantic
hierarchy. This is a more complex task than the previous one,
as it involves inducing relations with existing concepts and
proper placing of the new symbol in the IsA hierarchy.</p>
      <p>Yet another interesting problem is to reason about
“synonyms” among robot programs, i.e. syntactically different
structures or programs leading to the same effect. A simple
example is a “localize and pick” task that may use different
kinds of sensors to localize an object, while the goal (of
picking the object from its current location) is achieved
irrespectively of which concrete sensor is used. How to teach
the system that two skills are equivalent in such (or some
other) sense? What needs to be told? What kind of reasoning
needs to be performed?</p>
      <p>Representing knowledge about industrial processes
involving semantically-capable robots is a challenge leading to
fascinating questions. We are quite sure we will have a lot
to do in the years to come.</p>
    </sec>
    <sec id="sec-4">
      <title>ACKNOWLEDGMENTS</title>
      <p>The research leading to these results has received
partial funding from the European Union’s seventh framework
program under grant agreement No. 287787 (project
SMErobotics) and from the European Union’s H2020 program
under grant agreement No. 644938 (project SARAFun).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Stenmark</surname>
          </string-name>
          , “
          <article-title>Instructing industrial robots using high-level task descriptions</article-title>
          ,
          <source>” Ph.D. dissertation</source>
          , Lund University, Department of Computer Science, Mar.
          <year>2015</year>
          , licentiate Thesis.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Stenmark</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Malec</surname>
          </string-name>
          , “
          <article-title>Knowledge-Based Instruction of Manipulation Tasks for Industrial Robotics,” Robotics and Computer Integrated Manufacturing</article-title>
          , vol.
          <volume>33</volume>
          , pp.
          <fpage>56</fpage>
          -
          <lpage>67</lpage>
          ,
          <year>2015</year>
          . [Online]. Available: http://lup.lub.lu.se/record/4679243/file/4679245.pdf
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Malec</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Nilsson</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Bruyninckx</surname>
          </string-name>
          , “
          <article-title>Describing assembly tasks in declarative way,”</article-title>
          <source>in Proc. IEEE</source>
          ICRA 2013 Workshop on Semantics,
          <article-title>Identification and Control of Robot-Human-Environment Interaction</article-title>
          , Karlsruhe, Germany, May
          <year>2013</year>
          , pp.
          <fpage>50</fpage>
          -
          <lpage>53</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Stenmark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Malec</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Nilsson</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Robertsson</surname>
          </string-name>
          , “
          <article-title>On Distributed Knowledge Bases for Robotized Small-Batch Assembly,”</article-title>
          <source>IEEE Transactions on Automation Science and Engineering</source>
          , vol.
          <volume>12</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>519</fpage>
          -
          <lpage>528</lpage>
          ,
          <year>2015</year>
          . [Online]. Available: http://dx.doi.org/10.1109/TASE.
          <year>2015</year>
          .2408264
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <article-title>“IEEE standard ontologies for robotics and automation</article-title>
          ,
          <source>” IEEE Standard 1872-2015</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tenorth</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Beetz</surname>
          </string-name>
          , “
          <article-title>Knowrob: A knowledge processing infrastructure for cognition-enabled robots,”</article-title>
          <source>The International Journal of Robotics Research</source>
          , vol.
          <volume>32</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>566</fpage>
          -
          <lpage>590</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tenorth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Perzylo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Lafrenz</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Beetz</surname>
          </string-name>
          , “
          <article-title>Representation and exchange of knowledge about actions, objects, and environments in the roboearth framework,” Automation Science and Engineering</article-title>
          , IEEE Transactions on, vol.
          <volume>10</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>643</fpage>
          -
          <lpage>651</lpage>
          ,
          <year>July 2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tenorth</surname>
          </string-name>
          , G. Bartels, and
          <string-name>
            <given-names>M.</given-names>
            <surname>Beetz</surname>
          </string-name>
          , “
          <article-title>Knowledge-based specification of robot motions</article-title>
          ,”
          <source>in Proceedings of the European Conference on Artificial Intelligence (ECAI)</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Beetz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tenorth</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Winkler</surname>
          </string-name>
          , “
          <article-title>Open-EASE - a knowledge processing service for robots and robotics/ai researchers</article-title>
          ,”
          <source>in IEEE International Conference on Robotics and Automation (ICRA)</source>
          , Seattle, Washington, USA,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>E.</given-names>
            <surname>Scioni</surname>
          </string-name>
          , G. Borghesan,
          <string-name>
            <given-names>H.</given-names>
            <surname>Bruyninckx</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Bonfe</surname>
          </string-name>
          , “
          <article-title>Bridging the gap between discrete symbolic planning and optimization-based robot control</article-title>
          ,
          <source>” in 2015 IEEE International Conference on Robotics and Automation</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Balakirsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Kootbally</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Schlenoff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kramer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Gupta</surname>
          </string-name>
          , “
          <article-title>An industrial robotic knowledge representation for kit building applications,” in Intelligent Robots and Systems</article-title>
          (IROS),
          <year>2012</year>
          IEEE/RSJ International Conference on,
          <source>Oct</source>
          <year>2012</year>
          , pp.
          <fpage>1365</fpage>
          -
          <lpage>1370</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Carbonera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. Rama</given-names>
            <surname>Fiorini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Prestes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Jorge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Abel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Madhavan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Locoro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Goncalves</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Haidegger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Barreto</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Schlenoff</surname>
          </string-name>
          , “
          <article-title>Defining positioning in a core ontology for robotics,” in Intelligent Robots and Systems</article-title>
          (IROS),
          <year>2013</year>
          IEEE/RSJ International Conference on,
          <source>November</source>
          <year>2013</year>
          , pp.
          <fpage>1867</fpage>
          -
          <lpage>1872</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Niekum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chitta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Barto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Marthi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Osentoski</surname>
          </string-name>
          , “
          <article-title>Incremental semantically grounded learning from demonstration</article-title>
          ,” Berlin, Germany,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Niekum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Osentoski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Konidaris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chitta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Marthi</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A. G.</given-names>
            <surname>Barto</surname>
          </string-name>
          , “
          <article-title>Learning grounded finite-state representations from unstructured demonstrations</article-title>
          ,”
          <source>The International Journal of Robotics Research</source>
          , vol.
          <volume>34</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>131</fpage>
          -
          <lpage>157</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>E. B.</given-names>
            <surname>Fox</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. I.</given-names>
            <surname>Jordan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. B.</given-names>
            <surname>Sudderth</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Willsky</surname>
          </string-name>
          , “
          <article-title>Sharing features among dynamical systems with beta processes</article-title>
          ,” in
          <source>Advances in Neural Information Processing Systems</source>
          22,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schuurmans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lafferty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Culotta</surname>
          </string-name>
          , Eds. Curran Associates, Inc.,
          <year>2009</year>
          , pp.
          <fpage>549</fpage>
          -
          <lpage>557</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Ijspeert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nakanishi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Schaal</surname>
          </string-name>
          , “
          <article-title>Learning attractor landscapes for learning motor primitives</article-title>
          ,”
          in
          <source>Advances in Neural Information Processing Systems</source>
          <volume>15</volume>
          (NIPS 2002),
          <year>2002</year>
          , pp.
          <fpage>1547</fpage>
          -
          <lpage>1554</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>J.</given-names>
            <surname>Metzen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fabisch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Senger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>de Gea Fernández</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Kirchner</surname>
          </string-name>
          , “
          <article-title>Towards learning of generic skills for robotic manipulation</article-title>
          ,”
          <source>KI - Künstliche Intelligenz</source>
          , vol.
          <volume>28</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>15</fpage>
          -
          <lpage>20</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>B.</given-names>
            <surname>Nemec</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Abu-Dakka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ridge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ude</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jorgensen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Savarimuthu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jouffroy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Petersen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Krüger</surname>
          </string-name>
          , “
          <article-title>Transfer of assembly operations to new workpiece poses by adaptation to the desired force profile</article-title>
          ,” in
          <source>2013 16th International Conference on Advanced Robotics (ICAR)</source>
          , Nov.
          <year>2013</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>C.</given-names>
            <surname>Matuszek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Herbst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zettlemoyer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Fox</surname>
          </string-name>
          , “
          <article-title>Learning to parse natural language commands to a robot control system</article-title>
          ,” in
          <source>Experimental Robotics</source>
          , ser. Springer Tracts in Advanced Robotics. Springer International Publishing,
          <year>2013</year>
          , vol.
          <volume>88</volume>
          , pp.
          <fpage>403</fpage>
          -
          <lpage>415</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kollar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tellex</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Roy</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Roy</surname>
          </string-name>
          , “
          <article-title>Grounding verbs of motion in natural language commands to robots</article-title>
          ,” in
          <source>Experimental Robotics</source>
          , ser. Springer Tracts in Advanced Robotics. Springer Berlin Heidelberg,
          <year>2014</year>
          , vol.
          <volume>79</volume>
          , pp.
          <fpage>31</fpage>
          -
          <lpage>47</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>C.</given-names>
            <surname>Landsiedel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>de Nijs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kühnlenz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Wollherr</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Buss</surname>
          </string-name>
          , “
          <article-title>Route description interpretation on automatically labeled robot maps</article-title>
          ,” in
          <source>Proceedings of the International Conference on Robotics and Automation (ICRA)</source>
          , Karlsruhe, Germany, May
          <year>2013</year>
          , pp.
          <fpage>2251</fpage>
          -
          <lpage>2256</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Walter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hemachandra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Homberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tellex</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Teller</surname>
          </string-name>
          , “
          <article-title>Learning semantic maps from natural language descriptions</article-title>
          ,” in
          <source>Proceedings of the 2013 Robotics: Science and Systems IX Conference</source>
          , Berlin, Germany,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>L.</given-names>
            <surname>She</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chai</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Xi</surname>
          </string-name>
          , “
          <article-title>Back to the blocks world: Learning new actions through situated human-robot dialogue</article-title>
          ,” in
          <source>Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)</source>
          . Philadelphia, PA, U.S.A.: Association for Computational Linguistics,
          <year>2014</year>
          , pp.
          <fpage>89</fpage>
          -
          <lpage>97</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>M.</given-names>
            <surname>Cakmak</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Takayama</surname>
          </string-name>
          , “
          <article-title>Teaching people how to teach robots: The effect of instructional materials and dialog design</article-title>
          ,” in
          <source>International Conference on Human-Robot Interaction (HRI)</source>
          , Bielefeld, Germany, Mar.
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>E.</given-names>
            <surname>Bruni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. K.</given-names>
            <surname>Tran</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Baroni</surname>
          </string-name>
          , “
          <article-title>Multimodal distributional semantics</article-title>
          ,”
          <source>J. Artif. Int. Res.</source>
          , vol.
          <volume>49</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>47</lpage>
          , Jan.
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>M.</given-names>
            <surname>Stenmark</surname>
          </string-name>
          , “
          <article-title>Bilingual robots: Extracting robot program statements from Swedish natural language instructions</article-title>
          ,” in
          <source>Proc. of the 13th Scandinavian Conf. on Artificial Intelligence</source>
          , Halmstad, Sweden,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>