<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Controlling a General Purpose Service Robot By Means Of a Cognitive Architecture</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jordi-Ysard Puigbo</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Albert Pumarola</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ricardo Tellez</string-name>
          <email>ricardo.tellez@pal-robotics.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Pal Robotics</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Technical University of Catalonia</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper, a humanoid service robot is equipped with a set of simple action skills, including navigating, grasping, and recognizing objects or people, among others. Using those skills, the robot has to complete a voice command in natural language that encodes a complex task (defined as the concatenation of several of those basic skills). To decide which of those skills should be activated, and in which sequence, no traditional planner is used. Instead, the SOAR cognitive architecture acts as the reasoner that selects the current action the robot must take, moving it towards the goal. We tested the system on Reem, a human-sized humanoid robot acting as a general purpose service robot. The architecture allows new goals to be included by just adding new skills (without having to encode new plans).</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Service robotics is an emerging application area for human-centered technologies.
Although there are several specific applications for such robots, a general purpose
robot controller is still missing, especially in the field of humanoid service robots
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The idea behind this paper is to provide a control architecture that allows
service robots to generate and execute their own plan to accomplish a goal. The
goal should be decomposable into several steps, each step involving a one-step
skill implemented in the robot. Furthermore, we want a system whose set of goals
can be openly extended by just adding new skills, without having to encode new
plans.
      </p>
      <p>Typical approaches to the general control of service robots are mainly based on
state-machine technology, where all the steps required to accomplish the goal
are specified and known by the robot beforehand. In those controllers, the list
of possible actions the robot can perform is exhaustively created, as well as all
the steps required to achieve the goal. The problem with this approach is that
everything has to be specified beforehand, preventing the robot from reacting to novel
situations or new goals.</p>
      <p>
        An alternative to state machines is the use of planners [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Planners decide at
run time the best sequence of skills to use in order to achieve the
specified goal, usually based on probabilistic approaches. A different approach
to planners is the use of cognitive architectures. These are control systems that
try to mimic some of the processes of the brain in order to generate a decision
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ][
        <xref ref-type="bibr" rid="ref4">4</xref>
        ][
        <xref ref-type="bibr" rid="ref5">5</xref>
        ][
        <xref ref-type="bibr" rid="ref6">6</xref>
        ][
        <xref ref-type="bibr" rid="ref7">7</xref>
        ][
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        There are several cognitive architectures available: SOAR [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], ACT-R [
        <xref ref-type="bibr" rid="ref10 ref11">10,
11</xref>
        ], CRAM [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], SS-RICS [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Of these, only CRAM has been
designed with direct application to robotics in mind, having been applied to the
preparation of pancakes by two service robots [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Recently, SOAR has also been
applied to simple navigation tasks on a simple wheeled robot [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
      <p>At the time of creating this general purpose service robot, CRAM was only able
to build plans defined beforehand; that is, CRAM was unable to handle unspecified
(novel) situations. This limited the actions the robot could take to the ones
already encoded in CRAM. Because of that, in our approach we have
used the SOAR architecture to control Reem, a human-sized humanoid robot
equipped with a set of predefined basic skills. SOAR selects the required skill
for the current situation and goal, without needing a predefined list of plans or
situations.</p>
      <p>The paper is structured as follows: in section 2 we describe the implemented
architecture, and in section 3, the robot platform used. Section 4 presents the results
obtained, and we end the paper with the conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>Implementation</title>
      <p>The system is divided into four main modules, connected to each other
as shown in figure 1. First, the robot listens to a vocal command and
translates it to text using the automatic speech recognition (ASR) system. Then, the
semantic extractor divides the received text into grammatical structures and
generates a goal from them. In the reasoner module, the goal is compiled and
sent to the cognitive architecture (SOAR). All the actions generated by SOAR
are translated into skill activations. The required skill is activated through the
action nodes.</p>
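      <p>The data flow between the four modules can be sketched as follows. This is a minimal Python sketch; all function names and the stubbed return values are illustrative placeholders, not the actual implementation.</p>
      <preformat>
```python
# Illustrative sketch of the four-module pipeline (all names are hypothetical).

def asr(audio):
    """Automatic speech recognition: audio signal -> text."""
    return "go to the kitchen and grasp the coke"  # stubbed recognizer output

def semantic_extractor(text):
    """Text -> list of parsed commands (action plus complements)."""
    return [{"action": "go", "location": "kitchen"},
            {"action": "grasp", "object": "coke"}]  # stubbed parse

def reasoner(goals, world):
    """Compile the goals and yield one skill per deliberation cycle."""
    for goal in goals:
        yield goal["action"]  # stub: SOAR actually selects skills step by step

def action_node(skill):
    """Execute one skill on the robot and report success or failure."""
    return "succeeded"  # stub: real action nodes run ROS software

# One pass through the pipeline:
goals = semantic_extractor(asr(audio=None))
results = [action_node(skill) for skill in reasoner(goals, world={})]
print(results)  # -> ['succeeded', 'succeeded']
```
</preformat>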
      <sec id="sec-2-1">
        <title>Automatic Speech Recognition</title>
        <p>In order to allow natural voice communication, the system incorporates a speech
recognition system capable of processing the speech signal and returning it as text
for subsequent semantic analysis. This permits a much more natural form of
Human-Robot Interaction (HRI). The ASR is the system that translates voice
commands into written sentences.</p>
        <p>
          The ASR software used is based on the open source infrastructure Sphinx
developed by Carnegie Mellon University [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. We use a dictionary of
200 words that the robot understands. If the robot receives a command
containing an unknown word, the robot will not accept the command and will
request a new one.
        </p>
      </sec>
      <sec id="sec-2-1b">
        <title>Semantic Extractor</title>
        <p>The semantic extractor is the system in charge of processing the imperative
sentences received from the ASR, extracting and retrieving the relevant knowledge
from them.</p>
        <p>The robot can be commanded using two types of sentences:
Category I The command is composed of one or more short, simple and
specific subcommands, each one referring to a very concrete action.</p>
        <p>Category II The command is under-specified and requires further information
from the user. The command can have missing information or be composed
of categories of words instead of specific objects (e.g. bring me a coke or
bring me a drink: the first example does not include information about where
the drink is; the second does not specify which kind of drink the user
is asking for).</p>
        <p>The semantic extractor implemented is capable of extracting the
subcommands contained in the command when these actions are connected in a single
sentence by conjunctions (and), transition particles (then) or punctuation marks.
It should be noted that, since the output comes from ASR software, all
punctuation marks are omitted.</p>
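        <p>This splitting step can be sketched as follows; the fragment is illustrative, not the actual implementation, and assumes only the separators listed above (punctuation being absent from ASR output).</p>
        <preformat>
```python
import re

def split_subcommands(command):
    """Split an ASR transcript into subcommands at conjunctions ('and')
    and transition particles ('then'); ASR output has no punctuation."""
    parts = re.split(r"\b(?:and|then)\b", command)
    return [p.strip() for p in parts if p.strip()]

print(split_subcommands("go to the kitchen and find a coke then grasp it"))
# -> ['go to the kitchen', 'find a coke', 'grasp it']
```
</preformat>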
        <p>A command is commonly represented by an imperative
sentence, which explicitly denotes the speaker's desire that the robot perform a
certain action. This action is always represented by a verb. Although a verb may
convey an occurrence or a state of being, as in become or exist, in the case of
imperative sentences or commands the verb must be an action. Knowing this, we
assume that any command asks the robot to do something, and these actions
might involve a certain object (grasp a coke), location (navigate
to the kitchen table) or person (bring me a drink). For category I commands,
the semantic extractor should provide the specific robot action and the object,
location or person that this action has to act upon. Category II commands do
not contain all the information necessary to execute them. The semantic extractor
must figure out what the action is and identify which information is missing in
order to accomplish it.</p>
        <p>
          For semantic extraction we constructed a parser using the Natural
Language ToolKit (NLTK) [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. A context-free grammar (CFG) was designed to
perform the parsing. Other state-of-the-art parsers like Stanford Parser [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]
or Malt Parser [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] were discarded because they lack support for imperative
sentences, were trained on data that deviates from our domain, or need to be trained
beforehand. Our parser analyses dependencies, prepositional relations, synonyms and, finally,
co-references.
        </p>
        <p>Using the CFG, the knowledge retrieved from each command by the parser
is stored in a structure called parsed-command. It contains the following
information:
- which action to perform
- which location is relevant for the given action
- which object is relevant for the given action
- which person is relevant for the given action</p>
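        <p>A minimal sketch of such a structure, with illustrative field names (the actual representation in our system may differ):</p>
        <preformat>
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParsedCommand:
    """Knowledge retrieved from one subcommand (field names are
    illustrative, not the actual implementation)."""
    action: str                      # the verb to perform
    location: Optional[str] = None   # location relevant for the action
    obj: Optional[str] = None        # object relevant for the action
    person: Optional[str] = None     # person relevant for the action

# "bring me a coke" -> one parsed-command; a multi-part command
# yields an array of them, one per subcommand.
cmd = ParsedCommand(action="bring", obj="coke", person="me")
print(cmd.action, cmd.obj)  # -> bring coke
```
</preformat>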
        <p>The parsed-command is enough to define most goals for a service robot at
home, like grasp - coke or bring - me - coke. For multiple goals (as in
category I sentences), an array of parsed-commands is generated, each one populated
with its associated information.</p>
        <p>The process works as follows: first, the sentence received from the ASR is
tokenized. The NLTK toolkit and the Stanford Dependency Parser include
pre-trained Part-Of-Speech (POS) tagging functions for English. These functions
annotate the tokens with tags describing the most plausible POS
for each word. By applying POS tagging, the verbs are found.
Then, the action field of the parsed-command is filled with the verb.</p>
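        <p>The verb-extraction step can be illustrated with a toy tagger. The real system uses NLTK's pre-trained English tagger; the tiny lexicon below is invented purely for illustration.</p>
        <preformat>
```python
# Toy POS lexicon standing in for NLTK's pre-trained English tagger
# (entries invented for illustration; unknown words default to noun).
LEXICON = {"grasp": "VB", "bring": "VB", "go": "VB",
           "the": "DT", "a": "DT", "me": "PRP",
           "coke": "NN", "kitchen": "NN", "to": "TO"}

def find_action(sentence):
    """Tokenize, tag each token, and return the first verb as the action."""
    tokens = sentence.lower().split()
    for tok in tokens:
        if LEXICON.get(tok, "NN").startswith("VB"):
            return tok
    return None  # no verb found: not a valid imperative command

print(find_action("bring me a coke"))  # -> bring
```
</preformat>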
        <p>At this point, the action or actions needed to
accomplish the command have been extracted. The next step is to obtain their
complements. To achieve this, a combination of two methods is used:
1. Identifying which of the nouns in the sentence are objects, persons
or locations, using an ontology.
2. Finding the dependencies between the words in the sentence. A
dependency tree allows identification of which parts of the sentence are connected
to each other and through which connectors. This means that
a dependency parser (such as the Stanford
Parser) allows us to find which noun acts as the direct object of a verb.
Looking at the direct object, in turn, reveals the item the
action should be directed at. The same applies to the indirect object and
even locative adverbials.</p>
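        <p>The ontology-based classification of method 1 can be sketched as follows; the ontology entries and slot names are invented stand-ins for the real lookup.</p>
        <preformat>
```python
# Toy ontology mapping nouns to the three complement classes
# (entries are illustrative, not the real ontology).
ONTOLOGY = {"coke": "object", "drink": "object",
            "kitchen": "location", "table": "location",
            "me": "person", "person": "person"}

def complements(tokens):
    """Classify each known noun in the sentence as object, location or
    person, filling the first empty slot of each kind."""
    slots = {"object": None, "location": None, "person": None}
    for tok in tokens:
        kind = ONTOLOGY.get(tok)
        if kind and slots[kind] is None:
            slots[kind] = tok
    return slots

print(complements("bring me a coke".split()))
# -> {'object': 'coke', 'location': None, 'person': 'me'}
```
</preformat>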
        <p>Once this step is finished, the full parsed-command is complete. This structure
is sent to the next module, where it will be compiled into a goal interpretable
by the reasoner.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Reasoner</title>
        <p>Goal Compiler A compiler has been designed to produce, from the received
parsed-command, the goal in a format understandable by SOAR, called the
compiled-goal.</p>
        <p>It may happen that the command lacks some of the information relevant
to accomplishing the goal (category II ). This module is responsible for asking the
questions required to obtain the missing information. For example, for the
command "bring me a drink", knowing that a drink is a category, the robot will
ask which drink the speaker is asking for. Once the goals are compiled, they are
sent to the SOAR module.</p>
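        <p>The question-asking behaviour can be sketched like this; the function, slot names and question wording are all hypothetical.</p>
        <preformat>
```python
def compile_goal(parsed, ask=input):
    """Compile a parsed-command into a goal for SOAR, querying the user
    for any missing slot (a category II command). Slot names and the
    question wording are illustrative."""
    goal = dict(parsed)
    if goal.get("action") == "bring" and not goal.get("object_location"):
        goal["object_location"] = ask("Where can I find the %s? " % goal["object"])
    return goal

# Simulated dialogue: the user answers "kitchen" to the robot's question.
goal = compile_goal({"action": "bring", "object": "drink"},
                    ask=lambda q: "kitchen")
print(goal["object_location"])  # -> kitchen
```
</preformat>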
        <p>SOAR The SOAR module is in charge of deciding which skills must be executed in
order to achieve the compiled-goal. A loop inside SOAR selects the skill that will
move Reem one step closer to the goal. Each time a skill is selected, a petition is
sent to an action node to execute the corresponding action. Each time a skill is
executed and finished, SOAR selects a new one. SOAR keeps selecting skills
until the goal is accomplished.</p>
        <p>The set of skills that the robot can activate is encoded as operators. This
means that, for each possible action, there is:
- a rule proposing the operator, with the corresponding name and attributes;
- a rule that sends the command through the output-link if the operator is
accepted;
- one or several rules that, depending on the command response, fire and
generate the necessary changes in the world.</p>
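        <p>One deliberation cycle can be sketched in Python as follows. Skill names and preference values are invented for illustration; real SOAR operators are production rules matched in parallel, not Python functions.</p>
        <preformat>
```python
# Sketch of one SOAR-style deliberation cycle: every applicable operator
# is proposed, preferences are compared, and only the best operator fires.
def propose(world, goal):
    """Collect (skill, preference) proposals for the current state."""
    proposals = []
    if world.get("at") != goal["location"]:
        proposals.append(("go_to", 2))          # move toward the target
    elif goal["object"] not in world.get("seen", []):
        proposals.append(("search_object", 2))  # look for the object
    else:
        proposals.append(("grasp", 3))          # object visible: grasp it
    proposals.append(("wait", 0))               # always-available default
    return proposals

def decide(world, goal):
    """Pick the proposal with the highest preference."""
    return max(propose(world, goal), key=lambda p: p[1])[0]

world = {"at": "hall", "seen": []}
goal = {"location": "kitchen", "object": "coke"}
print(decide(world, goal))  # -> go_to
```
</preformat>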
        <p>
          Given the nature of the SOAR architecture, all proposals are treated
at the same time and compared in terms of preferences. If one is better than
the others, it is the only operator that executes, and a new deliberation
phase begins with all the newly available data. It is important to note that all
the rules that match their conditions are treated as if they fired at the same time,
in parallel. There is no sequential order [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ].
        </p>
        <p>Once the goal or list of goals has been sent to SOAR, the world representation
is created. The world contains a list of robots, and lists of objects, persons and
locations. Note that there is always at least one robot represented, the one
that received the command; however, instead of a single robot, one can
specify a list of robots, and by the nature of the system they will perform
as a team of physical agents to achieve the current goal.</p>
        <p>SOAR requires an updated world state in order to make the next decision.
The state is updated after each skill execution, in order to reflect the robot's
interactions with the world. The world can be changed by the robot itself or
by other existing agents. Changes made by the robot's actions directly
reflect the result of the skill execution in the robot's world view. Changes
made by other agents may make the robot fail the execution of the current
skill, provoking the execution of another skill that tries to solve the impasse (for
example, going to the place where the coke is and finding that the coke is no longer
there will trigger the search-for-object skill to figure out where the
coke is).</p>
        <p>This means that after an action resolves, it returns to SOAR an object
describing the success or failure of the action and the relevant changes it provoked.
This information is used to update the current knowledge of the robot. For
instance, if the robot has detected a beer bottle and its next skill is to grasp it, it
will send the command 'grasp.item = beer bottle', while the action response after
resolving should only be a 'succeeded' or 'aborted' message that is interpreted
in SOAR as 'robot.object = beer bottle'.</p>
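        <p>This feedback interpretation can be sketched as follows; the state keys follow the example above, while the function itself is an illustrative stand-in for the SOAR input-link update.</p>
        <preformat>
```python
def apply_feedback(world, command, status):
    """Interpret an action node's 'succeeded'/'aborted' reply and update
    the robot's world view (state keys are illustrative)."""
    if status == "succeeded" and command.startswith("grasp.item="):
        # A successful grasp means the robot now holds the item.
        world["robot.object"] = command.split("=", 1)[1]
    return world

world = apply_feedback({}, "grasp.item=beer bottle", "succeeded")
print(world["robot.object"])  # -> beer bottle
```
</preformat>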
        <p>In the current state of the system, 10 different skills are implemented.
Every loop step checks 77 production rules.</p>
        <p>
          It may happen that there is no plan for achieving the goal. In those situations
SOAR implements several mechanisms to solve them:
- Subgoal capacity [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ], which allows the robot to find a way out of an impasse
with the currently available actions in order to reach the desired state. This
is the case when the robot cannot decide on the best action in the
current situation with the available knowledge because there is no distinctive
preference.
- Chunking ability [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ][
          <xref ref-type="bibr" rid="ref22">22</xref>
          ][
          <xref ref-type="bibr" rid="ref23">23</xref>
          ], which allows the production of new rules that help
the robot adapt to new situations and, given a small set of primitive actions,
achieve full-featured, specific goals never faced before.
- Reinforcement learning [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ], which, together with the two previous features, helps
the robot learn to perform maintained goals, such as keeping a room
clean, or learn from user-defined heuristics in order to achieve
not just good results, as with chunking, but near-optimal performance.
        </p>
        <p>The first two mechanisms were activated in our approach; use of
reinforcement learning will be analysed in future work. These two mechanisms are
especially important because, thanks to them, the robot is capable of finding its
own way to achieve any goal achievable with its current skills. Moreover,
chunking makes decisions easier when the robot faces situations similar to those
experienced earlier. These strengths allow the robot to adapt to new goals and situations
with no further programming than defining a goal, and allow its
capabilities to be expanded by simply defining a new skill.</p>
      </sec>
      <sec id="sec-2-3">
        <title>Action Nodes</title>
        <p>The action nodes are ROS software modules. They are modular pieces of software
implemented to make the robot capable of performing each of its abilities,
defined in the SOAR module as the possible skills. Every time SOAR
proposes a skill to be performed, it calls the action node in charge of that skill.</p>
        <p>When an action node finishes executing, it provides feedback to SOAR about
its success or failure. The feedback is captured by the interface and sent to SOAR
in order to update the current state of the world.</p>
      </sec>
    </sec>
    <sec id="sec-2b">
      <title>Robot Platform</title>
      <p>The robot platform used for testing the developed system is Reem, a
humanoid service robot created by PAL Robotics. It weighs about 90 kg, has 22
degrees of freedom and an autonomy of about 8 hours. Reem is controlled by
OROCOS for real-time operations and by ROS for skill deployment. Among other
abilities, it can recognize and grasp objects, detect faces, follow a person and
even clean a room of objects that do not belong in it. In order to include robust
grasping and gesture detection, a Kinect sensor on a headset on its head has
been added to the commercial version.</p>
      <p>The robot is equipped with a Core 2 Duo and an ATOM computer, which
provide all the computational power required to perform all task control. This
means that all the algorithms required to plan and perform all the abilities are
executed on board the robot.</p>
    </sec>
    <sec id="sec-3">
      <title>Results</title>
      <p>
        The whole architecture has been put to the test in an environment that mimics that
of the RoboCup@Home League GPSR test [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ] (see figure 3). In this test,
the robot has to listen to three different types of commands of increasing difficulty,
and execute the required actions (skills) to accomplish each command. For our
implementation, only the first two categories have been tested, as described in
section 2.2.
Testing involved providing the robot with a spoken command and checking
that the robot was able to perform the required actions to complete the goal.
      </p>
      <p>Examples of sentences the robot has been tested with (among others):
Category I Go to the kitchen, find a coke and grasp it</p>
      <p>Sequence of actions performed by the robot:
understand command, go to kitchen, look for coke, grasp coke
Go to reception, find a person and introduce yourself
Sequence of actions performed by the robot:
understand command, go to reception, look for person, go to person,
introduce yourself
Find the closest person, introduce yourself and follow the person in front of
you
Sequence of actions performed by the robot:
look for a person, move to person, introduce yourself, follow person
Category II Point at a seating</p>
      <p>Sequence of actions performed by the robot:
understand command, ask questions, acknowledge all information, navigate
to location, search for seating, point at seating
Carry a Snack to a table
Sequence of actions performed by the robot:
understand command, ask questions, acknowledge all information, navigate
to location, search for snack, grasp snack, go to table, deliver snack
Bring me an energy drink (figure 4)
Sequence of actions performed by the robot:
understand command, ask questions, acknowledge all information, navigate
to location, search for energy drink, grasp energy drink, return to origin,
deliver energy drink</p>
      <p>The system we present in this paper guarantees that the actions proposed
will lead to the goal, so the robot will find a solution, although it cannot be
guaranteed to be the optimal one. For instance, in some situations the robot moved
to a location that was not the correct one before moving, in a second action
step, to the correct one. However, completion of the task is assured, since the
architecture will continue providing steps until the goal is accomplished.</p>
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>The architecture presented allowed us to command a commercial humanoid robot
to perform a variety of tasks as combinations of skills, without having to specify
beforehand how the skills have to be combined to solve each task. The whole
approach avoids AI planning in the classical sense and instead uses a cognitive
approach (SOAR) based on solving the current situation the robot faces. By
solving the current situation skill by skill, the robot finally achieves the goal (if
it is achievable). Given a goal and a set of skills, SOAR itself generates the
necessary steps to fulfil the goal using the skills (or at least tries to reach the goal).
Because of that, the system adapts to new goals effortlessly.</p>
      <p>SOAR cannot detect whether the goal requested of the robot is achievable.
If the goal is not achievable, SOAR will keep trying to reach it, sending skill
activations to the robot forever. In our implementation, the set of goals that
one can ask of the robot is restricted by the speech recognition system. Our
system ensures that all accepted vocal commands are achievable by a SOAR
execution.</p>
      <p>The whole architecture is completely robot agnostic and can be adapted
to any other robot, provided that the skills are implemented and available to
be called through the same interface. Moreover, adding and removing skills
becomes as simple as defining the conditions to work with them and their
outcomes.</p>
      <p>The current implementation can be improved in terms of robustness by solving
two known issues.</p>
      <p>First, if one of the actions is not completely achieved (for example, the robot is
not able to reach a position in space because it is occupied, or the robot cannot
find an object that is in front of it), the skill activation will fail. However, in the
current implementation the robot has no means to discover the reason for the
failure. The robot will therefore detect that the state of the world has not changed
and select the same action again (retry) towards the goal's accomplishment. This
behaviour could lead to an infinite loop of retries.</p>
      <p>
        Second, this architecture is still not able to resolve commands containing errors
(category III of the GPSR RoboCup test). Future
versions of the architecture will include this feature by incorporating semantic and
relation ontologies like WordNet [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] and VerbNet [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], making this service robot
more robust and general.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Haidegger</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barreto</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goncalves</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Habib</surname>
            ,
            <given-names>M.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ragavan</surname>
            ,
            <given-names>S.K.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vaccarella</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perrone</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Prestes</surname>
          </string-name>
          , E.:
          <article-title>Applied ontologies and standards for service robots</article-title>
          .
          <source>Robotics and Autonomous Systems (June</source>
          <year>2013</year>
          )
          <volume>1</volume>
          {
          <fpage>9</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>Stuart</given-names>
            <surname>Russell</surname>
          </string-name>
          ,
          <string-name>
            <surname>P.N.</surname>
          </string-name>
          :
          <article-title>Artificial Intelligence: A Modern Approach</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Pollack</surname>
            ,
            <given-names>J.B.</given-names>
          </string-name>
          : Book Review: Allen Newell, Unified Theories of Cognition
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Jones</surname>
            ,
            <given-names>R.M.:</given-names>
          </string-name>
          <article-title>An Introduction to Cognitive Architectures for Modeling and Simulation</article-title>
          . (
          <year>1987</year>
          ) (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Kelley</surname>
          </string-name>
          , T.D.:
          <article-title>Developing a Psychologically Inspired Cognitive Architecture for Robotic Control : The Symbolic and Subsymbolic Robotic Intelligence Control System</article-title>
          .
          <source>International Journal of Advanced Robotic Systems</source>
          <volume>3</volume>
          (
          <issue>3</issue>
          ) (
          <year>2006</year>
          )
          <volume>219</volume>
          {
          <fpage>222</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Langley</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Laird</surname>
            ,
            <given-names>J.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rogers</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Cognitive architectures: Research issues and challenges</article-title>
          .
          <source>Cognitive Systems Research</source>
          <volume>10</volume>
          (
          <issue>2</issue>
          ) (
          <year>June 2009</year>
          )
          <volume>141</volume>
          {
          <fpage>160</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Laird</surname>
            ,
            <given-names>J.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wray</surname>
            <given-names>III</given-names>
          </string-name>
          , R.E.:
          <article-title>Cognitive Architecture Requirements for Achieving AGI</article-title>
          .
          <source>In: Proceedings of the Third Conference on Artificial General Intelligence</source>
          . (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ji</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jiang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jin</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xie</surname>
          </string-name>
          , J.:
          <article-title>Developing High-level Cognitive Functions for Service Robots</article-title>
          .
          <source>AAMAS '10 Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems</source>
          <volume>1</volume>
          (
          <year>2010</year>
          )
          <volume>989</volume>
          {
          <fpage>996</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Laird</surname>
            ,
            <given-names>J.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kinkade</surname>
            ,
            <given-names>K.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mohan</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>J.Z.</given-names>
          </string-name>
          :
          <article-title>Cognitive Robotics using the Soar Cognitive Architecture</article-title>
          .
          <source>In: Proc. of the 6th Int. Conf. on Cognitive Modelling</source>
          . (
          <year>2004</year>
          )
          <fpage>226</fpage>
          -
          <lpage>230</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>J.R.</given-names>
          </string-name>
          :
          <article-title>ACT: A Simple Theory of Complex Cognition</article-title>
          .
          <source>American Psychologist</source>
          (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Stewart</surname>
            ,
            <given-names>T.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>West</surname>
            ,
            <given-names>R.L.</given-names>
          </string-name>
          :
          <article-title>Deconstructing ACT-R</article-title>
          .
          <source>In: Proceedings of the Seventh International Conference on Cognitive Modeling</source>
          . (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Beetz</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lorenz</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tenorth</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>CRAM: A Cognitive Robot Abstract Machine for Everyday Manipulation in Human Environments</article-title>
          .
          <source>In: International Conference on Intelligent Robots and Systems (IROS)</source>
          . (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Wei</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hindriks</surname>
            ,
            <given-names>K.V.</given-names>
          </string-name>
          :
          <article-title>An Agent-Based Cognitive Robot Architecture</article-title>
          . (
          <year>2013</year>
          )
          <fpage>54</fpage>
          -
          <lpage>71</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Beetz</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klank</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kresse</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maldonado</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , Mosenlechner, L.,
          <string-name>
            <surname>Pangercic</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , Ruhr, T.,
          <string-name>
            <surname>Tenorth</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Robotic Roommates Making Pancakes</article-title>
          .
          <source>In: 11th IEEE-RAS International Conference on Humanoid Robots</source>
          , Bled, Slovenia (October 26-28,
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Hanford</surname>
            ,
            <given-names>S.D.</given-names>
          </string-name>
          :
          <article-title>A Cognitive Robotic System Based on Soar</article-title>
          .
          <source>PhD thesis</source>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Ravishankar</surname>
            ,
            <given-names>M.K.</given-names>
          </string-name>
          :
          <article-title>Efficient algorithms for speech recognition</article-title>
          .
          <source>Technical report</source>
          (
          <year>1996</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Bird</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>NLTK: The Natural Language Toolkit</article-title>
          .
          <source>In: Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics</source>
          . (
          <year>2005</year>
          )
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Klein</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manning</surname>
            ,
            <given-names>C.D.</given-names>
          </string-name>
          :
          <article-title>Accurate Unlexicalized Parsing</article-title>
          .
          <source>ACL '03 Proceedings of the 41st Annual Meeting on Association for Computational Linguistics</source>
          <volume>1</volume>
          (
          <year>2003</year>
          )
          <fpage>423</fpage>
          -
          <lpage>430</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Hall</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>MaltParser: An Architecture for Inductive Labeled Dependency Parsing</article-title>
          .
          <source>PhD thesis</source>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Wintermute</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Laird</surname>
            ,
            <given-names>J.E.</given-names>
          </string-name>
          :
          <article-title>SORTS: A Human-Level Approach to Real-Time Strategy AI</article-title>
          . (
          <year>2007</year>
          )
          <fpage>55</fpage>
          -
          <lpage>60</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Laird</surname>
            ,
            <given-names>J.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Newell</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosenbloom</surname>
            ,
            <given-names>P.S.</given-names>
          </string-name>
          :
          <article-title>SOAR: An Architecture for General Intelligence</article-title>
          .
          <source>Artificial Intelligence</source>
          (
          <year>1987</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Howes</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Young</surname>
            ,
            <given-names>R.M.</given-names>
          </string-name>
          :
          <article-title>The Role of Cognitive Architecture in Modelling the User: Soar's Learning Mechanism</article-title>
          . (
          <volume>01222</volume>
          ) (
          <year>1996</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23. SoarTechnology:
          <article-title>Soar: A Functional Approach to General Intelligence</article-title>
          .
          <source>Technical report</source>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Nason</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Laird</surname>
            ,
            <given-names>J.E.</given-names>
          </string-name>
          :
          <article-title>Soar-RL: Integrating Reinforcement Learning with Soar</article-title>
          .
          <source>In: Cognitive Systems Research</source>
          . (
          <year>2004</year>
          )
          <fpage>51</fpage>
          -
          <lpage>59</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <article-title>RoboCup@Home rules and regulations</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>G.A.</given-names>
          </string-name>
          :
          <article-title>WordNet: A Lexical Database for English</article-title>
          .
          <source>Communications of the ACM</source>
          <volume>38</volume>
          (
          <issue>11</issue>
          ) (
          <year>1995</year>
          )
          <fpage>39</fpage>
          -
          <lpage>41</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Palmer</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kipper</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Korhonen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ryant</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>Extensive Classifications of English verbs</article-title>
          .
          <source>In: Proceedings of the 12th EURALEX International Congress</source>
          . (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>