<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Collecting information for action understanding. The enrichment of the IMAGACT Ontology of Action</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Andrea Amelio RAVELLI</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lorenzo GREGORI</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessandro PANUNZI</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>LABLITA - Università degli Studi di Firenze</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper presents the status of our work aimed at enriching the IMAGACT Ontology of Action by linking it to other resources. In order to achieve this goal we performed a visual mapping, exploiting the IMAGACT visual component (video scenes that represent physical actions) as the linkage point among resources. By using visual objects, which are free from linguistic constraints and can be interpreted and described from different perspectives, we connected resources that serve different scopes and theoretical frameworks, for which a concept-to-concept mapping appeared difficult to obtain. We provide a brief description of two linkings obtained by using this technique: an automatic linking between IMAGACT and BabelNet, a multilingual semantic network, and a manual linking between IMAGACT and Praxicon, a conceptual knowledge base of action.</p>
      </abstract>
      <kwd-group>
        <kwd>ontology linking</kwd>
        <kwd>IMAGACT</kwd>
        <kwd>BabelNet</kwd>
        <kwd>Praxicon</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Action verbs carry the basic information that must be understood in order to make
sense of a sentence and that must be processed in instructions given to artificial systems.
The difficulty of action verb understanding stems from the evidence that no one-to-one
correspondence can be established between action predicates and action concepts.
The same action can be predicated by multiple verbs (e.g. “John takes/brings/leads Mary
to the restaurant”) and, conversely, one verb can extend to multiple, different actions
(e.g. “John takes the cup from the table”, “John takes/brings the cup to Mary”). Most
of these verbs belong to the class of general verbs, which are characterized by high
ambiguity and high frequency of use [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In these circumstances, senses are often
vague and overlapping, their discrimination is unclear, and this poses a critical issue for
their semantic representation.
      </p>
      <p>
        Representation becomes even more difficult from a multilingual perspective, given that
different languages segment the action space differently. It has been observed [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] that
even with a fine-grained sense distinction it is often not possible to find an exact match
between action concepts lexicalized by verbs in different languages. Moreover, one
language may entirely lack a lexical representation for a specific concept, resulting in
a lexical gap [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. These problems deeply affect NLP tasks dealing with actions and their
correct interpretation [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>This paper reports two linking experiments performed on the IMAGACT Visual
Ontology of Action, aimed at gathering information about actions from several perspectives
and at different levels: semantic, motoric and visual. The linkings were carried out by exploiting
the visual information of IMAGACT: instead of a classic concept-to-concept mapping,
we performed a visual mapping, that is, a concept-to-video linking. This strategy allowed
us to connect linguistic resources having different conceptualizations of events.</p>
      <p>This work, far from being definitive, could be useful for the future construction of
integrated resources on action understanding, to be effectively exploited for both
theoretical analysis and computational applications.</p>
    </sec>
    <sec id="sec-2">
      <title>2. The IMAGACT Visual Ontology of Action</title>
      <p>Verbs are the lexical class that is normally responsible for event categorization. Among
events, actions (defined as goal-oriented events performed by an intentional agent) play
an important role from a linguistic perspective: action verbs are very frequent in spoken
language and they are also very ambiguous. Moreover, the semantic classification of
action verbs is more complex and less linear than that of nouns, so that frequently
it is not possible to discriminate a coherent list of word senses.</p>
      <p>
        IMAGACT Visual Ontology of Action1 [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] is a multimodal and multilingual
resource that offers a novel integration of visual and linguistic information as
complementary elements. The resource contains 1010 distinct action concepts, obtained by
bootstrapping information from Italian and English spoken corpora. Metaphorical and
phraseological usages have been excluded from the annotation process, in order to collect
exclusively occurrences of verbs referring to physical actions.
      </p>
      <p>Verbs in IMAGACT are divided into action types, according to their semantic
variation; each type is linked to one or more video scenes (either 3D animations or filmed
video clips), in which a prototypical action is performed. The verbs referring to the same
concept are linked to the same scenes, creating an interlinguistic semantic network.</p>
      <p>The ontology is in continuous development and, at present, contains 9 fully-mapped
languages and 13 more underway, with an average of 730 action verbs per language.</p>
      <p>This resource gives a broad picture of the variety of actions and activities that are
prominent in everyday life and specifies the lexicon used to express each one in ordinary
communication, in all the included languages.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Linking resources, sharing knowledge</title>
      <p>In order to collect more information, we planned an extensive enrichment campaign,
based on comparison and mutual exchange with other resources.</p>
      <p>For this task we applied visual mapping, a methodology which aims at pointing
concepts to a shared visual representation. In fact, a video depicting an event is not
subject to any linguistic constraint, and the associated semantic information can be described
in various manners. Starting from this observation, we used the videos to link concepts of
different resources, each expressing an independent event conceptualization according to its
own theoretical framework. It follows that the multimodal nature of IMAGACT is a key
point for its enrichment and extension.</p>
      <p>Herein we present some current results obtained by linking IMAGACT with
BabelNet and Praxicon. An example of the obtained output can be observed in Figure 1,
which shows a beating event with its parallel representation in the three resources.</p>
      <sec id="sec-3-1">
        <title>3.1. IMAGACT and BabelNet</title>
        <p>
          BabelNet2 [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] is a multilingual semantic network obtained through the automatic
mapping of the WordNet thesaurus and the Wikipedia encyclopedia. At present, BabelNet 3.7
contains 284 languages and is the widest multilingual resource available for semantic
disambiguation. Concepts and entities are represented by BabelSynsets (BSs), unitary
concepts identified by several kinds of information (semantic features, glosses, usage
examples, images, etc.) and related to lemmas (in any language) whose senses
match that concept. BSs are not isolated, but connected together into a huge network
by means of the semantic relations inherited from WordNet.
        </p>
        <p>BabelNet concepts (the BSs) are interlinguistic: they gather all the word senses in
different languages that are semantically equivalent (or almost equivalent). Conversely,
IMAGACT action types encode small semantic differences, so they are more granular
and language-dependent. Given these differences, an exact match between their concepts
is very rare; it is also hard to establish less strict semantic relations (e.g. narrow-to-broad),
because the boundaries of BSs are often fuzzy and the gloss is not always able to make a
clear discrimination between them.</p>
        <p>In this case visual mapping solved the problem: in fact, even for the BSs whose
description is not precise, it is easy to judge whether a video is a good action prototype
for them or not3.</p>
        <p>
          Given the multilingual nature of the two resources, we could exploit rich lexical
information, i.e. all the verbs in many languages related both to IMAGACT scenes and to
BabelNet BSs. The connections between BSs and scenes have been automatically
established, on the basis of the number of shared verbal lemmas, through a machine learning algorithm
[
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
        </p>
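        <p>As an illustration of the underlying idea only (not the actual algorithm of the cited work), the following sketch scores a candidate scene-BabelSynset pair by the overlap of their multilingual verb lemmas; the function name and the example data are hypothetical.</p>

```python
def lemma_overlap(scene_lemmas, synset_lemmas):
    """Jaccard overlap between the (language, lemma) pairs attached
    to an IMAGACT scene and to a BabelSynset (illustrative only)."""
    scene, synset = set(scene_lemmas), set(synset_lemmas)
    if not scene and not synset:
        return 0.0
    return len(scene & synset) / len(scene | synset)

# Hypothetical example data: (language, verb lemma) pairs.
scene = {("en", "take"), ("it", "prendere"), ("es", "tomar")}
bs = {("en", "take"), ("it", "prendere"), ("en", "grab")}

score = lemma_overlap(scene, bs)  # 2 shared / 4 total = 0.5
```

        <p>In the actual linking, a score of this kind would only be one signal among those fed to the learning algorithm.</p>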
        <p>As a result of this linking, on the one hand, IMAGACT gained translation
information for languages not yet implemented in the Visual Ontology and, on the other, BSs
referring to action verbs obtained a video representation. In Table 1, the detailed numbers
of scenes and BSs connected through this linking are shown.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. IMAGACT and Praxicon</title>
        <p>
          Praxicon4 is an ontology for the representation of action concepts, based on the
Minimalistic Grammar of Action [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. In Praxicon, an action is expressed through motor concepts,
specified in terms of three basic components: GOAL, TOOL and OBJECT. A large part of
this ontology is also linked with WordNet synsets and ImageNet images [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
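        <p>To make this representation concrete, here is a minimal sketch of an action concept expressed through the GOAL, TOOL and OBJECT components; the class and field names are hypothetical and do not reflect the actual Praxicon schema.</p>

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MotorConcept:
    """A Praxicon-style action concept (illustrative sketch only):
    an action is specified by its GOAL, TOOL and OBJECT components."""
    name: str
    goal: str  # the intended outcome of the action
    tool: str  # the effector or instrument used
    obj: str   # the entity the action is applied to

# Hypothetical example: a hammering action.
hammering = MotorConcept(
    name="hammer",
    goal="drive nail into surface",
    tool="hammer",
    obj="nail",
)
```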
        <p>Praxicon makes a distinction between Actions, Movements, and Events5. Actions
are sets of structured motoric executions, intentionally performed by an agent to achieve
a goal. The goal is a necessary component, so any non-voluntary motoric activation is
classified as a Movement, not as an Action. Finally, actions that are too complex to
be described as a set of motoric concepts are considered Events and are out of the scope
of the Praxicon resource.</p>
        <p>Similarly to the linking with BabelNet, the IMAGACT scenes are used to connect
the information of the two resources, given that their definitions of concepts are too
different to attempt a proper and extensive sense matching. In fact, the IMAGACT scenes
can work as a visual representation for Praxicon action concepts and, at the same time,
the Praxicon syntax could be used to analytically describe, from a physical-motoric point of
view, all the low-level actions involved in the execution of more complex ones.</p>
        <p>Differently from the previous linking, in this case the work is entirely manual,
consisting in the analysis of each scene and the determination of the physical action
performed.</p>
        <p>3The measured inter-rater agreement for this task is a Fleiss kappa of 0.74 with 3 annotators. The annotated dataset
is available at http://bit.ly/2jt2cD4
4https://github.com/CSRI/PraxiconDB
5These categories have their own definitions in the Praxicon framework. We use capital letters when referring
to this specific meaning.</p>
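        <p>An agreement figure of this kind can be obtained with the standard Fleiss' kappa formula; the sketch below implements the textbook computation, with invented toy counts for illustration (the actual annotated dataset is at the link above).</p>

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a list of items, where each item is a list of
    per-category rating counts and every item has the same number of raters."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_cats = len(ratings[0])

    # Mean per-item agreement P-bar.
    p_bar = sum(
        (sum(c * c for c in item) - n_raters) / (n_raters * (n_raters - 1))
        for item in ratings
    ) / n_items

    # Chance agreement P-e from the marginal category proportions.
    totals = [sum(item[j] for item in ratings) for j in range(n_cats)]
    p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)

    return (p_bar - p_e) / (1 - p_e)

# Invented toy data: 4 scenes, 3 annotators, 2 categories (Action / not Action).
counts = [[3, 0], [0, 3], [3, 0], [2, 1]]
kappa = fleiss_kappa(counts)  # 0.625
```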
        <p>The scene annotation has been accomplished on 281 IMAGACT scenes (28% of
the total) and we obtained the following results:</p>
        <list list-type="bullet">
          <list-item><p>154 scenes (55%) have a one-to-one relation with a Praxicon Action concept;</p></list-item>
          <list-item><p>64 scenes (23%) map on more than one Action concept;</p></list-item>
          <list-item><p>19 scenes (7%) are Movements but not Actions (in the Praxicon framework);</p></list-item>
          <list-item><p>30 scenes (11%) are Events but not Actions (in the Praxicon framework);</p></list-item>
          <list-item><p>14 scenes (5%) are unclear.</p></list-item>
        </list>
        <p>IMAGACT scenes are specifically created to provide a prototypical representation of
a lexicalized action concept: every scene is a reference for at least one English action verb.
This allowed us to derive from these numbers some considerations about the relation
between the motoric and lexical levels.</p>
        <p>In Praxicon Events, motoric properties do not play a role in the verb meaning,
which encodes an abstract result that is independent of the physical action execution.
Examples are verbs like to drive, to clean or to rob, which encode a complex set of motoric
actions by predicating their final result: 11% of the actions that are commonly referred to
in language (in English) belong to this class. Conversely, 55% of the scenes have
a one-to-one mapping with a Praxicon concept, meaning that there is a low distance
between the motoric and lexical levels: we can consider these cases as the ones where the
physical execution of an action most deeply affects the verb semantics. Example verbs of this
class are to push, to gallop or to brush. Then, 23% of the retrieved actions are at an
intermediate level of abstraction: they can be expressed in terms of physical action concepts,
but more than one Praxicon concept is involved in a single lexicalized action. Some
example verbs are to break, to open or to glue. Finally, we found that 7% of the events
that in English are referred to through action verbs are Movements that do not correspond to
voluntary actions, like to fall or to drop.</p>
        <p>
          This work is still in progress, but we believe that the integration of
linguistic and motoric knowledge on action is very relevant both for theoretical analysis and for
robotic applications. On the one hand, an integrated resource is desirable for carrying out deep
investigations of the relation between language and action, a long-debated subject
in linguistics and neuroscience [
          <xref ref-type="bibr" rid="ref10 ref11">10,11</xref>
          ]. Praxicon is also exploited for robotic
applications [
          <xref ref-type="bibr" rid="ref12 ref13">12,13</xref>
          ] and the integration with a linguistic-oriented resource like IMAGACT can
be useful to enhance human-robot interaction through natural language.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions &amp; Future Works</title>
      <p>In this paper we presented the very first steps in the construction of a comprehensive
resource for the understanding of actions and their representation in language systems,
built on top of the ontological structure of IMAGACT.</p>
      <p>We introduced the visual mapping methodology, which allows resource linking
through visual representations. This approach is particularly useful when it is hard to find
relations between concepts, because it does not force any kind of convergence between
senses. For this reason we feel confident that this methodology could be successfully
applied to other linking tasks involving multimodal resources as well.</p>
      <p>Two case studies have been described: the linking of IMAGACT with BabelNet and
Praxicon. In the first case we were dealing with lexical-semantic resources having huge
differences in sense discrimination and for this reason it was hard to find inter-resource
semantic relations. In the case of Praxicon we applied visual mapping to link IMAGACT
with a resource of a different type, in which the concepts are motoric and not linguistic.</p>
      <p>
        Finally, to extend the information connected to action concepts, we aim to enrich our
ontology with the annotation of noun senses and with predicate-argument structures
[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], in order to implement semantic selection restrictions for the verbs in each action
type.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Moneglia</surname>
          </string-name>
          &amp; A.
          <string-name>
            <surname>Panunzi</surname>
          </string-name>
          (
          <year>2010</year>
          ).
          <article-title>I verbi generali nei corpora di parlato. Un progetto di annotazione semantica cross-linguistica</article-title>
          . In E. Cresti &amp; I. Korzen (eds), Language, Cognition and Identity. Extension of the Endocentric/Esocentric Typology. Firenze: Firenze University Press,
          <fpage>27</fpage>
          -
          <lpage>46</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Moneglia</surname>
          </string-name>
          &amp; A.
          <string-name>
            <surname>Panunzi</surname>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>Action Predicates and the Ontology of Action across Spoken Language Corpora. The Basic Issue of the SEMACT Project</article-title>
          . In M. Alcántara Plá &amp; T. Declerck (eds),
          <source>Proceeding of the International Workshop on the Semantic Representation of Spoken Language</source>
          . Salamanca: Universidad de Salamanca,
          <volume>51</volume>
          -
          <fpage>58</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Gregori</surname>
          </string-name>
          &amp; A.
          <string-name>
            <surname>Panunzi</surname>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Measuring the Italian-English lexical gap for action verbs and its impact on translation</article-title>
          .
          <source>In Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</source>
          .
          <source>Valencia: Association for Computational Linguistics</source>
          ,
          <fpage>102</fpage>
          -
          <lpage>109</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Moneglia</surname>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Natural Language Ontology of Action: A Gap with Huge Consequences for Natural Language Understanding and Machine Translation</article-title>
          . In
          <string-name>
            <given-names>Z.</given-names>
            <surname>Vetulani</surname>
          </string-name>
          &amp; J. Mariani (eds),
          <source>Human Language Technology Challenges for Computer Science and Linguistics</source>
          , volume
          <volume>8387</volume>
          of Lecture Notes in Computer Science. Springer International Publishing,
          <volume>379</volume>
          -
          <fpage>395</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Moneglia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Frontini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Gagliardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Monachini</surname>
          </string-name>
          &amp; A.
          <string-name>
            <surname>Panunzi</surname>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>The IMAGACT Visual Ontology. An Extendable Multilingual Infrastructure for the Representation of Lexical Encoding of Action</article-title>
          . In N. Calzolari,
          <string-name>
            <given-names>K.</given-names>
            <surname>Choukri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Declerck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Loftsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Maegaard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Mariani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Moreno</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Odijk</surname>
          </string-name>
          &amp; S. Piperidis (eds),
          <source>Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC14)</source>
          , Reykjavik, Iceland.
          <source>European Language Resources Association (ELRA)</source>
          ,
          <fpage>3425</fpage>
          -
          <lpage>3432</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R.</given-names>
            <surname>Navigli</surname>
          </string-name>
          &amp; S.
          <string-name>
            <surname>Ponzetto</surname>
          </string-name>
          (
          <year>2012</year>
          ).
          <article-title>BabelNet: The Automatic Construction, Evaluation and Application of a Wide-Coverage Multilingual Semantic Network</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>193</volume>
          ,
          <fpage>217</fpage>
          -
          <lpage>250</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Gregori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Panunzi</surname>
          </string-name>
          &amp;
          <string-name>
            <surname>A.A. Ravelli</surname>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Linking IMAGACT ontology to BabelNet through action videos</article-title>
          . In A. Corazza,
          <string-name>
            <given-names>S.</given-names>
            <surname>Montemagni</surname>
          </string-name>
          &amp; G. Semeraro,
          <source>Proceedings of the Third Italian Conference on Computational Linguistics</source>
          CLiC-it
          <year>2016</year>
          . 5-
          <issue>6</issue>
          <year>December 2016</year>
          ,
          <article-title>Napoli</article-title>
          . Accademia University Press,
          <fpage>162</fpage>
          -
          <lpage>167</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>K.</given-names>
            <surname>Pastra</surname>
          </string-name>
          , &amp; Y.
          <string-name>
            <surname>Aloimonos</surname>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>The minimalist grammar of action</article-title>
          .
          <source>Philosophical Transactions of the Royal Society of London B: Biological Sciences</source>
          <volume>1585</volume>
          ,
          <fpage>103</fpage>
          -
          <lpage>117</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Socher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Li</surname>
          </string-name>
          &amp; L.
          <string-name>
            <surname>Fei-Fei</surname>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>ImageNet: A Large-Scale Hierarchical Image Database</article-title>
          .
          <source>IEEE Computer Vision and Pattern Recognition (CVPR).</source>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Pustejovsky</surname>
          </string-name>
          (
          <year>1991</year>
          ).
          <article-title>The syntax of event structure</article-title>
          .
          <source>Cognition</source>
          <volume>41</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>3</lpage>
          ,
          <fpage>47</fpage>
          -
          <lpage>81</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>F.</given-names>
            <surname>Pulvermüller</surname>
          </string-name>
          (
          <year>2005</year>
          ).
          <article-title>Brain mechanisms linking language and action</article-title>
          .
          <source>Nature reviews. Neuroscience 6</source>
          .7:
          <fpage>576</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>N.</given-names>
            <surname>Vitucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Franchi</surname>
          </string-name>
          &amp; G.
          <string-name>
            <surname>Gini</surname>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Programming a humanoid robot in natural language: an experiment with description logics</article-title>
          .
          <source>Workshop Simulation in robot programming</source>
          ,
          <source>SIMPAR</source>
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>N. G.</given-names>
            <surname>Tsagarakis</surname>
          </string-name>
          , G. Metta,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sandini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Vernon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Beira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Becchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Righetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Santos-Victor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Ijspeert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Carrozza</surname>
          </string-name>
          &amp;
          <string-name>
            <surname>D. G. Caldwell</surname>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>iCub: the design and realization of an open humanoid platform for cognitive and neuroscience research</article-title>
          .
          <source>Advanced Robotics</source>
          <volume>21</volume>
          :
          <fpage>10</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>E.</given-names>
            <surname>Jezek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Magnini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Feltracco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bianchini</surname>
          </string-name>
          &amp; O.
          <string-name>
            <surname>Popescu</surname>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>A resource of Typed Predicate Argument Structures for linguistic analysis and semantic processing</article-title>
          .
          <source>In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)</source>
          , Reykjavik, Iceland.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>