<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Collecting information for action understanding. The enrichment of the IMAGACT Ontology of Action</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Andrea</forename><forename type="middle">Amelio</forename><surname>Ravelli</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">LABLITA - Università degli Studi di Firenze</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Lorenzo</forename><surname>Gregori</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">LABLITA - Università degli Studi di Firenze</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Alessandro</forename><surname>Panunzi</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">LABLITA - Università degli Studi di Firenze</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Collecting information for action understanding. The enrichment of the IMAGACT Ontology of Action</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">D1E0C7574837E9F57D8B46DF79C950A7</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T08:35+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>ontology linking</term>
					<term>IMAGACT</term>
					<term>BabelNet</term>
					<term>Praxicon</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper presents the status of our work aimed at enriching the IMAGACT Ontology of Action by linking it to other resources. In order to achieve this goal we performed a visual mapping, exploiting the IMAGACT visual component (video scenes that represent physical actions) as the linkage point among resources. By using visual objects, which are free from linguistic constraints and can be interpreted and described from different perspectives, we connected resources with different scopes and theoretical frameworks, for which a concept-to-concept mapping appeared difficult to obtain.</p><p>We provide a brief description of two linkings obtained using this technique: an automatic linking between IMAGACT and BabelNet, a multilingual semantic network, and a manual linking between IMAGACT and Praxicon, a conceptual knowledge base of action.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Action verbs contain the basic information that must be understood in order to make sense of a sentence, and that must be processed in instructions given to artificial systems. The difficulty of understanding action verbs stems from the fact that no one-to-one correspondence can be established between action predicates and action concepts. The same action can be predicated by multiple verbs (e.g. "John takes/brings/leads Mary to the restaurant") and, conversely, one verb can extend to multiple, different actions (e.g. "John takes the cup from the table", "John takes/brings the cup to Mary"). Most of these verbs belong to the class of general verbs, which are characterized by high ambiguity and high frequency of use <ref type="bibr" target="#b0">[1]</ref>. In these circumstances, senses are often vague and overlapping, and their discrimination is not clear; this is a critical issue for their semantic representation.</p><p>The representation is even more difficult in a multilingual perspective, given that different languages segment the action space differently. It has been observed <ref type="bibr" target="#b1">[2]</ref> that even with a fine-grained sense distinction it is often not possible to find an exact match between the action concepts lexicalized by verbs in different languages. Moreover, one language may entirely lack a lexical representation for a specific concept, a phenomenon known as a lexical gap <ref type="bibr" target="#b2">[3]</ref>. These problems deeply affect NLP tasks dealing with actions and their correct interpretation <ref type="bibr" target="#b3">[4]</ref>.</p><p>This paper reports two linking experiments performed on the IMAGACT Visual Ontology of Action, in order to gather information about actions from several perspectives and at different levels: semantic, motoric and visual. The links were established by exploiting the visual information in IMAGACT: instead of a classic concept-to-concept mapping, we performed a visual mapping, that is, a concept-to-video linking. This strategy allowed us to connect linguistic resources that embody different conceptualizations of events.</p><p>This work, far from being definitive, could be useful for the future construction of integrated resources on action understanding, to be effectively exploited for both theoretical analysis and computational applications.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">The IMAGACT Visual Ontology of Action</head><p>Verbs are the lexical class that is normally responsible for event categorization. Among events, actions (defined as goal-oriented events performed by an intentional agent) play an important role from a linguistic perspective: action verbs are very frequent in spoken language and are also highly ambiguous. Moreover, the semantic classification of action verbs is more complex and less linear than that of nouns, so that it is frequently impossible to discriminate a coherent list of word senses.</p><p>The IMAGACT Visual Ontology of Action<ref type="foot" target="#foot_0">1</ref> <ref type="bibr" target="#b4">[5]</ref> is a multimodal and multilingual resource that offers a novel integration of visual and linguistic information as complementary elements. The resource contains 1010 distinct action concepts, obtained by bootstrapping information from Italian and English spoken corpora. Metaphorical and phraseological usages were excluded from the annotation process, in order to collect exclusively occurrences of verbs referring to physical actions.</p><p>Verbs in IMAGACT are divided into action types, according to their semantic variation; each type is linked to one or more video scenes (either 3D animations or filmed video clips) in which a prototypical action is performed. Verbs referring to the same concept are linked to the same scenes, creating an interlinguistic semantic network.</p><p>The ontology is in continuous development and, at present, contains 9 fully mapped languages and 13 in progress, with an average of 730 action verbs per language.</p><p>This resource gives a broad picture of the variety of actions and activities that are prominent in everyday life and specifies, for all the included languages, the lexicon used to express each one in ordinary communication.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Linking resources, sharing knowledge</head><p>In order to collect more information, we planned an extensive enrichment campaign, based on comparison and mutual exchange with other resources.</p><p>For this task we applied visual mapping, a methodology that aims at pointing concepts to a shared visual representation. A video depicting an event is not subject to any linguistic constraint, and the associated semantic information can be described in various manners. Starting from this observation, we used the videos to link concepts of different resources, each of which expresses an independent conceptualization of events according to its own theoretical framework. It follows that the multimodal nature of IMAGACT is a key point for its enrichment and implementation.</p><p>Herein we present the current results obtained by linking IMAGACT with BabelNet and Praxicon. An example of the resulting output can be observed in Figure <ref type="figure" target="#fig_0">1</ref>, which shows a beating event with its parallel representation in the three resources.</p></div>
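As an illustration of the visual mapping idea (concepts from different resources pointing at a shared video rather than at each other), the pivot role of a scene can be sketched as a minimal data structure. All identifiers below are hypothetical, not actual IMAGACT, BabelNet or Praxicon IDs:

```python
# Minimal sketch of visual mapping: a shared video scene acts as the
# pivot between concepts of different resources, so no direct
# concept-to-concept alignment is needed. All IDs are hypothetical.
links = [
    ("imagact", "type:beat-1", "scene:0042"),
    ("babelnet", "bn:00081234v", "scene:0042"),
    ("praxicon", "action:HIT", "scene:0042"),
]

def concepts_for(video_id, links):
    """All (resource, concept) pairs linked to the same video scene."""
    return {(res, cid) for res, cid, vid in links if vid == video_id}

print(sorted(concepts_for("scene:0042", links)))
```

Querying by scene ID returns the aligned concepts from every resource, which is exactly the cross-resource relation the mapping is meant to provide.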
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">IMAGACT and BabelNet</head><p>BabelNet<ref type="foot" target="#foot_1">2</ref> <ref type="bibr" target="#b5">[6]</ref> is a multilingual semantic network obtained through the automatic mapping of the WordNet thesaurus onto the Wikipedia encyclopedia. At present, BabelNet 3.7 covers 284 languages and is the widest multilingual resource available for semantic disambiguation. Concepts and entities are represented by BabelSynsets (BSs), unitary concepts identified by several kinds of information (semantic features, glosses, usage examples, images, etc.) and related to lemmas (in any language) whose senses match those concepts. BSs are not isolated, but are connected into a huge network by means of the semantic relations inherited from WordNet.</p><p>BabelNet concepts (the BSs) are interlinguistic: they gather all the word senses in different languages that are semantically equivalent (or almost equivalent). Conversely, IMAGACT action types encode small semantic differences, so they are more granular and language-dependent. Given these differences, an exact match between their concepts is very rare; it is also hard to establish looser semantic relations (e.g. narrow-to-broad), because the boundaries of BSs are often fuzzy and the gloss is not always able to discriminate clearly between them.</p><p>In this case visual mapping solved the problem: even for BSs whose description is not precise, it is easy to judge whether or not a video is a good action prototype for them<ref type="foot" target="#foot_2">3</ref>.</p><p>Given the multilingual nature of the two resources, we could exploit rich lexical information, i.e. all the verbs, in many languages, related both to IMAGACT scenes and to BabelNet BSs. The connections between BSs and scenes were automatically established on the basis of the number of shared verbal lemmas, through an ML algorithm <ref type="bibr" target="#b6">[7]</ref>.</p><p>As a result of this linking, on the one hand, IMAGACT gained translation information for languages not yet implemented in the Visual Ontology and, on the other, BSs referring to action verbs obtained a video representation. Table <ref type="table" target="#tab_0">1</ref> shows the detailed numbers of scenes and BSs connected through this linking.</p></div>
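To make the shared-lemma criterion concrete, the following sketch ranks candidate BabelSynsets for a scene by the number of multilingual verb lemmas they share. This is an illustration only: the identifiers and toy data are hypothetical, and the actual linking uses an ML algorithm over these features rather than a plain count <ref type="bibr" target="#b6">[7]</ref>.

```python
# Illustrative sketch (not the actual ML system) of the core linking
# signal: the number of verbal lemmas, across languages, shared by an
# IMAGACT scene and a BabelSynset (BS). All data below are toy values.

def shared_lemma_score(scene_lemmas, synset_lemmas):
    """Count the (language, lemma) pairs present in both resources."""
    return len(scene_lemmas.intersection(synset_lemmas))

# Toy data: multilingual verb lemmas attached to one scene and two BSs.
scene = {("en", "take"), ("it", "prendere"), ("es", "tomar")}
candidates = {
    "bn:take": {("en", "take"), ("it", "prendere"), ("fr", "prendre")},
    "bn:bring": {("en", "bring"), ("it", "portare")},
}

# Rank candidate BSs for this scene by lemma overlap.
ranking = sorted(candidates,
                 key=lambda bs: shared_lemma_score(scene, candidates[bs]),
                 reverse=True)
print(ranking[0])  # prints "bn:take", which shares "take" and "prendere"
```

The same overlap is computed in both directions, which is why the linking simultaneously gives scenes to BSs and multilingual verbs to scenes.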
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">IMAGACT and Praxicon</head><p>Praxicon<ref type="foot" target="#foot_3">4</ref> is an ontology for the representation of action concepts, based on the Minimalist Grammar of Action <ref type="bibr" target="#b7">[8]</ref>. In Praxicon, an action is expressed through motor concepts, specified in terms of three basic components: GOAL, TOOL and OBJECT. A large part of this ontology is also linked to WordNet synsets and ImageNet images <ref type="bibr" target="#b8">[9]</ref>.</p><p>Praxicon distinguishes between Actions, Movements, and Events<ref type="foot" target="#foot_4">5</ref>. Actions are structured sets of motoric executions, intentionally performed by an agent to achieve a goal. The goal is a necessary component, so any non-voluntary motoric activation is classified as a Movement, not as an Action. Finally, actions that are too complex to be described as a set of motoric concepts are considered Events and fall outside the scope of the Praxicon resource.</p><p>As in the linking with BabelNet, the IMAGACT scenes are used to connect the information of the two resources, given that their definitions of concepts are too different to attempt a proper and extensive sense matching. 
In fact, the IMAGACT scenes can serve as a visual representation for Praxicon action concepts and, at the same time, Praxicon syntax can be used to describe analytically, from a physical-motoric point of view, all the low-level actions involved in the execution of more complex ones.</p><p>Unlike the previous linking, this one is entirely manual work, consisting in analyzing each scene and determining the physical action performed.</p><p>The scene annotation has been completed for 281 IMAGACT scenes (∼28% of the total), with the following results:</p><p>• 154 scenes (∼55%) have a one-to-one relation with Praxicon Action concepts;</p><p>• 64 scenes (∼23%) map onto more than one Action concept;</p><p>• 19 scenes (∼7%) are Movements but not Actions (in the Praxicon framework);</p><p>• 30 scenes (∼11%) are Events but not Actions (in the Praxicon framework);</p><p>• 14 scenes (∼5%) are unclear.</p><p>IMAGACT scenes are specifically created to provide a prototypical representation of a lexicalized action concept: every scene is a reference for at least one English action verb. This allowed us to derive from these numbers some considerations about the relation between the motoric and lexical levels.</p><p>In Praxicon Events, motoric properties do not play a role in the verb meaning, which encodes an abstract result that is independent of the physical execution of the action. Examples are verbs like to drive, to clean or to rob, which encode a complex set of motoric actions by predicating their final result: ∼11% of the actions that are commonly referred to in language (here, English) belong to this class. Conversely, ∼55% of the scenes have a one-to-one mapping with a Praxicon concept, meaning that the distance between the motoric and lexical levels is low: we can consider these the cases in which the physical execution of an action most deeply affects the verb semantics. Example verbs of this class are to push, to gallop or to brush. 
Then, ∼23% of the retrieved actions are at an intermediate level of abstraction: they can be expressed in terms of physical action concepts, but more than one Praxicon concept is involved in a single lexicalized action. Some example verbs are to break, to open or to glue. Finally, we found that ∼7% of the events that in English are referred to through action verbs are Movements that do not correspond to voluntary actions, like to fall or to drop.</p><p>This work is still in progress, but we believe that the integration of linguistic and motoric knowledge about action is highly relevant both for theoretical analysis and for robotic applications. On the one hand, an integrated resource is desirable for carrying out deep investigations into the relation between language and action, a long-debated subject in linguistics and neuroscience <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref>. On the other hand, Praxicon is also exploited in robotic applications <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13]</ref>, and its integration with a linguistically oriented resource like IMAGACT can be useful to enhance human-robot interaction through natural language.</p></div>
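The annotation figures above can be verified with a few lines of arithmetic. The counts are those reported in the text; the category labels follow the Praxicon framework:

```python
# Reproduce the percentages reported for the 281 annotated scenes.
# Counts are taken from the text; labels follow the Praxicon framework.
counts = {
    "one-to-one Action": 154,
    "multiple Actions": 64,
    "Movement, not Action": 19,
    "Event, not Action": 30,
    "unclear": 14,
}
total = sum(counts.values())  # 281 annotated scenes

for label, n in counts.items():
    print(f"{label}: {n} ({round(100 * n / total)}%)")
# prints 55%, 23%, 7%, 11%, 5% respectively
```

The rounded shares match the ∼55/23/7/11/5% distribution reported above, and the counts sum to the 281 annotated scenes.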
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusions &amp; Future Works</head><p>In this paper we presented the very first steps in the construction of a comprehensive resource for the understanding of actions and their representation in language systems, built on top of the ontological structure of IMAGACT.</p><p>We introduced the visual mapping methodology, which allows resource linking through visual representations. This approach is particularly useful when it is hard to find relations between concepts, because it does not force any kind of convergence between senses. For this reason we feel confident that this methodology could be successfully applied to other linking tasks involving multimodal resources as well.</p><p>Two case studies have been described: the linking of IMAGACT with BabelNet and with Praxicon. In the first case we were dealing with lexical-semantic resources with huge differences in sense discrimination, which made it hard to find inter-resource semantic relations. In the case of Praxicon we applied visual mapping to link IMAGACT with a resource of a different type, in which the concepts are motoric rather than linguistic.</p><p>Finally, to extend the information connected to action concepts, we aim to enrich our ontology with the annotation of noun senses and with predicate-argument structures <ref type="bibr" target="#b13">[14]</ref>, in order to implement semantic selection restrictions for the verbs in each action type.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1.</head><label>1</label><figDesc>Figure 1. An example of the resulting linking between BabelNet, IMAGACT and Praxicon.</figDesc><graphic coords="3,122.30,167.14,350.69,197.32" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>IMAGACT-BabelNet linking results.</figDesc><table><row><cell>IM Scenes linked to BS</cell><cell>773</cell></row><row><cell>BS linked to Scenes</cell><cell>517</cell></row><row><cell>IM English Verbs related to Scenes</cell><cell>544</cell></row><row><cell cols="2">BabelNet English Verbs related to BS 1,100</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">http://www.imagact.it/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">http://www.babelnet.org/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">The measured inter-rater agreement for this task is a Fleiss kappa of 0.74 with 3 annotators. The annotated dataset is available at http://bit.ly/2jt2cD4</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">https://github.com/CSRI/PraxiconDB</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">These categories have their own definitions in the Praxicon framework. We use capital letters when referring to these specific meanings.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">I verbi generali nei corpora di parlato. Un progetto di annotazione semantica cross-linguistica</title>
		<author>
			<persName><forename type="first">M</forename><surname>Moneglia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Panunzi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Language, Cognition and Identity. Extension of the Endocentric/Esocentric Typology</title>
				<editor>
			<persName><forename type="first">E</forename><surname>Cresti</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Korzen</surname></persName>
		</editor>
		<meeting><address><addrLine>Firenze</addrLine></address></meeting>
		<imprint>
			<publisher>Firenze University Press</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="27" to="46" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Action Predicates and the Ontology of Action across Spoken Language Corpora. The Basic Issue of the SEMACT Project</title>
		<author>
			<persName><forename type="first">M</forename><surname>Moneglia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Panunzi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceeding of the International Workshop on the Semantic Representation of Spoken Language</title>
				<editor>
			<persName><forename type="first">M</forename><surname>Plá</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Declerck</surname></persName>
		</editor>
		<meeting>Proceeding of the International Workshop on the Semantic Representation of Spoken Language<address><addrLine>Salamanca; Universidad de Salamanca</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="51" to="58" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Measuring the Italian-English lexical gap for action verbs and its impact on translation</title>
		<author>
			<persName><forename type="first">L</forename><surname>Gregori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Panunzi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</title>
				<meeting>the 1st Workshop on Sense, Concept and Entity Representations and their Applications<address><addrLine>Valencia</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="102" to="109" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Natural Language Ontology of Action: A Gap with Huge Consequences for Natural Language Understanding and Machine Translation</title>
		<author>
			<persName><forename type="first">M</forename><surname>Moneglia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Human Language Technology Challenges for Computer Science and Linguistics</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">Z</forename><surname>Vetulani</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Mariani</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="volume">8387</biblScope>
			<biblScope unit="page" from="379" to="395" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">The IMAGACT Visual Ontology. An Extendable Multilingual Infrastructure for the Representation of Lexical Encoding of Action</title>
		<author>
			<persName><forename type="first">M</forename><surname>Moneglia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Frontini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Gagliardi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Khan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Monachini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Panunzi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC14)</title>
				<editor>
			<persName><forename type="first">N</forename><surname>Calzolari</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Choukri</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Declerck</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Loftsson</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Maegaard</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Mariani</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Moreno</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Odijk</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Piperidis</surname></persName>
		</editor>
		<meeting>the Ninth International Conference on Language Resources and Evaluation (LREC14)<address><addrLine>Reykjavik, Iceland</addrLine></address></meeting>
		<imprint>
			<publisher>European Language Resources Association (ELRA)</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="3425" to="3432" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">BabelNet: The Automatic Construction, Evaluation and Application of a Wide-Coverage Multilingual Semantic Network</title>
		<author>
			<persName><forename type="first">R</forename><surname>Navigli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ponzetto</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">193</biblScope>
			<biblScope unit="page" from="217" to="250" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Linking IMAGACT ontology to BabelNet through action videos</title>
		<author>
			<persName><forename type="first">L</forename><surname>Gregori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Panunzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Ravelli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Third Italian Conference on Computational Linguistics CLiC-it 2016</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Corazza</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Montemagni</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Semeraro</surname></persName>
		</editor>
		<meeting>the Third Italian Conference on Computational Linguistics CLiC-it 2016<address><addrLine>Napoli</addrLine></address></meeting>
		<imprint>
			<publisher>Accademia University Press</publisher>
			<date type="published" when="2016-12">2016. December 2016</date>
			<biblScope unit="page" from="162" to="167" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">The minimalist grammar of action</title>
		<author>
			<persName><forename type="first">K</forename><surname>Pastra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Aloimonos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Philosophical Transactions of the Royal Society of London B: Biological Sciences</title>
		<imprint>
			<biblScope unit="volume">1585</biblScope>
			<biblScope unit="page" from="103" to="117" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">ImageNet: A Large-Scale Hierarchical Image Database</title>
		<author>
			<persName><forename type="first">J</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Dong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Socher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Fei-Fei</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Computer Vision and Pattern Recognition</title>
				<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">The syntax of event structure</title>
		<author>
			<persName><forename type="first">J</forename><surname>Pustejovsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cognition</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="page" from="47" to="81" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Brain mechanisms linking language and action</title>
		<author>
			<persName><forename type="first">F</forename><surname>Pulvermüller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature reviews. Neuroscience</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page">576</biblScope>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Programming a humanoid robot in natural language: an experiment with description logics</title>
		<author>
			<persName><forename type="first">N</forename><surname>Vitucci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Franchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Gini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Workshop Simulation in robot programming</title>
				<meeting><address><addrLine>SIMPAR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016. 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">iCub: the design and realization of an open humanoid platform for cognitive and neuroscience research</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">G</forename><surname>Tsagarakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Metta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sandini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Vernon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Beira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Becchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Righetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Santos-Victor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">J</forename><surname>Ijspeert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Carrozza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">G</forename><surname>Caldwell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advanced Robotics</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="page">10</biblScope>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">A resource of Typed Predicate Argument Structures for linguistic analysis and semantic processing</title>
		<author>
			<persName><forename type="first">E</forename><surname>Jezek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Magnini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Feltracco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bianchini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Popescu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC&apos;14)</title>
				<meeting>the Ninth International Conference on Language Resources and Evaluation (LREC&apos;14)<address><addrLine>Reykjavik, Iceland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
