<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Transforming Web Knowledge into Actionable Knowledge Graphs for Robot Manipulation Tasks</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Michael</forename><surname>Beetz</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute for Artificial Intelligence</orgName>
								<orgName type="institution">University of Bremen</orgName>
								<address>
									<settlement>Bremen</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Philipp</forename><surname>Cimiano</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Cluster of Excellence Cognitive Interaction Technology (CITEC)</orgName>
								<orgName type="institution">Bielefeld University</orgName>
								<address>
									<settlement>Bielefeld</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Michaela</forename><surname>Kümpel</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute for Artificial Intelligence</orgName>
								<orgName type="institution">University of Bremen</orgName>
								<address>
									<settlement>Bremen</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Enrico</forename><surname>Motta</surname></persName>
							<affiliation key="aff2">
								<orgName type="department">Knowledge Media Institute</orgName>
								<orgName type="institution">The Open University</orgName>
								<address>
									<settlement>Milton Keynes</settlement>
									<country key="GB">United Kingdom</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ilaria</forename><surname>Tiddi</surname></persName>
							<affiliation key="aff3">
								<orgName type="department">Knowledge Representation and Reasoning Group</orgName>
								<orgName type="institution">Vrije Universiteit Amsterdam</orgName>
								<address>
									<country key="NL">The Netherlands</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Jan-Philipp</forename><surname>Töberg</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Cluster of Excellence Cognitive Interaction Technology (CITEC)</orgName>
								<orgName type="institution">Bielefeld University</orgName>
								<address>
									<settlement>Bielefeld</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Transforming Web Knowledge into Actionable Knowledge Graphs for Robot Manipulation Tasks</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">6AF4FD3310F87633E1D87683881204F1</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:28+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Knowledge Representation</term>
					<term>Cognitive Robotics</term>
					<term>Web Knowledge</term>
					<term>Actionable Knowledge</term>
					<term>Knowledge Extraction</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>One of the visions in AI-based robotics is household robots that can autonomously handle a variety of meal preparation tasks. Based on this scenario, we present a best-practice tutorial on how to create actionable knowledge graphs that a robot can use to execute task variations of cutting actions. We implemented a solution for this task that integrates all necessary software components in the framework of the robot control process. In this tutorial, we focus on knowledge acquisition, knowledge representation and reasoning, and simulated robot action execution, bringing these components together into a learning environment that, in its extended version, introduces the whole control process of Cognitive Robotics. In particular, the tutorial details the concepts a knowledge graph should include for robot action execution, how web knowledge can be automatically acquired for the domain of cutting fruits, and how the created knowledge graph can be used to let robots execute tasks like slicing a cucumber or quartering an apple. The learning environment follows an immersive approach, using a physics-based simulation environment for visualization that helps illustrate the concepts taught in the tutorial.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>We envision household robots that can be placed in any kitchen and given an arbitrary recipe from the Web, which they can understand and parse into action plans, which in turn can be broken down into executable body motions performed with the objects available in the environment. For this, robots need to be enabled to perform meal preparation tasks with any tool, on any available object, and for a variety of task variations. This tutorial is based on prior research that proposed a methodology for creating actionable knowledge graphs <ref type="bibr" target="#b0">[1]</ref>, where a solution is proposed for creating knowledge graphs that link object information to action and environment information and thus make it actionable, as well as a knowledge engineering methodology specifically aligned to creating ontologies for meal preparation tasks that can be used to parameterise robot action plans in order to perform task variations of cutting actions <ref type="bibr" target="#b1">[2]</ref>.</p><p>There has been extensive research on the creation of knowledge graphs, which has led to many domain knowledge graphs that have proven effective at answering questions. Usually, these knowledge graphs contain object information (e.g. about food objects, recipes, people, or books). To make such knowledge graphs actionable, the contained object knowledge has to be linked to environment knowledge; if robots are to use the knowledge graphs for action execution, they further need to include action knowledge. This implies that actionable knowledge graphs do not aim at perfectly modeling object knowledge, but instead focus on reusing existing knowledge sources and on modeling and linking environment and action knowledge, in order to make the contained knowledge applicable in agent applications. This tutorial will detail the necessary concepts for creating an actionable knowledge graph for the example domain of Cutting Fruits and Vegetables, which robotic agents can use to infer the correct body motions for quartering an apple or dicing a cucumber.</p><p>ESWC 2024 Workshops and Tutorials Joint Proceedings, May 26-27, Heraklion, Greece. beetz@cs.uni-bremen.de (M. Beetz); cimiano@techfak.uni-bielefeld.de (P. Cimiano); michaela.kuempel@uni-bremen.de (M. Kümpel); enrico.motta@open.ac.uk (E. Motta); i.tiddi@vu.nl (I. Tiddi); jtoeberg@techfak.uni-bielefeld.de (J. Töberg). ORCID: 0000-0002-7888-7444 (M. Beetz); 0000-0002-4771-441X (P. Cimiano); 0000-0002-0408-3953 (M. Kümpel); 0000-0003-0015-1952 (E. Motta); 0000-0001-7116-9338 (I. Tiddi); 0000-0003-0434-6781 (J. Töberg)</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Structure of the Tutorial</head><p>The tutorial is centered around the knowledge engineering methodology introduced in <ref type="bibr" target="#b1">[2]</ref> and its application to the exemplary task of Cutting Fruits &amp; Vegetables. In general, the methodology consists of five steps to create actionable knowledge graphs that a robot can employ to handle manipulation tasks, as shown in Figure <ref type="figure" target="#fig_0">1</ref>. In the following we present a brief summary of these steps:</p><p>1) Defining Motion Parameters: Definition of the domain- and action-dependent parameters that influence the execution of the target manipulation action, e.g. the knife position for cutting tasks.</p><p>2) Collecting Knowledge Sources: Collection of different sources for three types of knowledge: action knowledge, object knowledge, and knowledge for linking the two.</p><p>3a) Extraction of Action Groups &amp; Affordances: Collection of information about the manipulation action and its associated synonyms and hyponyms. This information is used to organize the action verbs into groups based on similarities in their motion parameters; for each so-called action group, a representative is chosen and its affordances are created.</p><p>3b) Extraction of Object Knowledge &amp; Dispositions: Collection of information about the objects participating in the manipulation action (e.g. tools, environments, targets), followed by concrete values for the task-specific object properties that influence the action execution. This knowledge is represented through dispositions.</p><p>4) Relate Object to Action Knowledge: Relating the action affordances to the object dispositions in an ontology by re-using relations from the SOMA <ref type="bibr" target="#b2">[3]</ref> ontology.</p><p>5) Link to Cognitive Architecture: Mapping concepts in the generalized manipulation plan to their representation in the ontology and using the architecture's perception system to ground objects and their properties.</p><p>In this tutorial we present the whole methodology but focus on steps 1), 3) and 4), which cover the knowledge collection and extraction from (Semantic) Web resources.</p></div>
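Viewed as software, the five steps form a sequential pipeline whose intermediate artefacts feed the final knowledge graph. The following Python sketch is purely illustrative: all names and example values are our own stand-ins, not part of the published implementation.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeGraphBuild:
    """Accumulates the artefacts produced by the five methodology steps."""
    motion_parameters: list = field(default_factory=list)
    sources: dict = field(default_factory=dict)
    action_groups: dict = field(default_factory=dict)
    object_dispositions: dict = field(default_factory=dict)
    axioms: list = field(default_factory=list)

def run_pipeline(action: str) -> KnowledgeGraphBuild:
    kg = KnowledgeGraphBuild()
    # Step 1: define the motion parameters for the target action.
    kg.motion_parameters = ["number_of_repetitions", "cutting_position",
                            "result_object", "prior_actions", "dependent_tasks"]
    # Step 2: collect sources for action, object, and linking knowledge.
    kg.sources = {"action": ["WordNet", "VerbNet"],
                  "object": ["FoodOn"],
                  "linking": ["SOMA"]}
    # Step 3a: group action verbs and pick a representative per group.
    kg.action_groups = {"cut": ["slice", "dice", "halve", "quarter"]}
    # Step 3b: record object dispositions that influence execution.
    kg.object_dispositions = {"apple": ["Cuttability", "Peelability"]}
    # Step 4: relate affordances to dispositions via SOMA relations.
    kg.axioms = ["apple hasDisposition some (Cuttability and "
                 "(affordsTask some Quartering))"]
    # Step 5 (not sketched here): map plan concepts to the cognitive
    # architecture and ground objects through its perception system.
    return kg
```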
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Defining Motion Parameters</head><p>In order to create an actionable knowledge graph for the domain of cutting fruits and vegetables, we first have to investigate the motion parameters that influence action execution. For this, one can start with a lexical resource like WordNet <ref type="bibr" target="#b3">[4]</ref> to find commonly used synonyms of cutting, such as slicing, dicing, or halving.</p><p>We then investigate how the different action verbs influence task execution, which results in the following motion parameters:</p><p>-number of repetitions: Cutting tasks vary in the number of repetitions to be executed. Sometimes a cut is performed only once, while other tasks require cutting the whole object. -cutting position: Cutting tasks also vary in the applied cutting position. Halving requires a different position than slicing, for example. -result object: Cutting tasks result in objects of varying number and shape.</p><p>-prior actions: Some objects require a prior action (such as peeling) to be executed.</p><p>-dependent tasks: Some tasks depend on prior tasks (e.g. quartering depends on halving).</p></div>
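These parameters can be captured in a small data structure per action verb. The following sketch uses hypothetical values for three verbs to show, for instance, how quartering depends on halving; none of the names or values below come from the published implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class CuttingParameters:
    """Motion parameters for one cutting verb (all values illustrative)."""
    repetitions: str                     # "once" or "whole_object"
    cutting_position: str                # e.g. "halving_position"
    result_shape: str                    # shape of the resulting pieces
    prior_action: Optional[str] = None   # e.g. "peeling" for some objects
    depends_on: Optional[str] = None     # task this task builds on

# Hypothetical parameterisations for three action verbs:
VERB_PARAMETERS = {
    "halving": CuttingParameters("once", "halving_position", "halves"),
    "quartering": CuttingParameters("once", "halving_position", "quarters",
                                    depends_on="halving"),
    "slicing": CuttingParameters("whole_object", "slicing_position", "slices"),
}
```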
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Extraction of Relevant Action Knowledge from the Web</head><p>The relevant action knowledge we focus on consists of the different verbs that are associated with the manipulation action. This includes the main verb (e.g. cut) as well as all of its hyponyms and synonyms. Additionally, action knowledge covers the properties of the different verbs that distinguish their action execution and generally influence the manipulation action.</p><p>In the tutorial we showcase the action knowledge extraction for the exemplary task of Cutting. We begin by extracting all synonyms and hyponyms from WordNet <ref type="bibr" target="#b3">[4]</ref> and VerbNet <ref type="bibr" target="#b4">[5]</ref>, two expert-curated resources for lexical information and verb usage. For the verb cut, we extract 211 verbs from WordNet and 147 verbs from VerbNet. After pre-processing and duplicate removal, 181 verbs remain. These remaining verbs are then filtered based on their relevance for the domain using an instruction-focused corpus from WikiHow: a verb must occur at least 100 times in a specific part of an article across the whole corpus to be included in future steps. With this restriction, only 46 verbs remain. However, manual post-processing is still needed, since some important verbs are missing (e.g. halve or quarter), while others are very general and thus not relevant for cutting (e.g. make or pull).</p></div>
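The frequency-filtering step described above can be sketched in a few lines. The toy corpus, the candidate verb set, and the counting scheme below are illustrative stand-ins for the WikiHow corpus and the WordNet/VerbNet output; only the threshold of 100 occurrences comes from the text.

```python
from collections import Counter
import re

# Toy stand-in for the WikiHow corpus; the real pipeline counts verb
# occurrences in a specific part of each article across the whole corpus.
corpus_steps = [
    "Slice the cucumber into thin pieces.",
    "Cut the apple in half, then cut each half again.",
    "Dice the onion and cut the pepper.",
] * 60  # repeated so the toy counts can cross the threshold

# Hypothetical candidates, standing in for the 181 WordNet/VerbNet verbs.
candidate_verbs = {"cut", "slice", "dice", "carve", "julienne"}

def filter_verbs(verbs, steps, threshold=100):
    """Keep only verbs occurring at least `threshold` times in the corpus."""
    counts = Counter()
    for step in steps:
        for token in re.findall(r"[a-z]+", step.lower()):
            if token in verbs:
                counts[token] += 1
    return {v for v in verbs if counts[v] >= threshold}

frequent = filter_verbs(candidate_verbs, corpus_steps)
```

As the text notes, such a purely frequency-based filter still needs manual post-processing: rare but important verbs fall below the threshold, while overly general verbs pass it.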
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>Comparison of different methods for extracting anatomical parts for a given fruit sorted based on their F1-score. In each column, we mark the three methods with the highest performance in bold. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Extraction of Relevant Object Knowledge from the Web</head><p>For the object knowledge, we focus on information about the objects involved in the manipulation action, their properties, their usage, and their specific purpose. In general we showcase a pipeline similar to the one explained in Section 2.2. We begin by extracting all relevant objects from domain-specific taxonomies.</p><p>For our focus on fruits and vegetables, we query FoodOn <ref type="bibr" target="#b5">[6]</ref> using SPARQL, resulting in 257 unique fruits and 31 unique vegetables. Since not all of these fruits and vegetables are equally relevant, and since enough information needs to exist to evaluate their task-specific properties, we again use instruction-focused corpora to filter them based on their occurrence data. In this case we additionally look at the recipe corpus Recipe1M+ <ref type="bibr" target="#b6">[7]</ref> and only include fruits and vegetables that occur in at least 1% of any part of these two corpora. This filtering step results in 15 remaining fruits and one remaining vegetable. Lastly, we present our ongoing efforts in automating the extraction of task-specific object property values. For this, we compare three different pre-trained embeddings (GloVe <ref type="bibr" target="#b7">[8]</ref>, NASARI <ref type="bibr" target="#b8">[9]</ref> and ConceptNet Numberbatch <ref type="bibr" target="#b9">[10]</ref>), two large language models (ChatGPT and GPT-4), as well as two techniques for extracting this information from Recipe1M+, on the task of extracting the existing anatomical parts for a given fruit. Our preliminary results and the corresponding thresholds can be examined in Table <ref type="table">1</ref>.</p></div>
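The embedding-based methods in Table 1 decide whether a fruit has a given anatomical part by thresholding the cosine similarity of pre-trained word vectors (e.g. Cossim ≥ 0.20 for ConceptNet Numberbatch). A minimal sketch, with toy 3-dimensional vectors in place of real 300-dimensional embeddings:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy vectors standing in for pre-trained embeddings such as GloVe,
# NASARI, or ConceptNet Numberbatch (values are made up for illustration).
embeddings = {
    "apple": [0.9, 0.1, 0.2],
    "peel":  [0.8, 0.2, 0.3],
    "wing":  [0.0, 1.0, 0.1],
}

def has_part(fruit, part, threshold=0.20):
    """Predict that `fruit` has anatomical `part` if the embeddings are
    similar enough; the threshold is tuned per embedding (cf. Table 1)."""
    return cosine_similarity(embeddings[fruit], embeddings[part]) >= threshold
```

The per-embedding threshold is the free parameter being compared in the table; the corpus-based methods replace the similarity test with n-gram occurrence statistics from Recipe1M+.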
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Linking Action to Object Knowledge in the Ontology</head><p>For connecting and linking the action to the object knowledge, we rely on the concepts of disposition and affordance. In general, a disposition describes a property of an object that enables an agent to perform a certain task <ref type="bibr" target="#b10">[11]</ref>, as in a knife can be used for cutting, whereas an affordance describes what an object or the environment offers an agent <ref type="bibr" target="#b11">[12]</ref>, as in an apple affords to be cut.</p><p>In recent works like SOMA <ref type="bibr" target="#b2">[3]</ref>, both concepts are set in relation by stating that dispositions allow objects to participate in events realizing affordances, which are more abstract descriptions of dispositions. This is achieved in the TBox by using the affordsTask, affordsTrigger and hasDisposition relations from SOMA. An example for the disposition of Peelability can be examined in Figure 2.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>hasDisposition some (Peelability and (affordsTask some Peeling) and (affordsTrigger only (classifies only Hand)))</p><p>Figure <ref type="figure">2</ref>: Example for connecting an affordance ("Peeling with a hand") to a disposition ("Peelability") using relations from the SOMA ontology <ref type="bibr" target="#b2">[3]</ref>.</p></div>
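In an OWL ontology file, a restriction like the one in Figure 2 would typically be attached as a class axiom to the affording object, here written in Manchester syntax; the subject class Apple is purely illustrative and not taken from the published ontology:

```
Class: Apple
    SubClassOf:
        hasDisposition some
            (Peelability
             and (affordsTask some Peeling)
             and (affordsTrigger only (classifies only Hand)))
```

Read as: every apple has some Peelability disposition, that disposition affords the task of Peeling, and its only admissible trigger is (classified as) a Hand.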
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Tutorial Material</head><p>For the tutorial, we made our implementation available as Jupyter Notebooks in a GitHub repository <ref type="foot" target="#foot_0">1</ref>. Participants are encouraged to download the notebooks and follow along, but since the notebooks are presented in depth during the talks, actual hands-on experience is optional.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1:</head><label>1</label><figDesc>Figure 1: The knowledge engineering methodology proposed in <ref type="bibr" target="#b1">[2]</ref> that we use as the foundation for the tutorial.</figDesc><graphic coords="2,117.13,65.61,361.00,158.51" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc></figDesc><table><row><cell>Method</cell><cell>Acc.</cell><cell>Prec.</cell><cell>Rec.</cell><cell>Spec.</cell><cell>F1</cell><cell>Threshold</cell></row><row><cell>Recipe1M+ 2-Step</cell><cell>.863</cell><cell>.824</cell><cell>.636</cell><cell>.948</cell><cell>.718</cell><cell>Occ. in ≥ 1% of steps</cell></row><row><cell>ChatGPT</cell><cell>.775</cell><cell>.556</cell><cell>.909</cell><cell>.724</cell><cell>.690</cell><cell>-</cell></row><row><cell>GPT-4</cell><cell>.700</cell><cell>.476</cell><cell>.909</cell><cell>.621</cell><cell>.625</cell><cell>-</cell></row><row><cell>CN Numberbatch</cell><cell>.788</cell><cell>.609</cell><cell>.636</cell><cell>.845</cell><cell>.622</cell><cell>Cossim ≥ 0.20</cell></row><row><cell>Recipe1M+ Bigrams</cell><cell>.688</cell><cell>.463</cell><cell>.864</cell><cell>.621</cell><cell>.603</cell><cell>Occ. in any step</cell></row><row><cell>Recipe1M+ 2-Step</cell><cell>.738</cell><cell>.517</cell><cell>.682</cell><cell>.759</cell><cell>.588</cell><cell>Occ. in ≥ 0.5% of steps</cell></row><row><cell>Recipe1M+ Bigrams</cell><cell>.788</cell><cell>.667</cell><cell>.455</cell><cell>.914</cell><cell>.541</cell><cell>Occ. in ≥ 0.1% of steps</cell></row><row><cell>CN Numberbatch</cell><cell>.825</cell><cell>1.00</cell><cell>.364</cell><cell>1.00</cell><cell>.533</cell><cell>Cossim ≥ 0.30</cell></row><row><cell>GloVe</cell><cell>.550</cell><cell>.348</cell><cell>.727</cell><cell>.483</cell><cell>.471</cell><cell>Cossim ≥ 0.25</cell></row><row><cell>GloVe</cell><cell>.688</cell><cell>.435</cell><cell>.455</cell><cell>.776</cell><cell>.444</cell><cell>Cossim ≥ 0.40</cell></row><row><cell>NASARI</cell><cell>.750</cell><cell>.571</cell><cell>.364</cell><cell>.897</cell><cell>.444</cell><cell>Cossim ≥ 0.75</cell></row><row><cell>GloVe</cell><cell>.738</cell><cell>.533</cell><cell>.364</cell><cell>.879</cell><cell>.432</cell><cell>Cossim ≥ 0.50</cell></row><row><cell>NASARI</cell><cell>.500</cell><cell>.295</cell><cell>.591</cell><cell>.466</cell><cell>.394</cell><cell>Cossim ≥ 0.50</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://github.com/Food-Ninja/Tutorial_ESWC_HHAI</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>The tutorial is organized by the SAIL Network in collaboration with the Joint Research Center on Cooperative and Cognition-enabled AI (CoAI JRC). The research towards this tutorial has been partially supported by the German Federal Ministry of Education and Research (Project-ID 16DHBKI047, "IntEL4CoRo -Integrated Learning Environment for Cognitive Robotics", University of Bremen) as well as the German Research Foundation (DFG) as part of CRC (SFB) 1320 "EASE -Everyday Activity Science and Engineering", University of Bremen (http://www.ease-crc.org/). The research was conducted in subproject R04 "Cognition-enabled execution of everyday actions".</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Actionable knowledge graphs -how daily activity applications can benefit from embodied web knowledge</title>
		<author>
			<persName><forename type="first">M</forename><surname>Kümpel</surname></persName>
		</author>
		<idno type="DOI">10.26092/elib/2936</idno>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Towards a Knowledge Engineering Methodology for Flexible Robot Manipulation in Everyday Tasks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Kümpel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-P</forename><surname>Töberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Hassouna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Cimiano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Beetz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Workshop on Actionable Knowledge Representation and Reasoning for Robots (AKR 3 )</title>
				<meeting><address><addrLine>Heraklion, Crete, Greece</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Foundations of the Socio-physical Model of Activities (SOMA) for Autonomous Robotic Agents</title>
		<author>
			<persName><forename type="first">D</forename><surname>Beßler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Porzel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pomarlan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vyas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Höffner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Beetz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Malaka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bateman</surname></persName>
		</author>
		<idno type="DOI">10.3233/FAIA210379</idno>
		<idno type="arXiv">arXiv:2011.11972</idno>
		<ptr target="https://ebooks.iospress.nl/doi/10.3233/FAIA210379" />
	</analytic>
	<monogr>
		<title level="m">Frontiers in Artificial Intelligence and Applications</title>
				<meeting><address><addrLine>Amsterdam</addrLine></address></meeting>
		<imprint>
			<publisher>IOS Press</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">344</biblScope>
			<biblScope unit="page" from="159" to="174" />
		</imprint>
	</monogr>
	<note>Formal Ontology in Information Systems</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">WordNet: A Lexical Database for English</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">A</forename><surname>Miller</surname></persName>
		</author>
		<idno type="DOI">10.1145/219717.219748</idno>
	</analytic>
	<monogr>
		<title level="j">Communications of the ACM</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="page" from="39" to="41" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">K</forename><surname>Schuler</surname></persName>
		</author>
		<title level="m">VerbNet: A Broad-Coverage, Comprehensive Verb Lexicon</title>
				<imprint>
			<date type="published" when="2005">2005</date>
		</imprint>
		<respStmt>
			<orgName>University of Pennsylvania</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Ph.D. thesis</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">FoodOn: A harmonized food ontology to increase global food traceability, quality control and data integration</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Dooley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">J</forename><surname>Griffiths</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">S</forename><surname>Gosal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">L</forename><surname>Buttigieg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Hoehndorf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Lange</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">M</forename><surname>Schriml</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">S L</forename><surname>Brinkman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">W L</forename><surname>Hsiao</surname></persName>
		</author>
		<idno type="DOI">10.1038/s41538-018-0032-6</idno>
	</analytic>
	<monogr>
		<title level="j">npj Sci Food</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page">23</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Recipe1M+: A Dataset for Learning Cross-Modal Embeddings for Cooking Recipes and Food Images</title>
		<author>
			<persName><forename type="first">J</forename><surname>Marín</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Biswas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Ofli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Hynes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Salvador</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Aytar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Weber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Torralba</surname></persName>
		</author>
		<idno type="DOI">10.1109/TPAMI.2019.2927476</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="page" from="187" to="203" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Glove: Global Vectors for Word Representation</title>
		<author>
			<persName><forename type="first">J</forename><surname>Pennington</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Socher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Manning</surname></persName>
		</author>
		<idno type="DOI">10.3115/v1/D14-1162</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics</title>
				<meeting>the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics<address><addrLine>Doha, Qatar</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="1532" to="1543" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">NASARI: A Novel Approach to a Semantically-Aware Representation of Items</title>
		<author>
			<persName><forename type="first">J</forename><surname>Camacho-Collados</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Pilehvar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Navigli</surname></persName>
		</author>
		<ptr target="http://aclweb.org/anthology/N/N15/N15-1059.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</title>
				<meeting>the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies<address><addrLine>Denver, CO</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="567" to="577" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">ConceptNet 5.5: An Open Multilingual Graph of General Knowledge</title>
		<author>
			<persName><forename type="first">R</forename><surname>Speer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Havasi</surname></persName>
		</author>
		<idno type="DOI">10.1609/aaai.v31i1.11164</idno>
	</analytic>
	<monogr>
		<title level="j">AAAI</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Ecological foundations of cognition: Invariants of perception and action</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Turvey</surname></persName>
		</author>
		<idno type="DOI">10.1037/10564-004</idno>
	</analytic>
	<monogr>
		<title level="m">Cognition: Conceptual and Methodological Issues</title>
				<editor>
			<persName><forename type="first">H</forename><forename type="middle">L</forename><surname>Pick</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><forename type="middle">W</forename><surname>Van Den Broek</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><forename type="middle">C</forename><surname>Knill</surname></persName>
		</editor>
		<meeting><address><addrLine>Washington</addrLine></address></meeting>
		<imprint>
			<publisher>American Psychological Association</publisher>
			<date type="published" when="1992">1992</date>
			<biblScope unit="page" from="85" to="117" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">The Ecological Approach to Visual Perception</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">H</forename><surname>Bornstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Gibson</surname></persName>
		</author>
		<idno type="DOI">10.2307/429816</idno>
	</analytic>
	<monogr>
		<title level="j">The Journal of Aesthetics and Art Criticism</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="page">203</biblScope>
			<date type="published" when="1980">1980</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
