<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">May the FORCE be with Semantics: exploiting LLMs to Image Schematic Knowledge Enrichment</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Stefano</forename><surname>De Giorgis</surname></persName>
							<email>stefano.degiorgis@cnr.it</email>
							<affiliation key="aff0">
								<orgName type="department">Institute of Cognitive Science and Technologies</orgName>
								<orgName type="institution">National Research Council (ISTC-CNR)</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">The Eighth Image Schema Day (ISD8)</orgName>
								<address>
									<addrLine>25-28 November 2024</addrLine>
									<settlement>Bozen-Bolzano</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">May the FORCE be with Semantics: exploiting LLMs to Image Schematic Knowledge Enrichment</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">DB3F50F1648484AB190B3A6524ED160B</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:33+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper addresses the underspecification of the FORCE image schema. We present a novel hybrid pipeline that combines large language model interactions, linguistic analysis, and knowledge extraction techniques to expand upon Johnson's initial categorization of FORCE types. Our methodology employs Claude 3.5 Sonnet for domain exploration, generates a dataset of 100 force-expressing verbs with contextual sentences, and integrates findings into ImageSchemaNet through AMR2FRED processing and SPARQL querying. Key contributions include: (1) a more nuanced understanding of the FORCE image schema, (2) a validated dataset of force-related linguistic expressions, and (3) an enhanced ontology with empirically derived FORCE concepts. This work bridges the gap between abstract image schema theory and specific linguistic realizations of FORCE, offering practical tools for natural language processing, knowledge representation, and cognitive computing.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Image schemas, as fundamental cognitive constructs <ref type="bibr" target="#b0">[1]</ref>, have been instrumental in our understanding of embodied cognition and conceptual metaphor theory. However, while certain image schemas have been extensively investigated, others remain ambiguous, and a comprehensive, agreed-upon list of these schemas continues to elude researchers. This lack of consensus poses significant challenges for advancing the field and applying image schema theory across various domains, including knowledge representation, natural language processing, and cognitive robotics.</p><p>Large Language Models (LLMs) offer a promising avenue to address some of these challenges. As the most extensive repositories of general approximate commonsense knowledge currently available, LLMs have inadvertently internalized a degree of embodiment "by proxy" through the way language is used to describe the world <ref type="bibr" target="#b1">[2]</ref>. This linguistic representation is inherently grounded in embodied cognition, reflecting how humans conceptualize and interact with their environment. Consequently, LLMs possess a substantial amount of knowledge that is implicitly grounded in their training data, potentially offering insights into image schemas that have yet to be fully explored or defined.</p><p>In the realm of image schemas, Force remains notably underspecified compared to more thoroughly explored schemas such as Source_Path_Goal and its related families, as highlighted by <ref type="bibr" target="#b2">[3]</ref>. While Johnson <ref type="bibr" target="#b0">[1]</ref> provided an initial distinction of Force types, including Compulsion, Blockage, Counterforce, Removal_Of_Restraint, Enablement, Diversion, Attraction, and Repulsion, this categorization has remained largely static. 
Two significant issues persist: (a) these distinctions have not been subjected to further in-depth analysis, and (b) they lack operational applicability in practical contexts. Our research addresses this gap by employing a hybrid pipeline that combines large language model interactions, linguistic analysis, and knowledge extraction techniques. This approach encompasses initial domain exploration using Claude 3.5 Sonnet, focused data generation of force-related verbs and sentences, expert validation, and the integration of derived knowledge into existing semantic resources through AMR2FRED tool processing and SPARQL querying. By doing so, we aim to provide a more nuanced and operationally viable understanding of the FORCE image schema, bridging the gap between abstract cognitive linguistic theory and practical applications in natural language processing and knowledge representation.</p><p>This approach not only addresses the scarcity of annotated resources and comprehensive datasets in the domain of image schemas but also opens up new possibilities for understanding how humans conceptualize their experiences. By tapping into the implicit knowledge encoded in LLMs, we can potentially bridge the gap between abstract image schema concepts and their concrete manifestations in language and thought.</p><p>The paper is organised as follows: Section 2 reviews the relevant IS literature, in particular work on Force analysis; Section 3 details the hybrid approach; Section 4 presents and discusses our results; finally, Section 5 envisions future work and concludes the paper.</p><p>The dataset, knowledge base, scripts, and full prompts will be made fully available at "camera ready" time.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Works</head><p>The concept of image schemas (IS), introduced by Lakoff and Johnson <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5]</ref>, has evolved into a foundational theory in cognitive linguistics. The process of perceptual meaning analysis (PMA) <ref type="bibr" target="#b5">[6]</ref> in children has provided valuable insights into how knowledge can be acquired through sensorimotor interactions with the environment, and specifically through image schemas. These schemas are now understood as sensorimotor cognitive patterns that shape our perception of the world and establish semantic relations based on bodily experiences <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11]</ref>.</p><p>Furthermore, IS constitute a finite set of relational primitives that define the uses and affordances of objects within their environments. Prominent examples include Containment, which represents the capacity of one object to be enclosed within another, and Source_Path_Goal, which describes the potential or actual movement of objects along specific trajectories. These schemas are not merely static representations but serve as dynamic cognitive structures that underpin more complex reasoning processes.</p><p>Indeed, image schemas are widely recognized as foundational elements in human reasoning <ref type="bibr" target="#b11">[12]</ref> and have been demonstrated to evolve into sophisticated cognitive functions, including natural language processing and the conceptualization of abstract entities <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b12">13]</ref>. 
This evolution occurs through the grounding of these schemas in experiential patterns, highlighting the embodied nature of cognition.</p><p>Moreover, image schemas exhibit a remarkable capacity for combinatorial complexity. Consider the concept of "transportation," which can be abstracted beyond specific objects to represent the "movement of object(s) from A to B." In image-schematic terms, this can be formally described as a combination of Source_Path_Goal with either Support or Containment <ref type="bibr" target="#b13">[14]</ref>. This combinatorial property allows for the formal description of increasingly complex events through the construction of constellations and sequences of image schemas, effectively creating state spaces of conceptual structures <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16]</ref>.</p><p>Recent advancements in image schema research have investigated this combinatorial capacity, leading to the development of sophisticated analytical tools and frameworks. Notable among these are the Image Schema Logic ISL 𝐹𝑂𝐿 <ref type="bibr" target="#b16">[17]</ref>, which explores the schemas' compositional nature, studies on their role in conceptual blending <ref type="bibr" target="#b17">[18,</ref><ref type="bibr" target="#b18">19]</ref>, and the ImageSchemaNet ontology <ref type="bibr" target="#b19">[20]</ref>. 
Additionally, a diagrammatic image schema language has been proposed for visual representation <ref type="bibr" target="#b20">[21]</ref>, further expanding the field's analytical capabilities.</p><p>While corpus-based studies <ref type="bibr" target="#b21">[22,</ref><ref type="bibr" target="#b22">23]</ref> and machine learning approaches <ref type="bibr" target="#b23">[24,</ref><ref type="bibr" target="#b24">25,</ref><ref type="bibr" target="#b25">26]</ref> have made significant strides in identifying image schemas in natural language, the challenge of comprehensive image schema coverage remains an active area of research.</p><p>On the other hand, the unique position of LLMs as both products and reflectors of human language use makes them valuable tools for investigating image schemas. By analyzing the patterns and structures within LLM outputs, researchers may uncover new image schemas, clarify ambiguous ones, and potentially work towards a more comprehensive list. Moreover, LLMs could be leveraged to generate synthetic data that captures the nuances of image schemas in linguistic expressions, rapidly expanding the available resources for studying these cognitive patterns.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methodology</head><p>Retrieval Augmented Generation (RAG) <ref type="bibr" target="#b26">[27]</ref> is a cross-disciplinary research topic focused on enabling models to retrieve precise information from large compressed collections of (usually textual) data, traditionally in the form of vector embeddings. Recent advancements in knowledge extraction have made graph generation from text a relatively straightforward process. The recent emergence of Graph-RAG <ref type="bibr" target="#b27">[28]</ref> and parallel techniques has demonstrated the feasibility of generating generic subject-predicate-object triples from textual input, even when using "smaller" language models. This capability has opened up new possibilities for automatically triplifying information extracted from unstructured text. However, the true challenge lies in aligning this extracted information with existing knowledge structures, such as ontologies or conceptual schemas in knowledge bases, and effectively leveraging existing semantic web resources. To address this challenge, our methodology, shown in Figure <ref type="figure" target="#fig_0">1</ref>, builds upon previous work, including ImageSchemaNet <ref type="bibr" target="#b19">[20]</ref>, and enriches the formalized knowledge through two primary approaches. First, we employ a chain-of-thought prompting technique for knowledge elicitation from large language models (LLMs), as detailed in subsequent sections. This process allows us to tap into the vast knowledge encoded in LLMs while maintaining a structured approach to information extraction.</p><p>Our methodology also yields a valuable by-product: synthetic data augmentation for the Image Schema (IS) Catalogue <ref type="bibr" target="#b28">[29]</ref>, which serves as the primary resource for image schemas. 
We process the generated examples through AMR2FRED <ref type="bibr" target="#b29">[30,</ref><ref type="bibr" target="#b30">31]</ref>, a tool capable of creating proper RDF graphs from text via Abstract Meaning Representation, and then use SPARQL queries to extract entities that can be declared as triggers for the Force image schema in existing repositories. This approach results in a hybrid pipeline that combines LLM-generated information (pink boxes in Figure <ref type="figure" target="#fig_0">1</ref>) with symbolic knowledge extraction (light blue boxes). To ensure the quality and relevance of our results, the final synthetic dataset of 100 sentences undergoes manual validation by domain experts, providing a robust foundation for further research and real-world applications in the field of image schemas.</p></div>
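The hybrid pipeline above can be summarized as: generate (verb, sentence) pairs, convert each sentence to an RDF graph, query the graphs for the PropBank sense typing each verb occurrence, and declare those senses as FORCE triggers. The following is a minimal sketch of that flow; the function names, the namespace prefixes (`fred:`, `pb:`, `isn:`), and the trigger property are illustrative assumptions standing in for the actual AMR2FRED output and ImageSchemaNet vocabulary, since the authors' scripts are not yet released.

```python
# Illustrative sketch of the FORCE-enrichment pipeline; names and
# namespaces are assumptions, not the authors' actual code.

RDF_TYPE = "rdf:type"
TRIGGERS = "isn:involvesTrigger"  # hypothetical ImageSchemaNet property

def text_to_triples(verb, sentence):
    """Stand-in for AMR2FRED: reify the verb occurrence as an
    individual typed by an (assumed) PropBank disambiguation."""
    occurrence = f"fred:{verb.capitalize()}_1"
    propbank_sense = f"pb:{verb}.01"  # assumed sense id
    return [(occurrence, RDF_TYPE, propbank_sense)]

def extract_force_triggers(triples):
    """Stand-in for the SPARQL step: collect every entity that
    types a verb occurrence in the generated graphs."""
    return sorted({o for (_, p, o) in triples if p == RDF_TYPE})

def enrich_imageschemanet(triggers):
    """Declare each extracted entity as an evocator of FORCE
    (in practice, stored in a separate graph for provenance)."""
    return [(t, TRIGGERS, "isn:FORCE") for t in triggers]

dataset = [("push", "The firefighter pushed against the heavy door."),
           ("compress", "The press compressed the sheet into a disc.")]
triples = [t for verb, sent in dataset for t in text_to_triples(verb, sent)]
enrichment = enrich_imageschemanet(extract_force_triggers(triples))
```

In the real pipeline the `text_to_triples` step is performed by AMR2FRED and the extraction by a SPARQL query over the stored graphs; the sketch only fixes the data flow between the stages.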
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiments and Discussion</head><p>This section details our experimental approach to exploring and formalizing the Force image schema, combining LLM interactions, linguistic analysis, and knowledge extraction techniques. Our methodology encompasses initial domain exploration, focused data generation, and the integration of derived knowledge into existing semantic web resources.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Generative Knowledge Enrichment</head><p>For the generative task we used a state-of-the-art model, Claude 3.5 Sonnet, a large language model known for its comprehensive knowledge base and nuanced ability in describing even complex concepts. The whole generation process is freely replicable, since it was carried out via the open chat interface. Our experimental approach commenced with a broad domain exploration, centered around the fundamental question: "List which kind of forces can activate the Force image schema."</p><p>List which kind of forces can activate the FORCE image schema.</p><p>The model provided a detailed list of generic Force types, which included: physical forces, psychological forces, social forces, emotional forces, gravitational forces, electromagnetic forces, nuclear forces (strong and weak), frictional forces, tensile and compressive forces, and centripetal and centrifugal forces. This initial output served as a foundation for our subsequent investigation, offering a diverse range of force categories that could potentially activate the Force image schema.</p><p>Building upon this initial output, we employed a chain-of-thought prompting technique to generate a controlled vocabulary for each of these Force types. This method involved asking the model to elaborate on each force category, providing examples and related concepts.</p><p>Now for each of these points generate a list of terms which can be used as controled vocabulary to generate a knowledge base.</p><p>After careful analysis of the results, we made a strategic decision to focus specifically on physical forces for several compelling reasons. Firstly, we observed that some of the other categories, such as psychological and social forces, often represented metaphorical extensions of physical forces. 
These metaphorical uses, while interesting, were already grounded in image-schematic concepts and would potentially introduce complexity in distinguishing between literal and figurative applications of Force. Additionally, certain categories, such as emotional forces, were deemed too generic and abstract for our purposes. Moreover, these emotional aspects had been previously addressed in existing literature, notably in <ref type="bibr" target="#b32">[32]</ref>, which provided a treatment of emotional forces in relation to image schemas.</p><p>Our next step involved a more focused prompt aimed at extracting a "complete list of verbs expressing Forces." We instructed the model to concentrate solely on physical forces and provide a comprehensive list of verbs that evoke any Force idea.</p><p>Ok, now focus only on physical forces and provide a list of all verbs which evokes any FORCE idea.</p><p>The prompt was carefully crafted to elicit a wide range of verbs while maintaining relevance to physical manifestations of Force. This process resulted in a collection of 100 verbs expressing various types of Forces, ranging from common actions like "push" and "pull" to more specific verbs like "torque" and "propel." To contextualize these verbs and ensure their applicability, we then requested linguistic examples that demonstrate specific occurrences of Force for each item on the list.</p><p>Now provide a sentence as example for each item it this list, which realizes a situation of FORCE.</p><p>The prompt for this stage was designed to generate diverse, realistic sentences that clearly illustrated the Force concept embodied by each verb.</p><p>This iterative prompting process yielded a final output of 100 lines, each containing a lexical unit pointing to a type of Force, accompanied by a sentence illustrating its realization in context. 
For instance, one entry might include the verb "compress" along with the example sentence "The hydraulic press compressed the metal sheet into a thin disc." This comprehensive collection serves as a valuable resource for further analysis and application of the Force (and possibly other co-occurring) image schema(s) in the field.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Lexical Trigger Example Sentence</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Push</head><p>The firefighter pushed against the heavy door with all his might to rescue those trapped inside.</p><p>Pull With a strong pull on the rope, the sailor raised the mainsail against the wind's resistance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Shove</head><p>In the crowded subway, an impatient commuter shoved his way through the mass of people.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>Lexical triggers and sentences generated by Claude 3.5 Sonnet for the Force image schema.</p><p>To ensure the quality and relevance of our dataset, each entry was reviewed by domain experts with a background in image schemas. The experts evaluated the entries based on criteria such as clarity of Force representation, diversity of Force types, and linguistic naturalness of the example sentences. Any disagreements were resolved through discussion, and entries that did not meet the quality standards were replaced or refined through additional prompting sessions with the language model. The result is a manually curated synthetic dataset of 100 verbs expressing Force.</p><p>Table <ref type="table">1</ref> presents an excerpt from this curated dataset, showcasing three representative lexical units associated with different types of Force and their corresponding example sentences. This table not only demonstrates the diversity of Force-related verbs captured in our study but also illustrates how these verbs are contextualized in natural language use. The full dataset of 100 entries provides a useful extension of the IS Catalogue, with a focus on the Force image schema and its varied manifestations in language.</p></div>
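The validation described above was performed manually by experts. As an illustration only, the sketch below shows the kind of automatic consistency check that could complement such review, flagging entries whose example sentence does not actually realize its lexical trigger; the crude inflection heuristic is our assumption, not part of the authors' procedure.

```python
# Hypothetical consistency check for (verb, sentence) dataset entries;
# the inflection heuristic is deliberately simple and illustrative.

def inflections(verb):
    """Crude set of surface forms for matching a verb in a sentence."""
    base = verb.lower()
    forms = {base, base + "s", base + "ed", base + "ing"}
    if base.endswith("e"):
        # e.g. "shove" -> "shoved", "shoving"
        forms |= {base + "d", base[:-1] + "ing"}
    return forms

def check_entry(verb, sentence):
    """True if the sentence contains some inflected form of the verb."""
    words = {w.strip(".,;!?'\"").lower() for w in sentence.split()}
    return bool(words & inflections(verb))

entries = [("push", "The firefighter pushed against the heavy door."),
           ("pull", "With a strong pull on the rope, the sailor raised the mainsail."),
           ("shove", "An impatient commuter shoved his way through the crowd.")]
report = {verb: check_entry(verb, sent) for verb, sent in entries}
```

A check like this would only pre-filter obvious mismatches; the criteria the experts actually applied (clarity of Force representation, diversity, naturalness) require human judgment.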
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Knowledge Extraction</head><p>Following the generation and initial validation of our dataset, detailed in the previous section and shown in the pink boxes in Figure <ref type="figure" target="#fig_0">1</ref>, we proceeded with a knowledge extraction process to formalize and integrate the Force-related information into existing semantic resources.</p><p>We employed the AMR2FRED tool to process these sentences. AMR2FRED is a sophisticated natural language processing tool that converts text into Abstract Meaning Representation (AMR) graphs, and then passes the graph to FRED <ref type="bibr" target="#b33">[33]</ref>, which performs several tasks, among others: frame extraction, entity recognition, and entity alignment to the DOLCE foundational ontology <ref type="bibr" target="#b34">[34]</ref>. This tool is particularly valuable for our purposes as it preserves the semantic richness of natural language while producing structured, machine-readable representations. Figure <ref type="figure" target="#fig_1">2</ref> shows the graph automatically generated from the sentence "The strong current ripped the swimmer's goggles off her face."</p><p>The RDF graphs generated by AMR2FRED for each sentence are collected and stored in a dedicated knowledge base. This knowledge base serves as a centralized repository of formalized Force-related semantic structures derived from our curated examples. To extract relevant entities from this knowledge base, we developed a targeted SPARQL query, shown in Box <ref type="figure" target="#fig_2">4.2</ref>. This query is designed to identify and extract entities from PropBank <ref type="bibr" target="#b35">[35]</ref>, a lexical resource that provides a frame-like structure and semantic role labels for the English lexicon. 
Since the AMR2FRED output is a well-formed RDF graph, the verb used in the sentence is reified as an instantiation of a specific occurrence (represented as an individual), having as rdf:type the PropBank entity on which it is disambiguated. The reasoning behind this is that an occurrence of a certain verb is an instantiation of the general concept of that verb (in our case), which is represented in the graph via a subsumption relation. The extracted entities represent concepts and actions directly associated with the Force image schema as manifested in our dataset.</p><p>Finally, we enriched ImageSchemaNet, the existing ontology for image schemas, by adding these extracted entities as direct evocators of the Force image schema. This addition was implemented in a separate graph within ImageSchemaNet, allowing for clear provenance and easy integration or separation of our contribution. This process not only augments ImageSchemaNet with new, empirically derived Force-related concepts but also establishes a concrete link between abstract image schema theory and specific linguistic realizations of force. The resulting enhanced ontology provides a valuable resource for researchers and practitioners working at the intersection of cognitive linguistics, natural language processing, and knowledge representation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>SPARQL Query</head></div>
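The exact query appears only as a figure in the original document, so the following is a plausible reconstruction of the pattern it describes (our assumption, not the authors' code): select each reified verb occurrence together with the PropBank sense that types it. A tiny in-memory evaluator makes the pattern concrete; the prefixes `fred:`, `pb:`, and `vn.role:` are illustrative.

```python
# Reconstructed (assumed) SPARQL pattern: verb occurrences typed by
# a PropBank sense, filtered on the PropBank namespace.
SPARQL = """
SELECT ?occurrence ?propbankSense WHERE {
  ?occurrence rdf:type ?propbankSense .
  FILTER(STRSTARTS(STR(?propbankSense), "https://propbank.example/"))
}
"""

PB_PREFIX = "pb:"  # stands in for the PropBank namespace IRI

def run_pattern(triples):
    """Tiny in-memory evaluator for the basic graph pattern above:
    keep (subject, object) pairs linked by rdf:type into PropBank."""
    return [(s, o) for (s, p, o) in triples
            if p == "rdf:type" and o.startswith(PB_PREFIX)]

# Toy graph mimicking AMR2FRED output for the "ripped goggles" sentence.
graph = [
    ("fred:Rip_1", "rdf:type", "pb:rip.01"),
    ("fred:Rip_1", "vn.role:Agent", "fred:current_1"),
    ("fred:goggles_1", "rdf:type", "fred:Goggles"),
]
bindings = run_pattern(graph)
```

The namespace filter is what excludes FRED-internal classes (like `fred:Goggles`) so that only PropBank senses are declared as FORCE triggers.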
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions and Future Work</head><p>In this work, we have addressed the long-standing issue of underspecification in the Force image schema. Our research has made significant strides in bridging the gap between abstract theoretical constructs and practical, operational applications. By employing a novel hybrid pipeline that combines large language model interactions, domain experts' linguistic analysis, and knowledge extraction techniques, we have expanded upon Johnson's initial categorization of Force types. Our methodology, which included domain exploration using Claude 3.5 Sonnet, generation of Force-related verbs and contextual sentences, expert validation, and integration with existing semantic resources, has yielded several key achievements. First, we have developed a more nuanced and comprehensive understanding of the Force image schema, expanding beyond the original eight categories to include a wider range of force manifestations in language. Second, our approach has resulted in a validated dataset of 100 force-expressing verbs and their contextual uses, providing a valuable resource for future research in this area. Third, through the use of AMR2FRED tool processing and SPARQL querying, we have successfully integrated our findings into ImageSchemaNet, enhancing this ontology with empirically derived Force-related concepts. This integration establishes a concrete link between image schema theory and specific linguistic realizations of Force, opening new avenues for applications in natural language processing, knowledge representation, and cognitive computing.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: FORCE knowledge enrichment hybrid pipeline.</figDesc><graphic coords="3,342.77,214.67,180.51,320.87" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: AMR2FRED image for the sentence "The strong current ripped the swimmer's goggles off her face."</figDesc><graphic coords="6,72.00,65.61,451.28,104.87" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Box 4.2</head><label>4.2</label><figDesc>Box 4.2: SPARQL query to retrieve entities generated out of the original lexical unit in the graph, and the PropBank entity on which it is disambiguated.</figDesc></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgment</head><p>This work was supported by the Future Artificial Intelligence Research (FAIR) project, code PE00000013 CUP 53C22003630006.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Johnson</surname></persName>
		</author>
		<title level="m">The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason</title>
				<meeting><address><addrLine>Chicago and London</addrLine></address></meeting>
		<imprint>
			<publisher>The University of Chicago Press</publisher>
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">On the unexpected abilities of large language models</title>
		<author>
			<persName><forename type="first">S</forename><surname>Nolfi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Adaptive Behavior</title>
		<imprint>
			<biblScope unit="page">10597123241256754</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Choosing the right path: image schema theory as a foundation for concept invention</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Hedblom</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kutz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Neuhaus</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Artificial General Intelligence</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="21" to="54" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Metaphors we live by</title>
		<author>
			<persName><forename type="first">G</forename><surname>Lakoff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Johnson</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1980">1980</date>
			<publisher>University of Chicago press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">G</forename><surname>Lakoff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Johnson</surname></persName>
		</author>
		<title level="m">Philosophy in the flesh: The embodied mind and its challenge to western thought</title>
				<meeting><address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>Basic books</publisher>
			<date type="published" when="1999">1999</date>
			<biblScope unit="volume">640</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Mandler</surname></persName>
		</author>
		<title level="m">The foundations of mind: Origins of conceptual thought</title>
				<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">W</forename><surname>Langacker</surname></persName>
		</author>
		<title level="m">Foundations of cognitive grammar: Theoretical prerequisites</title>
				<imprint>
			<publisher>Stanford university press</publisher>
			<date type="published" when="1987">1987</date>
			<biblScope unit="volume">1</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Cognitive grammar</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">W</forename><surname>Langacker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Basic Readings</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">Image schemas: From linguistic analysis to neural grounding, From perception to meaning: Image schemas in cognitive linguistics</title>
		<author>
			<persName><forename type="first">E</forename><surname>Dodge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lakoff</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="57" to="91" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Corpus guided sense cluster analysis: a methodology for ontology development (with examples from the spatial domain)</title>
		<author>
			<persName><forename type="first">B</forename><surname>Bennett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Cialone</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">FOIS</title>
		<imprint>
			<biblScope unit="page" from="213" to="226" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Image schemas and gesture</title>
		<author>
			<persName><forename type="first">A</forename><surname>Cienki</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">From perception to meaning: Image schemas in cognitive linguistics</title>
		<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page" from="421" to="442" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">The invariance hypothesis: Is abstract reason based on image-schemas?</title>
		<author>
			<persName><forename type="first">G</forename><surname>Lakoff</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">The cognitive foundations of mathematics: The role of conceptual metaphor</title>
		<author>
			<persName><forename type="first">R</forename><surname>Núñez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lakoff</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The handbook of mathematical cognition</title>
				<imprint>
			<publisher>Psychology Press</publisher>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="109" to="124" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">An image-schematic account of spatial categories</title>
		<author>
			<persName><forename type="first">W</forename><surname>Kuhn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Spatial Information Theory</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="152" to="168" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Image schema combinations and complex events</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Hedblom</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kutz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Peñaloza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Guizzardi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">KI-Künstliche Intelligenz</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="279" to="291" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">An image schema language</title>
		<author>
			<persName><forename type="first">R</forename><surname>St. Amant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">T</forename><surname>Morrison</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y.-H</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">R</forename><surname>Cohen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Beal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 7th Int. Conf. on Cognitive Modeling (ICCM)</title>
				<meeting>of the 7th Int. Conf. on Cognitive Modeling (ICCM)</meeting>
		<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="292" to="297" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Between contact and support: Introducing a logic for image schemas and directed movement</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Hedblom</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kutz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mossakowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Neuhaus</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Conference of the Italian Association for Artificial Intelligence</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="256" to="268" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Asymmetric hybrids: Dialogues for computational concept combination</title>
		<author>
			<persName><forename type="first">G</forename><surname>Righetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Porello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Troquard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kutz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Hedblom</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Galliani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Formal Ontology in Information Systems</title>
				<imprint>
			<publisher>IOS Press</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="81" to="96" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">The moving apple: An image-schematic investigation into the Leuven concept database</title>
		<author>
			<persName><forename type="first">G</forename><surname>Righetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kutz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of The Seventh Image Schema Day co-located with The 20th International Conference on Principles of Knowledge Representation and Reasoning (KR 2023)</title>
				<meeting>The Seventh Image Schema Day co-located with The 20th International Conference on Principles of Knowledge Representation and Reasoning (KR 2023)<address><addrLine>Rhodes, Greece</addrLine></address></meeting>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2023-09-02">September 2nd, 2023. 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">ImageSchemaNet: Formalizing embodied commonsense knowledge providing an image-schematic layer to Framester</title>
		<author>
			<persName><forename type="first">S</forename><surname>De Giorgis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gangemi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Gromann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Semantic Web Journal</title>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note>forthcoming</note>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">The diagrammatic image schema language (DISL)</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Hedblom</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Neuhaus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mossakowski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Spatial Cognition &amp; Computation</title>
				<imprint>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="1" to="38" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">When English proposes what Greek presupposes: The cross-linguistic encoding of motion events</title>
		<author>
			<persName><forename type="first">A</forename><surname>Papafragou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Massey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Gleitman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cognition</title>
		<imprint>
			<biblScope unit="volume">98</biblScope>
			<biblScope unit="page" from="B75" to="B87" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">The embodied nature of medical concepts: image schemas and language for pain</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Prieto Velasco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tercedor Sánchez</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10339-013-0594-9</idno>
	</analytic>
	<monogr>
		<title level="j">Cognitive processing</title>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Body-mind-language: Multilingual knowledge extraction based on embodied cognition</title>
		<author>
			<persName><forename type="first">D</forename><surname>Gromann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Hedblom</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AIC</title>
		<imprint>
			<biblScope unit="page" from="20" to="33" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Kinesthetic mind reader: A method to identify image schemas in natural language</title>
		<author>
			<persName><forename type="first">D</forename><surname>Gromann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Hedblom</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of Advancements in Cognitive Systems</title>
				<meeting>Advancements in Cognitive Systems</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Systematic analysis of image schemas in natural language through explainable multilingual neural language processing</title>
		<author>
			<persName><forename type="first">L</forename><surname>Wachowiak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Gromann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 29th International Conference on Computational Linguistics</title>
				<meeting>the 29th International Conference on Computational Linguistics</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="5571" to="5581" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Retrieval-augmented generation for knowledge-intensive NLP tasks</title>
		<author>
			<persName><forename type="first">P</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Perez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Piktus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Petroni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Karpukhin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Küttler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W.-T</forename><surname>Yih</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Rocktäschel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Neural Information Processing Systems</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="9459" to="9474" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Edge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Trinh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bradley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mody</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Truitt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Larson</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2404.16130</idno>
		<title level="m">From local to global: A Graph RAG approach to query-focused summarization</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Supporting user interface design with image schemas: The ISCAT database as a research tool</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hurtienne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Huber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Baur</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ISD</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Amr2fred, a tool for translating abstract meaning representation to motif-based linguistic knowledge graphs</title>
		<author>
			<persName><forename type="first">A</forename><surname>Gangemi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Extended Semantic Web Conference (ESWC2017)</title>
				<meeting>the Extended Semantic Web Conference (ESWC2017)<address><addrLine>DEU</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="43" to="47" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Amr2fred, a tool for translating abstract meaning representation to motif-based linguistic knowledge graphs</title>
		<author>
			<persName><forename type="first">A</forename><surname>Meloni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Reforgiato Recupero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gangemi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Semantic Web: ESWC</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<monogr>
		<title level="m">ESWC 2017 Satellite Events</title>
				<meeting><address><addrLine>Portorož, Slovenia</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2017-06-01">May 28-June 1, 2017. 2017</date>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="43" to="47" />
		</imprint>
	</monogr>
	<note>Revised Selected Papers</note>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>De Giorgis</surname></persName>
		</author>
		<title level="m">Ethics in the flesh: formalizing moral values in embodied cognition</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Semantic web machine reading with FRED</title>
		<author>
			<persName><forename type="first">A</forename><surname>Gangemi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Presutti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Reforgiato Recupero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">G</forename><surname>Nuzzolese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Draicchio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mongiovì</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Semantic Web</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="873" to="893" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">DOLCE: A descriptive ontology for linguistic and cognitive engineering</title>
		<author>
			<persName><forename type="first">S</forename><surname>Borgo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ferrario</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gangemi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Guarino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Masolo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Porello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">M</forename><surname>Sanfilippo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Vieu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied ontology</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page" from="45" to="69" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">PropBank comes of age: larger, smarter, and more diverse</title>
		<author>
			<persName><forename type="first">S</forename><surname>Pradhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bonn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Myers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Conger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>O'gorman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Wright-Bettner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Palmer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 11th joint conference on lexical and computational semantics</title>
				<meeting>the 11th joint conference on lexical and computational semantics</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="278" to="288" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
