<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>May the FORCE be with Semantics: exploiting LLMs to Image Schematic Knowledge Enrichment</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Stefano De Giorgis</string-name>
          <email>stefano.degiorgis@cnr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Cognitive Science and Technologies - National Research Council (ISTC-CNR)</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <fpage>25</fpage>
      <lpage>28</lpage>
      <abstract>
        <p>This paper addresses the underspecification of the FORCE image schema. We present a novel hybrid pipeline that combines large language model interactions, linguistic analysis, and knowledge extraction techniques to expand upon Johnson's initial categorization of FORCE types. Our methodology employs Claude 3.5 Sonnet for domain exploration, generates a dataset of 100 force-expressing verbs with contextual sentences, and integrates findings into ImageSchemaNet through AMR2FRED processing and SPARQL querying. Key contributions include: (1) a more nuanced understanding of the FORCE image schema, (2) a validated dataset of force-related linguistic expressions, and (3) an enhanced ontology with empirically derived FORCE concepts. This work bridges the gap between abstract image schema theory and specific linguistic realizations of FORCE, offering practical tools for natural language processing, knowledge representation, and cognitive computing.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Image schemas, as fundamental cognitive constructs [1], have been instrumental in our understanding
of embodied cognition and conceptual metaphor theory. However, while certain image schemas have
been extensively investigated, others remain ambiguous, and a comprehensive, agreed-upon list of
these schemas continues to elude researchers. This lack of consensus poses significant challenges for
advancing the field and applying image schema theory across various domains, including knowledge
representation, natural language processing, and cognitive robotics.</p>
      <p>Large Language Models (LLMs) offer a promising avenue to address some of these challenges. As
the most extensive repositories of general approximate commonsense knowledge currently available,
LLMs have inadvertently internalized a degree of embodiment “by proxy” through the way language
is used to describe the world [2]. This linguistic representation is inherently grounded in embodied
cognition, reflecting how humans conceptualize and interact with their environment. Consequently,
LLMs possess a substantial amount of knowledge that is implicitly grounded in their training data,
potentially offering insights into image schemas that have yet to be fully explored or defined.</p>
      <p>
        In the realm of image schemas, Force remains notably underspecified compared to more thoroughly
explored schemas such as Source_Path_Goal and its related families, as highlighted by [3]. While
Johnson [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] provided an initial distinction of Force types, including Compulsion, Blockage,
Counterforce, Removal_Of_Restraint, Enablement, Diversion, Attraction, and Repulsion, this initial
categorization leaves many manifestations of force in language unaccounted for.
      </p>
      <p>This approach not only addresses the scarcity of annotated resources and comprehensive datasets
in the domain of image schemas but also opens up new possibilities for understanding how humans
conceptualize their experiences. By tapping into the implicit knowledge encoded in LLMs, we can
potentially bridge the gap between abstract image schema concepts and their concrete manifestations
in language and thought.</p>
      <p>The paper is organised as follows: Section 2 provides useful references to IS literature and in
particular about Force analysis; Section 3 details the hybrid approach; Section 4 shows our results and
discusses them; finally, Section 5 envisions future work and concludes the paper.</p>
      <p>The dataset, knowledge base, scripts, and full prompts will be made fully available at camera-ready
time.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Works</title>
      <p>
        The concept of image schemas (IS), introduced by Lakoff and Johnson [1, 4
        <xref ref-type="bibr" rid="ref5">, 5</xref>
        ], has evolved into a
foundational theory in cognitive linguistics. The process of perceptual meaning analysis (PMA) [6] in
children has provided valuable insights into how knowledge can be acquired through sensorimotor
interactions with the environment, and specifically through image schemas. These schemas are now
understood as sensorimotor cognitive patterns that shape our perception of the world and establish
semantic relations based on bodily experiences [7, 8,
        <xref ref-type="bibr" rid="ref10 ref11 ref9">9, 10, 11</xref>
        ].
      </p>
      <p>Furthermore, IS constitute a finite set of relational primitives that define the uses and affordances
of objects within their environments. Prominent examples include Containment, which represents
the capacity of one object to be enclosed within another, and Source_Path_Goal, which describes
the potential or actual movement of objects along specific trajectories. These schemas are not merely
static representations but serve as dynamic cognitive structures that underpin more complex reasoning
processes.</p>
      <p>Indeed, image schemas are widely recognized as foundational elements in human reasoning [12] and
have been demonstrated to evolve into sophisticated cognitive functions, including natural language
processing and the conceptualization of abstract entities [1, 13]. This evolution occurs through the
grounding of these schemas in experiential patterns, highlighting the embodied nature of cognition.</p>
      <p>
        Moreover, image schemas exhibit a remarkable capacity for combinatorial complexity. Consider the
concept of “transportation,” which can be abstracted beyond specific objects to represent the “movement
of object(s) from A to B.” In image-schematic terms, this can be formally described as a combination of
Source_Path_Goal with either Support or Containment [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. This combinatorial property allows
for the formal description of increasingly complex events through the construction of constellations
and sequences of image schemas, effectively creating state spaces of conceptual structures [15, 16].
      </p>
      <p>
        Recent advancements in image schema research have investigated the capabilities of this combinatorial
capacity, leading to the development of sophisticated analytical tools and frameworks. Notable among
these are the Image Schema Logic (ISL) [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], which explores the schemas’ compositional nature, studies
on their role in conceptual blending [18, 19], and the ImageSchemaNet ontology [20]. Additionally,
a diagrammatic image schema language has been proposed for visual representation [21], further
expanding the field’s analytical capabilities.
      </p>
      <p>While corpus-based studies [22, 23] and machine learning approaches [24, 25, 26] have made
significant strides in identifying image schemas in natural language, the challenge of comprehensive
image schema coverage remains an active area of research.</p>
      <p>On the other side, the unique position of LLMs as both products and reflectors of human language use
makes them valuable tools for investigating image schemas. By analyzing the patterns and structures
within LLM outputs, researchers may uncover new image schemas, clarify ambiguous ones, and
potentially work towards a more comprehensive list. Moreover, LLMs could be leveraged to generate
synthetic data that captures the nuances of image schemas in linguistic expressions, rapidly expanding
the available resources for studying these cognitive patterns.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>
        Retrieval Augmented Generation (RAG) [27] is a cross-disciplinary research topic which focuses on
refining models to make them able to retrieve precise information from large compressed amounts of
(usually textual) data, traditionally in the form of vector embeddings. Recent advancements in knowledge
extraction have made graph generation from text a relatively straightforward process. The recent
emergence of Graph-RAG [2
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and parallel techniques has demonstrated the feasibility of generating
generic subject-predicate-object triples from textual input, even when using “smaller” language models.
This capability has opened up new possibilities for automatically triplifying information extracted from
unstructured text.
      </p>
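      <p>To make the notion of “triplifying” concrete, here is a deliberately naive, illustrative sketch of our own (not the Graph-RAG machinery, which relies on an LLM rather than string rules) that maps a simple subject-verb-object sentence to a subject-predicate-object triple:</p>

```python
# Illustrative only: real Graph-RAG extraction uses an LLM, not string rules.
# This toy splitter assumes simple "subject verb object" sentences.
def naive_triplify(sentence: str) -> tuple[str, str, str]:
    # Strip final punctuation and split on whitespace.
    tokens = sentence.rstrip(".").split()
    if len(tokens) < 3:
        raise ValueError("expected at least subject, predicate, object")
    # First token = subject, second = predicate, rest = object phrase.
    return tokens[0], tokens[1], " ".join(tokens[2:])

triple = naive_triplify("Current ripped goggles.")  # → ("Current", "ripped", "goggles")
```

      <p>Real systems must of course handle arbitrary syntax, which is precisely why LLM-based extraction is attractive here.</p>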
      <p>However, the true challenge lies in aligning this extracted information with existing knowledge
structures, such as ontologies or conceptual schemas in knowledge bases, and effectively leveraging
existing semantic web resources. To address this challenge, our methodology, shown in Figure 1, builds
upon previous work, including ImageSchemaNet [20], and enriches the formalized knowledge through
two primary approaches. First, we employ a chain-of-thought prompting technique for knowledge
elicitation from large language models (LLMs), as detailed in subsequent sections. This process allows
us to tap into the vast knowledge encoded in LLMs while maintaining a structured approach to
information extraction.</p>
      <p>
        Our methodology also yields a valuable by-product: synthetic data augmentation for the Image
Schema (IS) Catalogue [2
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], which serves as the primary resource for image schemas. We process the generated examples
through AMR2FRED [30, 31], a tool capable of creating proper RDF graphs from text via Abstract
Meaning Representation, and then use SPARQL queries to extract entities that can be declared as
triggers for the Force image schema in existing repositories. This approach results in a hybrid pipeline
that combines LLM-generated information (pink boxes in Figure 1) with symbolic knowledge extraction
(light blue boxes). To ensure the quality and relevance of our results, the final synthetic dataset of 100
sentences undergoes manual validation by domain experts, providing a robust foundation for further
research and applications in the field of image schemas and its real-world applications.
      </p>
      <p>Figure 1: FORCE knowledge enrichment hybrid pipeline.</p>
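      <p>At a very high level, this pipeline can be sketched as a composition of stages; every function name and the stub data below are our own illustration of the flow, not the released artifacts:</p>

```python
# High-level sketch of the hybrid pipeline (all names are illustrative stubs).

# Stage 1 (LLM, pink boxes): verb + example sentence pairs, normally
# elicited from Claude 3.5 Sonnet via chain-of-thought prompting.
def generate_examples() -> list[dict]:
    return [
        {"verb": "push", "sentence": "The firefighter pushed against the heavy door."},
        {"verb": "compress", "sentence": "The press compressed the sheet into a disc."},
    ]

# Stage 2 (symbolic, light-blue boxes): AMR2FRED would return an RDF graph;
# here each sentence yields a toy triple typing a reified verb occurrence.
def to_triples(entry: dict) -> list[tuple]:
    occ = entry["verb"] + "_1"             # reified occurrence individual
    frame = "pb:" + entry["verb"] + ".01"  # PropBank-style frame id (assumed)
    return [(occ, "rdf:type", frame)]

# Stage 3: SPARQL-style selection of entities typed by a PropBank frame.
def extract_triggers(triples: list[tuple]) -> set[str]:
    return {s for (s, p, o) in triples if p == "rdf:type" and o.startswith("pb:")}

# Stage 4: declare the extracted entities as FORCE evocators in a named graph.
def enrich(triggers: set[str]) -> list[tuple]:
    return [(t, "isn:evokes", "isn:FORCE") for t in sorted(triggers)]

triples = [t for e in generate_examples() for t in to_triples(e)]
enrichment = enrich(extract_triggers(triples))
```

      <p>The point of the sketch is the separation of concerns: the LLM stage only produces text, and all alignment to ImageSchemaNet happens in the symbolic stages.</p>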
    </sec>
    <sec id="sec-4">
      <title>4. Experiments and Discussion</title>
      <p>This section details our experimental approach to exploring and formalizing the Force image schema,
combining LLM interactions, linguistic analysis, and knowledge extraction techniques. Our methodology
encompasses initial domain exploration, focused data generation, and the integration of derived
knowledge into existing semantic web resources.</p>
      <sec id="sec-4-1">
        <title>4.1. Generative Knowledge Enrichment</title>
        <p>For the generative task we used a state-of-the-art model: Claude 3.5 Sonnet, a large language
model known for its comprehensive knowledge base and nuanced ability in describing even complex
concepts. The whole generation process is freely replicable since it is done via the open chat interface.
Our experimental approach commenced with a broad domain exploration, centered around the following
fundamental question.</p>
        <p>List which kind of forces can activate the FORCE image schema.</p>
        <p>The model provided a detailed list of generic Force types, which included: physical forces,
psychological forces, social forces, emotional forces, gravitational forces, electromagnetic forces, nuclear forces
(strong and weak), frictional forces, tensile and compressive forces, and centripetal and centrifugal
forces. This initial output served as a foundation for our subsequent investigation, offering a diverse
range of force categories that could potentially activate the Force image schema.</p>
        <p>Building upon this initial output, we employed a chain-of-thought prompting technique to generate
a controlled vocabulary for each of these Force types. This method involved asking the model to
elaborate on each force category, providing examples and related concepts.</p>
        <p>Now for each of these points generate a list of terms which can be used as controlled
vocabulary to generate a knowledge base.</p>
        <p>
          After careful analysis of the results, we made a strategic decision to focus specifically on physical
forces for several compelling reasons. Firstly, we observed that some of the other categories, such as
psychological and social forces, often represented metaphorical extensions of physical forces. These
metaphorical uses, while interesting, were already grounded in image-schematic concepts and would
potentially introduce complexity in distinguishing between literal and figurative applications of Force.
Additionally, certain categories, such as emotional forces, were deemed too generic and abstract for
our purposes. Moreover, these emotional aspects had been previously addressed in existing literature,
notably in [3
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], which provided a treatment of emotional forces in relation to image schemas.
        </p>
        <p>Our next step involved a more focused prompt aimed at extracting a “complete list of verbs expressing
Forces.” We instructed the model to concentrate solely on physical forces and provide a comprehensive
list of verbs that evoke any Force idea.</p>
        <p>Ok, now focus only on physical forces and provide a list of all verbs which evokes any
FORCE idea.</p>
        <p>The prompt was carefully crafted to elicit a wide range of verbs while maintaining relevance to
physical manifestations of Force. This process resulted in a collection of 100 verbs expressing various
types of Force, ranging from common actions like “push” and “pull” to more specific verbs like “torque”
and “propel.” To contextualize these verbs and ensure their applicability, we then requested linguistic
examples that demonstrate specific occurrences of Force for each item on the list.</p>
        <p>Now provide a sentence as example for each item in this list, which realizes a
situation of FORCE.</p>
        <p>The prompt for this stage was designed to generate diverse, realistic sentences that clearly illustrated
the Force concept embodied by each verb.</p>
        <p>This iterative prompting process yielded a final output of 100 lines, each containing a lexical unit
pointing to a type of Force, accompanied by a sentence illustrating its realization in context. For
instance, one entry might include the verb “compress” along with the example sentence “The hydraulic
press compressed the metal sheet into a thin disc.” This comprehensive collection serves as a valuable
resource for further analysis and application of the Force (and possibly other co-occurring) image
schema(s) in the field. To ensure the quality and relevance of our dataset, each entry was reviewed by
domain experts with previous background in image schemas. The experts evaluated the entries based
on criteria such as clarity of Force representation, diversity of Force types, and linguistic naturalness
of the example sentences. Any disagreements were resolved through discussion, and entries that did
not meet the quality standards were replaced or refined through additional prompting sessions with
the language model. This results in a synthetic dataset, manually curated, listing 100 verbs expressing
Force.</p>
        <table-wrap id="tab1">
          <label>Table 1</label>
          <caption><p>Excerpt from the curated dataset of Force-expressing verbs.</p></caption>
          <table>
            <thead>
              <tr><th>Lexical Trigger</th><th>Example Sentence</th></tr>
            </thead>
            <tbody>
              <tr><td>Push</td><td>The firefighter pushed against the heavy door with all his might to rescue those trapped inside.</td></tr>
              <tr><td>Pull</td><td>With a strong pull on the rope, the sailor raised the mainsail against the wind’s resistance.</td></tr>
              <tr><td>Shove</td><td>In the crowded subway, an impatient commuter shoved his way through the mass of people.</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>Table 1 presents an excerpt from this curated dataset, showcasing three representative lexical units
associated with different types of Force and their corresponding example sentences. This table not
only demonstrates the diversity of Force-related verbs captured in our study but also illustrates how
these verbs are contextualized in natural language use. The full dataset of 100 entries provides a useful
extension of the IS Catalogue, with a focus on the Force image schema and its varied manifestations in
language.</p>
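        <p>The expert review criteria described above also lend themselves to simple automated pre-checks before manual validation; the following sketch is our own illustration (the entry format and checks are assumptions, not the authors' released scripts):</p>

```python
# Minimal automated pre-checks on dataset entries before expert review.
# Entry format and checks are our own illustration, not the released tooling.
def precheck(entries: list[dict]) -> list[str]:
    problems = []
    seen = set()
    for e in entries:
        verb, sent = e["verb"].lower(), e["sentence"].lower()
        if verb in seen:                  # duplicate lexical trigger
            problems.append(f"duplicate verb: {verb}")
        seen.add(verb)
        # Crude stem match: the sentence should realize the verb
        # (covers inflections such as "pushed", "compressed").
        if verb[:4] not in sent:
            problems.append(f"verb not realized: {verb}")
    return problems

rows = [
    {"verb": "push", "sentence": "The firefighter pushed against the heavy door."},
    {"verb": "pull", "sentence": "The sailor raised the mainsail."},  # fails the check
]
issues = precheck(rows)  # → ["verb not realized: pull"]
```

        <p>Checks of this kind only filter obvious defects; judgments about Force representation and naturalness still require the human experts.</p>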
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Knowledge Extraction</title>
        <p>Following the generation and initial validation of our dataset, detailed in the previous section and
shown in pink boxes in Figure 1, we proceeded with a knowledge extraction process to formalize and
integrate the Force-related information into existing semantic resources.</p>
        <p>We employed the AMR2FRED tool to process these sentences. AMR2FRED is a sophisticated natural
language processing tool that converts text into Abstract Meaning Representation (AMR) graphs, and
then passes the graph to FRED [33], which performs several tasks, among others: frame extraction,
entity recognition, and entity alignment to the DOLCE foundational ontology [34]. This tool is particularly
valuable for our purposes as it preserves the semantic richness of natural language while producing
structured, machine-readable representations. Figure 2 shows the graph automatically generated from
the sentence “The strong current ripped the swimmer’s goggles off her face.”</p>
        <p>The RDF graphs generated by AMR2FRED for each sentence are collected and stored in a dedicated
knowledge base. This knowledge base serves as a centralized repository of formalized Force-related
semantic structures derived from our curated examples. To extract relevant entities from this knowledge
base, we developed a targeted SPARQL query, shown in Box 4.2. This query is designed to identify
and extract entities from PropBank [35], a lexical resource that provides a frame-like structure and
semantic role labels for the English lexicon. Since the AMR2FRED output is a well-formed RDF graph, the verb
used in the sentence is reified as an instantiation of a specific occurrence (represented as an individual),
having as rdf:type the PropBank entity on which it is disambiguated. The reasoning behind this is that
an occurrence of a certain verb is the instantiation of the general concept of that verb (in our case), which
is represented on the graph via a subsumption relation. The extracted entities represent concepts and
actions directly associated with the Force image schema as manifested in our dataset.</p>
        <p>Finally, we enriched ImageSchemaNet, the existing ontology for image schemas, by adding these
extracted entities as direct evocators of the Force image schema. This addition was implemented
in a separate graph within ImageSchemaNet, allowing for clear provenance and easy integration or
separation of our contribution. This process not only augments ImageSchemaNet with new, empirically
derived Force-related concepts but also establishes a concrete link between abstract image schema
theory and specific linguistic realizations of force.</p>
        <p>SPARQL Query
SELECT DISTINCT ?e
WHERE {
  ?e rdf:type ?pb_entity .
}</p>
        <p>Box 4.2 SPARQL query to retrieve the entities generated from the original lexical unit in the graph, and the
PropBank entity on which it is disambiguated.</p>
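        <p>The selection logic of the query above can be emulated in plain Python over a toy triple set (in practice rdflib or a triple store would run the SPARQL directly; the <monospace>pbdata:</monospace> prefix restriction below is our assumption about how PropBank entities are namespaced):</p>

```python
# Emulation of the Box 4.2 selection: DISTINCT ?e WHERE { ?e rdf:type ?pb_entity }
# restricted here to PropBank-namespaced types (assumed prefix "pbdata:").
triples = [
    ("fred:rip_1", "rdf:type", "pbdata:rip.01"),      # reified verb occurrence
    ("fred:goggles_1", "rdf:type", "fred:Goggles"),   # non-PropBank entity
    ("fred:rip_1", "fred:involves", "fred:goggles_1"),
]

entities = sorted({s for (s, p, o) in triples
                   if p == "rdf:type" and o.startswith("pbdata:")})
# → ["fred:rip_1"]
```

        <p>The reified occurrence is what gets declared as a Force evocator, while non-PropBank individuals such as the goggles are left out of the enrichment.</p>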
        <p>The resulting enhanced ontology provides a valuable resource for researchers and practitioners
working at the intersection of cognitive linguistics, natural language processing, and knowledge
representation.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and Future Works</title>
      <p>In this work, we have addressed the long-standing issue of underspecification in the Force image
schema. Our research has made significant strides in bridging the gap between abstract theoretical
constructs and practical, operational applications. By employing a novel hybrid pipeline that combines
large language model interactions, domain-expert linguistic analysis, and knowledge extraction
techniques, we have expanded upon Johnson’s initial categorization of Force types. Our methodology, which
included domain exploration using Claude 3.5 Sonnet, generation of Force-related verbs and contextual
sentences, expert validation, and integration with existing semantic resources, has yielded several
key achievements. First, we have developed a more nuanced and comprehensive understanding of
the Force image schema, expanding beyond the original eight categories to include a wider range
of force manifestations in language. Second, our approach has resulted in a validated dataset of 100
force-expressing verbs and their contextual uses, providing a valuable resource for future research in this
area. Third, through the use of AMR2FRED tool processing and SPARQL querying, we have successfully
integrated our findings into ImageSchemaNet, enhancing this ontology with empirically derived
Force-related concepts. This integration establishes a concrete link between image schema theory and specific
linguistic realizations of Force, opening new avenues for applications in natural language processing,
knowledge representation, and cognitive computing.</p>
      <p>This work was supported by the Future Artificial Intelligence Research (FAIR) project, code PE00000013
CUP 53C22003630006.</p>
      <p>[22] A. Papafragou, C. Massey, L. Gleitman, When English proposes what Greek presupposes: The
cross-linguistic encoding of motion events, Cognition 98 (2006) B75–B87.
[23] J. A. Prieto Velasco, M. Tercedor Sánchez, The embodied nature of medical concepts: image
schemas and language for pain, Cognitive Processing (2014). doi:10.1007/s10339-013-0594-9.
[24] D. Gromann, M. M. Hedblom, Body-mind-language: Multilingual knowledge extraction based on
embodied cognition, in: AIC, 2017, pp. 20–33.
[25] D. Gromann, M. M. Hedblom, Kinesthetic mind reader: A method to identify image schemas in
natural language, in: Proceedings of Advancements in Cognitive Systems, 2017.
[26] L. Wachowiak, D. Gromann, Systematic analysis of image schemas in natural language through
explainable multilingual neural language processing, in: Proceedings of the 29th International
Conference on Computational Linguistics, 2022, pp. 5571–5581.
[27] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih,
T. Rocktäschel, et al., Retrieval-augmented generation for knowledge-intensive NLP tasks, Advances
in Neural Information Processing Systems 33 (2020) 9459–9474.
[28] D. Edge, H. Trinh, N. Cheng, J. Bradley, A. Chao, A. Mody, S. Truitt, J. Larson, From local to global:
A graph RAG approach to query-focused summarization, arXiv preprint arXiv:2404.16130 (2024).
[29] J. Hurtienne, S. Huber, C. Baur, Supporting user interface design with image schemas: The ISCAT
database as a research tool, in: ISD, 2022.
[30] A. Gangemi, et al., Amr2fred, a tool for translating abstract meaning representation to
motif-based linguistic knowledge graph, in: Proceedings of the Extended Semantic Web Conference
(ESWC2017), DEU, 2017, pp. 43–47.
[31] A. Meloni, D. Reforgiato Recupero, A. Gangemi, Amr2fred, a tool for translating abstract meaning
representation to motif-based linguistic knowledge graphs, in: The Semantic Web: ESWC 2017
Satellite Events, Portorož, Slovenia, May 28–June 1, 2017, Revised Selected Papers 14, Springer,
2017, pp. 43–47.
[32] S. De Giorgis, Ethics in the flesh: formalizing moral values in embodied cognition (2023).
[33] A. Gangemi, V. Presutti, D. Reforgiato Recupero, A. G. Nuzzolese, F. Draicchio, M. Mongiovì,
Semantic web machine reading with FRED, Semantic Web 8 (2017) 873–893.
[34] S. Borgo, R. Ferrario, A. Gangemi, N. Guarino, C. Masolo, D. Porello, E. M. Sanfilippo, L. Vieu,
DOLCE: A descriptive ontology for linguistic and cognitive engineering, Applied Ontology 17 (2022)
45–69.
[35] S. Pradhan, J. Bonn, S. Myers, K. Conger, T. O’Gorman, J. Gung, K. Wright-Bettner, M. Palmer,
PropBank comes of age—larger, smarter, and more diverse, in: Proceedings of the 11th Joint
Conference on Lexical and Computational Semantics, 2022, pp. 278–288.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Johnson</surname>
          </string-name>
          ,
          <article-title>The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason</article-title>
          , The University of Chicago Press, Chicago and London,
          <year>1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Nolfi</surname>
          </string-name>
          ,
          <article-title>On the unexpected abilities of large language models</article-title>
          ,
          <source>Adaptive Behavior</source>
          (
          <year>2023</year>
          )
          <fpage>10597123241256754</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Hedblom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Neuhaus</surname>
          </string-name>
          ,
          <article-title>Choosing the right path: image schema theory as a foundation for concept invention</article-title>
          ,
          <source>Journal of Artificial General Intelligence</source>
          <volume>6</volume>
          (
          <year>2015</year>
          )
          <fpage>21</fpage>
          -
          <lpage>54</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Lakoff</surname>
          </string-name>
          , M. Johnson, Metaphors we live by, University of Chicago press,
          <year>1980</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Lakoff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Johnson</surname>
          </string-name>
          , et al.,
          <article-title>Philosophy in the flesh: The embodied mind and its challenge to western thought</article-title>
          , volume
          <volume>640</volume>
          ,
          Basic Books, New York,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Mandler</surname>
          </string-name>
          ,
          <article-title>The foundations of mind: Origins of conceptual thought</article-title>
          , Oxford University Press,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R. W.</given-names>
            <surname>Langacker</surname>
          </string-name>
          ,
          <article-title>Foundations of cognitive grammar: Theoretical prerequisites</article-title>
          , volume
          <volume>1</volume>
          , Stanford university press,
          <year>1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>R. W.</given-names>
            <surname>Langacker</surname>
          </string-name>
          , Cognitive grammar,
          <source>Basic Readings</source>
          <volume>29</volume>
          (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>E.</given-names>
            <surname>Dodge</surname>
          </string-name>
          , G. Lakoff,
          <article-title>Image schemas: From linguistic analysis to neural grounding, From perception to meaning: Image schemas in cognitive linguistics (</article-title>
          <year>2005</year>
          )
          <fpage>57</fpage>
          -
          <lpage>91</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>B.</given-names>
            <surname>Bennett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cialone</surname>
          </string-name>
          ,
          <article-title>Corpus guided sense cluster analysis: a methodology for ontology development (with examples from the spatial domain)</article-title>
          ., in: FOIS,
          <year>2014</year>
          , pp.
          <fpage>213</fpage>
          -
          <lpage>226</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Cienki</surname>
          </string-name>
          ,
          <article-title>Image schemas and gesture, From perception to meaning: Image schemas in cognitive linguistics 29 (</article-title>
          <year>2005</year>
          )
          <fpage>421</fpage>
          -
          <lpage>442</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>G.</given-names>
            <surname>Lakoff</surname>
          </string-name>
          ,
          <article-title>The invariance hypothesis: Is abstract reason based on image-schemas?</article-title>
          (
          <year>1990</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>R.</given-names>
            <surname>Núñez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Lakoff</surname>
          </string-name>
          ,
          <article-title>The cognitive foundations of mathematics: The role of conceptual metaphor, in: The handbook of mathematical cognition</article-title>
          , Psychology Press,
          <year>2005</year>
          , pp.
          <fpage>109</fpage>
          -
          <lpage>124</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>W.</given-names>
            <surname>Kuhn</surname>
          </string-name>
          ,
          <article-title>An image-schematic account of spatial categories</article-title>
          ,
          <source>in: International Conference on Spatial Information Theory</source>
          , Springer,
          <year>2007</year>
          , pp.
          <fpage>152</fpage>
          -
          <lpage>168</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Hedblom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Peñaloza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Guizzardi</surname>
          </string-name>
          ,
          <article-title>Image schema combinations and complex events</article-title>
          ,
          <source>KI-Künstliche Intelligenz</source>
          <volume>33</volume>
          (
          <year>2019</year>
          )
          <fpage>279</fpage>
          -
          <lpage>291</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>R.</given-names>
            <surname>St. Amant</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. T.</given-names>
            <surname>Morrison</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-H.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. R.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Beal</surname>
          </string-name>
          ,
          <article-title>An image schema language</article-title>
          ,
          <source>in: Proc. of the 7th Int. Conf. on Cognitive Modeling (ICCM)</source>
          ,
          <year>2006</year>
          , pp.
          <fpage>292</fpage>
          -
          <lpage>297</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Hedblom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mossakowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Neuhaus</surname>
          </string-name>
          ,
          <article-title>Between contact and support: Introducing a logic for image schemas and directed movement</article-title>
          ,
          <source>in: Conference of the Italian Association for Artificial Intelligence</source>
          , Springer,
          <year>2017</year>
          , pp.
          <fpage>256</fpage>
          -
          <lpage>268</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>G.</given-names>
            <surname>Righetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Hedblom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Galliani</surname>
          </string-name>
          ,
          <article-title>Asymmetric hybrids: Dialogues for computational concept combination</article-title>
          ,
          <source>in: Formal Ontology in Information Systems</source>
          , IOS Press,
          <year>2021</year>
          , pp.
          <fpage>81</fpage>
          -
          <lpage>96</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>G.</given-names>
            <surname>Righetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <article-title>The moving apple: An image-schematic investigation into the leuven concept database</article-title>
          ,
          <source>in: Proceedings of The Seventh Image Schema Day co-located with The 20th International Conference on Principles of Knowledge Representation and Reasoning (KR 2023)</source>
          , Rhodes, Greece, September 2nd, 2023, CEUR-WS,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>S.</given-names>
            <surname>De Giorgis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gangemi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Gromann</surname>
          </string-name>
          ,
          <article-title>ImageSchemaNet: Formalizing embodied commonsense knowledge providing an image-schematic layer to Framester</article-title>
          ,
          <source>Semantic Web Journal</source>
          , forthcoming (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Hedblom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Neuhaus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mossakowski</surname>
          </string-name>
          ,
          <article-title>The diagrammatic image schema language (DISL)</article-title>
          ,
          <source>Spatial Cognition &amp; Computation</source>
          (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>