<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>What It Tends to Do: Defining Qualitative Parameter Regions by Their Effects on Physical Behavior</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mihai Pomarlan</string-name>
          <email>pomarlan@uni-bremen.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Guendalina Righetti</string-name>
          <email>guendalina.righetti@stud-inf.unibz.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>John A. Bateman</string-name>
          <email>bateman@uni-bremen.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Linguistics, University of Bremen</institution>
          ,
          <addr-line>Bibliothekstraße 1, 28359 Bremen</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>KRDB Research Centre for Knowledge and Data, Free University of Bozen-Bolzano</institution>
          ,
<addr-line>piazza Domenicani 3, 39100 Bolzano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>The Sixth Image Schema Day</institution>
          ,
          <addr-line>ISD6</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <abstract>
<p>Qualitative descriptions of object arrangements and behaviors (e.g. terms such as “near” or “fast”) can be interpreted as placing constraints on the values of the parameters they refer to, but they do so in a context-dependent way. As such, it is problematic to define such qualitative terms by value regions. Instead, in this paper we propose “functional” definitions: a qualitative term’s meaning is defined by the effect on behavior of having that quality. The resulting functional definitions are context-dependent. To obtain a more general semantics for a qualitative term, we use concept blending to combine its functional definitions coming from several contexts. We then describe how a general, functional definition can be specialized to a new context, how this can be useful in transferring existing knowledge, and illustrate this with an example, raising the possibility that appropriate specializations may be supported with image schemas.</p>
      </abstract>
      <kwd-group>
        <kwd>embodied cognition</kwd>
        <kwd>cognitive robotics</kwd>
        <kwd>concept blending</kwd>
        <kwd>commonsense reasoning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Interest in commonsense reasoning has grown in the past few years, especially with autonomous
artificial agents moving into the physical world at scales which may pose dangers to human
beings. We want our artificial agents to predict the outcome of their actions, to understand the
world around them, and to do so in a way that we can check and understand ourselves.</p>
<p>Such computational models remain elusive, largely because it is difficult to elicit commonsense
knowledge from humans – it is too obvious and too ingrained. However, we suspect there is
also another problem coming from a fundamental tension between requirements placed on
commonsense reasoning, whatever it turns out to be. On the one hand, it should “generalize”:
commonsense principles should allow an agent mastering them to cope with situations it has
not encountered before. This is, after all, the main point of investing in cognition. On the other
hand, commonsense inference is extremely situated and dependent on context.
</p>
<p>Language provides many examples: a spoon may be near a cup, which contains coffee from
a nearby store, which reflects the light coming from a near star called the Sun. Many orders
of magnitude separate the distances mentioned, yet each was described as “near”. The same
applies to every word we use to refer to some set of parameter values. “Hot” means one thing
when applied to the weather, and another when applied to the cores of stars.</p>
      <p>
        It has been argued that attempts to interpret spatial prepositions as describing, even
qualitatively, an arrangement of objects in a context-free manner will quickly encounter problems [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
But surrendering to context entirely and treating words such as “near” or “hot” as meaning
entirely different things in different situations wastes commonalities between the various uses
of a word and appears to miss cognitively relevant generalizations. We do not say “the spoon is
near the cup” and “the Earth is near the Sun” merely by linguistic accident. More likely is that
a familiarity with a kind of situation, such as arrangements of tableware, is leveraged to help
understand another situation, such as the relative placements of celestial bodies. This transfer
of knowledge from one situation to another suggests that qualitative labels for a parameter
are paired with mechanisms to pick situation-appropriate regions of the parameter’s possible
values. In this paper, we describe one method such a mechanism might use.
      </p>
      <p>To begin, we will assume an agent has obtained, from (potentially simulated) experience,
symbolic rules to predict behaviors of some situations. Such predictive symbolic rules map
qualitative descriptions of initial arrangements to qualitative descriptions of observed behaviors,
and provide an interpretable mechanism to predict the environment and adjust action. However,
the rules offer a second benefit: a “functional” definition for a qualitative label, in the situations
to which the rules apply. The qualitative label represents the parameter values that will
make an arrangement, obeying these other qualitative constraints, behave in this qualitative
way. Different situations, different arrangements and behaviors, will result in different, and
potentially incompatible, functional definitions of the same qualitative label. To resolve these
incompatibilities, we use concept-blending techniques and axiom weakening, to produce more
general definitions that remain “functional” in the above sense: they define a qualitative label
in terms of the more general behavior of a more general class of arrangements.</p>
      <p>The more general functional definition of a qualitative label can then be rendered more
specific once the particularities of a new situation are known. We conjecture that this will
result in quicker learning for situations that are novel but in some sense similar to previously
encountered ones, because the more general, functional definitions for qualitative labels and
their efects on behavior will make knowledge transfer easier.</p>
      <p>
        We illustrate our proposal with a running example, interpreting the qualitative label “heavy”.
We assume a set of symbolic rules describing the behavior of heavy objects are already known
to the agent, and describe how, via an algorithm similar to dialog-based concept blending [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ],
these rules can be combined into one more general definition of “heavy”. We then produce a new,
more specialized definition, intended to apply to a new kind of situation, i.e. one for which the
agent does not yet possess a definition of heavy, nor a predictive rule.
      </p>
      <p>
Our example uses the Description Logic ALC, on which the current implementation of concept
blending operates, and so, due to expressivity limitations, does not quite capture human
intuitions. However, our approach is not limited to ALC, because the concept-blending algorithm
is not limited to ALC [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. To address the expressivity limitation, we intend to eventually employ
Image Schema Logic (ISL) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] for the writing of the predictive symbolic rules.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <sec id="sec-2-1">
        <title>2.1. Axiom Weakening and Concept Blending</title>
        <p>
          Axiom weakening is a recently introduced technique for repairing inconsistent ontologies
by weakening, instead of removing, their axioms. The main advantage of this technique is it
preserves as much information as possible while maintaining the ontologies’ consistency [
          <xref ref-type="bibr" rid="ref5 ref6 ref7">5, 6, 7</xref>
          ].
Often, axiom weakening relies on refinement operators, such as specialisation and generalisation.
In [
          <xref ref-type="bibr" rid="ref8">8</xref>
] a concept refinement operator is introduced to generalise EL++ concepts in the context of
concept invention. In [
          <xref ref-type="bibr" rid="ref5">5</xref>
] a similar line of work was extended to ALC axioms, proposing different
algorithms to repair inconsistent ontologies and analyzing their computational complexity.
In [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], similar refinement operators are exploited for ontology aggregation. The previous
definitions have also been extended to deal with SROIQ constructs [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
        <p>
          In what follows, we refer to the refinement operators formally introduced in [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. Informally,
a generalisation (resp. specialisation) operator is a function that takes a concept C and returns
the set of the super-concepts (resp. sub-concepts) of C, given a reference ontology.
        </p>
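        <p>Informally, such an operator can be sketched over a toy taxonomy. The concept names and the child-to-parent map below are hypothetical, and real refinement operators range over full concept expressions relative to an ontology; this is only a minimal illustration of the generalisation direction:</p>
        <p>
```python
# A toy generalisation operator: given a concept name, return its strict
# super-concepts according to a reference taxonomy, here a hypothetical
# child-to-parent map (real operators work on full concept expressions).
TAXONOMY = {
    "Hammer": "Tool",
    "Tool": "PhysicalObject",
    "Lid": "PhysicalObject",
    "PhysicalObject": "Thing",
}

def generalisations(concept):
    """Return the set of strict super-concepts of `concept`."""
    supers = set()
    while concept in TAXONOMY:
        concept = TAXONOMY[concept]
        supers.add(concept)
    return supers
```
        </p>
        <p>For example, generalising the hypothetical concept Hammer walks up the taxonomy through Tool and PhysicalObject to Thing.</p>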
<p>In our context, axiom weakening is used to resolve incompatibilities that
may arise when merging, or blending, functional symbolic rules learned in different,
potentially conflicting, contexts. Since the functional rules are expressed as ALC axioms, when an
incompatibility arises, we exploit axiom weakening techniques to restore consistency.</p>
<p>As an example, our system may identify different functional rules concerning the concept
heavy in different situations – for instance, keeping paper on a table so that the air won’t blow it
away (stabilizing, preventing motion) vs. using something heavy to drive a nail through a piece
of wood (causing/enabling motion against some resistance). When trying to merge such functional
definitions, inconsistencies may then arise in the definition of the concept heavy.</p>
        <p>
          Related issues are analysed in the context of computational conceptual blending (CCB). As a
theory of cognition, conceptual blending was proposed to model conceptual integration and
creativity, which is seen as arising from the conceptual blend of different input spaces through
analogical mapping [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Computational conceptual blending studies formal strategies to allow
the integration of possibly conflicting input spaces. This process may be easy for human beings,
thanks to the flexibility of human concepts, but is not trivial in AI (for an extreme case, see [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]).
CCB often relies on the identification of shared structures between different input spaces, and
on the identification of a generic space to steer the blending process (see e.g. [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]).
        </p>
        <p>
          As observed above, refinement operators have been applied to conceptual blending and
concept invention in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] further carries out this line of research, by proposing a general
workflow and a formal reconstruction of the conceptual blending process, including axiom
weakening in the picture. Relatedly, focusing on the noun-noun combination literature
in cognitive linguistics, [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] proposes a computational treatment of impossible
combinations in the context of formal ontologies through the procedure of axiom weakening.
In line with this work, we sketch an extension of the algorithm proposed there to manage,
through the generalisations allowed by the axiom weakening procedure, definitions of concepts
emerging from different learning contexts (see section 5).
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Natural Language Understanding and Commonsense Reasoning</title>
        <p>
          There is an active search for computational implementations of commonsense reasoning, to
enable artificial agents to cope with the physical and social world of humans in a human-like
way. This goal however remains elusive. Even when considering simple uses of spatial language,
one discovers an apparent need for extremely expressive logical techniques [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. Further, if it is
to be useful for the activities of a physical agent, symbolic inference must connect somehow to
numeric descriptions of the agent’s environment and actions. In the case of spatial language,
such links can be provided via potential fields or probability distributions which are meant
to capture the appropriateness of pairing a qualitative label (e.g. “left of”) with numerical
values [
          <xref ref-type="bibr" rid="ref15 ref16 ref17">15, 16, 17</xref>
          ]. However, such probability distributions need to be tuned or learned [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]
and they are highly context dependent. Despite several proposals in the literature, it remains
unclear how they should best be adjusted when context changes.
        </p>
        <p>
          There is also good evidence that human spatial descriptions are sensitive to functional aspects
of arrangements of objects [
          <xref ref-type="bibr" rid="ref19 ref20">19, 20</xref>
          ], or at least, that functional aspects are a more reliable guide
to predicting human language use than references to context-independent spatial arrangements,
even when these are allowed to be probabilistic. This has led to efforts to formally model spatial
relations at a fairly abstract, functional level [
          <xref ref-type="bibr" rid="ref1 ref21">1, 21</xref>
          ], and it has been argued that, by separating
an abstract, semantic level of formal modelling from contextualization, one can avoid some of
the apparent difficulties in formalizing commonsense inference [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ].
        </p>
        <p>Although the works cited here refer to spatial language, it appears to us that similar
conclusions apply to qualitative labels in general: functional characterizations, rather than references
to context-independent regions of possible values, are a better fit to how humans use language.
Further, such functional characterizations open avenues for generalizing, and transferring,
situation-dependent knowledge to new situations, which we investigate here.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>3. From Simulated Experience to Qualitative Behavior Prediction Rules</title>
      <p>
        Here we will briefly summarize our previous work [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] on how an agent can acquire symbolic
prediction rules. One prerequisite is the ability to act on, or simulate, an environment. The agent must
have some vocabulary with which to describe object properties, arrangements, and behaviors,
and concepts in this vocabulary are linked to “generative models” understood as joint probability
distributions over qualitative labels and values for the parameters associated with the labels.
      </p>
      <p>
        Generative models allow a bridge between symbolic, qualitative descriptions of an
arrangement or behavior, and numeric descriptions of the same. This is necessary because to run a
simulation an agent needs to provide exact values for all parameters, and likewise the behavior
observed in the (more or less continuous) world of the agent will be reported as numeric data.
By using generative models, it is possible to go from qualitative to numeric descriptions via
sampling, and vice-versa by asking what qualitative hypothesis best fits the observed numeric
evidence. Our method in [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] samples parameter values and runs simulations for an “antecedent”
– a qualitative description of an arrangement and action – and obtains “consequents” –
qualitative descriptions of behavior. Antecedents are mapped to consequents resulting from the
simulations sampled from those antecedents, yielding a set of predictive symbolic rules. A
heuristic selects which antecedents to sample next, based on expectations of new behavior.
      </p>
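      <p>As a sketch of such a bridge, each qualitative label can be paired with a simple generative model. Here we use hypothetical Gaussians over mass for the labels “light” and “heavy”; the labels, means and standard deviations are invented for illustration and are not taken from [23]:</p>
      <p>
```python
import math
import random

# Illustrative generative models: each qualitative label is paired with a
# Gaussian over a parameter (here, mass in kg). The labels, means and
# standard deviations are invented for this sketch.
MODELS = {
    "light": (0.1, 0.05),   # (mean, standard deviation)
    "heavy": (2.0, 0.5),
}

def sample_value(label, rng=random):
    """Qualitative to numeric: sample a parameter value for a label."""
    mean, std = MODELS[label]
    return rng.gauss(mean, std)

def best_label(value):
    """Numeric to qualitative: the label whose generative model assigns
    the observed value the highest likelihood."""
    def likelihood(label):
        mean, std = MODELS[label]
        z = (value - mean) / std
        return math.exp(-0.5 * z * z) / (std * math.sqrt(2.0 * math.pi))
    return max(MODELS, key=likelihood)
```
      </p>
      <p>Sampling from a label’s model supplies exact values for a simulation run, while best_label recovers the qualitative hypothesis that best fits observed numeric evidence, e.g. best_label(1.8) returns “heavy” under these toy models.</p>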
      <p>The generative models themselves are not updated at any point of the method, and so they
are the kind of context-independent interpretations of qualitative labels that we wish to go
beyond. A problem with context-independent interpretations of qualitative labels is they make
predictive symbolic rules fragile: a rule may not apply for the entire region of values that are
deemed likely by the generative models attached to the qualitative terms in the rule’s antecedent.
Therefore, the rules and the generative models must be learned together, which is what inspired
us to treat the learned predictive rules also as a way to functionally define the qualitative labels.</p>
      <p>
        Another apparent limitation of the method from [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] is the use of simulation. Critics of
simulation as a tool for cognition exist [
        <xref ref-type="bibr" rid="ref24 ref25">24, 25</xref>
        ], and while we do not entirely agree with them,
we acknowledge that, among other problems, there are limits to what an agent can simulate.
However, our method in this paper does not depend on simulation; the experience from which
predictive rules have been obtained can be real experience in the physical world.
      </p>
      <sec id="sec-4-1">
        <title>3.1. Running Example: Predictive Symbolic Rules involving ‘Heavy’</title>
        <p>
          Suppose we have a robotic agent attempting to understand how the physics of a household
environment operates. It may be interested in cooking popcorn, or making sure papers don’t
get blown by the wind, or driving nails into wooden boards. Through its own experimentation,
or perhaps from being told by someone else, the agent arrives at the following predictive rules,
which we first express here informally for the various situations:
• (Lid) A heavy lid placed on the opening of a pot will stop popcorn from getting out;
• (Paperweight) A heavy item placed on a sheet of paper will prevent it from being blown
by the wind;
• (Hammer) A hit from a heavy tool, like a hammer, will drive a nail into a wooden board.
To capture the above by expressions in a formal language is not trivial, and may require some
fairly expressive logics such as ISL [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. For our examples here we will present approximations
as prediction rules formalised in ALCI (ALC with inverse roles). The inverse properties in the
prediction rules are inverted again, i.e. used as the roles themselves, in the functional definitions in
the next section, which we then feed into the concept blending algorithm.
        </p>
<p>Let us then consider the following predictive rules for the situations described above (we
assume the agent was either capable of discovering them by learning from experience and
combining those results with some prior knowledge, or was instructed). We write them below
with illustrative role and concept names; the exact vocabulary is given in the example OWL files.
“Lid: if something is a place where a heavy object is put, then it permits only motions that keep
still relative to some container”:
∃placedAt⁻.(∃qualityOf⁻.Heavy) ⊑ ∀permitsMotion.(∃keepsStillRelativeTo.(∃hasRole.Container))
“Paperweight: an object upon which a heavy object is placed will not be immersed and
moved by a physical medium”:
∃placedOn⁻.(∃qualityOf⁻.Heavy) ⊑ ¬(∃movedBy.(∃hasRole.PhysicalMedium))
“Hammer: something hit by a heavy object is not kept still by some physical medium”:
∃collidesWith⁻.(∃qualityOf⁻.Heavy) ⊑ ¬(∃keptStillBy.(∃hasRole.PhysicalMedium))</p>
        <p>We assume the agent has prior knowledge, e.g. that objects exist and may play different
roles in a situation, that qualities belong to objects, and that heavy is a quality, in particular a
quality relating to mass, and so on. The OWL files for our running example are available online
at https://github.com/mpomarlan/ISD6_HeavyBlends.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. Numeric vs. Functional Definitions of Qualitative Labels</title>
      <p>As briefly discussed in section 3, one can define the meaning of a qualitative label for a parameter
(such as “heavy” for mass) in terms of a probability distribution over a space of possible values.
However, such an understanding of a qualitative label clearly would not match human intuition.
We take spatial language to be best understood in functional terms, and intuitively the same
applies also to qualitative attributes describing other physical parameters. For a human being,
and one supposes, for other agents acting in the world, the important information communicated
by a qualitative label is not a range of values but a disposition for a particular behavior. The
qualitative label answers a “what does it do?”, rather than “how much is it?” question.</p>
<p>Therefore, it appears useful to define qualitative labels for parameters in functional terms, i.e.
in terms of the resulting behaviors of arrangements in which some object is described by that
qualitative label. A generic form for the predictive rules of section 3 would be:</p>
      <p>∀x, q, ... ∶ (arrangement(x, ...) ∧ hasQuality(x, q) ∧ L(q)) → behavior(x, ...)
where arrangement and behavior are logical formulas describing constraints on the initial state
of some objects and their ensuing behavior; these formulas can have several variables but here
we focus on one of them, x, which stands for one of the objects which has a quality q of type L.</p>
      <p>To obtain a functional definition for a qualitative label, we first rearrange the predictive rule:
∀x, q ∶ L(q) → ((hasQuality(x, q) ∧ arrangement(x, ...)) → behavior(x, ...))
and then stipulate that this is a definition for the label L:</p>
      <p>∀x, q ∶ L(q) ↔ ((hasQuality(x, q) ∧ arrangement(x, ...)) → behavior(x, ...))
Such a functional definition should be interpreted as situation specific; for that situation it then
serves as a guide to select, or learn, which parameter values correspond to the qualitative label.</p>
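      <p>Read operationally, a situation-specific functional definition picks out exactly those parameter values for which the rule’s consequent behavior occurs. A toy sketch, with an invented one-parameter “paperweight” physics model; the stays_put criterion and all constants are made up and stand in for the agent’s simulator:</p>
      <p>
```python
# Toy "paperweight" situation: a sheet of paper stays put if the weight's
# gravitational force exceeds a wind-dependent threshold. The physics and
# constants are invented; they stand in for the agent's simulator.
def stays_put(mass, wind_speed):
    return mass * 9.81 > 0.2 * wind_speed ** 2

def heavy_in_situation(mass, wind_speeds):
    """A mass value is functionally 'heavy' in a situation iff the
    behavior named by the rule (the paper stays put) holds across the
    situation's conditions."""
    return all(stays_put(mass, w) for w in wind_speeds)

breeze = [1.0, 2.0, 3.0]   # indoor draft conditions
storm = [10.0, 15.0]       # outdoor storm conditions
```
      </p>
      <p>Under these toy numbers the same 0.5 kg object counts as heavy for the breeze situation but not for the storm situation, which is precisely the context dependence the functional reading is meant to capture.</p>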
      <sec id="sec-5-1">
        <title>4.1. Running Example: Situation-Dependent Functional Definitions</title>
<p>The predictive symbolic rules of section 3.1 constrain what behavior should be observed given
some arrangement and object properties. By rewriting the OWL axioms such that they are now
about the quality of an object, and stipulating that if an object placed in an arrangement results
in the associated behavior, then the object is heavy, we obtain the following definitions (role
and concept names are illustrative; the exact vocabulary is in the OWL files at
https://github.com/mpomarlan/ISD6_HeavyBlends):
“Lid: heavy is the quality of objects such that wherever they are placed, that place allows only
motions that keep still relative to some container”:
Heavy ≡ MassQuality ⊓ ∀qualityOf.(∀placedAt.(∀permitsMotion.(∃keepsStillRelativeTo.(∃hasRole.Container))))</p>
        <p>“Paperweight: heavy is the quality of objects such that whatever they are placed on, that object
is not immersed and moved by a physical medium”:
Heavy ≡ MassQuality ⊓ (∀qualityOf.(∀placedOn.(¬(∃movedBy.(∃hasRole.PhysicalMedium)))))</p>
        <p>“Hammer: heavy is the quality of objects such that whatever they collide with, that collided
object is not kept still by a physical medium”:
Heavy ≡ MassQuality ⊓ ∀qualityOf.(∀collidesWith.(¬(∃keptStillBy.(∃hasRole.PhysicalMedium))))</p>
        <p>The above functional definitions for ‘heavy’ are intended as situation specific, not as axioms
that can coexist in a single, situation-independent, ontology.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. General Functional Definitions from Concept Blending</title>
      <p>Supposing an agent has experienced various situations, and obtained from each a definition of
some qualitative label, it might ask itself whether some more general, situation-independent
definition exists. It is interesting to look at combinations of the concept definitions, but not
all combinations are interesting. A mere enumeration of known definitions is not interesting,
because it says nothing about a situation not yet on the list. A simple conjunction of definitions
may either result in an unsatisfiable concept, or at least in an unduly restricted one.</p>
      <p>
        We have looked at concept blending for this, in the dialog-based approach of Righetti et
al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In brief, by starting from two “incompatible” ontologies, the algorithm finds a blend (or a
combined ontology) through a turn-based procedure which allows combining the axioms of the
diferent ontologies according to a given preference order, and to weaken them until the result
is satisfiable. In our case, the axioms to combine are the various situation-dependent definitions
of a qualitative label known to an agent; they will all be of the form A ≡ C, where A is a concept
name for the qualitative label we wish to define, and C is a concept expression. The set of these
axioms is denoted by Q. Following [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], the agent’s background knowledge is denoted by O_init.
      </p>
<p>Because we are looking to combine only axioms of a given structure, as opposed to several
ontologies that may contain any number of axioms, some simplifications to the concept blending
algorithm are needed. In particular, we do not need preference orders on the axioms in Q. Also,
while several ways of weakening an axiom are possible, we are only interested in weakenings
obtained by a generalization operator γ applied to the concept expressions that define A.</p>
      <p>However, inconsistency is too strong a test in our application. Consistent situation-dependent
definitions are unlikely to be equivalent, and a conjunction of them will remove possibilities
from the agent’s consideration that were nonetheless feasible in particular situations. We
therefore weaken to obtain more permissive axioms. That is:
• if we must combine axioms A ⊑ C and A ⊑ D,
• and C ⊓ ¬D or D ⊓ ¬C are satisfiable under the conditions imposed by O_init,
• then we replace the two axioms by A ⊑ E, where E is a generalization of C ⊔ D with
respect to the background knowledge encoded in ontology O_init.
With the above modifications, the algorithm becomes the one shown in Algorithm 1.</p>
      <p>Algorithm 1 Combination(O_init, Q, γ) ▷ Assumption: Q contains only axioms of the form A ≡ C
while 1 &lt; |Q| do
{A ⊑ C; A ⊑ D} ← RandomPickAxiomPair(Q)
if Satisfiable_O_init(C ⊓ ¬D) or Satisfiable_O_init(D ⊓ ¬C) then
E ← γ(C ⊔ D) ▷ E = γ(C ⊔ D) is s.t. O_init ⊧ C ⊔ D ⊑ E
else
E ← C ▷ If we get here, C ≡ D
end if
Q ← (Q − {A ⊑ C; A ⊑ D}) ∪ {A ⊑ E}
end while
return O_init ∪ Q</p>
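      <p>A minimal executable sketch of this combination loop follows, with the reasoner calls stubbed: concepts are modeled as finite sets of admissible regions standing in for concept expressions, satisfiability of C ⊓ ¬D becomes nonemptiness of the set difference, and the generalization operator simply returns the union, the least E with C ⊔ D ⊑ E. This is an illustration under those simplifying assumptions, not the paper’s DL implementation:</p>
      <p>
```python
import random

# Sketch of Algorithm 1 (Combination). Concepts are modeled as finite
# sets of admissible regions, standing in for concept expressions:
# "C ⊓ ¬D is satisfiable" becomes "C - D is nonempty", and the
# generalization operator gamma returns the union itself, the least E
# with C ⊔ D ⊑ E. Both stubs replace calls to a DL reasoner.
def satisfiable(concept):
    return len(concept) > 0

def gamma(concept):
    return concept

def combine(definitions, rng):
    """definitions: the right-hand sides C of axioms Heavy ≡ C coming
    from several situations. Returns one combined right-hand side."""
    q = list(definitions)
    while len(q) > 1:
        rng.shuffle(q)
        c, d = q.pop(), q.pop()
        if satisfiable(c - d) or satisfiable(d - c):
            e = gamma(c | d)   # weaken: generalize the disjunction
        else:
            e = c              # the two definitions are equivalent
        q.append(e)
    return q[0]
```
      </p>
      <p>With these stubs the result is the union of all situation-specific definitions; with a real reasoner and a nontrivial γ, the random pairing order can lead to different, more or less general outcomes.</p>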
      <sec id="sec-6-1">
        <title>5.1. Running Example: a General Definition of ‘Heavy’</title>
<p>The different definitions of “heavy” given in section 4.1 give us the set of axioms to combine
using the modified concept blending algorithm, and the ontology relative to which axiom
weakening is performed will be the agent’s other knowledge. We note that there is still a
random component in the blending algorithm in terms of what weakening to select through
the generalization operator γ, because there are many possible generalizations.</p>
<p>For our running example – which includes the axioms from previous sections as well as a
few axioms about how certain behaviors arise from the interaction of forces, axioms which are
listed in the example’s GitHub repository – a generic definition of “heavy” comes out as
Heavy ≡ MassQuality ⊓ (∀qualityOf.(∀placedAt.(∃subjectToForce.SignificantForce)))
or, informally, “heavy is the quality of objects that exert significant force wherever they are
placed”. This definition subsumes the situation-dependent ones, and can be obtained using the
background knowledge in our example OWL files about when a force is significant.</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>6. Obtaining New Situation-Specific Functional Definitions from General Ones</title>
      <p>A benefit of having a general definition for a qualitative label is the possibility to specialize it
once a new situation is encountered. The specialized functional definition can then be employed
to inform selection of parameter values for one’s own actions or understand what other agents
describing this situation mean. It can also be used to generate new prediction rules for the
new situation, by reversing the rewrite procedure described in section 4 and thus, one hopes,
accelerate an agent’s learning of how to deal with a situation.</p>
<p>In our case we start from some axiom A ≡ C and replace it by A ≡ D, where D ⊑ C as given
by some ontology O describing the agent’s background knowledge and whatever knowledge of
the new situation it has obtained so far.</p>
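      <p>This replacement of A ≡ C by A ≡ D with D ⊑ C can be illustrated in a minimal sketch where concepts are finite sets of admissible situations, hypothetical stand-ins for concept expressions rather than the paper’s description-logic machinery:</p>
      <p>
```python
# Specialization A ≡ C to A ≡ D with D ⊑ C, with finite sets of
# admissible situations standing in for concept expressions (an
# illustration only, not the paper's description-logic machinery).
def specialize(general_c, situation_constraint):
    """Intersect the general definition with what the new situation
    admits; the result is subsumed by the general definition."""
    d = general_c.intersection(situation_constraint)
    assert d.issubset(general_c)  # D ⊑ C holds by construction
    return d
```
      </p>
      <p>Subsumption (here, the subset relation) is guaranteed by construction, mirroring the requirement that the specialized definition be entailed as a subconcept by the background ontology.</p>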
      <sec id="sec-8-1">
        <title>6.1. Running Example: Specializing Heavy to New Situations</title>
        <p>So far, our example agent has encountered situations where heavy objects were used as lids to
keep contents in, as weights to keep objects put, or as colliders to drive objects through media,
and arrived at the conclusion that heavy is a quality of objects such that wherever they are
placed, some significant force is exerted.</p>
        <p>Suppose it then is told about two new situations, one in which the significant force to be
considered is one that impedes the motion of a trajector, and one in which the significant force
is one that prevents an object from containing parts in a stable way. Loosely speaking, these
are the situations of a traveller who may be encumbered by their luggage, or of an object that
may be destroyed by impact with or pressure from another. Via specialization, the following
situation-specific definitions are obtained (with illustrative role and concept names):
Heavy ≡ MassQuality ⊓ ∀qualityOf.(∀placedAt.(∃hinders.(∃motionOf.(∃hasRole.Trajector))))</p>
        <p>and respectively</p>
        <p>Heavy ≡ MassQuality ⊓ (∀qualityOf.(∀placedAt.(¬(∃stablyContains.(∃hasRole.Part)))))
The details are available in our example files.</p>
      </sec>
      <sec id="sec-8-2">
        <title>6.2. The potential role of Image Schemas</title>
        <p>
          Image schemas are recurring structures establishing fundamental patterns of cognition, formed
since childhood from our bodily experience. In Cognitive Science, image schemas are identified
as conceptual building blocks which allow reasoning about the world and moving therein [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ].
        </p>
        <p>
The role of image schemas in the context of Computational Conceptual Blending has been
analyzed in [
          <xref ref-type="bibr" rid="ref27 ref4">4, 27</xref>
          ]. According to [
          <xref ref-type="bibr" rid="ref10">10</xref>
], a blend is constructed by selectively mapping the shared features
of different (mental) input spaces into a common, generic space. The blend
then develops its emergent structure, which derives from the combination of the projected
features. While the selection of the relevant features is done by humans in a seemingly effortless
way, it is a non-trivial problem in the context of automated Computational
Conceptual Blending. The selection of different features can lead to quite different
outcomes, and different projections can thus generate better or worse blends. Image schemas
have thus been applied in this context to steer the search for generic spaces, by helping to
identify the relevant features in the input spaces [
          <xref ref-type="bibr" rid="ref27 ref4">4, 27</xref>
          ].
        </p>
        <p>The approach proposed in this paper differs from the standard approach to computational
conceptual blending because it does not rely on the identification of a shared structure between
the input spaces – i.e., it does not require the identification of a generic space to steer the
combination process. However, as already mentioned, the axiom weakening procedure exploited
here relies on a random selection among different possible generalization choices, and can thus
lead to different outcomes. Of course, among the different possibilities, certain weakenings are more
interesting than others, but at the moment the algorithm makes this selection at random. Similarly
to what is done in the context of standard conceptual blending for the selection of meaningful
generic spaces, image schemas could play a role here, guiding the weakening procedure in the
selection of interesting generalisations. This is, however, a matter for future work.</p>
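        <p>The contrast between the current random choice and a schema-guided one can be sketched as follows; this is a hypothetical illustration, with placeholder weakenings, and `score` stands in for any image-schema-based heuristic, which we do not define here:</p>

```python
import random

# Hypothetical sketch: several weakenings of an axiom may be equally feasible.
# The current algorithm picks one at random; a heuristic `score` (e.g. one
# derived from image schemas) could instead guide the choice.

candidates = ["A ⊑ ⊤", "A ⊑ B ⊔ C", "A ⊑ ∃r.⊤"]  # placeholder weakenings

def pick_random(cands, rng=None):
    """The current behaviour: an arbitrary choice among feasible weakenings."""
    return (rng or random).choice(cands)

def pick_guided(cands, score):
    """A guided alternative: prefer the candidate the heuristic ranks highest."""
    return max(cands, key=score)

# Toy heuristic: longer axioms retain more structure, so prefer them.
print(pick_guided(candidates, score=len))
```

        <p>Replacing `len` with a measure of image-schematic content would realize the guidance suggested above, without changing the surrounding weakening procedure.</p>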
      </sec>
    </sec>
    <sec id="sec-9">
      <title>7. Conclusions and Future Work</title>
      <p>We have argued in this paper for the benefit of what we call functional definitions of qualitative
labels for physical parameters. Rather than committing to a context- and situation-independent
numerical region, functional definitions describe, in a situation-dependent way, what parameter
values are appropriate. This comes from the relationship between functional definitions for a
qualitative label and the behavior prediction rules in which that label appears.</p>
      <p>Further, we have illustrated how situation-dependent functional definitions for a qualitative
label can be combined into a more general one via concept blending, and how this general
definition can in turn be specialized for new situations. This allows us to formally model how an agent
might, through experience, acquire something resembling human commonsense knowledge
– that is, knowledge about how its environment behaves, knowledge that is situated and
context-dependent, but which nevertheless allows itself to be adapted to new situations.</p>
      <p>This process of generalization is not entirely deterministic – the concept combination
algorithm often has several equally feasible choices available during its operation. Perhaps this is to
be expected, and it would be an interesting line of research to look into whether human beings
can agree on general-purpose meanings for qualitative labels. We suspect that they cannot, and
that some variation will exist precisely because generalization is underconstrained.</p>
      <p>Nevertheless, generalized, functional understandings of labels such as “heavy”, “near”, “fast”
etc. can be useful even if several agents do not agree on these definitions’ exact content. This is
because the important feature of functional definitions as pursued in this paper is that they ask
questions – e.g., if “heavy” is that which exerts a significant force, what is a significant force?
These questions are the means by which the general definition can attach to aspects of a new
situation and give an agent a tentative understanding of qualities that would be appropriate for
it. In future work, we will look into ways to quantify this knowledge transfer, in particular with
regard to how well an agent can predict behaviors in a new situation and select parameter
values for its own actions to cope with it. This can be done by comparing an agent’s performance
when starting to learn to cope with a situation from scratch versus when making a tentative
guess based on specializing a functional definition.</p>
      <p>
        A limiting factor in the examples we have shown here is the ALC formalism, and we plan
to address this by investigating how Image Schema Logic [
        <xref ref-type="bibr" rid="ref28 ref4">28, 4</xref>
        ] could be used to write the
prediction rules and functional definitions for qualitative labels. Further, we expect that
connecting our approach to existing ontologies of commonsense knowledge, spatial language [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ],
and image schemas will provide a wealth of good material for our method, as opposed to the
admittedly somewhat artificial running example shown here.
      </p>
    </sec>
    <sec id="sec-10">
      <title>Acknowledgments</title>
      <p>The research reported in this paper has been supported by the FET-Open Project 951846
“MUHAI – Meaning and Understanding for Human-centric AI” (http://www.muhai.org/) funded
by the EU Program Horizon 2020 as well as the German Research Foundation DFG, as part
of Collaborative Research Center (Sonderforschungsbereich) 1320 “EASE - Everyday Activity
Science and Engineering”, University of Bremen (http://www.ease-crc.org/). The research
was conducted in sub-project P01 Embodied semantics for the language of action and change:
Combining analysis, reasoning and simulation.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Bateman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hois</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Tenbrink</surname>
          </string-name>
          ,
          <article-title>A linguistic ontology of space for natural language processing</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>174</volume>
          (
          <year>2010</year>
          )
          <fpage>1027</fpage>
          -
          <lpage>1071</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>G.</given-names>
            <surname>Righetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hedblom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Galliani</surname>
          </string-name>
          ,
          <article-title>Asymmetric hybrids: Dialogues for computational concept combination</article-title>
          ,
          <source>in: Formal Ontology in Information Systems: Proc. of the 12th International Conference (FOIS 2021)</source>
          , IOS Press,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Confalonieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Galliani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Righetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <article-title>Towards even more irresistible axiom weakening</article-title>
          , in: S. Borgwardt, T. Meyer (Eds.),
          <source>Proc. of the 33rd International Workshop on Description Logics (DL</source>
          <year>2020</year>
          ), volume
          <volume>2663</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Hedblom</surname>
          </string-name>
          ,
          <source>Image Schemas and Concept Invention: Cognitive, Logical, and Linguistic Investigations</source>
          , Cognitive Technologies, Springer,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Confalonieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Galliani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Peñaloza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <article-title>Repairing ontologies via axiom weakening</article-title>
          , in:
          <string-name>
            <given-names>S. A.</given-names>
            <surname>McIlraith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. Q.</given-names>
            <surname>Weinberger</surname>
          </string-name>
          (Eds.),
          <source>Proc. of the Thirty-Second AAAI Conference on Artificial Intelligence</source>
          , AAAI Press,
          <year>2018</year>
          , pp.
          <fpage>1981</fpage>
          -
          <lpage>1988</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Kriegel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nuradiansyah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Peñaloza</surname>
          </string-name>
          ,
          <article-title>Repairing description logic ontologies by weakening axioms</article-title>
          ,
          <source>CoRR abs/1808.00248</source>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <article-title>A practical fine-grained approach to resolving incoherent OWL 2 DL terminologies</article-title>
          , in: J. L. et al. (Ed.),
          <source>Proc. of the 23rd ACM International Conference on Conference on Information and Knowledge Management</source>
          , ACM,
          <year>2014</year>
          , pp.
          <fpage>919</fpage>
          -
          <lpage>928</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>R.</given-names>
            <surname>Confalonieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Eppe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schorlemmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Peñaloza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Plaza</surname>
          </string-name>
          ,
          <article-title>Upward refinement operators for conceptual blending in the description logic ℰℒ++</article-title>
          ,
          <source>Ann. Math. Artif. Intell.</source>
          <volume>82</volume>
          (
          <year>2018</year>
          )
          <fpage>69</fpage>
          -
          <lpage>99</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Peñaloza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Confalonieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Galliani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <article-title>Two approaches to ontology aggregation based on axiom weakening</article-title>
          , in: J.
          <string-name>
            <surname>Lang</surname>
          </string-name>
          (Ed.),
          <source>Proc. of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018</source>
          , ijcai.org,
          <year>2018</year>
          , pp.
          <fpage>1942</fpage>
          -
          <lpage>1948</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G.</given-names>
            <surname>Fauconnier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Turner</surname>
          </string-name>
          ,
          <article-title>Conceptual integration networks</article-title>
          ,
          <source>Cognitive science 22</source>
          (
          <year>1998</year>
          )
          <fpage>133</fpage>
          -
          <lpage>187</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hedblom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Righetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <article-title>Deciphering the cookie monster: A case study in impossible combinations</article-title>
          , in: ICCC,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Eppe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Maclean</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Confalonieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schorlemmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Plaza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-U.</given-names>
            <surname>Kühnberger</surname>
          </string-name>
          ,
          <article-title>A computational framework for conceptual blending</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>256</volume>
          (
          <year>2018</year>
          )
          <fpage>105</fpage>
          -
          <lpage>129</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>R.</given-names>
            <surname>Confalonieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <article-title>Blending under deconstruction</article-title>
          ,
          <source>Ann. Math. Artif. Intell.</source>
          <volume>88</volume>
          (
          <year>2020</year>
          )
          <fpage>479</fpage>
          -
          <lpage>516</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>E.</given-names>
            <surname>Davis</surname>
          </string-name>
          ,
          <article-title>Qualitative spatial reasoning in interpreting text and narrative</article-title>
          ,
          <source>Spatial Cognition &amp; Computation</source>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>K.-P.</given-names>
            <surname>Gapp</surname>
          </string-name>
          ,
          <article-title>Basic meanings of spatial relations: Computation and evaluation in 3d space</article-title>
          ,
          <source>in: Proceedings of the Twelfth AAAI National Conference on Artificial Intelligence</source>
          , AAAI'94, AAAI Press,
          <year>1994</year>
          , pp.
          <fpage>1393</fpage>
          -
          <lpage>1398</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Beetz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Mösenlechner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tenorth</surname>
          </string-name>
          ,
          <article-title>CRAM - A Cognitive Robot Abstract Machine for Everyday Manipulation in Human Environments</article-title>
          ,
          <source>in: Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems</source>
          ,
          <year>2010</year>
          , pp.
          <fpage>1012</fpage>
          -
          <lpage>1017</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pomarlan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Bateman</surname>
          </string-name>
          ,
          <article-title>Embodied functional relations: a formal account combining abstract logical theory with grounding in simulation</article-title>
          ,
          <source>in: Formal Ontology in Information Systems</source>
          , IOS Press, Amsterdam,
          <year>2020</year>
          . To appear.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>F.</given-names>
            <surname>Stulp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fedrizzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Mösenlechner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Beetz</surname>
          </string-name>
          ,
          <article-title>Learning and Reasoning with ActionRelated Places for Robust Mobile Manipulation</article-title>
          ,
          <source>Journal of Artificial Intelligence Research (JAIR) 43</source>
          (
          <year>2012</year>
          )
          <fpage>1</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>K. R.</given-names>
            <surname>Coventry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Garrod</surname>
          </string-name>
          ,
          <article-title>Saying, seeing, and acting: The psychological semantics of spatial prepositions</article-title>
          , Psychology Press, New York,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>L.</given-names>
            <surname>Carlson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>van der Zee</surname>
          </string-name>
          ,
          <article-title>Functional Features in Language and Space: Insights from Perception, Categorization, and Development</article-title>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Bateman</surname>
          </string-name>
          ,
          <article-title>GUM: The generalized upper model</article-title>
          ,
          <source>Applied Ontology</source>
          (
          <year>2021</year>
          ). doi:10.3233/AO-210258.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Bateman</surname>
          </string-name>
          ,
          <article-title>Space, Language and Ontology: A Response to Davis</article-title>
          ,
          <source>Spatial Cognition &amp; Computation</source>
          <volume>13</volume>
          (
          <year>2013</year>
          )
          <fpage>295</fpage>
          -
          <lpage>314</lpage>
          . doi:10.1080/13875868.2013.808491.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pomarlan</surname>
          </string-name>
          ,
          <string-name>
            <surname>M. M. Hedblom</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Porzel</surname>
          </string-name>
          ,
          <article-title>Panta rhei: Curiosity-driven exploration to learn the image-schematic affordances of pouring liquids</article-title>
          ,
          <source>in: 29th Irish Conference on Artificial Intelligence and Cognitive Science (AICS)</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>E.</given-names>
            <surname>Davis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Marcus</surname>
          </string-name>
          ,
          <article-title>The scope and limits of simulation in cognitive models</article-title>
          ,
          <source>CoRR abs/1506.04956</source>
          (
          <year>2015</year>
          ). URL: http://arxiv.org/abs/1506.04956. arXiv:1506.04956.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>E.</given-names>
            <surname>Davis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Marcus</surname>
          </string-name>
          ,
          <article-title>The scope and limits of simulation in automated reasoning</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>233</volume>
          (
          <year>2016</year>
          )
          <fpage>60</fpage>
          -
          <lpage>72</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>M.</given-names>
            <surname>Johnson</surname>
          </string-name>
          ,
          <article-title>The body in the mind: The bodily basis of meaning, imagination, and reason</article-title>
          , University of Chicago Press,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Hedblom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Neuhaus</surname>
          </string-name>
          ,
          <article-title>Image schemas in computational conceptual blending</article-title>
          ,
          <source>Cognitive Systems Research</source>
          <volume>39</volume>
          (
          <year>2016</year>
          )
          <fpage>42</fpage>
          -
          <lpage>57</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Hedblom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mossakowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Neuhaus</surname>
          </string-name>
          ,
          <article-title>Between Contact and Support: Introducing a Logic for Image Schemas and Directed Movement</article-title>
          , Springer International Publishing,
          <year>2017</year>
          , pp.
          <fpage>256</fpage>
          -
          <lpage>268</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>