<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Multimodal meets Intuitive? Comparing Visual and Tangible Image Schema Representations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Cordula Baur</string-name>
          <email>cordula.baur@uni-wuerzburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fredrik Stamm</string-name>
          <email>fredrikstamm@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Carolin Wienrich</string-name>
          <email>carolin.wienrich@uni-wuerzburg.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jörn Hurtienne</string-name>
          <email>joern.hurtienne@uni-wuerzburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Julius-Maximilians-Universität Würzburg, Chair of Psychological Ergonomics</institution>
          ,
          <addr-line>Oswald-Külpe-Weg 82, 97074 Würzburg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Julius-Maximilians-Universität Würzburg, Human-Technology-Systems</institution>
          ,
          <addr-line>Oswald-Külpe-Weg 82, 97074 Würzburg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Image schemas are abstract representations of recurring multimodal experiences in the world. Together with image-schematic metaphors, which connect image schemas with abstract domains, they support the design process and foster more inclusive, intuitive, and innovative designs. However, using image schemas in the design process requires extra effort, and existing image schema repositories do not meet designers' requirements. Alternative forms of representation like visualisations or physicalisations of image schemas can increase their accessibility. This work presents an empirical study that evaluates Image Schema Icons and Image Schema Objects in terms of their intuitive use, comprehensibility, and participants' preference. Correct matches of representations to image-schematic metaphors were recorded, interactions were observed, and the representations were evaluated by questionnaires. The results showed that visual representations are more intuitive and achieved more correct matches, but tangible representations were preferred. These findings direct further investigation and the further development of image-schema-based design tools.</p>
      </abstract>
      <kwd-group>
        <kwd>Image Schemas</kwd>
        <kwd>Design</kwd>
        <kwd>Design Research</kwd>
        <kwd>Evaluation</kwd>
        <kwd>Intuitive Use</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Image schemas are representations of repeated, multimodal experiences aiding our understanding of
the environment [
        <xref ref-type="bibr" rid="ref4">4, 35, 45</xref>
        ]. Image-schematic metaphors emerge when image schemas are connected
with subjective experiences or judgments [36]. These metaphors assist in organising and structuring the
comprehension of abstract concepts [
        <xref ref-type="bibr" rid="ref10 ref17">10, 17, 27, 32, 33, 35, 39, 40</xref>
        ]. In Human-Computer Interaction,
image schemas and their metaphors have been used for interface design and have been shown to foster more
inclusive, intuitive, and innovative designs [21, 26]. However, utilising image schemas for design
demands extra effort and time [
        <xref ref-type="bibr" rid="ref19">19, 38, 47</xref>
        ]. To tackle this, previous work recommended using existing
image schema lists [21, 24, 47]. However, current repositories are extensive databases [26] that lack
accessibility and applicability in the design process. Researchers in cognitive linguistics and
Human-Computer Interaction proposed visual representations of image schemas [
        <xref ref-type="bibr" rid="ref11 ref12 ref13 ref14 ref4">4, 11–14, 32, 41, 44, 46</xref>
        ] to
enhance the understanding of image schema theory. Additionally, tangible and visual representations
of FORCE image schemas have been suggested to support the design process [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. In previous work we
described an iterative Research through Design process to create tangible and visual image schema
representations which aim at fostering the design of data physicalisations [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. While the feedback during
the design process was positive, further evaluation was required. In this paper we present an empirical
evaluation study where participants matched the representations to image-schematic metaphors, rated
intuitive use and comprehensibility, and stated preference.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
    </sec>
    <sec id="sec-3">
      <title>2.1. Image Schemas</title>
      <p>
        Initially rooted in cognitive linguistics [21], image schemas were introduced by Johnson [32] and Lakoff [34]
as “recurring, dynamic pattern[s] of perceptual interactions and motor programs that give coherence and
structure to our experience” [32] (p. xiv). Image schemas link embodied experiences and mental
representations [32, 34] to provide structure to human perception and experiences, foster representation in
mind, and aid in understanding our surrounding world [
        <xref ref-type="bibr" rid="ref4 ref6 ref9">4, 6, 9, 35, 45</xref>
        ]. For instance, when a baby’s beloved
stuffed animal drops to the ground, it experiences gravity. The baby being repeatedly lifted or placed in a
pushchair or crib reinforces the experience of up and down movements. The repetition of such experiences
leads to the formulation of the UP-DOWN image schema. Image schemas as abstract concepts [21, 23, 27, 32]
do not refer to specific objects [21]. Image schemas are multimodal [
        <xref ref-type="bibr" rid="ref18 ref4">4, 18, 21, 22, 32</xref>
        ], integrating
experiences from multiple modalities [
        <xref ref-type="bibr" rid="ref17 ref18">17, 18, 27, 28</xref>
        ] and can be represented visually, haptically,
kinaesthetically or acoustically [
        <xref ref-type="bibr" rid="ref17">17, 27, 28</xref>
        ]. They are analogue [21, 23] and function subconsciously [21,
23, 27, 28], encoding and retrieving information from memory repeatedly [21]. Additionally, image schemas
have proved to be largely culture- and language-independent [39].
      </p>
    </sec>
    <sec id="sec-4">
      <title>2.2. Image-schematic Metaphors</title>
      <p>
        When an abstract concept that lacks sensory-motor experiences is assigned to a particular image
schema, an image-schematic metaphor emerges [
        <xref ref-type="bibr" rid="ref18">18, 22, 25, 45</xref>
        ]. This helps to organise and structure
the understanding of abstract concepts [
        <xref ref-type="bibr" rid="ref10 ref17">10, 17, 27, 32, 33, 35, 39, 40</xref>
        ] and supports the transfer of
information between different domains [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Projecting image schemas onto various abstract domains
enables reasoning about these domains [32]. The UP-DOWN image schema is associated with the
judgement of good and bad, forming the image-schematic metaphor UP IS GOOD – BAD IS DOWN.
Additionally, the UP-DOWN image schema is also linked to quantity (MORE IS UP – LESS IS DOWN) and
emotions (HAPPY IS UP – SAD IS DOWN). Linguistic analyses have identified over 250 metaphorical
extensions [24, 25]. These image-schematic metaphors are universal, are shared by a wide range of
people [
        <xref ref-type="bibr" rid="ref19">19, 25</xref>
        ], and were found to overlap across various languages and cultures [
        <xref ref-type="bibr" rid="ref5 ref8">5, 8, 39, 42</xref>
        ].
Furthermore, they are automatically and intuitively understood [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
      </p>
    </sec>
    <sec id="sec-5">
      <title>2.3. Image Schemas for Design</title>
      <p>
        Image schemas and their accompanying metaphors foster inclusive, intuitive, and innovative
designs. Inclusive design is fostered as image schemas promise to work universally across user
groups with varying levels of technical proficiency and cultural backgrounds, because of their
connection to fundamental multimodal experiences [
        <xref ref-type="bibr" rid="ref19">19, 21</xref>
        ]. Furthermore, metaphor processing should
not be affected by a decline in conscious cognitive abilities of the elderly, because it relies on automatic
and unconscious memory recall [21, 24, 26]. This makes image schemas universally applicable across
age groups [24]. Their multimodal nature enables also more inclusive design for people with
sensorimotor deficiencies [21, 26].
      </p>
      <p>
        Image schemas promise to support the intuitive use of interfaces due to their relation to fundamental
human mental models and their subconscious application [23]. When designs are informed by image
schemas and their metaphoric extensions, they reflect the user’s mental models [38]. Furthermore,
image schemas and metaphors are readily available for human information processing due to their
frequent and continual repetition [
        <xref ref-type="bibr" rid="ref16">16, 27</xref>
        ].
      </p>
      <p>
        Additionally, image schemas can help to identify essential aspects in design while keeping the
concrete instantiation open [26]. Image schemas do not propose a specific design solution; instead, they
leave room for the designer to decide the implementation and create innovative solutions that go beyond
current standards [
        <xref ref-type="bibr" rid="ref19">19, 21</xref>
        ], therefore fostering more innovative designs.
      </p>
      <p>
        Image schemas and their accompanying metaphors were successfully used to provide inspiration
and to generate novel design ideas [
        <xref ref-type="bibr" rid="ref18 ref19">18, 19, 23, 28, 38, 39, 45</xref>
        ]. They can also structure the design process
[45] and be used to describe affordances and design solutions [
        <xref ref-type="bibr" rid="ref19">19, 23, 27</xref>
        ]. Additionally, they can
support deeper thought about design decisions [39] and help to justify them [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
      </p>
      <p>
        However, it needs to be considered that using image schemas and metaphors in the design process
requires extra effort [
        <xref ref-type="bibr" rid="ref19">19, 38, 47</xref>
        ]. To address this, utilising established image schema lists is most
promising [21, 24, 47]. Such a list is provided by the Image Schema Catalogue (ISCAT) [26], but this
database does not serve as a design tool, as it lacks easy accessibility and intuitive use due to its large
volume and complex structure.
      </p>
    </sec>
    <sec id="sec-6">
      <title>2.4. Visual Representations of Image Schemas</title>
      <p>
        In cognitive linguistics illustrations were used to explain image schemas by highlighting their salient
characteristics [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Johnson suggested using diagrams to intuitively demonstrate how image schemas
operate preconceptually and developed a notational system [32]. Talmy [44] depicted FORCE image
schemas with a system consisting of an Agonist and an Antagonist. Mandler [41] created a series of
pictorial representations to depict nonverbal concepts rather than exact interpretations. In
Human-Computer Interaction, Wilkie et al. [46] proposed visual representations of image schemas. Besold et al.
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], Hedblom et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], and Hedblom [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], provided sequences of visualisations to show a process.
Hedblom and Neuhaus [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] later proposed a Diagrammatic Image Schema Language, a holistic system
to visually represent image schemas. This language provides organised and systematic representations
of abstract concepts. Furthermore, Hedblom [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] and Hedblom and Kutz [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] examined the relationship
between everyday objects and image schemas, using illustrations and names of image schemas. In this
work the authors stated the challenge of creating visuals that capture all characteristics of an image
schema.
      </p>
    </sec>
    <sec id="sec-7">
      <title>2.5. Image Schema Representations to Support Design</title>
      <p>
        Previous approaches applying image schemas to the design process required too much time and
effort [
        <xref ref-type="bibr" rid="ref19">19, 38, 47</xref>
        ]. In contrast, Hurtienne et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] proposed visual as well as tangible representations
of FORCE image schemas. The characteristics of FORCE image schemas informed icons, while the notion
that a tangible representation might convey FORCE image schemas more effectively encouraged the
design of tangible representations. Image Schemas were instantiated as interactive physical rotatory
dials.
      </p>
      <p>
        Both sets were tested for their effectiveness in identifying and distinguishing the represented image
schemas as well as their usefulness in the brainstorming process. The icons were identified correctly
more frequently than the tangible representations. Additionally, the visual representations were
reported to foster the generation of more ideas, and in this condition participants rated FORCE
image schemas as more crucial and beneficial for design. Design ideas created using tangible
representations were perceived as being of higher quality: ideas were considered more interactive, haptic,
and visual [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
      <p>
        In previous work [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] we used an iterative Research-oriented Design process [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] to develop icons
(called Image Schema Icons) and clay objects (called Image Schema Objects) that represent image
schemas. We propose the use of tangible representations to facilitate data physicalisation design, as
these representations are more similar to the desired design outcome, which represents abstract data
through shape or material properties [30]. Designers no longer need to handle descriptions and textual
definitions of image schemas. The representations make image schemas easier to examine, contrast and
compare, to figure out which one works best for the actual design task. Additionally, specific examples
for including image schemas in a data physicalisation are provided by the tangible representations. This
might address the identified challenges of extra time and effort when using image schemas in the design
process. The process of designing the image schema representations provided promising feedback and
the tangible representations were already tested in a workshop setting [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] but a comprehensive
evaluation of their effectiveness is required. Before testing image schema representations in the design
process, it is necessary to assess their comprehensibility and intuitive use and to choose one of the
instantiation types. Therefore, in this work we investigate the research question of whether Image Schema
Icons or Image Schema Objects depict image schemas in a more intuitive and comprehensible manner.
Additionally, we explore user preferences.
      </p>
      <p>
        Tangible representations may be appropriate for image schemas, because they are able to represent
the multimodality of experiences incorporated in image schemas [
        <xref ref-type="bibr" rid="ref17 ref18">17, 18, 27, 28</xref>
        ]. Hurtienne et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]
assessed visual and tangible representations of FORCE image schemas and found tangible
representations to encourage the formation of more interactive, visual, and haptic ideas, while visual
instantiations were more precisely identified and fostered a greater quantity of ideas. It needs to be
considered that FORCE image schemas are a special subset of image schemas. Because of their
temporary, abstract, and dynamic nature, they can be hard to recognise and categorise [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. This work
focuses on different image schemas. When creating icons and clay objects to represent image schemas
we identified some image schemas being easier to recognise and represent in visual form, other image
schemas in tangible form [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Therefore, Hurtienne et al.’s [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] findings might not be generalisable for
all image schemas. In some cases, the tangible representation may be identified equally well or even better.
It is necessary to evaluate the intuitive use and comprehensibility of different representation
modalities. Our explorative hypothesis is that visual and tangible representations differ in terms of
intuitive use, comprehensibility, and accurate matches of representations to image-schematic
metaphors, as well as in preference ratings.
      </p>
    </sec>
    <sec id="sec-8">
      <title>3. Method</title>
      <p>To evaluate the intuitive use and the comprehensibility of the image schema representations we
conducted a within-subject design study. Randomly assigned to groups, participants of group one began
with visual representations and continued with the objects, while group two followed the reverse order.
This setup was intended to counterbalance carry-over effects. Participants matched image-schematic metaphors
to the presented Image Schema Icons or Image Schema Objects and rated Intuitive Use and
Comprehensibility in questionnaires. In the end, they were asked for their preference. Interaction with
the representations was observed and correct matches were counted.
</p>
    </sec>
    <sec id="sec-9">
      <title>3.1. Participants</title>
      <p>
        Recruited from the university’s participant pool, participants received 0.5 credit points as
compensation. No exclusion criteria were applied, as image schemas are claimed to be universal across
cultural backgrounds and ages [
        <xref ref-type="bibr" rid="ref18">18, 20, 21, 23, 28, 38, 39, 45</xref>
        ]. The study was conducted in German, but
to avoid altering their meaning through translation, we presented the image-schematic metaphors in
their original language (English). To avoid confusion, we provided a list of English-German translations
for the terms used. Additionally, participants were asked about their English proficiency level and prior
experience with image schemas.
      </p>
      <p>A total of fifty participants (n = 50), with an average age of 21.22 years (Standard Deviation (SD) =
1.36) participated. None of them had any prior experience with image schemas. Ten participants (20
%) rated their English at C1 level, 29 (58 %) at B2 level, eight (16 %) at B1 level, three participants (6
%) at A2 level, and no participant rated their English at A1 level. In the following, the participants are
identified as P4 to P54; P1 to P4 were not included in the data analysis but took part in pilot testing to
improve the research design.
</p>
    </sec>
    <sec id="sec-10">
      <title>3.2. Procedure</title>
      <p>The study lasted approximately 30 minutes. After a welcome and obtaining informed consent,
a demographic questionnaire was completed and participants were given written instructions
(Supplementary Material 1). Participants were asked to read statements (image-schematic metaphors)
presented on A5 printouts and to select the icon (or icon pair) or object (or object pair) best fitting the
metaphor. Fourteen image-schematic metaphors were presented in total. After completing the task,
participants filled in questionnaires (Supplementary Material 2) to rate the intuitive use and
comprehensibility of the stimuli. This procedure was repeated with 14 new metaphors and the other
representation modality. Participants who first used objects, now used icons, and vice versa. Intuitive
use and comprehensibility were rated again using the same questionnaires. Additionally, participants
were asked which stimuli they preferred and why. During the matching task, the researchers observed
whether the participants interacted with the objects physically and recorded correct matches.
</p>
    </sec>
    <sec id="sec-11">
      <title>3.3. Material and Setup</title>
      <p>
        In a previous phase of this project, we selected a subset of image schemas to be represented in visual
and tangible form, based on their potential to support data physicalisation design. This decision was
informed by analyses of existing data physicalisations regarding incorporated image schemas [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and
the potential for improvement through additional image schemas [under review]. Furthermore,
recommendations in the literature as to which image schemas foster the design of tangible user
interfaces [25, 28] informed our selection. For more details regarding our selection criteria for image
schemas see [under review].
      </p>
      <p>
        A6 cards were used to display the Image Schema Icons (Figure 1), while the objects (Figure 2) had
already been crafted in a Research-oriented Design process [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. For a detailed description of the design
process of the Image Schema Icons and Image Schema Objects see [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>For the matching task, well-established image-schematic metaphors were selected based on high
confirmation rates [24, 25, 29, 40] or their well-documented linguistic findings. For image schemas
where this was not possible, a metaphor was chosen from the ISCAT database [26]. The metaphors,
accompanied by selection criteria and alternative image schemas, are provided as Supplementary
Material 3. For each metaphor, we presented the correct image schema representation and two incorrect
options. To be able to show all three choices simultaneously and to avoid presenting the different
choices for different durations, we used a cardboard cover while arranging the stimuli. We varied the
position of the correct choice for each metaphor. The study setup is depicted in Figure 3.
Representations that are easily confused, such as HARD-SOFT, SMOOTH-ROUGH, or STRAIGHT-CROOKED,
or those with similar characteristics, like STRONG-WEAK and HEAVY-LIGHT, were presented together.
OBJECT, LINKAGE, and PAINFUL, each consisting of only one term, were presented as alternatives to
avoid the lack of a bi-dimensional structure being used as an exclusion criterion. To ensure clarity for the
researcher who conducted the data collection, the metaphors were presented in the same order for each
participant.</p>
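As an illustrative sketch only (not the study's actual software; the metaphor and option names are placeholders), arranging one correct representation with two distractors while varying the position of the correct choice could look like this:

```python
# Sketch of arranging three answer options per metaphor, varying the
# position of the correct choice across trials (names are placeholders).
import random

random.seed(42)  # fixed seed so the arrangement is reproducible


def arrange_trial(correct, distractors):
    """Return the three options in random order plus the correct slot index."""
    options = [correct] + list(distractors)
    random.shuffle(options)
    return options, options.index(correct)


options, correct_pos = arrange_trial(
    "UP-DOWN", ["CONTENT-CONTAINER", "STRONG-WEAK"]
)
print(options, "correct at position", correct_pos)
```

In the study itself the correct position was varied manually per metaphor; the sketch merely shows the counterbalancing idea.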
    </sec>
    <sec id="sec-12">
      <title>3.4. Collection and Analysis of Data</title>
      <p>
        The representations’ intuitive use and comprehensibility were evaluated using the Modular
Extension of the User Experience Questionnaire (UEQ+) [43]. The 7-point subscale intuitive use
measures the ease of use with the items difficult-easy, illogical-logical, not plausible-plausible, and
inconclusive-conclusive. Comprehensibility is measured with the items complicated-simple,
unambiguous-ambiguous, inaccurate-accurate, and enigmatic-explainable. The UEQ+ is a
well-established questionnaire, frequently used to evaluate products’ user experience, and therefore it was
deemed appropriate for assessing the experience with prototypical design tools. To evaluate
how well the presented image schemas can be identified, we recorded correct matches of
image-schematic metaphors to image schema representations. To determine whether the choice was solely
informed by the visual appearance of the objects, we observed whether participants physically
interacted with the tangible image schema representations. Furthermore, participants indicated their
preference for icons or clay objects. Data was collected using LimeSurvey [37] and analysed using the
statistics software JASP [31], which was also used to provide values for Mean (M) and Standard
Deviation (SD). The qualitative data was analysed by creating an Affinity Diagram, loosely applying
the Contextual Design Approach [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] for data evaluation. From the participants’ answers we created
Affinity Notes and organised them into groups based on inductive reasoning.
      </p>
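As a minimal illustration of how such questionnaire scores are derived, assuming each UEQ+ scale score is the mean of its four 7-point items (the ratings below are made-up example values, not study data):

```python
# Sketch of scoring the two UEQ+ scales used in the study.
# Assumption: each scale score is the mean of its four 7-point items.
# Item labels are taken from the text; the ratings are invented examples.
from statistics import mean

intuitive_use_items = {
    "difficult-easy": 6,
    "illogical-logical": 7,
    "not plausible-plausible": 6,
    "inconclusive-conclusive": 5,
}
comprehensibility_items = {
    "complicated-simple": 5,
    "unambiguous-ambiguous": 6,
    "inaccurate-accurate": 6,
    "enigmatic-explainable": 5,
}

intuitive_use = mean(intuitive_use_items.values())          # 24 / 4 = 6.0
comprehensibility = mean(comprehensibility_items.values())  # 22 / 4 = 5.5
print(intuitive_use, comprehensibility)
```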
    </sec>
    <sec id="sec-13">
      <title>4. Results</title>
      <p>To compare visual and tangible representations, we conducted dependent (paired-samples) t-tests. We chose this test,
as it is an often-used and reliable test for within-subject study designs. No outliers were excluded and no data
values were missing. Although the data were not normally distributed, we proceeded with the
analysis, because with a sample size larger than 30 (n = 50) the t-test is robust against violations of
the normality assumption. The significance level α describes the maximum probability that a null
hypothesis (no difference) is incorrectly rejected; it was set at α = .05.</p>
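The dependent t-test used here can be sketched from first principles as follows; the ratings are hypothetical illustration values, not the study data:

```python
# Dependent (paired-samples) t-test, computed from scratch.
# The ratings below are invented per-participant values for illustration only.
import math

icons = [6, 7, 6, 5, 7, 6, 6, 7, 5, 6]    # hypothetical ratings, modality A
objects = [5, 6, 6, 4, 7, 5, 6, 6, 5, 5]  # same participants, modality B

diffs = [a - b for a, b in zip(icons, objects)]  # within-person differences
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))

t = mean_d / (sd_d / math.sqrt(n))  # test statistic with df = n - 1
cohens_d = mean_d / sd_d            # effect size for paired data

print(f"t({n - 1}) = {t:.3f}, Cohen's d = {cohens_d:.3f}")
```

The test operates on the per-participant differences, which is why it suits a within-subject design: each participant serves as their own control.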
      <p>In terms of intuitive use, the icons (M = 6.11, SD = 0.67) and objects (M = 5.71, SD = 0.89) showed a
significant difference (t(49) = 3.239, p = .002, d = .162). Here, t describes the test statistic from which
the p-value is derived; p indicates significance; d describes Cohen’s d, the effect size, which can
be used to compare the results with studies measuring the same dependent variable. The rating of
comprehensibility showed no significant difference (t(49) = .509, p = .613, d = .072) between icons (M
= 5.56, SD = 0.94) and objects (M = 5.49, SD = 1.01). Counting the number of correct matches showed
that the correct icon was selected 630 times (90.00 %), but the correct object only 571 times (81.57 %).
This is a significant difference, t(699) = 4.982, p &lt; .001, d = .188. However,
Image Schema Icons and Image Schema Objects both showed a high number of correct matches. The
visual representations of STRONG-WEAK and CONTENT-CONTAINER, as well as the tangible
representations of HEAVY-LIGHT and STRONG-WEAK showed the lowest number of correct matches.
Figure 4 shows the correct matches per image schema. The full data is provided as Supplementary
Material 4.</p>
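The reported match rates follow directly from the study design: 50 participants each matched 14 metaphors per modality, giving 700 trials per condition:

```python
# Verify the reported correct-match percentages
# (50 participants x 14 metaphors = 700 trials per modality).
participants, metaphors = 50, 14
trials = participants * metaphors  # 700

icon_matches, object_matches = 630, 571
icon_pct = 100 * icon_matches / trials      # 90.00 %
object_pct = 100 * object_matches / trials  # 81.57 %
print(f"icons: {icon_pct:.2f} %, objects: {object_pct:.2f} %")
```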
      <p>Sixteen participants (32.00 %) preferred icons, while 34 participants (68.00 %) preferred objects.
The participants stated that the icons are more intuitive (P8, P25, P34, P46, P49, P52) and less difficult
to match (P16, P20, P25, P33, P37, P45, P52). Some appreciated the icons for their details (P35, P45),
others the room for interpretation they provide (P19, P40). However, the majority preferred the Image
Schema Objects, which were experienced as easier to comprehend (P12, P21, P23, P26, P28, P32, P38,
P42, P43, P48) and better suited for matching metaphors due to their three-dimensional shape (P10,
P14, P24, P32, P48, P54). Participants stated the objects show a higher aesthetic quality (P9, P18, P45,
P50, P51). Furthermore, they highlighted the objects as being more graspable (P9, P12, P22, P29, P38,
P39, P54) and liked the opportunity of touching and interacting with them (P4, P27, P29, P31, P32,
P36, P41, P42, P51).</p>
      <p>Our observation revealed that most participants made their choice and expressed their preference
solely based on the visual appearance of the stimuli. Only 14 participants (28.00 %) showed physical
interaction. Of 48 interactions (excluding 10 interactions with the wrong objects), 41 interactions (85.00
%) resulted in a correct match. The objects most frequently interacted with were HEAVY-LIGHT and
STRONG-WEAK, followed by CONTENT-CONTAINER. In other words, the objects with the fewest correct matches
were interacted with most frequently. Figure 5 shows the interactions per Image Schema Object.</p>
    </sec>
    <sec id="sec-14">
      <title>5. Discussion</title>
      <p>
        Utilising image schemas has been shown to foster more inclusive, intuitive, and innovative designs and to
aid the design process. However, the use of image schemas demands extra effort and time. Currently
available image schema repositories do not provide an easily applicable design tool. To address this
issue, we developed visual and tangible representations to make image schemas accessible and
incorporable in the data physicalisation design process. In this study we evaluated these representations
to determine if they convey image schemas in an intuitive and comprehensive way and which modality
of representation works best. Participants matched image-schematic metaphors to visual or tangible
image schema representations and rated intuitive use, comprehensibility, and their preference for one
representation modality (visual or tangible). The study utilised questionnaires, recorded correct
matches, and observed interactions with the tangible representations. In previous research [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], visual
representations of FORCE image schemas were more often identified correctly. However, due to the
special character of FORCE image schemas, these findings may not be generalisable for all image
schemas. Investigating different representation modalities of other image schemas in an explorative
way, this work provides evidence for the Image Schema Icons being more intuitive and resulting in
more correct matches than the Image Schema Objects. However, participants preferred the tangible
representations more often.
      </p>
      <p>
        Previous research has already highlighted that the way image schemas are instantiated is important
for their comprehensibility [25, 29, 40]. Consistent with previous work, which demonstrated that
visual representations were more accurately identified [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], this study also found that the Image Schema
Icons resulted in more correct matches and were perceived as more intuitive, both qualitatively and
quantitatively. They were also rated as more comprehensible, although this difference was not significant.
Previous findings were thus confirmed and shown to apply to other image schemas as well. However, it should
be noted that participants interacted only little with the tangible instantiations, so their ratings
were primarily based on the objects’ visual appearance rather than on a tangible experience. One
reason could be that, from childhood on, people are trained in educational as well as exhibition settings not
to touch physical artifacts. The tangible characteristics of the Image Schema Objects may therefore not have been
experienced, the objects did not realise their full potential, and they might have influenced the
participants’ ratings only to a small extent.
      </p>
      <p>
        Both visual and tangible representations achieved high numbers of correct matches. Only
HEAVY-LIGHT and STRONG-WEAK showed a major difference between conditions: for both image schemas, the
correct matches for the tangible representations were much lower than for the visual representations. Most
Image Schema Objects resemble the corresponding Image Schema Icons in visual appearance, but this is not the case for
HEAVY-LIGHT and STRONG-WEAK. The design process [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] showed that finding appropriate visual and tangible
representations for HEAVY-LIGHT and STRONG-WEAK was difficult and that participants struggled to
recognise them. The final tangible representations require tangible interaction and exploration to fully
convey the image schemas’ characteristics and to be identified correctly. In fact, the tangible
representations of these image schemas showed the highest interaction. However, in total, only a
minority of participants interacted with the objects. For most participants, the tangible qualities of the
HEAVY-LIGHT and STRONG-WEAK representations therefore remained concealed, which impeded a correct
choice.
      </p>
      <p>In the qualitative data, the icons were stated to be more intuitive to understand (P8, P25, P34, P46, P49,
P52) and easier to match to metaphors (P16, P20, P25, P33, P37, P45, P52). Although the visual
representations led to more correct matches and were rated as more intuitive, participants more often
preferred the Image Schema Objects. Even though they showed only a limited number of tangible interactions,
they stated that they appreciated the opportunity to touch and interact with the objects (P4, P27, P29,
P31, P32, P36, P41, P42, P51). Furthermore, the tangible representations were preferred because of
their three-dimensionality, which supports matching to metaphors (P10, P14, P24, P32, P48, P54), and
they were experienced as more graspable (P9, P12, P22, P29, P38, P39, P54). Additionally, some
participants stated that the objects were easy to understand (P12, P21, P23, P26, P28, P32, P38, P42, P43, P48),
while others found them aesthetically pleasing (P9, P18, P45, P51).</p>
    </sec>
    <sec id="sec-15">
      <title>5.1. Limitations</title>
      <p>Participants may have recognised similar visual appearances of Image Schema Objects and Image
Schema Icons across conditions, which could have caused a learning effect. However, the
crossover design was implemented to prevent this from confounding the results.</p>
      <p>Another potential limitation of this work is that for STRAIGHT-CROOKED the same metaphor was
used in both conditions. However, as this was one of 14 metaphors presented per condition, it is unlikely
that participants noticed this and referred to their choice made in the previous condition.</p>
      <p>A more crucial aspect is participants’ English proficiency. The majority rated their English level
as higher than A1, and only one participant used the provided translation sheet. However, some participants
appeared confused or uncertain about the meaning of some metaphors. It is possible that they
felt embarrassed to admit a lack of English proficiency and therefore did not use the translation sheet.
This may have led to misunderstandings of the image-schematic metaphors and affected the accuracy
of the matches and ratings.</p>
      <p>Furthermore, instructing participants to make intuitive decisions may have influenced their choices.
Some participants stated in retrospect that, had they invested more time, they would have chosen
different icons or objects. The instructions aimed to encourage intuitive decision-making and to prevent
participants from overthinking their choices. This raises the question of whether a more deliberate
decision would increase or decrease the number of correct matches. The instructions also kept
participants from taking the time to explore and interact with the objects more intensely.
Allowing more time could promote more intense interaction and, with it, a more multimodal
experience of the objects. Both aspects are worth further research.</p>
    </sec>
    <sec id="sec-16">
      <title>6. Conclusion</title>
      <p>Image schemas enhance both the design outcome and the design process. To overcome the additional
effort and time required to use image schemas in design, a more accessible way to represent and utilise them
is needed. This work compared and evaluated visual and tangible representations of image schemas to
determine which modality conveys image schemas best. To this end, an empirical study was conducted
in which participants matched image-schematic metaphors to visual and tangible representations, rated
intuitive use and comprehensibility, and indicated their preference. The Image Schema Icons received
higher ratings for intuitive use and a higher number of correct matches. The Image Schema Objects also
achieved high numbers of correct matches and were preferred more often due to the opportunity they offer for
physical interaction.</p>
    </sec>
    <sec id="sec-17">
      <title>6.1. Outlook</title>
      <p>As a next step, we will evaluate the effectiveness of the image schema representations for designing
data physicalisations. Further work could explore the transferability of the Image Schema Icons and Image
Schema Objects and their usefulness for other design tasks, such as tangible interfaces. Previous
research has already highlighted image schemas’ potential for tangible user interface design [25, 28],
which could be further reinforced by our proposed image schema representations.</p>
    </sec>
    <sec id="sec-18">
      <title>7. References</title>
      <p>[20] Hurtienne, J. et al. 2015. Designing with Image Schemas: Resolving the Tension Between Innovation, Inclusion and Intuitive Use. Interacting with Computers. 27, 3 (May 2015), 235–255. DOI:https://doi.org/10.1093/iwc/iwu049.</p>
      <p>[21] Hurtienne, J. 2016. How Cognitive Linguistics Inspires HCI: Image Schemas and Image-Schematic Metaphors. International Journal of Human-Computer Interaction. 33, 1 (Sep. 2016), 1–20. DOI:https://doi.org/10.1080/10447318.2016.1232227.</p>
      <p>[22] Hurtienne, J. et al. 2007. Image schemas: a new language for user interface design? Prospektive Gestaltung von Mensch-Technik-Interaktion. M. Rötting et al., eds. VDI Verlag. 167–172.</p>
      <p>[23] Hurtienne, J. 2011. Image Schemas and Design for Intuitive Use. Technische Universität Berlin.</p>
      <p>[24] Hurtienne, J. et al. 2010. Physical gestures for abstract concepts: Inclusive design with primary metaphors. Interacting with Computers. 22, 6 (Nov. 2010), 475–484. DOI:https://doi.org/10.1016/j.intcom.2010.08.009.</p>
      <p>[25] Hurtienne, J. et al. 2009. Sad is heavy and happy is light: population stereotypes of tangible object attributes. TEI ’09: Proceedings of the 3rd International Conference on Tangible and Embedded Interaction (Cambridge, United Kingdom, Feb. 2009), 61–68.</p>
      <p>[26] Hurtienne, J. et al. 2022. Supporting User Interface Design with Image Schemas: The ISCAT Database as a Research Tool. Proceedings of the Sixth Image Schema Day (Jönköping, Sweden, Mar. 2022).</p>
      <p>[27] Hurtienne, J. and Blessing, L. 2007. Design for intuitive use – Testing Image Schema Theory for User Interface Design. DS 42: Proceedings of ICED 2007, the 16th International Conference on Engineering Design (Paris, France, Jul. 2007), 829–830.</p>
      <p>[28] Hurtienne, J. and Israel, J.H. 2007. Image schemas and their metaphorical extensions: intuitive patterns for tangible interaction. TEI ’07: Proceedings of the 1st International Conference on Tangible and Embedded Interaction (Baton Rouge, Louisiana, Feb. 2007), 127–134.</p>
      <p>[29] Hurtienne, J. and Meschke, O. 2016. Soft Pillows and the Near and Dear: Physical-to-Abstract Mappings with Image-Schematic Metaphors. TEI ’16: Proceedings of the Tenth International Conference on Tangible, Embedded, and Embodied Interaction (Eindhoven, Netherlands, Feb. 2016), 324–331.</p>
      <p>[30] Jansen, Y. et al. 2015. Opportunities and Challenges for Data Physicalization. CHI ’15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (Seoul, Republic of Korea, Apr. 2015), 3227–3236.</p>
      <p>[31] JASP Team 2024. JASP.</p>
      <p>[32] Johnson, M. 1987. The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason. University of Chicago Press.</p>
      <p>[33] Johnson, M. 2005. The philosophical significance of image schemas. From Perception to Meaning: Image Schemas in Cognitive Linguistics. B. Hampe, ed. De Gruyter Mouton. 15–34.</p>
      <p>[34] Lakoff, G. 1987. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. University of Chicago Press.</p>
      <p>[35] Lakoff, G. and Johnson, M. 1980. Metaphors We Live By. University of Chicago Press.</p>
      <p>[36] Lakoff, G. and Johnson, M. 1999. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. Basic Books.</p>
      <p>[37] LimeSurvey GmbH 2023. LimeSurvey: An Open Source Survey Tool.</p>
      <p>[38] Löffler, D. et al. 2013. Developing Intuitive User Interfaces by Integrating Users’ Mental Models into Requirements Engineering. BCS-HCI ’13: Proceedings of the 27th International BCS Human Computer Interaction Conference (London, UK, Sep. 2013), 1–10.</p>
      <p>[39] Löffler, D. et al. 2014. ‘Mixing Languages’: image schema inspired designs for rural Africa. CHI EA ’14: CHI ’14 Extended Abstracts on Human Factors in Computing Systems (Toronto, Ontario, Canada, May 2014), 1999–2004.</p>
      <p>[40] Macaranas, A. et al. 2012. Bridging the gap: attribute and spatial metaphors for tangible interface design. TEI ’12: Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction (Kingston, Ontario, Canada, Feb. 2012), 161–168.</p>
      <p>[41] Mandler, J.M. 1992. How to build a baby: II. Conceptual primitives. Psychological Review. 99, 4 (Nov. 1992), 587–604. DOI:https://doi.org/10.1037/0033-295X.99.4.587.</p>
      <p>[42] Neumann, C. 2001. Is Metaphor Universal? Cross-Language Evidence From German and Japanese. Metaphor and Symbol. 16, (Apr. 2001), 123–142. DOI:https://doi.org/10.1207/S15327868MS1601&amp;2_9.</p>
      <p>[43] Schrepp, M. and Thomaschewski, J. 2019. Eine modulare Erweiterung des User Experience Questionnaire. Usability Professionals (UP19) (2019), 148–156.</p>
      <p>[44] Talmy, L. 1988. Force Dynamics in Language and Cognition. Cognitive Science. 12, 1 (Jan. 1988), 49–100. DOI:https://doi.org/10.1207/s15516709cog1201_2.</p>
      <p>[45] Tscharn, R. 2017. Design of Age-Inclusive Tangible User Interfaces Using Image-Schematic Metaphors. TEI ’17: Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction (Yokohama, Japan, Mar. 2017), 693–696.</p>
      <p>[46] Wilkie, K. et al. 2009. Evaluating Musical Software Using Conceptual Metaphors. BCS-HCI ’09: Proceedings of the 23rd British HCI Group Annual Conference on People and Computers: Celebrating People and Technology (Cambridge, Great Britain, Sep. 2009), 232–237.</p>
      <p>[47] Winkler, A. et al. 2016. Evaluation of an Application Based on Conceptual Metaphors for Social Interaction Between Vehicles. DIS ’16: Proceedings of the 2016 ACM Conference on Designing Interactive Systems (Brisbane, QLD, Australia, Jun. 2016), 1148–1159.</p>
    </sec>
    <sec id="sec-19">
      <title>Supplementary Material</title>
      <p>You will be given a total of 14 numbered sheets of paper. On the back of each sheet there is a short
sentence. You will read each sentence. The experimenter will then show you three icons or pairs of
icons. Choose the icon or pair of icons that you think is in the sentence. Don't think too long, make an
intuitive decision.</p>
      <p>Note: The sentences are in English. If you need a translation list, you can click 'Next' once in the
questionnaire.</p>
      <p>Sie bekommen insgesamt 14 nummerierte Zettel. Auf der Rückseite der nummerierten Zettel steht
jeweils ein kurzer Satz. Sie lesen immer den jeweiligen Satz. Anschließend zeigt Ihnen die
Versuchsleitung drei Objekte bzw. Objekt-Paare. Wählen Sie das Objekt bzw. Objekt-Paar aus, das
Ihrer Meinung nach im Satz enthalten ist. Denken Sie nicht zu lange nach, entscheiden Sie intuitiv aus
dem Bauch heraus.</p>
      <p>Hinweis: Die Sätze sind auf Englisch. Wenn Sie eine Übersetzungsliste brauchen, können Sie im
Fragebogen einmal auf „Weiter“ klicken.</p>
      <p>You will be given a total of 14 numbered sheets of paper. On the back of each sheet there is a short
sentence. You will read each sentence. The experimenter will then show you three objects or pairs of
objects. Choose the object or pair of objects that you think is in the sentence. Don't think too long, make
an intuitive decision.</p>
      <p>Note: The sentences are in English. If you need a translation list, you can click 'Next' once in the
questionnaire.</p>
      <sec id="sec-19-1">
        <title>2 Questionnaires</title>
        <p>Demographic Data
• Welches ist Ihr bisher höchster Bildungsabschluss? / What is your highest educational qualification to date?
• Welches ist Ihr Geschlecht? / What is your gender?</p>
        <sec id="sec-19-1-1">
          <title>Wie alt sind Sie gemessen in Jahren? / How old are you in years?</title>
        </sec>
        <sec id="sec-19-1-2">
          <title>Welches ist Ihre Muttersprache?</title>
          <p>What is your mother tongue?</p>
          <p>Welcher beruflichen oder berufsqualifizierenden Tätigkeit gehen Sie derzeit hauptsächlich nach? / What is your main professional or vocational activity at present?
Haben Sie bereits Vorerfahrung im Themengebiet Image Schemas? / Do you have any previous experience in the field of image schemas?
Welche Erfahrungen im Themengebiet Image Schemas haben Sie? / What experience do you have in the field of image schemas?</p>
        </sec>
        <sec id="sec-19-1-3">
          <title>UEQ+: Intuitive Bedienung</title>
          <p>UEQ+: Intuitive Use
• Die Zuordnung der Icons/Objekte war für mich … / The assignment of the icons/objects was …
o mühevoll – mühelos / difficult – easy
o unlogisch – logisch / illogical – logical
o nicht einleuchtend – einleuchtend / not plausible – plausible
o nicht schlüssig – schlüssig / inconclusive – conclusive</p>
        </sec>
        <sec id="sec-19-1-4">
          <title>UEQ+: Verständnis</title>
          <p>UEQ+: Comprehensibility
• Die Icons/Objekte sind für mich … / The icons/objects are …
o kompliziert – einfach / complicated – simple
o ungenau – genau / inaccurate – accurate
o nicht eindeutig – eindeutig / ambiguous – unambiguous
o rätselhaft – erklärbar / enigmatic – explainable</p>
        </sec>
        <sec id="sec-19-1-5">
          <title>Präferenz</title>
          <p>Preference
• Welche Darstellungsform hat Ihnen insgesamt besser gefallen und warum? / Which form of representation did you like better overall, and why?
• Gibt es zum Schluss noch etwas, das Sie uns mitteilen möchten? (optional) / Finally, is there anything else you would like to tell us? (optional)</p>
        </sec>
      </sec>
      <sec id="sec-19-2">
        <title>3 Image Schematic Metaphors and Selection Criteria</title>
        <p>Presented image schemas, metaphors, selection criteria and presented alternatives for task one
(group one: icons, group two: objects).</p>
        <p>Image schema | Metaphor | Selection criteria | Presented alternatives
1 UP-DOWN | – | – | CENTRE-PERIPHERY, STRAIGHT-CROOKED
2 CONTENT-CONTAINER | THE BODY/MIND/A PERSON IS A CONTAINER FOR THE SELF &gt;&gt; ABILITIES ARE THE CONTENT OF A PERSON-CONTAINER | ISCAT: metaphor which refers to both content and container | LEFT-RIGHT, PART-WHOLE
3 NEAR-FAR | THE PRESENT IS NEAR – THE PAST IS FAR | [29] | HEAVY-LIGHT, CENTRE-PERIPHERY
4 CENTRE-PERIPHERY | IMPORTANCE IS CENTRALITY &gt;&gt; UNIMPORTANT ISSUES ARE GIVEN PERIPHERAL POSITIONS | ISCAT: metaphor which refers to both centre and periphery | PART-WHOLE, LEFT-RIGHT
5 STRONG-WEAK | MUCH IS STRONG – LITTLE IS WEAK &gt;&gt; MORE IS STRONG – LESS IS WEAK | [29] | UP-DOWN, HARD-SOFT
6 PAINFUL | FEAR/BEING AFRAID IS PAIN | ISCAT: only two metaphors in English available | LINKAGE, OBJECT
7 STRAIGHT-CROOKED | MORAL IS STRAIGHT – CORRUPT IS CROOKED | [29] | SMOOTH-ROUGH, HARD-SOFT
8 HARD-SOFT | INTENSIVE IS HARD – SENSITIVE IS SOFT | [29] | SMOOTH-ROUGH, STRAIGHT-CROOKED
9 SMOOTH-ROUGH | POLITE IS SMOOTH – IMPOLITE IS ROUGH | [25] | CONTENT-CONTAINER, STRONG-WEAK
10 LEFT-RIGHT | – | – | NEAR-FAR, STRONG-WEAK
11 LINKAGE | LOVE IS A BOND | – | PAINFUL, OBJECT
12 PART-WHOLE | – | – | NEAR-FAR, HEAVY-LIGHT
13 HEAVY-LIGHT | – | – | UP-DOWN, CONTENT-CONTAINER
14 OBJECT | – | ISCAT: metaphor which only refers to object, not further attributes or context | LINKAGE, PAINFUL</p>
        <p>Presented image schemas, metaphors, selection criteria and presented alternatives for task two
(group one: icons, group two: objects).</p>
        <p>Image schema | Metaphor | Selection criteria | Presented alternatives
1 UP-DOWN | happy is up – sad is down | – | CENTRE-PERIPHERY, STRAIGHT-CROOKED
2 CONTENT-CONTAINER | the mind (consciousness) is a container (for idea objects) | ISCAT: metaphor which refers to both content and container | LEFT-RIGHT, PART-WHOLE
3 NEAR-FAR | emotional is near – unemotional is far | [29] | HEAVY-LIGHT, CENTRE-PERIPHERY
4 CENTRE-PERIPHERY | identity is central | ISCAT: most striking/easy to understand | PART-WHOLE, LEFT-RIGHT
5 STRONG-WEAK | powerful is strong – powerless is weak | [29] | UP-DOWN, HARD-SOFT
6 PAINFUL | disgust/being disgusted is pain | ISCAT: only two metaphors in English available | LINKAGE, OBJECT
7 STRAIGHT-CROOKED | moral is straight – corrupt is crooked | [29] | SMOOTH-ROUGH, HARD-SOFT
8 HARD-SOFT | stressful is hard – relaxing is soft | [29] | SMOOTH-ROUGH, STRAIGHT-CROOKED
9 SMOOTH-ROUGH | boring is smooth – dangerous is rough | [25] | CONTENT-CONTAINER, STRONG-WEAK
10 LEFT-RIGHT | moral is right – immoral is left | ISCAT: metaphor which clearly maps left/right | NEAR-FAR, STRONG-WEAK
11 LINKAGE | social relationships are links | ISCAT: most striking/easy to understand | PAINFUL, OBJECT
12 PART-WHOLE | coherent is whole | ISCAT: metaphor which contains at least the term whole | NEAR-FAR, HEAVY-LIGHT
13 HEAVY-LIGHT | more is heavy – less is light | [25] | UP-DOWN, CONTENT-CONTAINER
14 OBJECT | opportunities are objects | ISCAT: metaphor which only refers to object, not further attributes or context | LINKAGE, PAINFUL</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Baur</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          et al.
          <year>2022</year>
          .
          <article-title>Designing Data Physicalisations with Physical Image Schema Instantiations</article-title>
          .
          <source>Short Paper Proceedings of the 5th European Tangible Interaction Studio</source>
          (Toulouse France, Nov.
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Baur</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          et al.
          <year>2022</year>
          .
          <article-title>Form Follows Mental Models: Finding Instantiations of Image Schemas Using a Design Research Approach</article-title>
          .
          <source>DIS '22: Proceedings of the 2022 ACM Designing Interactive Systems Conference (Virtual Event Australia, Jun</source>
          .
          <year>2022</year>
          ),
          <fpage>586</fpage>
          -
          <lpage>598</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Baur</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          et al.
          <year>2023</year>
          .
          <article-title>Image Schemas as Tool for Exploring the Design Space of Data Physicalisations</article-title>
          .
          <source>Proceedings of The Seventh Image Schema Day (Rhodes Greece, Sep</source>
          .
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Besold</surname>
            ,
            <given-names>T.R.</given-names>
          </string-name>
          et al.
          <year>2017</year>
          .
          <article-title>A narrative in three acts: Using combinations of image schemas to model events</article-title>
          .
          <source>Biologically Inspired Cognitive Architectures</source>
          .
          <volume>19</volume>
          , (
          <year>2017</year>
          ),
          <fpage>10</fpage>
          -
          <lpage>20</lpage>
          . DOI:https://doi.org/10.1016/j.bica.2016.11.001.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Cienki</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>1998</year>
          .
          <article-title>STRAIGHT: An image schema and its metaphorical extensions</article-title>
          . (
          <year>1998</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Dimitra</given-names>
            <surname>Bourou</surname>
          </string-name>
          et al.
          <year>2021</year>
          .
          <article-title>Image Schemas and Conceptual Blending in Diagrammatic Reasoning: The Case of Hasse Diagrams</article-title>
          .
          <source>Diagrammatic Representation and Inference. Amrita Basu</source>
          et al., eds. Springer.
          <fpage>297</fpage>
          -
          <lpage>314</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Fallman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2007</year>
          .
          <article-title>Why Research-Oriented Design Isn't Design-Oriented Research: On the Tensions Between Design and Research in an Implicit Design Discipline</article-title>
          .
          <source>Knowledge, Technology &amp; Policy. 20</source>
          ,
          <issue>3</issue>
          (Oct.
          <year>2007</year>
          ),
          <fpage>193</fpage>
          -
          <lpage>200</lpage>
          . DOI:https://doi.org/10.1007/s12130-007-9022-8.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Forceville</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2006</year>
          .
          <article-title>Metaphor in Culture: Universality and Variation, Zoltán Kövecses</article-title>
          . Cambridge University Press, Cambridge (
          <year>2005</year>
          ), 314 pp.,
          ISBN 0 521 84447 9 (hardback).
          <source>Journal of Pragmatics</source>
          .
          <volume>38</volume>
          ,
          (Sep.
          <year>2006</year>
          ),
          <fpage>1528</fpage>
          -
          <lpage>1531</lpage>
          . DOI:https://doi.org/10.1016/j.pragma.2006.03.003.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Gibbs</surname>
            ,
            <given-names>R.W.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Colston</surname>
            ,
            <given-names>H.L.</given-names>
          </string-name>
          <year>1995</year>
          .
          <article-title>The cognitive psychological reality of image schemas and their transformations</article-title>
          .
          <source>Cognitive Linguistics. 6</source>
          ,
          <issue>4</issue>
          (
          <year>1995</year>
          ),
          <fpage>347</fpage>
          -
          <lpage>378</lpage>
          . DOI:https://doi.org/10.1515/cogl.1995.6.4.347.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Grady</surname>
            ,
            <given-names>J.E.</given-names>
          </string-name>
          <year>1997</year>
          .
          <article-title>Foundations of meaning: Primary metaphors and primary scenes</article-title>
          . Department of Linguistics, University of California at Berkeley.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Hedblom</surname>
            ,
            <given-names>M.M.</given-names>
          </string-name>
          et al.
          <year>2017</year>
          .
          <article-title>Between Contact and Support: Introducing a Logic for Image Schemas and Directed Movement</article-title>
          .
          <source>AI*IA 2017 Advances in Artificial Intelligence, Proceedings of the XVIth International Conference of the Italian Association for Artificial Intelligence (Nov</source>
          .
          <year>2017</year>
          ),
          <fpage>256</fpage>
          -
          <lpage>268</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Hedblom</surname>
            ,
            <given-names>M.M.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>Image Schemas and Concept Invention: Cognitive, Logical, and Linguistic Investigations</article-title>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Hedblom</surname>
            ,
            <given-names>M.M.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kutz</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>Conceptual Puzzle Pieces. Modeling and Using Context</article-title>
          .
          <source>CONTEXT 2019 (Cham</source>
          ,
          <year>2019</year>
          ),
          <fpage>98</fpage>
          -
          <lpage>111</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Hedblom</surname>
            ,
            <given-names>M.M.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Neuhaus</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <year>2022</year>
          .
          <article-title>Visualising Image Schemas: A Preliminary Look at the Diagrammatic Image Schema Language (DISL)</article-title>
          .
          <source>Proceedings of the Sixth Image Schema Day (Jönköping Sweden, Mar</source>
          .
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Holtzblatt</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Beyer</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <year>2014</year>
          . Contextual Design: Evolved. Morgan &amp; Claypool Publishers.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Hurtienne</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2009</year>
          .
          <article-title>Cognition in HCI: An Ongoing Story</article-title>
          .
          <source>Human Technology. 5</source>
          ,
          <issue>1</issue>
          (May
          <year>2009</year>
          ),
          <fpage>12</fpage>
          -
          <lpage>28</lpage>
          . DOI:https://doi.org/10.17011/ht/urn.20094141408.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Hurtienne</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          et al.
          <year>2015</year>
          .
          <article-title>Comparing Pictorial and Tangible Notations of Force Image Schemas</article-title>
          .
          <source>TEI '15: Proceedings of the Ninth International Conference on Tangible, Embedded</source>
          , and
          Embodied Interaction (Stanford, California, USA
          , Jan.
          <year>2015</year>
          ),
          <fpage>249</fpage>
          -
          <lpage>256</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Hurtienne</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          et al.
          <year>2008</year>
          .
          <article-title>Cooking up real world business applications combining physicality, digitality, and image schemas</article-title>
          .
          <source>TEI '08: Proceedings of the 2nd international conference on Tangible and embedded interaction (Bonn Germany, Feb</source>
          .
          <year>2008</year>
          ),
          <fpage>239</fpage>
          -
          <lpage>246</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Hurtienne</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          et al.
          <year>2015</year>
          .
          <article-title>Designing with Image Schemas: Resolving the Tension Between Innovation, Inclusion and Intuitive Use</article-title>
          .
          <source>Interacting with Computers</source>
          .
          <volume>27</volume>
          ,
          (Apr.
          <year>2015</year>
          ). DOI:https://doi.org/10.1093/iwc/iwu049.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>