<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Multimodal meets Intuitive? Comparing Visual and Tangible Image Schema Representations</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Cordula</forename><surname>Baur</surname></persName>
							<email>cordula.baur@uni-wuerzburg.de</email>
							<affiliation key="aff0">
								<orgName type="department">Chair of Psychological Ergonomics</orgName>
								<orgName type="institution">Julius-Maximilians-Universität Würzburg</orgName>
								<address>
									<addrLine>Oswald-Külpe-Weg 82</addrLine>
									<postCode>97074</postCode>
									<settlement>Würzburg</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Fredrik</forename><surname>Stamm</surname></persName>
							<email>fredrikstamm@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Chair of Psychological Ergonomics</orgName>
								<orgName type="institution">Julius-Maximilians-Universität Würzburg</orgName>
								<address>
									<addrLine>Oswald-Külpe-Weg 82</addrLine>
									<postCode>97074</postCode>
									<settlement>Würzburg</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Carolin</forename><surname>Wienrich</surname></persName>
							<email>carolin.wienrich@uni-wuerzburg.de</email>
							<affiliation key="aff1">
								<orgName type="department">Human-Technology-Systems</orgName>
								<orgName type="institution">Julius-Maximilians-Universität Würzburg</orgName>
								<address>
									<addrLine>Oswald-Külpe-Weg 82</addrLine>
									<postCode>97074</postCode>
									<settlement>Würzburg</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Jörn</forename><surname>Hurtienne</surname></persName>
							<email>joern.hurtienne@uni-wuerzburg.de</email>
							<affiliation key="aff0">
								<orgName type="department">Chair of Psychological Ergonomics</orgName>
								<orgName type="institution">Julius-Maximilians-Universität Würzburg</orgName>
								<address>
									<addrLine>Oswald-Külpe-Weg 82</addrLine>
									<postCode>97074</postCode>
									<settlement>Würzburg</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff2">
<orgName type="department">Proceedings of The Eighth Image Schema Day (ISD8)</orgName>
								<address>
									<addrLine>November 27-28</addrLine>
									<postCode>2024</postCode>
									<settlement>Bozen-Bolzano</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Multimodal meets Intuitive? Comparing Visual and Tangible Image Schema Representations</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">DB989615B36E1C502EAAA2E86E977A2E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:33+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Image Schemas</term>
					<term>Design</term>
					<term>Design Research</term>
					<term>Evaluation</term>
					<term>Intuitive Use</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Image schemas are abstract representations of recurring multimodal experiences in the world. Together with image-schematic metaphors, which connect image schemas with abstract domains, they support the design process and foster more inclusive, intuitive, and innovative designs. However, using image schemas in the design process requires extra effort, and current image schema repositories do not meet designers' requirements. Alternative forms of representation, such as visualisations or physicalisations of image schemas, can increase their accessibility. This work presents an empirical study that evaluates Image Schema Icons and Image Schema Objects in terms of their intuitive use, comprehensibility, and participants' preference. Correct matches of representations to image-schematic metaphors were recorded, interactions were observed, and the representations were evaluated by questionnaires. The results showed that visual representations are more intuitive and achieved more correct matches, but tangible representations were preferred. These findings direct further investigation and the continued development of image-schema-based design tools.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Image schemas are representations of repeated, multimodal experiences that aid our understanding of the environment <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b35">35,</ref><ref type="bibr" target="#b45">45]</ref>. Image-schematic metaphors emerge when image schemas are connected with subjective experiences or judgements <ref type="bibr" target="#b36">[36]</ref>. These metaphors assist in organising and structuring the comprehension of abstract concepts <ref type="bibr" target="#b10">[10,</ref><ref type="bibr" target="#b17">17,</ref><ref type="bibr" target="#b27">27,</ref><ref type="bibr" target="#b32">32,</ref><ref type="bibr" target="#b33">33,</ref><ref type="bibr" target="#b35">35,</ref><ref type="bibr" target="#b39">39,</ref><ref type="bibr" target="#b40">40]</ref>. In Human-Computer Interaction, image schemas and their metaphors have been used for interface design and have been shown to foster more inclusive, intuitive, and innovative designs <ref type="bibr" target="#b21">[21,</ref><ref type="bibr" target="#b26">26]</ref>. However, utilising image schemas for design demands extra effort and time <ref type="bibr" target="#b19">[19,</ref><ref type="bibr" target="#b38">38,</ref><ref type="bibr" target="#b47">47]</ref>. To tackle this, previous work recommended using existing image schema lists <ref type="bibr" target="#b21">[21,</ref><ref type="bibr" target="#b24">24,</ref><ref type="bibr" target="#b47">47]</ref>. However, current repositories are extensive databases <ref type="bibr" target="#b26">[26]</ref> that lack accessibility and applicability in the design process. Researchers in cognitive linguistics and Human-Computer Interaction have proposed visual representations of image schemas <ref type="bibr">[4, 11-14, 32, 41, 44, 46]</ref> to enhance the understanding of image schema theory. 
Additionally, tangible and visual representations of FORCE image schemas have been suggested to support the design process <ref type="bibr" target="#b17">[17]</ref>. In previous work, we described an iterative Research through Design process to create tangible and visual image schema representations that aim at fostering the design of data physicalisations <ref type="bibr" target="#b1">[2]</ref>. While the feedback during the design process was positive, further evaluation was required. In this paper, we present an empirical evaluation study in which participants matched the representations to image-schematic metaphors, rated intuitive use and comprehensibility, and stated their preference.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Background 2.1. Image Schemas</head><p>Initially rooted in cognitive linguistics <ref type="bibr" target="#b21">[21]</ref>, image schemas were introduced by Johnson <ref type="bibr" target="#b32">[32]</ref> and Lakoff <ref type="bibr" target="#b34">[34]</ref> as "recurring, dynamic pattern[s] of perceptual interactions and motor programs that give coherence and structure to our experience" <ref type="bibr" target="#b32">[32]</ref> (p. xiv). Image schemas link embodied experiences and mental representations <ref type="bibr" target="#b32">[32,</ref><ref type="bibr" target="#b34">34]</ref> to provide structure to human perception and experiences, foster representation in mind, and aid in understanding the world around us <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b9">9,</ref><ref type="bibr" target="#b35">35,</ref><ref type="bibr" target="#b45">45]</ref>. For instance, when a baby's beloved stuffed animal drops to the ground, the baby experiences gravity. Being repeatedly lifted or placed in a pushchair or crib reinforces the experience of up and down movements. The repetition of such experiences leads to the formation of the UP-DOWN image schema. As abstract concepts <ref type="bibr" target="#b21">[21,</ref><ref type="bibr" target="#b23">23,</ref><ref type="bibr" target="#b27">27,</ref><ref type="bibr" target="#b32">32]</ref>, image schemas do not refer to specific objects <ref type="bibr" target="#b21">[21]</ref>. 
Image schemas are multimodal <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b18">18,</ref><ref type="bibr" target="#b21">21,</ref><ref type="bibr" target="#b22">22,</ref><ref type="bibr" target="#b32">32]</ref>, integrating experiences from multiple modalities <ref type="bibr" target="#b17">[17,</ref><ref type="bibr" target="#b18">18,</ref><ref type="bibr" target="#b27">27,</ref><ref type="bibr" target="#b28">28]</ref>, and can be represented visually, haptically, kinaesthetically, or acoustically <ref type="bibr" target="#b17">[17,</ref><ref type="bibr" target="#b27">27,</ref><ref type="bibr" target="#b28">28]</ref>. They are analogue <ref type="bibr" target="#b21">[21,</ref><ref type="bibr" target="#b23">23]</ref> and function subconsciously <ref type="bibr" target="#b21">[21,</ref><ref type="bibr" target="#b23">23,</ref><ref type="bibr" target="#b27">27,</ref><ref type="bibr" target="#b28">28]</ref>, encoding and retrieving information from memory repeatedly <ref type="bibr" target="#b21">[21]</ref>. Additionally, image schemas have proved to be largely culture- and language-independent <ref type="bibr" target="#b39">[39]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Image-schematic Metaphors</head><p>When an abstract concept that lacks sensory-motor experiences is assigned to a particular image schema, an image-schematic metaphor emerges <ref type="bibr" target="#b18">[18,</ref><ref type="bibr" target="#b22">22,</ref><ref type="bibr" target="#b25">25,</ref><ref type="bibr" target="#b45">45]</ref>. This helps to organise and structure the understanding of abstract concepts <ref type="bibr" target="#b10">[10,</ref><ref type="bibr" target="#b17">17,</ref><ref type="bibr" target="#b27">27,</ref><ref type="bibr" target="#b32">32,</ref><ref type="bibr" target="#b33">33,</ref><ref type="bibr" target="#b35">35,</ref><ref type="bibr" target="#b39">39,</ref><ref type="bibr" target="#b40">40]</ref> and supports the transfer of information between different domains <ref type="bibr" target="#b3">[4]</ref>. Projecting image schemas onto various abstract domains enables reasoning about these domains <ref type="bibr" target="#b32">[32]</ref>. The UP-DOWN image schema, for instance, is associated with the judgement of good and bad, forming the image-schematic metaphor UP IS GOOD - BAD IS DOWN. Additionally, the UP-DOWN image schema is linked to quantity (MORE IS UP - LESS IS DOWN) and emotions (HAPPY IS UP - SAD IS DOWN). Linguistic analyses have identified over 250 metaphorical extensions <ref type="bibr" target="#b24">[24,</ref><ref type="bibr" target="#b25">25]</ref>. These image-schematic metaphors are universal, shared by a wide range of people <ref type="bibr" target="#b19">[19,</ref><ref type="bibr" target="#b25">25]</ref>, and have been found to overlap across various languages and cultures <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b39">39,</ref><ref type="bibr" target="#b42">42]</ref>. Furthermore, they are automatically and intuitively understood <ref type="bibr" target="#b18">[18]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Image Schemas for Design</head><p>Image schemas and their accompanying metaphors foster inclusive, intuitive, and innovative designs. Inclusive design is fostered because image schemas, owing to their connection to fundamental multimodal experiences, promise to work universally across user groups with varying levels of technical proficiency and cultural backgrounds <ref type="bibr" target="#b19">[19,</ref><ref type="bibr" target="#b21">21]</ref>. Furthermore, metaphor processing should not be affected by a decline in the conscious cognitive abilities of the elderly, because it relies on automatic and unconscious memory recall <ref type="bibr" target="#b21">[21,</ref><ref type="bibr" target="#b24">24,</ref><ref type="bibr" target="#b26">26]</ref>. This makes image schemas universally applicable across age groups <ref type="bibr" target="#b24">[24]</ref>. Their multimodal nature also enables more inclusive design for people with sensorimotor impairments <ref type="bibr" target="#b21">[21,</ref><ref type="bibr" target="#b26">26]</ref>.</p><p>Image schemas promise to support the intuitive use of interfaces due to their relation to fundamental human mental models and their subconscious application <ref type="bibr" target="#b23">[23]</ref>. When designs are informed by image schemas and their metaphoric extensions, they reflect the user's mental models <ref type="bibr" target="#b38">[38]</ref>. Furthermore, image schemas and metaphors are readily available for human information processing due to their frequent and continual repetition <ref type="bibr" target="#b16">[16,</ref><ref type="bibr" target="#b27">27]</ref>.</p><p>Additionally, image schemas can help to identify essential aspects in design while keeping the concrete instantiation open <ref type="bibr" target="#b26">[26]</ref>. 
Image schemas do not propose a specific design solution; instead, they leave room for the designer to decide on the implementation and create innovative solutions that go beyond current standards <ref type="bibr" target="#b19">[19,</ref><ref type="bibr" target="#b21">21]</ref>, thereby fostering more innovative designs.</p><p>Image schemas and their accompanying metaphors have been successfully used to provide inspiration and to generate novel design ideas <ref type="bibr" target="#b18">[18,</ref><ref type="bibr" target="#b19">19,</ref><ref type="bibr" target="#b23">23,</ref><ref type="bibr" target="#b28">28,</ref><ref type="bibr" target="#b38">38,</ref><ref type="bibr" target="#b39">39,</ref><ref type="bibr" target="#b45">45]</ref>. They can also structure the design process <ref type="bibr" target="#b45">[45]</ref> and be used to describe affordances and design solutions <ref type="bibr" target="#b19">[19,</ref><ref type="bibr" target="#b23">23,</ref><ref type="bibr" target="#b27">27]</ref>. Additionally, they can support deeper thought about design decisions <ref type="bibr" target="#b39">[39]</ref> and help to justify them <ref type="bibr" target="#b19">[19]</ref>.</p><p>However, it needs to be considered that using image schemas and metaphors in the design process requires extra effort <ref type="bibr" target="#b19">[19,</ref><ref type="bibr" target="#b38">38,</ref><ref type="bibr" target="#b47">47]</ref>. To address this, utilising established image schema lists is most promising <ref type="bibr" target="#b21">[21,</ref><ref type="bibr" target="#b24">24,</ref><ref type="bibr" target="#b47">47]</ref>. Such a list is provided by the Image Schema Catalogue (ISCAT) <ref type="bibr" target="#b26">[26]</ref>, but this database does not serve as a design tool, as it lacks easy accessibility and intuitive use due to its large volume and complex structure.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Visual Representations of Image Schemas</head><p>In cognitive linguistics, illustrations have been used to explain image schemas by highlighting their salient characteristics <ref type="bibr" target="#b14">[14]</ref>. Johnson suggested using diagrams to intuitively demonstrate how image schemas operate preconceptually and developed a notational system <ref type="bibr" target="#b32">[32]</ref>. Talmy <ref type="bibr" target="#b44">[44]</ref> depicted FORCE image schemas using a system consisting of an Agonist and an Antagonist. Mandler <ref type="bibr" target="#b41">[41]</ref> created a series of pictorial representations intended to depict nonverbal concepts rather than exact interpretations. In Human-Computer Interaction, Wilkie et al. <ref type="bibr" target="#b46">[46]</ref> proposed visual representations of image schemas. Besold et al. <ref type="bibr" target="#b3">[4]</ref>, Hedblom et al. <ref type="bibr" target="#b11">[11]</ref>, and Hedblom <ref type="bibr" target="#b12">[12]</ref> provided sequences of visualisations to show a process. Hedblom and Neuhaus <ref type="bibr" target="#b14">[14]</ref> later proposed a Diagrammatic Image Schema Language, a holistic system for visually representing image schemas. This language provides organised and systematic representations of abstract concepts. Furthermore, Hedblom <ref type="bibr" target="#b12">[12]</ref> and Hedblom and Kutz <ref type="bibr" target="#b13">[13]</ref> examined the relationship between everyday objects and image schemas, using illustrations and names of image schemas. In this work, the authors noted the challenge of creating visuals that capture all characteristics of an image schema.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.5.">Image Schema Representations to Support Design</head><p>Previous approaches applying image schemas to the design process required too much time and effort <ref type="bibr" target="#b19">[19,</ref><ref type="bibr" target="#b38">38,</ref><ref type="bibr" target="#b47">47]</ref>. In contrast, Hurtienne et al. <ref type="bibr" target="#b17">[17]</ref> proposed visual as well as tangible representations of FORCE image schemas. The characteristics of FORCE image schemas informed the icons, while the notion that a tangible representation might convey FORCE image schemas more effectively encouraged the design of tangible representations. Image schemas were instantiated as interactive physical rotary dials.</p><p>Both sets were tested for their effectiveness in identifying and distinguishing the represented image schemas as well as for their usefulness in the brainstorming process. The icons were correctly identified more frequently than the tangible representations. Additionally, the visual representations were reported to foster the generation of more ideas, and in this condition participants considered FORCE image schemas to be more crucial and beneficial for design. Design ideas created using tangible representations were perceived as being of higher quality: ideas were considered to be more interactive, haptic, and visual <ref type="bibr" target="#b17">[17]</ref>.</p><p>In previous work <ref type="bibr" target="#b1">[2]</ref> we used an iterative Research-oriented Design process <ref type="bibr" target="#b6">[7]</ref> to develop icons (called Image Schema Icons) and clay objects (called Image Schema Objects) that represent image schemas. We propose the use of tangible representations to facilitate data physicalisation design, as these representations are more similar to the desired design outcome, which represents abstract data through shape or material properties <ref type="bibr" target="#b30">[30]</ref>. 
Designers no longer need to handle descriptions and textual definitions of image schemas. The representations make image schemas easier to examine, contrast, and compare in order to figure out which one works best for the design task at hand. Additionally, the tangible representations provide specific examples of how to include image schemas in a data physicalisation. This might address the identified challenges of extra time and effort when using image schemas in the design process. The process of designing the image schema representations yielded promising feedback, and the tangible representations have already been tested in a workshop setting <ref type="bibr" target="#b0">[1]</ref>, but a comprehensive evaluation of their effectiveness is required. Before testing image schema representations in the design process, it is necessary to assess their comprehensibility and intuitive use and to choose one of the instantiation types. Therefore, in this work we investigate the research question of whether Image Schema Icons or Image Schema Objects depict image schemas in a more intuitive and comprehensible manner. Additionally, we explore user preferences.</p><p>Tangible representations may be appropriate for image schemas because they are able to represent the multimodality of experiences incorporated in image schemas <ref type="bibr" target="#b27">[27,</ref><ref type="bibr" target="#b28">28,</ref><ref type="bibr" target="#b18">18,</ref><ref type="bibr" target="#b17">17]</ref>. Hurtienne et al. <ref type="bibr" target="#b17">[17]</ref> assessed visual and tangible representations of FORCE image schemas and found tangible representations to encourage the formation of more interactive, visual, and haptic ideas, while visual instantiations were more precisely identified and fostered a greater quantity of ideas. It needs to be considered that FORCE image schemas are a special subset of image schemas. 
Because of their transient, abstract, and dynamic nature, they can be hard to recognise and categorise <ref type="bibr" target="#b17">[17]</ref>. This work focuses on different image schemas. When creating icons and clay objects to represent image schemas, we found some image schemas to be easier to recognise and represent in visual form and others in tangible form <ref type="bibr" target="#b1">[2]</ref>. Therefore, Hurtienne et al.'s <ref type="bibr" target="#b17">[17]</ref> findings might not generalise to all image schemas. In some cases, the tangible representation may be identified equally well or even better. It is therefore necessary to evaluate the intuitive use and comprehensibility of different representation modalities. Our exploratory hypothesis is that visual and tangible representations differ in terms of intuitive use, comprehensibility, and accurate matches of representations to image-schematic metaphors, as well as in preference ratings.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Method</head><p>To evaluate the intuitive use and the comprehensibility of the image schema representations, we conducted a study with a within-subjects design. Participants were randomly assigned to two groups: group one began with the visual representations and continued with the objects, while group two followed the reverse order. This setup was intended to avoid carry-over effects. Participants matched image-schematic metaphors to the presented Image Schema Icons or Image Schema Objects and rated intuitive use and comprehensibility in questionnaires. At the end, they were asked for their preference. Interaction with the representations was observed and correct matches were counted.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Participants</head><p>Participants were recruited from the university's participant pool and received 0.5 credit points as compensation. No exclusion criteria were applied, as image schemas are claimed to be universal across cultural backgrounds and age <ref type="bibr" target="#b18">[18,</ref><ref type="bibr" target="#b20">20,</ref><ref type="bibr" target="#b21">21,</ref><ref type="bibr" target="#b23">23,</ref><ref type="bibr" target="#b28">28,</ref><ref type="bibr" target="#b38">38,</ref><ref type="bibr" target="#b39">39,</ref><ref type="bibr" target="#b45">45]</ref>. The study was conducted in German, but to avoid altering their meaning through translation, we presented the image-schematic metaphors in their original language (English). To avoid confusion, we provided a list of English-German translations for the terms used. Additionally, participants were asked about their English proficiency level and prior experience with image schemas.</p><p>A total of fifty participants (n = 50), with an average age of 21.22 years (Standard Deviation (SD) = 1.36), took part. None of them had any prior experience with image schemas. Ten participants (20 %) rated their English at C1 level, 29 (58 %) at B2 level, eight (16 %) at B1 level, three (6 %) at A2 level, and none at A1 level. In the following, the participants are identified as P4 to P54; P1 to P4 were not included in the data analysis but took part in pilot testing to improve the research design.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Procedure</head><p>The study lasted approximately 30 minutes. After the welcome and informed consent, participants completed a demographic questionnaire and were given written instructions (Supplementary Material 1). Participants were asked to read statements (image-schematic metaphors) presented on A5 printouts and to select the icon (or icon pair) or object (or object pair) best fitting the metaphor. Fourteen image-schematic metaphors were presented in total. After completing the task, participants filled in questionnaires (Supplementary Material 2) to rate the intuitive use and comprehensibility of the stimuli. This procedure was repeated with 14 new metaphors and the other representation modality: participants who first used objects now used icons, and vice versa. Intuitive use and comprehensibility were rated again using the same questionnaires. Additionally, participants were asked which stimuli they preferred and why. During the matching task, the researchers observed whether the participants interacted with the objects physically and recorded correct matches.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Material and Setup</head><p>In a previous phase of this project, we selected a subset of image schemas to be represented in a visual and a tangible way, with regard to their potential to support data physicalisation design. This decision was informed by analyses of existing data physicalisations regarding the incorporated image schemas <ref type="bibr" target="#b2">[3]</ref> and the potential for improvement through additional image schemas [under review]. Furthermore, recommendations in the literature as to which image schemas foster the design of tangible user interfaces <ref type="bibr" target="#b25">[25,</ref><ref type="bibr" target="#b28">28]</ref> informed our selection. For more details regarding our selection criteria for image schemas see [under review].</p><p>A6 cards were used to display the Image Schema Icons (Figure <ref type="figure" target="#fig_0">1</ref>), while the objects (Figure <ref type="figure" target="#fig_1">2</ref>) had already been crafted in a Research-oriented Design process <ref type="bibr" target="#b6">[7]</ref>. For a detailed description of the design process of the Image Schema Icons and Image Schema Objects see <ref type="bibr" target="#b1">[2]</ref>. For the matching task, well-established image-schematic metaphors were selected based on high confirmation rates <ref type="bibr" target="#b24">[24,</ref><ref type="bibr" target="#b25">25,</ref><ref type="bibr" target="#b29">29,</ref><ref type="bibr" target="#b40">40]</ref> or their well-documented linguistic findings. For image schemas where this was not possible, a metaphor was chosen from the ISCAT database <ref type="bibr" target="#b26">[26]</ref>. The metaphors, together with the selection criteria and alternative image schemas, are provided as Supplementary Material 3. For each metaphor, we presented the correct image schema representation and two incorrect options. 
To show all three choices simultaneously and to avoid presenting the different choices for different durations, we used a cardboard cover while arranging the stimuli. We varied the position of the correct choice for each metaphor. The study setup is depicted in Figure <ref type="figure" target="#fig_2">3</ref>. Representations that are easily confused, such as HARD-SOFT, SMOOTH-ROUGH, or STRAIGHT-CROOKED, or those with similar characteristics, like STRONG-WEAK or HEAVY-LIGHT, were presented together. OBJECT, LINKAGE, and PAINFUL, each consisting of only one term, were presented as alternatives to each other to avoid their lack of a bi-dimensional structure being used as an exclusion criterion. To ensure clarity for the researcher who conducted the data collection, the metaphors were presented in the same order for each participant. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Collection and Analysis of Data</head><p>The representations' intuitive use and comprehensibility were evaluated using the Modular Extension of the User Experience Questionnaire (UEQ+) <ref type="bibr" target="#b43">[43]</ref>. The 7-point subscale intuitive use measures the ease of use with the items difficult-easy, illogical-logical, not plausible-plausible, and inconclusive-conclusive. Comprehensibility is measured with the items complicated-simple, unambiguous-ambiguous, inaccurate-accurate, and enigmatic-explainable. The UEQ+ is a well-established questionnaire that is frequently used to evaluate products' user experience and was therefore deemed appropriate for assessing the experience with prototypical design tools. To evaluate how well the presented image schemas can be identified, we recorded correct matches of image-schematic metaphors to image schema representations. To determine whether the choice was informed solely by the visual appearance of the objects, we observed whether participants physically interacted with the tangible image schema representations. Furthermore, participants indicated their preference for icons or clay objects. Data was collected using LimeSurvey <ref type="bibr" target="#b37">[37]</ref> and analysed using the statistics software JASP <ref type="bibr" target="#b31">[31]</ref>, which was also used to provide values for Mean (M) and Standard Deviation (SD). The qualitative data was analysed by creating an Affinity Diagram, loosely applying the Contextual Design approach <ref type="bibr" target="#b15">[15]</ref> for data evaluation. From the participants' answers we created Affinity Notes and organised them into groups based on inductive reasoning.</p></div>
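As a rough illustration of how such questionnaire subscale scores are typically computed, the following sketch averages the four intuitive use items for one participant. The ratings shown and the 1-7 item coding are assumptions for illustration only; they are not taken from the authors' data or scoring scripts, and the UEQ+ convention of averaging a scale's items is the only part grounded in the questionnaire's general usage.

```python
from statistics import mean

# Hypothetical ratings of the four intuitive use items for one participant,
# coded on a 1-7 scale (assumption made for this sketch).
intuitive_use_items = {
    "difficult-easy": 6,
    "illogical-logical": 7,
    "not plausible-plausible": 6,
    "inconclusive-conclusive": 5,
}

# Per-participant scale score: the mean of the scale's item ratings.
scale_score = mean(intuitive_use_items.values())
```

Averaging per participant first, and then averaging these scale scores across participants, yields the per-condition means (M) reported in the Results section.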
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Results</head><p>To compare visual and tangible representations, we conducted dependent t-tests. We chose this test as it is an often-used and reliable test for within-subjects study designs. No outliers were excluded and no data values were missing. Although the data were not normally distributed, we proceeded with the analysis, because our sample size (n) is larger than 30 and the test is therefore robust against violations of the normality assumption. The significance level α describes the maximum probability that a null hypothesis (no difference) is incorrectly rejected; it was set at α = .05.</p><p>In terms of intuitive use, the icons (M = 6.11, SD = 0.67) and objects (M = 5.71, SD = 0.89) showed a significant difference (t(49) = 3.239, p = .002, d = .162). Here, t describes the t-value, which is used to determine the p-value; p shows the significance; d describes Cohen's d and shows the effect size, which can be used to compare the results with studies measuring the same dependent variable. The rating of comprehensibility showed no significant difference (t(49) = .509, p = .613, d = .072) between icons (M = 5.56, SD = 0.94) and objects (M = 5.49, SD = 1.01). Counting the number of correct matches showed that the correct icons were selected 630 times (90.00 %), whereas the correct objects were selected only 571 times (81.57 %). This is a significant difference (t(699) = 4.982, p &lt; .001, d = .188). However, Image Schema Icons and Image Schema Objects both showed a high number of correct matches. The visual representations of STRONG-WEAK and CONTENT-CONTAINER, as well as the tangible representations of HEAVY-LIGHT and STRONG-WEAK, showed the lowest number of correct matches. Figure <ref type="figure" target="#fig_3">4</ref> shows the correct matches per image schema. The full data is provided as Supplementary Material 4. Sixteen participants (32.00 %) preferred icons, while 34 participants (68.00 %) preferred objects. 
The participants stated that the icons were more intuitive (P8, P25, P34, P46, P49, P52) and less difficult to match (P16, P20, P25, P33, P37, P45, P52). Some appreciated the icons for their details (P35, P45), others for the room for interpretation they provide (P19, P40). However, the majority preferred the Image Schema Objects, which were experienced as easier to comprehend (P12, P21, P23, P26, P28, P32, P38, P42, P43, P48) and better suited for matching metaphors due to their three-dimensional shape (P10, P14, P24, P32, P48, P54). Participants stated that the objects show a higher aesthetic quality (P9, P18, P45, P50, P51). Furthermore, they highlighted the objects as being more graspable (P9, P12, P22, P29, P38, P39, P54) and liked the opportunity to touch and interact with them (P4, P27, P29, P31, P32, P36, P41, P42, P51).</p><p>Our observation revealed that most participants made their choice and expressed their preference solely based on the visual appearance of the stimuli. Only 14 participants (28.00 %) showed physical interaction. Of 48 interactions (excluding 10 interactions with the wrong objects), 41 (85.00 %) resulted in a correct match. The objects interacted with most frequently were HEAVY-LIGHT and STRONG-WEAK, followed by CONTENT-CONTAINER. Notably, the objects that showed the fewest correct matches were the ones interacted with most frequently. Figure <ref type="figure" target="#fig_4">5</ref> shows the interactions per Image Schema Object. </p></div>
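The dependent t-tests reported above can be illustrated with a short sketch. This is not the study's analysis script and the ratings below are invented; it only shows how the paired t-value and one common definition of Cohen's d for paired samples (mean difference divided by the standard deviation of the differences) are computed:

```python
# Hedged sketch of a dependent (paired) t-test on invented within-subjects data.
import math
import statistics

def paired_t_and_d(x, y):
    """Return (t, d) for paired samples x and y.
    t = mean(diff) / (sd(diff) / sqrt(n));  d = mean(diff) / sd(diff)."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    m = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the pairwise differences
    t = m / (sd / math.sqrt(n))
    d = m / sd
    return t, d

# Hypothetical 7-point ratings for icons vs. objects from the same participants
icons = [6, 7, 6, 5, 7, 6, 6, 5, 7, 6]
objects = [5, 6, 6, 5, 6, 5, 6, 4, 6, 6]
t, d = paired_t_and_d(icons, objects)
print(round(t, 2), round(d, 2))
```

The p-value then follows from comparing t against the t-distribution with n − 1 degrees of freedom, which statistics software such as JASP reports directly.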
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Discussion</head><p>Utilising image schemas showed to foster more inclusive, intuitive, and innovative designs and to aid the design process. However, the use of image schemas demands extra effort and time. Currently available image schema repositories do not provide an easily applicable design-tool. To address this issue, we developed visual and tangible representations to make image schemas accessible and incorporable in the data physicalisation design process. In this study we evaluated these representations to determine if they convey image schemas in an intuitive and comprehensive way and which modality of representation works best. Participants matched image-schematic metaphors to visual or tangible image schema representations and rated intuitive use, comprehensiveness, and their preference for one representation modality (visual or tangible). The study utilised questionnaires, recorded correct matches, and observed interactions with the tangible representations. In previous research <ref type="bibr" target="#b17">[17]</ref>, visual representations of FORCE image schemas were more often identified correctly. However, due to the special character of FORCE image schemas, these findings may not be generalisable for all image schemas. Investigating different representation modalities of other image schemas in an explorative way, this work provides evidence for the Image Schema Icons being more intuitive and resulting in more correct matches, than the Image Schema Objects. However, participants preferred the tangible representations more often.</p><p>Previous research already highlighted that the way how image schemas are instantiated is important for their comprehensiveness <ref type="bibr" target="#b25">[25,</ref><ref type="bibr" target="#b29">29,</ref><ref type="bibr" target="#b40">40]</ref>. 
Consistent with previous work, which demonstrated that visual representations were more accurately identified <ref type="bibr" target="#b17">[17]</ref>, this study also found that the Image Schema Icons resulted in more correct matches and were perceived as more intuitive, both qualitatively and quantitatively. They were also rated as more comprehensible, although this difference was not significant. Previous findings were thus confirmed and shown to apply to other image schemas as well. However, it should be noted that participants showed limited interaction with the tangible instantiations, and their ratings were primarily based on the objects' visual appearance rather than a tangible experience. One reason could be that, from childhood onwards, people are trained in educational as well as exhibition settings not to touch physical artefacts. The tangible characteristics of the Image Schema Objects may thus not have been experienced, and the objects did not realise their full potential. Therefore, they might have influenced the participants' ratings only to a small extent.</p><p>Both visual and tangible representations achieved high numbers of correct matches. Only HEAVY-LIGHT and STRONG-WEAK showed a major difference between conditions. For both image schemas, the correct matches of the tangible representations were much lower than those of the visual representations. Most Image Schema Objects resemble the corresponding Image Schema Icons in visual appearance, but this is not the case for HEAVY-LIGHT and STRONG-WEAK. The design process <ref type="bibr" target="#b1">[2]</ref> showed that finding appropriate visual and tangible representations for HEAVY-LIGHT and STRONG-WEAK was difficult and that participants struggled with their recognition. The final tangible representations require tangible interaction and exploration to fully convey the image schemas' characteristics and to be identified correctly. In fact, the tangible representations of these image schemas showed the highest interaction. 
However, in total only a minority of participants interacted with the objects. Therefore, for most participants the tangible representations of HEAVY-LIGHT and STRONG-WEAK remained concealed, which impeded a correct choice.</p><p>In the qualitative data, the icons were stated to be more intuitive to understand (P8, P25, P34, P46, P49, P52) and easier to match to metaphors (P16, P20, P25, P33, P37, P45, P52). Although the visual representations led to more correct matches and were rated as more intuitive, participants more often preferred the Image Schema Objects. Even though they showed only a limited number of tangible interactions, they stated that they appreciated the opportunity to touch and interact with the objects (P4, P27, P29, P31, P32, P36, P41, P42, P51). Furthermore, the tangible representations were preferred because of their three-dimensionality, which supports matching to metaphors (P10, P14, P24, P32, P48, P54), and were experienced as more graspable (P9, P12, P22, P29, P38, P39, P54). Additionally, some participants stated they are easy to understand (P12, P21, P23, P26, P28, P32, P38, P42, P43, P48), while others found the objects aesthetically pleasing (P9, P18, P45, P51).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Limitations</head><p>Participants may have recognised similar visual appearances of Image Schema Objects and Image Schema Icons among conditions, which could have caused a learning effect. However, the crossoverdesign was implemented to prevent this from confounding the results.</p><p>Another potential limitation of this work is that for STRAIGHT-CROOKED the same metaphor was used in both conditions. However, as this was one of 14 metaphors presented per condition, it is unlikely that participants noticed this and referred to their choice made in the previous condition.</p><p>A more crucial aspect is participants' English proficiency. The majority stated their English level higher than A1 and only one participant used the provided translation sheet. However, some participants appeared to be confused or uncertain regarding the meaning of some metaphors. It is possible that they felt embarrassed to admit a lack of English knowledge and therefore didn't use the translation sheet. This may have led to misunderstandings of the image-schematic metaphors and affected the accuracy of the matches and ratings.</p><p>Furthermore, instructing participants to make intuitive decisions may have influenced their choices. Some participants stated in retrospect that if they had invested more time, they would have chosen different icons or objects. The instructions aimed to encourage intuitive decision-making and prevent participants from overthinking their choices. This raises the question of whether a more deliberate decision would increase or decrease the number of correct matches. Furthermore, the instructions prevented participants from taking the time to explore and interact with the objects more intensely. Allowing more time could promote more intense interaction and with this a more multimodal experience of the objects. These aspects, both worth further research.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>Image schemas enhance both, design outcome and the design process. To overcome the additional effort and time for using image schemas in design, a more accessible way to represent and utilise them is required. This work compared and evaluated visual and tangible representations of image schemas to determine which modality conveys image schemas best. Therefore, an empiric study was conducted, where participants matched image-schematic metaphors to visual and tangible representations, rated intuitive use and comprehensibility and indicated their preference. The Image Schema Icons showed higher ratings for intuitive use and a higher number of correct matches. The Image Schema Objects also showed high numbers of correct matches and were preferred more often due to their opportunity for physical interaction.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1.">Outlook</head><p>In the next step, we are going to evaluate image schema representations' effectiveness for designing data physicalisations. Further work could explore the transferability of Image Schema Icons and Image Schema Objects and their usefulness for other design tasks, such as tangible interfaces. Previous research has already highlighted image schemas' potential for tangible user interface design <ref type="bibr" target="#b25">[25,</ref><ref type="bibr" target="#b28">28]</ref> </p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Visual representations of image schemas, printed on A6 cards. From left to right, first row: STRAIGHT-CROOKED, SMOOTH-ROUGH, UP-DOWN, PAINFUL; second row: CONTENT-CONTAINER, OBJECT, HEAVY-LIGHT, HARD-SOFT; third row: CENTRE-PERIPHERY, LEFT-RIGHT, PART-WHOLE, NEAR-FAR; fourth row: LINKAGE, STRONG-WEAK.</figDesc><graphic coords="6,103.25,147.89,388.30,242.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Handcrafted tangible representations of image schemas made of clay. From left to right, last row: STRAIGHT-CROOKED, SMOOTH-ROUGH, UP-DOWN, PAINFUL; second last row: CONTENT-CONTAINER, OBJECT, HEAVY-LIGHT, HARD-SOFT; second front row: CENTRE-PERIPHERY, LEFT-RIGHT, PART-WHOLE, NEAR-FAR; front row: LINKAGE, STRONG-WEAK.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Study setup with cardboard coverage and cardboard area to present stimuli.</figDesc><graphic coords="7,145.85,72.00,302.35,177.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Number of correct matches of visual and tangible representations to image-schematic metaphors.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Number of physical interactions per Image Schema Object.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="5,103.25,491.05,388.30,274.63" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Image Schematic Metaphors and Selection Criteria</head><label></label><figDesc>, which could be further reinforced by our proposed image schema representations. Presented image schemas, metaphors, selection criteria and presented alternatives for task one (group one: icons, group two: objects).Presented image schemas, metaphors, selection criteria and presented alternatives for task two (group one: icons, group two: objects).</figDesc><table><row><cell>Image</cell><cell></cell><cell>Metaphor</cell><cell></cell><cell>Selection criteria</cell><cell>Presented</cell></row><row><cell cols="2">Image schema</cell><cell>Metaphor</cell><cell cols="2">Selection criteria</cell><cell>Presented alternatives</cell></row><row><cell cols="2">schema UP-DOWN</cell><cell>happy is up -sad is</cell><cell></cell><cell>[29]</cell><cell>alternatives CENTRE-</cell></row><row><cell>UP-DOWN</cell><cell></cell><cell>POWERFUL IS UP -down</cell><cell>[29]</cell><cell>CENTRE-PERIPHERY PERIPHERY</cell></row><row><cell></cell><cell></cell><cell>POWERLESS IS DOWN</cell><cell></cell><cell>STRAIGHT-STRAIGHT-</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell>CROOKED CROOKED</cell></row><row><cell cols="2">CONTENT-CONTENT-</cell><cell>THE BODY/MIND/A the mind</cell><cell cols="2">ISCAT: metaphor which refers ISCAT: metaphor which</cell><cell>LEFT-RIGHT LEFT-RIGHT</cell></row><row><cell cols="3">CONTAINER CONTAINER PERSON IS A (consciousness) is a</cell><cell cols="2">to both, content, and container refers to both, content, and</cell><cell>PART-WHOLE PART-WHOLE</cell></row><row><cell></cell><cell></cell><cell>CONTAINER FOR THE container (for idea</cell><cell></cell><cell>container</cell></row><row><cell cols="3">UEQ+: Intuitive Bedienung SELF objects)</cell><cell></cell></row><row><cell cols="3">UEQ+: Intuitive Use ABILITIES ARE THE NEAR-FAR emotional is near 
-</cell><cell></cell><cell>[29]</cell><cell>HEAVY-LIGHT</cell></row><row><cell cols="5">• Die Zuordnung der Icons/Objekte war für mich … CONTENT OF A unemotional is far</cell><cell>CENTRE-</cell></row><row><cell cols="5">The assignment of the icons/objects was … PERSON-CONTAINER</cell><cell>PERIPHERY</cell></row><row><cell cols="3">o mühevoll -mühelos NEAR-FAR THE PRESENT IS NEAR CENTRE-identity is central</cell><cell>[29]</cell><cell>ISCAT: most striking/easy</cell><cell>HEAVY-LIGHT PART-WHOLE</cell></row><row><cell cols="3">difficult -easy -THE PAST IS FAR PERIPHERY</cell><cell></cell><cell>to understand</cell><cell>CENTRE-PERIPHERY LEFT-RIGHT</cell></row><row><cell cols="2">CENTRE-STRONG-</cell><cell cols="3">IMPORTANCE IS powerful is strong -ISCAT: metaphor which refers [29]</cell><cell>PART-WHOLE UP-DOWN</cell></row><row><cell cols="3">o unlogisch -logisch PERIPHERY CENTRALITY WEAK powerless is weak</cell><cell cols="2">to both, centre and periphery</cell><cell>LEFT-RIGHT HARD-SOFT</cell></row><row><cell cols="3">illogical -logical UNIMPORTANT ISSUES PAINFUL disgust/being</cell><cell></cell><cell>ISCAT: only two metaphors</cell><cell>LINKAGE</cell></row><row><cell></cell><cell></cell><cell>ARE GIVEN disgusted is pain</cell><cell></cell><cell>in English available</cell><cell>OBJECT</cell></row><row><cell cols="4">o nicht einleuchtend -einleuchtend PERIPHERAL STRAIGHT-moral is straight -</cell><cell>[29]</cell><cell>SMOOTH-ROUGH</cell></row><row><cell cols="3">not plausible -plausible POSITIONS CROOKED corrupt is crooked</cell><cell></cell><cell>HARD-SOFT</cell></row><row><cell cols="3">STRONG-HARD-SOFT MUCH IS STRONG -stressful is hard -</cell><cell>[29]</cell><cell>[29]</cell><cell>UP-DOWN SMOOTH-ROUGH</cell></row><row><cell cols="3">o nicht schlüssig -schlüssig WEAK LITTLE IS WEAK &gt;&gt; relaxing is soft</cell><cell></cell><cell>HARD-SOFT STRAIGHT-</cell></row><row><cell></cell><cell cols="2">inconclusive -conclusive MORE IS STRONG 
-</cell><cell></cell><cell>CROOKED</cell></row><row><cell cols="2">SMOOTH-</cell><cell>LESS IS WEAK boring is smooth -</cell><cell></cell><cell>[25]</cell><cell>CONTENT-</cell></row><row><cell cols="2">PAINFUL ROUGH</cell><cell>FEAR/BEING AFRAID IS dangerous is rough</cell><cell cols="2">ISCAT: only two metaphors in</cell><cell>LINKAGE CONTAINER</cell></row><row><cell cols="3">UEQ+: Verständnis PAIN</cell><cell cols="2">English available</cell><cell>OBJECT STRONG-WEAK</cell></row><row><cell cols="3">UEQ+: Comprehensibility LEFT-RIGHT moral is right -</cell><cell></cell><cell>ISCAT: metaphor which</cell><cell>NEAR-FAR</cell></row><row><cell cols="3">• Die Icons/Objekte sind für mich … STRAIGHT-MORAL IS STRAIGHT -immoral is left</cell><cell>[29]</cell><cell>clearly maps left/right</cell><cell>SMOOTH-ROUGH STRONG-WEAK</cell></row><row><cell cols="3">The icons/objects are … CROOKED CORRUPT IS CROOKED</cell><cell></cell><cell>HARD-SOFT</cell></row><row><cell cols="3">o kompliziert -einfach HARD-SOFT INTENSIVE IS HARD -LINKAGE social relationships</cell><cell>[29]</cell><cell>ISCAT: most striking/easy</cell><cell>SMOOTH-ROUGH PAINFUL</cell></row><row><cell></cell><cell cols="2">complicated -simple SENSITIVE IS SOFT are links</cell><cell></cell><cell>to understand</cell><cell>STRAIGHT-OBJECT</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell>CROOKED</cell></row><row><cell cols="3">o ungenau -genau SMOOTH-POLITE IS SMOOTH -PART-WHOLE coherent is whole</cell><cell>[25]</cell><cell>ISCAT: metaphor which</cell><cell>CONTENT-NEAR-FAR</cell></row><row><cell>ROUGH</cell><cell cols="2">unambiguous -ambiguous IMPOLITE IS ROUGH</cell><cell></cell><cell>contains at least the term</cell><cell>CONTAINER HEAVY-LIGHT</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell>whole</cell><cell>STRONG-WEAK</cell></row><row><cell cols="5">o nicht eindeutig -eindeutig CONSERVATIVE IS HEAVY-LIGHT more is heavy-less is LEFT-ISCAT: metaphor which 
[25]</cell><cell>NEAR-FAR UP-DOWN</cell></row><row><cell>RIGHT</cell><cell cols="2">inaccurate -accurate RIGHT -SOCIAL light</cell><cell cols="2">clearly maps left</cell><cell>STRONG-WEAK CONTENT-</cell></row><row><cell></cell><cell></cell><cell>DEMOCRATIC IS LEFT</cell><cell></cell><cell>CONTAINER</cell></row><row><cell cols="3">o rätselhaft -erklärbar OBJECT opportunities are</cell><cell></cell><cell>ISCAT: metaphor which</cell><cell>LINKAGE</cell></row><row><cell>LINKAGE</cell><cell cols="2">enigmatic -explainable LOVE IS A BOND objects</cell><cell cols="2">ISCAT: most striking/easy to only refers to object, not</cell><cell>PAINFUL PAINFUL</cell></row><row><cell></cell><cell></cell><cell></cell><cell cols="2">understand further attributes or context</cell><cell>OBJECT</cell></row><row><cell>PART-</cell><cell></cell><cell>CREATIVITY IS</cell><cell cols="2">ISCAT: most striking/easy to</cell><cell>NEAR-FAR</cell></row><row><cell>Präferenz WHOLE</cell><cell></cell><cell>PUTTING PARTS</cell><cell cols="2">understand</cell><cell>HEAVY-LIGHT</cell></row><row><cell>Preference</cell><cell></cell><cell>TOGETHER</cell><cell></cell></row><row><cell cols="5">• Welche Darstellungsform hat Ihnen insgesamt besser gefallen und warum? HEAVY-IMPORTANT IS HEAVY [25] UP-DOWN</cell></row><row><cell cols="5">Which form of representation did you like better? Why? LIGHT -UNIMPORTANT IS</cell><cell>CONTENT-</cell></row><row><cell></cell><cell></cell><cell>LIGHT</cell><cell></cell><cell>CONTAINER</cell></row><row><cell>OBJECT</cell><cell></cell><cell>IDEAS ARE OBJECTS</cell><cell cols="2">ISCAT: metaphor which only</cell><cell>LINKAGE</cell></row><row><cell></cell><cell></cell><cell></cell><cell cols="2">refers to object, not further</cell><cell>PAINFUL</cell></row><row><cell></cell><cell></cell><cell></cell><cell cols="2">attributes or context</cell></row></table><note>• Gibt es zum Schluss noch etwas, dass Sie uns mitteilen möchten? 
(optional) Finally, is there anything else you would like to tell us? (optional) 3</note></figure>
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Supplementary Material 1 Instructions</head><p>Sie bekommen insgesamt 14 nummerierte Zettel. Auf der Rückseite der nummerierten Zettel steht jeweils ein kurzer Satz. Sie lesen immer den jeweiligen Satz. Anschließend zeigt Ihnen die Versuchsleitung drei Icons bzw. Icon-Paare. Wählen Sie das Icon bzw. Icon-Paar aus, dass Ihrer Meinung nach im Satz enthalten ist. Denken Sie nicht zu lange nach, entscheiden Sie intuitiv aus dem Bauch heraus.</p><p>Hinweis: Die Sätze sind auf Englisch. Wenn Sie eine Übersetzungsliste brauchen, können Sie im Fragebogen einmal auf "Weiter" klicken.</p><p>You will be given a total of 14 numbered sheets of paper. On the back of each sheet there is a short sentence. You will read each sentence. The experimenter will then show you three icons or pairs of icons. Choose the icon or pair of icons that you think is in the sentence. Don't think too long, make an intuitive decision.</p><p>Note: The sentences are in English. If you need a translation list, you can click 'Next' once in the questionnaire.</p><p>Sie bekommen insgesamt 14 nummerierte Zettel. Auf der Rückseite der nummerierten Zettel steht jeweils ein kurzer Satz. Sie lesen immer den jeweiligen Satz. Anschließend zeigt Ihnen die Versuchsleitung drei Objekte bzw. Objekt-Paare. Wählen Sie das Objekt bzw. Objekt-Paar aus, dass Ihrer Meinung nach im Satz enthalten ist. Denken Sie nicht zu lange nach, entscheiden Sie intuitiv aus dem Bauch heraus.</p><p>Hinweis: Die Sätze sind auf Englisch. Wenn Sie eine Übersetzungsliste brauchen, können Sie im Fragebogen einmal auf "Weiter" klicken.</p><p>You will be given a total of 14 numbered sheets of paper. On the back of each sheet there is a short sentence. You will read each sentence. The experimenter will then show you three objects or pairs of objects. Choose the object or pair of objects that you think is in the sentence. 
Don't think too long, make an intuitive decision.</p><p>Note: The sentences are in English. If you need a translation list, you can click 'Next' once in the questionnaire.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Questionnaires Demographic Data • Welches ist Ihr bisher höchster Bildungsabschluss?</head><p>What is your highest educational qualification to date?</p><p>• Welches ist Ihr Geschlecht? What is your gender?</p><p>• Wie alt sind Sie gemessen in Jahren? How old are you in years?</p><p>• Welches ist Ihre Muttersprache? What is your mother tongue?</p><p>• Wie würden Sie Ihre Englischkenntnisse einordnen? How would you categorise your English language skills? A1, A2, B1, B2, C1, C2 </p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Designing Data Physicalisations with Physical Image Schema Instantiations</title>
		<author>
			<persName><forename type="first">C</forename><surname>Baur</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Short Paper Proceedings of the 5th European Tangible Interaction Studio</title>
				<meeting><address><addrLine>Toulouse France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022-11">2022. Nov. 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Form Follows Mental Models: Finding Instantiations of Image Schemas Using a Design Research Approach</title>
		<author>
			<persName><forename type="first">C</forename><surname>Baur</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2022 ACM Designing Interactive Systems Conference (Virtual Event Australia</title>
				<meeting>the 2022 ACM Designing Interactive Systems Conference (Virtual Event Australia</meeting>
		<imprint>
			<date type="published" when="2022-06">2022. Jun. 2022</date>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="586" to="598" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Image Schemas as Tool for Exploring the Design Space of Data Physicalisations</title>
		<author>
			<persName><forename type="first">C</forename><surname>Baur</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of The Seventh Image Schema Day</title>
				<meeting>The Seventh Image Schema Day<address><addrLine>Rhodes Greece</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023-09">2023. Sep. 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A narrative in three acts: Using combinations of image schemas to model events</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">R</forename><surname>Besold</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.bica.2016.11.001</idno>
		<ptr target="https://doi.org/10.1016/j.bica.2016.11.001" />
	</analytic>
	<monogr>
		<title level="m">Biologically Inspired Cognitive Architectures</title>
				<imprint>
			<date type="published" when="2017">2017. 2017</date>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page" from="10" to="20" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Cienki</surname></persName>
		</author>
		<title level="m">STRAIGHT: An image schema and its metaphorical extensions</title>
				<imprint>
			<date type="published" when="1998">1998. 1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Image Schemas and Conceptual Blending in Diagrammatic Reasoning: The Case of Hasse Diagrams. Diagrammatic Representation and Inference</title>
		<author>
			<persName><forename type="first">Dimitra</forename><surname>Bourou</surname></persName>
		</author>
		<editor>Amrita Basu et al.</editor>
		<imprint>
			<date type="published" when="2021">2021</date>
			<publisher>Springer</publisher>
			<biblScope unit="page" from="297" to="314" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Why Research-Oriented Design Isn&apos;t Design-Oriented Research: On the Tensions Between Design and Research in an Implicit Design Discipline</title>
		<author>
			<persName><forename type="first">D</forename><surname>Fallman</surname></persName>
		</author>
		<idno type="DOI">10.1007/s12130-007-9022-8</idno>
		<ptr target="https://doi.org/10.1007/s12130-007-9022-8" />
	</analytic>
	<monogr>
		<title level="j">Knowledge, Technology &amp; Policy</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="193" to="200" />
			<date type="published" when="2007-10">2007. Oct. 2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Metaphor in Culture: Universality and Variation, Zoltán Kövecses</title>
		<author>
			<persName><forename type="first">C</forename><surname>Forceville</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Pragmatics</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="page" from="1528" to="1531" />
			<date type="published" when="2005-09">2006. 2005. Sep. 2006</date>
			<publisher>Cambridge University Press</publisher>
		</imprint>
	</monogr>
	<note>hardback</note>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title/>
		<idno type="DOI">10.1016/j.pragma.2006.03.003</idno>
		<ptr target="https://doi.org/10.1016/j.pragma.2006.03.003" />
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">The cognitive psychological reality of image schemas and their transformations</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">W</forename><surname>Gibbs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">L</forename><surname>Colston</surname></persName>
		</author>
		<idno type="DOI">10.1515/cogl.1995.6.4.347</idno>
		<ptr target="https://doi.org/10.1515/cogl.1995.6.4.347" />
	</analytic>
	<monogr>
		<title level="j">Cognitive Linguistics</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="347" to="378" />
			<date type="published" when="1995">1995. 1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Foundations of meaning: Primary metaphors and primary scenes</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Grady</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1997">1997</date>
		</imprint>
		<respStmt>
			<orgName>Department of Linguistics, University of California at Berkeley</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Between Contact and Support: Introducing a Logic for Image Schemas and Directed Movement</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Hedblom</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the XVIth International Conference of the Italian Association for Artificial Intelligence</title>
				<meeting>the XVIth International Conference of the Italian Association for Artificial Intelligence</meeting>
		<imprint>
			<date type="published" when="2017-11">2017. 2017. Nov. 2017</date>
			<biblScope unit="page" from="256" to="268" />
		</imprint>
	</monogr>
	<note>Advances in Artificial Intelligence</note>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Image Schemas and Concept Invention: Cognitive, Logical, and Linguistic Investigations</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Hedblom</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Conceptual Puzzle Pieces. Modeling and Using Context</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Hedblom</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kutz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">CONTEXT</title>
		<imprint>
			<biblScope unit="page" from="98" to="111" />
			<date type="published" when="2019">2019. 2019. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Visualising Image Schemas: A Preliminary Look at the Diagrammatic Image Schema Language (DISL)</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Hedblom</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Neuhaus</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Sixth Image Schema Day</title>
				<meeting>the Sixth Image Schema Day<address><addrLine>Jönköping Sweden</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022-03">2022. Mar. 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Contextual Design: Evolved</title>
		<author>
			<persName><forename type="first">K</forename><surname>Holtzblatt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Beyer</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2014">2014</date>
			<publisher>Morgan &amp; Claypool Publishers</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Cognition in HCI: An Ongoing Story</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hurtienne</surname></persName>
		</author>
		<idno type="DOI">10.17011/ht/urn.20094141408</idno>
		<ptr target="https://doi.org/10.17011/ht/urn.20094141408" />
	</analytic>
	<monogr>
		<title level="j">Human Technology</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="12" to="28" />
			<date type="published" when="2009-05">May 2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Comparing Pictorial and Tangible Notations of Force Image Schemas</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hurtienne</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction</title>
				<meeting>the Ninth International Conference on Tangible, Embedded, and Embodied Interaction<address><addrLine>Stanford, California, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015-01">Jan. 2015</date>
			<biblScope unit="page" from="249" to="256" />
		</imprint>
	</monogr>
	<note>TEI &apos;15</note>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Cooking up real world business applications combining physicality, digitality, and image schemas</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hurtienne</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">TEI &apos;08: Proceedings of the 2nd international conference on Tangible and embedded interaction</title>
				<meeting><address><addrLine>Bonn, Germany</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2008-02">Feb. 2008</date>
			<biblScope unit="page" from="239" to="246" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Designing with Image Schemas: Resolving the Tension Between Innovation, Inclusion and Intuitive Use</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hurtienne</surname></persName>
		</author>
		<idno type="DOI">10.1093/iwc/iwu049</idno>
		<ptr target="https://doi.org/10.1093/iwc/iwu049" />
	</analytic>
	<monogr>
		<title level="j">Interacting with Computers</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<date type="published" when="2015-04">Apr. 2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Designing with Image Schemas: Resolving the Tension Between Innovation, Inclusion and Intuitive Use</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hurtienne</surname></persName>
		</author>
		<idno type="DOI">10.1093/iwc/iwu049</idno>
		<ptr target="https://doi.org/10.1093/iwc/iwu049" />
	</analytic>
	<monogr>
		<title level="j">Interacting with Computers</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="235" to="255" />
			<date type="published" when="2015-05">May 2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">How Cognitive Linguistics Inspires HCI: Image Schemas and Image-Schematic Metaphors</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hurtienne</surname></persName>
		</author>
		<idno type="DOI">10.1080/10447318.2016.1232227</idno>
		<ptr target="https://doi.org/10.1080/10447318.2016.1232227" />
	</analytic>
	<monogr>
		<title level="j">International Journal of Human-Computer Interaction</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="20" />
			<date type="published" when="2016-09">Sep. 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Image schemas: a new language for user interface design?</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hurtienne</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Prospektive Gestaltung von Mensch-Technik-Interaktion</title>
		<editor>M. Rötting et al.</editor>
		<imprint>
			<date type="published" when="2007">2007</date>
			<publisher>VDI Verlag</publisher>
			<biblScope unit="page" from="167" to="172" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<title level="m" type="main">Image Schemas and Design for Intuitive Use</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hurtienne</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
		<respStmt>
			<orgName>Technische Universität Berlin</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Physical gestures for abstract concepts: Inclusive design with primary metaphors</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hurtienne</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.intcom.2010.08.009</idno>
		<ptr target="https://doi.org/10.1016/j.intcom.2010.08.009" />
	</analytic>
	<monogr>
		<title level="j">Interacting with Computers</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="475" to="484" />
			<date type="published" when="2010-11">Nov. 2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Sad is heavy and happy is light: population stereotypes of tangible object attributes</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hurtienne</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 3rd International Conference on Tangible and Embedded Interaction</title>
				<meeting>the 3rd International Conference on Tangible and Embedded Interaction<address><addrLine>Cambridge, United Kingdom</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2009-02">Feb. 2009</date>
			<biblScope unit="page" from="61" to="68" />
		</imprint>
	</monogr>
	<note>TEI &apos;09</note>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Supporting User Interface Design with Image Schemas: The ISCAT Database as a Research Tool</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hurtienne</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Sixth Image Schema Day</title>
				<meeting>the Sixth Image Schema Day<address><addrLine>Jönköping, Sweden</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022-03">Mar. 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Design for intuitive use -Testing Image Schema Theory for User Interface Design</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hurtienne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Blessing</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of ICED 2007, the 16th International Conference on Engineering Design</title>
				<meeting>ICED 2007, the 16th International Conference on Engineering Design<address><addrLine>Paris, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2007-07">Jul. 2007</date>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="page" from="829" to="830" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Image schemas and their metaphorical extensions: intuitive patterns for tangible interaction</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hurtienne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Israel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">TEI &apos;07: Proceedings of the 1st international conference on Tangible and embedded interaction</title>
				<meeting><address><addrLine>Baton Rouge, Louisiana</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2007-02">Feb. 2007</date>
			<biblScope unit="page" from="127" to="134" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Soft Pillows and the Near and Dear: Physical-to-Abstract Mappings with Image-Schematic Metaphors</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hurtienne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Meschke</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the TEI &apos;16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction</title>
				<meeting>the TEI &apos;16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction<address><addrLine>Eindhoven, Netherlands</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016-02">Feb. 2016</date>
			<biblScope unit="page" from="324" to="331" />
		</imprint>
	</monogr>
	<note>TEI &apos;16</note>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Opportunities and Challenges for Data Physicalization</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Jansen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems</title>
				<meeting>the 33rd Annual ACM Conference on Human Factors in Computing Systems<address><addrLine>Seoul, Republic of Korea</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015-04">Apr. 2015</date>
			<biblScope unit="page" from="3227" to="3236" />
		</imprint>
	</monogr>
	<note>CHI &apos;15</note>
</biblStruct>

<biblStruct xml:id="b31">
	<monogr>
		<title level="m">JASP</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Johnson</surname></persName>
		</author>
		<title level="m">The body in the mind: The bodily basis of meaning, imagination, and reason</title>
				<imprint>
			<publisher>University of Chicago Press</publisher>
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">The philosophical significance of image schemas</title>
		<author>
			<persName><forename type="first">M</forename><surname>Johnson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">From Perception to Meaning: Image Schemas in Cognitive Linguistics</title>
		<editor>B. Hampe</editor>
		<imprint>
			<date type="published" when="2005">2005</date>
			<publisher>De Gruyter Mouton</publisher>
			<biblScope unit="page" from="15" to="34" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<monogr>
		<author>
			<persName><forename type="first">G</forename><surname>Lakoff</surname></persName>
		</author>
		<title level="m">Women, Fire, and Dangerous Things: What Categories Reveal about the Mind</title>
				<imprint>
			<publisher>University of Chicago Press</publisher>
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<monogr>
		<title level="m" type="main">Metaphors we live by</title>
		<author>
			<persName><forename type="first">G</forename><surname>Lakoff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Johnson</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1980">1980</date>
			<publisher>University of Chicago Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<monogr>
		<title level="m" type="main">Philosophy in The Flesh: The Embodied Mind And Its Challenge To Western Thought</title>
		<author>
			<persName><forename type="first">G</forename><surname>Lakoff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Johnson</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1999">1999</date>
			<publisher>Basic Books</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<monogr>
		<title level="m">LimeSurvey: An Open Source survey Tool</title>
				<imprint>
			<publisher>Limesurvey GmbH</publisher>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Developing Intuitive User Interfaces by Integrating Users&apos; Mental Models into Requirements Engineering</title>
		<author>
			<persName><forename type="first">D</forename><surname>Löffler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 27th International BCS Human Computer Interaction Conference</title>
				<meeting>the 27th International BCS Human Computer Interaction Conference<address><addrLine>London, UK</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2013-09">Sep. 2013</date>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="1" to="10" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">Mixing Languages&apos;: image schema inspired designs for rural Africa</title>
		<author>
			<persName><forename type="first">D</forename><surname>Löffler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CHI &apos;14 Extended Abstracts on Human Factors in Computing Systems</title>
				<meeting><address><addrLine>Toronto, Ontario, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014-05">May 2014</date>
			<biblScope unit="page" from="1999" to="2004" />
		</imprint>
	</monogr>
	<note>CHI EA &apos;14</note>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">Bridging the gap: attribute and spatial metaphors for tangible interface design</title>
		<author>
			<persName><forename type="first">A</forename><surname>Macaranas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction</title>
				<meeting>the Sixth International Conference on Tangible, Embedded and Embodied Interaction<address><addrLine>Kingston, Ontario, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2012-02">Feb. 2012</date>
			<biblScope unit="page" from="161" to="168" />
		</imprint>
	</monogr>
	<note>TEI &apos;12</note>
</biblStruct>

<biblStruct xml:id="b41">
	<analytic>
		<title level="a" type="main">How to build a baby: II. Conceptual primitives</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Mandler</surname></persName>
		</author>
		<idno type="DOI">10.1037/0033-295X.99.4.587</idno>
		<ptr target="https://doi.org/10.1037/0033-295X.99.4.587" />
	</analytic>
	<monogr>
		<title level="j">Psychological Review</title>
		<imprint>
			<biblScope unit="volume">99</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="587" to="604" />
			<date type="published" when="1992-11">Nov. 1992</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">Is Metaphor Universal? Cross-Language Evidence From German and Japanese</title>
		<author>
			<persName><forename type="first">C</forename><surname>Neumann</surname></persName>
		</author>
		<idno type="DOI">10.1207/S15327868MS1601&amp;2_9</idno>
		<ptr target="https://doi.org/10.1207/S15327868MS1601&amp;2_9" />
	</analytic>
	<monogr>
		<title level="j">Metaphor and Symbol</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page" from="123" to="142" />
			<date type="published" when="2001-04">Apr. 2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<analytic>
		<title level="a" type="main">Eine modulare Erweiterung des User Experience Questionnaire</title>
		<author>
			<persName><forename type="first">M</forename><surname>Schrepp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Thomaschewski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Usability Professionals</title>
		<imprint>
			<biblScope unit="page" from="148" to="156" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note>UP19</note>
</biblStruct>

<biblStruct xml:id="b44">
	<analytic>
		<title level="a" type="main">Force Dynamics in Language and Cognition</title>
		<author>
			<persName><forename type="first">L</forename><surname>Talmy</surname></persName>
		</author>
		<idno type="DOI">10.1207/s15516709cog1201_2</idno>
		<ptr target="https://doi.org/10.1207/s15516709cog1201_2" />
	</analytic>
	<monogr>
		<title level="j">Cognitive Science</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="49" to="100" />
			<date type="published" when="1988-01">Jan. 1988</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b45">
	<analytic>
		<title level="a" type="main">Design of Age-Inclusive Tangible User Interfaces Using Image-Schematic Metaphors</title>
		<author>
			<persName><forename type="first">R</forename><surname>Tscharn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction</title>
				<meeting>the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction<address><addrLine>Yokohama, Japan</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017-03">Mar. 2017</date>
			<biblScope unit="page" from="693" to="696" />
		</imprint>
	</monogr>
	<note>TEI &apos;17</note>
</biblStruct>

<biblStruct xml:id="b46">
	<analytic>
		<title level="a" type="main">Evaluating Musical Software Using Conceptual Metaphors</title>
		<author>
			<persName><forename type="first">K</forename><surname>Wilkie</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 23rd British HCI Group Annual Conference on People and Computers: Celebrating People and Technology</title>
				<meeting>the 23rd British HCI Group Annual Conference on People and Computers: Celebrating People and Technology<address><addrLine>Cambridge, Great Britain</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2009-09">Sep. 2009</date>
			<biblScope unit="page" from="232" to="237" />
		</imprint>
	</monogr>
	<note>BCS-HCI &apos;09</note>
</biblStruct>

<biblStruct xml:id="b47">
	<analytic>
		<title level="a" type="main">Evaluation of an Application Based on Conceptual Metaphors for Social Interaction Between Vehicles</title>
		<author>
			<persName><forename type="first">A</forename><surname>Winkler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2016 ACM Conference on Designing Interactive Systems</title>
				<meeting>the 2016 ACM Conference on Designing Interactive Systems<address><addrLine>Brisbane, QLD, Australia</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016-06">Jun. 2016</date>
			<biblScope unit="page" from="1148" to="1159" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
