<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Exploring how users across cultures design and perceive multimodal robot emotion - Abstract</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Mathieu</forename><surname>Depaul</surname></persName>
							<email>mdepaul1@sfsu.edu</email>
							<affiliation key="aff0">
								<orgName type="department">School of Engineering</orgName>
								<orgName type="institution">San Francisco State University</orgName>
								<address>
									<addrLine>1600 Holloway Ave</addrLine>
									<postCode>94132</postCode>
									<settlement>San Francisco</settlement>
									<region>CA</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Dagoberto</forename><surname>Cruz-Sandoval</surname></persName>
							<email>dcruzsandoval@ucsd.edu</email>
							<affiliation key="aff1">
								<orgName type="department">Computer Science and Engineering</orgName>
								<orgName type="institution">University of California San Diego</orgName>
								<address>
									<addrLine>9500 Gilman Dr, La Jolla</addrLine>
									<postCode>92093</postCode>
									<region>CA</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Alyssa</forename><surname>Kubota</surname></persName>
							<email>akubota@sfsu.edu</email>
							<affiliation key="aff0">
								<orgName type="department">School of Engineering</orgName>
								<orgName type="institution">San Francisco State University</orgName>
								<address>
									<addrLine>1600 Holloway Ave</addrLine>
									<postCode>94132</postCode>
									<settlement>San Francisco</settlement>
									<region>CA</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff2">
								<address>
									<settlement>Pasadena</settlement>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Exploring how users across cultures design and perceive multimodal robot emotion - Abstract</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">170AF420A1CB86356DC7A44BCBCF0361</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T20:11+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Human-robot interaction</term>
					<term>Cross-cultural perception</term>
					<term>Multimodal robot expression</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>As robots enter more human-centered spaces, such as homes, and engage with more diverse populations, they will need to interact with people in a culturally appropriate manner. This interaction plays an important role in maintaining engagement over long periods of time to maximize efficacy for applications, such as delivering health interventions. In our work, we seek to understand how a user's cultural background influences how they design expressions to convey different emotions on robots, as well as how they perceive those emotions. We explore how cultural factors impact how people perceive robot emotions composed of different modalities, including sounds (verbal and non-verbal expressions) and color. Our proposed work will contribute towards design considerations to make robots more culturally sensitive and inclusive.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Perception of robot expressions may vary widely across cultures and contexts <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>, as cultural values influence users' perception, acceptance, and trust of robots <ref type="bibr" target="#b2">[3]</ref>. However, poorly designed expressions may perpetuate cultural biases and stereotypes, particularly if designers are not familiar with the culture of the intended end users <ref type="bibr" target="#b3">[4]</ref>. Understanding these factors together may help reduce the perpetuation of cultural stereotypes and biases in robots while promoting social equity <ref type="bibr" target="#b2">[3]</ref>.</p><p>Yet it is unclear how robots can leverage multiple modalities to most effectively convey emotion across cultures. The lack of universality in how robot emotion and social cues are perceived across cultures presents new design considerations for researchers seeking to increase the quality of human-robot interactions <ref type="bibr" target="#b4">[5]</ref>. Furthermore, robot expressions are typically designed by roboticists rather than the intended end users of these systems, possibly leading to misalignment between a robot's intended emotion and how users perceive it.</p><p>Our work explores synthesizing multimodal robot expressions, focusing on sound and color, which effectively communicate robot emotion <ref type="bibr" target="#b5">[6]</ref>. We aim to identify how combining these modalities affects human perceptions of robot emotion across cultures, and how these perceptions may inform design considerations for culturally aware robots, with the long-term goal of supporting autonomous personalization. We propose a mixed-methods study in which participants from various cultural backgrounds will design robot expressions that they perceive to convey specific emotions. 
We will leverage an online tool for the Cognitively Assistive Robot for Motivation and Neurorehabilitation (CARMEN) <ref type="bibr" target="#b6">[7]</ref> to enable participants to design personalized, multimodal robot expressions on a simulated robot (see Figure <ref type="figure" target="#fig_0">1</ref>). We will then evaluate how participants perceive expressions designed by other participants from different cultures. We anticipate two main contributions from our proposed work. First, we will provide insight into how various modalities of expression, and their combinations, affect the perception of emotions and social behaviors across cultures. Second, we will propose design considerations for multimodal expression that researchers can leverage to make socially assistive robots more culturally sensitive and to synthesize higher-quality interactions between users and robots. Our research also extends to maintaining longitudinal engagement with robot-delivered interventions in the home, where multimodal robot expression may support higher-quality, more engaging interactions between users and robots across cultures <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10]</ref>. Ultimately, our work seeks to create more effective care that promotes inclusiveness across different cultures.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Robot Emotion and Social Cues</head><p>The design of robot expressions can convey complex information to users, such as different emotions or social cues <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b5">6]</ref>. Researchers have found that robot expressions of emotion and social cues have a positive effect on user perception and on how accurately users can recognize them <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b0">1,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b12">13]</ref>. For instance, using colored lights for expression significantly improves participants' accuracy when identifying a robot's internal state and increases trust towards the robot <ref type="bibr" target="#b12">[13]</ref>. Other research on vocal expression revealed that intonation, pitch, and timbre are the primary sound parameters that impact how it is perceived <ref type="bibr" target="#b13">[14]</ref>. Nonverbal sounds may also convey important information to users about robot emotion or social cues <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b14">15]</ref>. However, questions remain about how combinations of these modalities impact users' perception of robot expressions across cultures. Thus, we plan to enable participants to design these expressions themselves to better understand how cultural background affects the perception of robot emotion.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Cross-Cultural Emotion Perception</head><p>Researchers have identified that multimodal robot expression may support higher-quality, more engaging interactions between users and robots across cultures <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10]</ref>. Research on the cross-cultural impact of vocal expression shows that expressions of human emotion and social cues follow subtle, culturally nuanced patterns of both production and perception <ref type="bibr" target="#b1">[2]</ref>. A related study <ref type="bibr" target="#b15">[16]</ref> presented human vocal expressions of emotions (happiness, anger, fear, disgust, sadness, and surprise) recorded in the US to a culturally diverse group of participants, and found that a wider gap between cultures led to lower emotion recognition accuracy <ref type="bibr" target="#b15">[16]</ref>. Other work focuses on how personalizing robots across cultures promotes acceptance of robots during human-robot interaction <ref type="bibr" target="#b16">[17]</ref>. These studies highlight the importance of understanding how users' cultural backgrounds influence their perception of robot expressions and how these expressions can be personalized across cultures.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Proposed Methodology</head><p>We plan to conduct an online mixed-methods study to identify how multimodal expression, through sound and color, impacts users' perception of robot emotion and social cues, and how these perceptions vary across cultures. We will follow commonly used frameworks from psychology <ref type="bibr" target="#b17">[18]</ref> and focus on both innate primary emotions (joy, sadness, anger, fear, and disgust) and acquired secondary emotions (guilt, regret, pride, and jealousy) <ref type="bibr" target="#b5">[6]</ref>. We plan to recruit participants from the US and Mexico to design multimodal robot expressions and social cues that convey these emotions. These locations allow us to explore how cultural elements, such as expressiveness, communication styles, and attitudes towards technology, impact the design and perception of robot emotions. Furthermore, our research will be conducted primarily in California, which has a relatively large population of people of Mexican heritage. Participants will report the culture with which they self-identify.</p><p>We will use the CARMEN platform <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b6">7]</ref>, a cognitively assistive robot designed to deliver longitudinal interventions at home that supports flexible, expressive modalities. Participants can use CARMEN's online interface to design their preferred robot expressions for conveying emotions with an easy-to-use block-programming system. We will provide participants with a brief tutorial on using the interface to design their own robot emotions, helping us better understand how people from different cultural backgrounds perceive these emotions in robots. We will present participants with a predefined neutral robot expression, and they can adjust both its color and its sounds in order to isolate the effects of these two modalities. 
Colored lights visible through the robot's body will enable participants to personalize their designs; they can adjust features such as frequency, light animation, hue, saturation, and brightness. For the sound modality, both verbal and non-verbal sounds will be available to maximize personalization options, and participants can adjust features like intonation, pitch, and timbre.</p><p>After the design process, we will ask participants open-ended questions to understand the reasoning behind their choices, including which features they chose and why. We will conduct a thematic analysis of the qualitative data, evaluating how participants weigh each modality and its features, the effect of multimodal expression on emotion and social cue perception, and how this influences users' perception of the conveyed robot emotion. We will also explore how cultural differences affect the emotions users assign to different robot expressions. To understand users' perception of robot emotions, we will ask participants to label designs from both their own culture and the other culture with the emotion they perceive, allowing us to compare cross-cultural differences in perception.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Future Work</head><p>This proposed work aims to identify how users perceive multimodal expressions of robot emotion and social cues, understand how users' respective cultures affect their perception of robot expressions, and learn how these modalities can be combined in more culturally aware ways. Multimodal expressions have a significantly positive impact on user engagement <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b20">21]</ref>, which may improve robot-delivered healthcare interventions deployed longitudinally in the home <ref type="bibr" target="#b9">[10]</ref>. In future work, we will also explore these differences among other cultures, and how additional modalities, such as facial expressions, can produce more effective modality combinations to improve expression recognition accuracy across cultures <ref type="bibr" target="#b5">[6]</ref>. With more modalities to consider, this study could be expanded to include more emotions and social cues, or even more complex behavior, such as creating more personalized robot personalities. Finally, we want to better understand the impact of these culturally influenced emotions on longitudinal engagement <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b20">21]</ref>, as well as ethical implications such as trust, attachment, and reliance on robots with these abilities <ref type="bibr" target="#b21">[22,</ref><ref type="bibr" target="#b22">23,</ref><ref type="bibr" target="#b23">24]</ref>. The results of this work may allow researchers to automatically synthesize personalized behaviors based on a user's cultural background. 
Our work will enable robots to interact with more culturally diverse populations and ultimately improve equity and accessibility of personalized systems.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: We will employ CARMEN, a cognitively assistive robot that leverages multiple modalities including sound, color, and movement, to communicate.</figDesc><graphic coords="2,162.25,65.61,270.77,248.40" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Artificial emotion expression for a robot by dynamic color change</title>
		<author>
			<persName><forename type="first">K</forename><surname>Terada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Yamauchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ito</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2012">2012. 2012</date>
			<biblScope unit="page" from="314" to="321" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Sauter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Eisner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Ekman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">K</forename><surname>Scott</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the National Academy of Sciences</title>
		<imprint>
			<biblScope unit="volume">107</biblScope>
			<biblScope unit="page" from="2408" to="2412" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Social robots on a global stage: establishing a role for culture during human-robot interaction</title>
		<author>
			<persName><forename type="first">V</forename><surname>Lim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rooksby</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">S</forename><surname>Cross</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Social Robotics</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="1307" to="1333" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Londoño</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Röfer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Welschehold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Valada</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2202.02654</idno>
		<title level="m">Doing right by not doing wrong in human-robot collaboration</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Nonverbal sound in human-robot interaction: a systematic review</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">J</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">T</forename><surname>Fitter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Human-Robot Interaction</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="1" to="46" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Multimodal expression of artificial emotion in social robots using color, motion and sound</title>
		<author>
			<persName><forename type="first">D</forename><surname>Löffler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Schmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Tscharn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction</title>
				<meeting>the 2018 ACM/IEEE International Conference on Human-Robot Interaction</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="334" to="343" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">CARMEN: A cognitively assistive robot for personalized neurorehabilitation at home</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bouzida</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kubota</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Cruz-Sandoval</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">W</forename><surname>Twamley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">D</forename><surname>Riek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction</title>
				<meeting>the 2024 ACM/IEEE International Conference on Human-Robot Interaction</meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="55" to="64" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">The power of color: A study on the effective use of colored light in human-robot interaction</title>
		<author>
			<persName><forename type="first">A</forename><surname>Pörtner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Schröder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Rasch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Sprute</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hoffmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>König</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2018">2018. 2018</date>
			<biblScope unit="page" from="3395" to="3402" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Recent advancements in multimodal human-robot interaction</title>
		<author>
			<persName><forename type="first">H</forename><surname>Su</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Qi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sandoval</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Laribi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Frontiers in Neurorobotics</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page">1084000</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Context-enhanced human-robot interaction: exploring the role of system interactivity and multimodal stimuli on the engagement of people with dementia</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Feng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Perugia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">I</forename><surname>Barakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">M</forename><surname>Rauterberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Social Robotics</title>
		<imprint>
			<biblScope unit="page" from="1" to="20" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Habibian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Valdivia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">H</forename><surname>Blumenschein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>Losey</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2312.00948</idno>
		<title level="m">A review of communicating robot learning during human-robot interaction</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">The effects of robot voices and appearances on users&apos; emotion recognition and subjective perception</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Barnes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">H</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Howard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Jeon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Humanoid Robotics</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="page">2350001</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Mobile service robot state revealing through expressive lights: formalism, design, and evaluation</title>
		<author>
			<persName><forename type="first">K</forename><surname>Baraka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Veloso</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Social Robotics</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="65" to="92" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Sound design for emotion and intention expression of socially interactive robots</title>
		<author>
			<persName><forename type="first">E.-S</forename><surname>Jee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y.-J</forename><surname>Jeong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">H</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kobayashi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Intelligent Service Robotics</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="199" to="206" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Sounding robots: Design and evaluation of auditory displays for unintentional human-robot interaction</title>
		<author>
			<persName><forename type="first">B</forename><surname>Orthmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Leite</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Bresin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Torre</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Human-Robot Interaction</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="1" to="26" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Cross-cultural emotion recognition and in-group advantage in vocal expression: A meta-analysis</title>
		<author>
			<persName><forename type="first">P</forename><surname>Laukka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A</forename><surname>Elfenbein</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Emotion Review</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="3" to="11" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Factors for personalization and localization to optimize human-robot interaction: A literature review</title>
		<author>
			<persName><forename type="first">N</forename><surname>Gasteiger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hellou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">S</forename><surname>Ahn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Social Robotics</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="689" to="701" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Ascribing emotions to robots: Explicit and implicit attribution of emotions and perceived robot anthropomorphism</title>
		<author>
			<persName><forename type="first">N</forename><surname>Spatola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">A</forename><surname>Wudarczyk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computers in Human Behavior</title>
		<imprint>
			<biblScope unit="volume">124</biblScope>
			<biblScope unit="page">106934</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Get smart: Collaborative goal setting with cognitively assistive robots</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kubota</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Pei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Cruz-Sandoval</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">D</forename><surname>Riek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction</title>
		<meeting>the 2023 ACM/IEEE International Conference on Human-Robot Interaction</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="44" to="53" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Fully automatic analysis of engagement and its relationship to personality in human-robot interactions</title>
		<author>
			<persName><forename type="first">H</forename><surname>Salam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Celiktutan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Hupont</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Gunes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chetouani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="705" to="721" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Are you still with me? Continuous engagement assessment from a robot&apos;s point of view</title>
		<author>
			<persName><forename type="first">F</forename><surname>Del Duchetto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Baxter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hanheide</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Frontiers in Robotics and AI</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page">116</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Trust: Recent concepts and evaluations in human-robot interaction</title>
		<author>
			<persName><forename type="first">T</forename><surname>Law</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Scheutz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Trust in Human-Robot Interaction</title>
		<imprint>
			<biblScope unit="page">27</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">The relationship between trust and use choice in human-robot interaction</title>
		<author>
			<persName><forename type="first">T</forename><surname>Sanders</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kaplan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Koch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schwartz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>Hancock</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Human Factors</title>
		<imprint>
			<biblScope unit="volume">61</biblScope>
			<biblScope unit="page" from="614" to="626" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Somebody that I used to know: The risks of personalizing robots for dementia care</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kubota</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pourebadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Banh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Riek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of We Robot</title>
		<meeting>We Robot</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
