<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Assessing Emotion Mitigation through Robot Facial Expressions for Human-Robot Interaction</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Luigi</forename><surname>D'Arco</surname></persName>
							<email>luigi.darco@unina.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Electrical Engineering and Information Technologies</orgName>
								<orgName type="institution">University of Naples Federico II</orgName>
								<address>
									<addrLine>Via Claudio 21</addrLine>
									<postCode>80125</postCode>
									<settlement>Naples</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Alessandra</forename><surname>Rossi</surname></persName>
							<email>alessandra.rossi@unina.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Electrical Engineering and Information Technologies</orgName>
								<orgName type="institution">University of Naples Federico II</orgName>
								<address>
									<addrLine>Via Claudio 21</addrLine>
									<postCode>80125</postCode>
									<settlement>Naples</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Silvia</forename><surname>Rossi</surname></persName>
							<email>silvia.rossi@unina.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Electrical Engineering and Information Technologies</orgName>
								<orgName type="institution">University of Naples Federico II</orgName>
								<address>
									<addrLine>Via Claudio 21</addrLine>
									<postCode>80125</postCode>
									<settlement>Naples</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="laboratory">Workshop on Advanced AI Methods and Interfaces for Human-Centered Assistive and Rehabilitation Robotics</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Assessing Emotion Mitigation through Robot Facial Expressions for Human-Robot Interaction</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">C3E2FE70383AF5065AA2A9F69CA60905</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:06+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Emotion elicitation</term>
					<term>Socially Assistive Robotics</term>
					<term>Human-Robot Interaction</term>
					<term>Emotion Recognition</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Affective responses are among the primary and clearest signals agents use to communicate their internal state. These internal states can represent a positive or negative acceptance of a robotic agent's behavior during a human-robot interaction (HRI). In these scenarios, it is fundamental for robots to be able to interpret people's emotional responses and to adjust their behaviors accordingly, to appease them, and to provoke an emotional change in them. This research investigates the impact of robot facial expressions on human emotional experiences within HRI, focusing specifically on whether a robot's expressions can amplify or mitigate users' emotional responses when viewing emotion-eliciting videos. To evaluate participants' emotional states, an AI-based multimodal emotion recognition approach was employed, combining analysis of facial expressions and physiological signals, complemented by a self-assessment questionnaire. Findings indicate that participants responded more positively when the robot's facial expressions aligned with the emotional tone of the videos, suggesting that emotion-coherent displays could enhance user experience and strengthen engagement. These results underscore the potential for expressive social robots to influence human emotions effectively, offering promising applications in therapy, education, and entertainment. By incorporating emotional facial expressions, socially assistive robots could foster behavior change and emotional engagement in HRI, broadening their role in supporting human emotional well-being.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Socially Assistive Robotics (SAR) is an emerging field of robotics that focuses on developing robots that can assist users through hands-off interaction strategies, providing emotional and cognitive assistance <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. To improve the Human-Robot Interaction (HRI) experience, SARs must be capable of interpreting, mimicking, and responding to emotional cues, with facial expressions being a primary mode of emotional communication. This ability is essential when robots are used in contexts where emotional engagement can facilitate positive outcomes, such as therapy, learning, and behavior change. In human-human communication, facial expressions are critical for conveying emotions, improving understanding, and guiding social interactions. Several studies have shown that facial expressions not only reflect how a person is feeling but also influence how others feel <ref type="bibr" target="#b2">[3]</ref>. This phenomenon, known as emotional contagion <ref type="bibr" target="#b3">[4]</ref>, suggests that emotions can spread from one person to another through non-verbal cues, influencing the emotional state of the observer. If SARs are to be effective in emotionally engaging users, they must be able to use facial expressions in ways that shape the user's emotional experience, particularly in situations where emotional states can influence behavior and decision-making. Staffa et al. <ref type="bibr" target="#b4">[5]</ref> investigated whether positive or negative robot personalities can affect the mental state of users during HRI by assessing participants' Electroencephalogram (EEG) signals. They used an anthropomorphic robot with two personalities, one more prone to engage the user than the other, modeled through voice, dialogue, and head and body movements.
The results showed that participants perceived the robot's personality, which affected their emotional state and engagement. Similarly, Fiorini et al. <ref type="bibr" target="#b5">[6]</ref> explored the impact of a robot's behavior on the emotional state of users during exposure to emotion-eliciting images. The robot displayed emotions that were either coherent or incoherent with those experienced by the user, to assess the level of influence it could exert. The results showed high accuracy, up to 98%, in the robot recognizing three emotional states (positive, negative, and neutral), and such states were better identified when the robot performed coherent or incoherent behaviors rather than remaining neutral. Rossi et al. <ref type="bibr" target="#b6">[7]</ref> conducted a study to evaluate the impact of an anthropomorphic robot's non-verbal behaviors on users' emotional responses. Using coherent, incoherent, and neutral behaviors, the robot's non-verbal cues were modeled as emotional gestures. Findings revealed that eliciting emotional reactions with high arousal can be challenging using only emotional gestures, and that additional interaction strategies are needed.</p><p>In light of the different achievements in the literature, the impact of robot facial expressions on human emotions during HRI has yet to be fully investigated. Hence, the present study assesses whether a robot's facial expressions can affect the mood of users while they watch emotion-eliciting videos. By displaying facial expressions that either match or contrast with the user's emotional state, the robot could promote the general effect of mirroring or emotional contagion, whereby an observer tends to covertly and unconsciously mimic the behavior of the person being observed <ref type="bibr" target="#b7">[8]</ref>. The study design is based on the approach outlined by Rossi et al. 
<ref type="bibr" target="#b6">[7]</ref>, with the modification of including only two conditions: the robot's facial expressions either align with the emotional content of the videos or display opposing emotions. To evaluate the emotion felt by the user, an Artificial Intelligence (AI) approach was developed that predicts the user's emotional state from a fusion of facial expressions and physiological signals. Furthermore, participants completed a questionnaire at the beginning of the study to ascertain their empathetic capacity and another at the end to evaluate their perception of the robot's emotional display. By demonstrating the potential of robots to influence human emotions through facial expressions, this study can contribute to the development of SARs that are more emotionally intelligent and capable of supporting users in emotionally meaningful ways, broadening their application in scenarios where emotional engagement can facilitate positive outcomes, such as therapy, learning, and behavior change.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Materials and Methods</head><p>This study evaluates a robot's ability to mitigate emotions in participants watching emotion-eliciting videos. A multimodal emotion recognition system assessed participants' emotional states through facial expression and physiological signal analysis. Pre-experiment questionnaires evaluated participants' empathy levels, while post-experiment questionnaires assessed their perceptions of the robot's emotional displays. The study was conducted in a controlled environment to ensure reliable results.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Robotic Agent and Sensing Elements</head><p>The robotic agent involved in this study is a Furhat robot <ref type="bibr" target="#b8">[9]</ref>, a human-like, rear-projected robotic head that uses computer animations and neck movements to produce facial expressions <ref type="bibr" target="#b9">[10]</ref>. The robot is equipped with a camera and a microphone to capture information from the surrounding environment. However, because the emotion-eliciting videos are shown on a laptop, the laptop's camera is used instead, providing a frontal view of the user's face that is better suited to identifying the felt emotion. Although facial expression may be the most significant non-verbal form of emotional expression <ref type="bibr" target="#b2">[3]</ref>, some people can mask their facial emotions by adopting a neutral expression and using non-intuitive body language that can lead to misinterpretations <ref type="bibr" target="#b10">[11]</ref>. Therefore, a multimodal solution has been pursued to produce a more reliable emotion recognition system. Alongside facial expressions, physiological signals have been considered, namely the Electrocardiogram (ECG) and Galvanic Skin Response (GSR) signals, which can be regarded as more reliable indicators of emotion, as they are harder to conceal or alter deliberately <ref type="bibr" target="#b11">[12]</ref>. These signals are acquired by the BITalino biosignal platform provided by PLUX Biosignals <ref type="bibr" target="#b12">[13]</ref>. The setup of the equipment involved in the study is shown in Figure <ref type="figure" target="#fig_0">1</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Emotion Recognition Model</head><p>The emotion recognition model employed in this study builds upon a baseline architecture previously established in <ref type="bibr" target="#b6">[7]</ref>. The model was selected for its proven effectiveness in recognizing emotions from facial expressions and physiological signals. It was trained on the AMIGOS dataset <ref type="bibr" target="#b13">[14]</ref>, which contains multimodal data, including EEG, ECG, GSR, and facial video recordings. The dataset provides affective annotations based on the Self-Assessment Manikin (SAM) scale <ref type="bibr" target="#b14">[15]</ref> and evaluations made by the dataset's authors. The annotations include valence, arousal, dominance, and basic emotions (Neutral, Disgust, Happiness, Surprise, Anger, Fear, and Sadness) for each participant and video. The model follows a multimodal approach that combines facial expressions and physiological signals to predict the user's emotional state. Each modality is processed individually: the facial images by a Convolutional Neural Network (CNN)-based architecture, and the physiological signals by a Support Vector Machine (SVM) model. The per-modality emotion predictions are then fused to produce a final prediction. The model was trained using a 70-30 train-test split.</p></div>
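The paper does not specify the rule used to fuse the CNN and SVM outputs; a minimal decision-level fusion sketch, assuming each model emits a per-class probability vector over the seven AMIGOS basic emotions and that a weighted average is used (the weight `w_face` and the function name are hypothetical), could look like:

```python
import numpy as np

# The seven basic-emotion labels annotated in AMIGOS.
EMOTIONS = ["Neutral", "Disgust", "Happiness", "Surprise", "Anger", "Fear", "Sadness"]

def fuse_predictions(p_face, p_physio, w_face=0.5):
    """Decision-level fusion: weighted average of the two per-class
    probability vectors, then argmax over the emotion classes."""
    p_face = np.asarray(p_face, dtype=float)
    p_physio = np.asarray(p_physio, dtype=float)
    fused = w_face * p_face + (1.0 - w_face) * p_physio
    return EMOTIONS[int(np.argmax(fused))], fused

# Hypothetical per-modality outputs for one video segment.
label, fused = fuse_predictions(
    [0.10, 0.02, 0.60, 0.08, 0.05, 0.05, 0.10],   # CNN on facial images
    [0.20, 0.05, 0.40, 0.10, 0.05, 0.05, 0.15],   # SVM on ECG/GSR features
)
```

With equal weights, the fused vector remains a valid probability distribution, and the final label is the class both modalities jointly favor.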
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Emotion Elicitation Videos</head><p>The videos for emotion elicitation have been selected from the DECAF database <ref type="bibr" target="#b15">[16]</ref>, a multimodal dataset for decoding user physiological responses to affective multimedia content. Videos with a total length not exceeding 120 seconds were chosen to avoid fatigue and maintain the participants' attention, but also to ensure that only one emotion is elicited at a time. Three videos were selected for each of the four emotional categories, Low Arousal / Negative Valence (LALV), Low Arousal / Positive Valence (LAHV), High Arousal / Negative Valence (HALV), and High Arousal / Positive Valence (HAHV), based on the annotations provided in the DECAF dataset, resulting in a total of 12 videos. For instance, the scene from Bambi where "Bambi's mother gets killed" was categorized under LALV due to its emotionally distressing content, while the scene from Wall-E where "Wall-E and Eve spend a romantic night together" was classified under LAHV to evoke positive but calm emotions. Videos were presented to the participants in random order to avoid any bias in the results.</p></div>
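The four categories partition the valence-arousal plane into quadrants. A sketch of the mapping from DECAF-style valence/arousal annotations to quadrant labels (the midpoint thresholds are assumptions, not values stated in the paper):

```python
def quadrant(valence, arousal, v_mid=5.0, a_mid=5.0):
    """Map a valence/arousal rating pair to one of the four quadrant
    labels used in the study (LALV, LAHV, HALV, HAHV).
    Thresholds v_mid/a_mid are hypothetical scale midpoints."""
    a = "HA" if arousal >= a_mid else "LA"   # high vs. low arousal
    v = "HV" if valence >= v_mid else "LV"   # positive vs. negative valence
    return a + v
```

In this naming, LV corresponds to negative valence and HV to positive valence, matching the acronyms LALV/LAHV/HALV/HAHV.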
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Questionnaires</head><p>Two questionnaires were prepared for the study, one to be completed before the experiment and one after. The pre-experiment questionnaire collected demographic information about the participants, such as age and gender, as well as information about their previous experience with robots and their empathetic capacity. Empathetic capacity is assessed using the Empathy Quotient test <ref type="bibr" target="#b16">[17]</ref>, a self-report questionnaire designed to measure empathy in adults. The short version of the test was chosen, consisting of 40 questions, each scored on a scale from 0 to 2, with higher scores indicating higher levels of empathy. The test yields a total score ranging from 0 to 80. To distinguish participants' levels of empathy, four categories were identified: low empathy (0-20), medium-low empathy (21-40), medium-high empathy (41-60), and high empathy (61-80). The post-experiment questionnaire, in turn, evaluated the participant's perception of the robot's emotional display during the experiment. It included questions about the robot's facial expressions, the perceived emotions, and the impact of the robot's expressions on the participants' emotional state. The post-questionnaire follows the System Usability Scale (SUS) scoring principles <ref type="bibr" target="#b17">[18]</ref>, with each item scored on a scale from 1 (completely disagree) to 5 (completely agree). The post-questionnaire results were scaled to a range from 0 to 100, with higher scores indicating a more positive perception of the robot's emotional display.</p></div>
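The two scorings above can be sketched as follows. The EQ banding is exactly the four ranges stated; the 0-100 rescaling of the 1-5 items is shown as a simple linear mapping, which is an assumption, since the paper does not give the exact SUS-style formula used:

```python
def eq_category(total):
    """Bucket an Empathy Quotient total (0-80) into the four bands
    used in the study."""
    if not 0 <= total <= 80:
        raise ValueError("EQ total must be in [0, 80]")
    if total <= 20:
        return "low"
    if total <= 40:
        return "medium-low"
    if total <= 60:
        return "medium-high"
    return "high"

def scale_to_100(items):
    """Rescale a list of 1-5 Likert responses to a 0-100 score.
    Linear mapping of the item mean; hypothetical, the authors may
    use the standard SUS formula with reversed negative items."""
    mean = sum(items) / len(items)
    return (mean - 1.0) / 4.0 * 100.0
```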
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Results</head><p>A total of 60 subjects, aged 18 to 34, voluntarily participated in the study. Participants included 34 males, 17 females, and 1 non-binary individual. Of these, 24 participants reported no prior experience with robots. Two participants withdrew from the study before completing the session due to personal commitments. The participants were randomly assigned to two groups: coherent (𝑛 = 29) and incoherent (𝑛 = 29). This preliminary analysis aims to assess participants' experiences and perceptions of the robot, comparing post-experiment responses between the two groups. Statistical analyses were conducted using the Student's t-test for independent samples.</p><p>Participants in the coherent group rated the robot's behavior as more natural (𝜇 = 3.393, 𝜎 = 1.197) than those in the incoherent group (𝜇 = 2.448, 𝜎 = 1.213), with a statistically significant difference (𝑝 &lt; 0.05). Similarly, participants in the incoherent group reported a higher discomfort level created by the robot (𝜇 = 3.000, 𝜎 = 1.363) than those in the coherent group (𝜇 = 1.786, 𝜎 = 0.994, 𝑝 &lt; 0.05). However, both groups did not perceive a significant influence from the robot while watching the videos (𝜇 = 2.579, 𝜎 = 1.224, 𝑝 &gt; 0.05). Furthermore, participants in the coherent group perceived the robot as more aware of the video content (𝜇 = 4.143, 𝜎 = 1.02) compared to those in the incoherent group (𝜇 = 2.414, 𝜎 = 1.21). They also rated the robot as less incoherent relative to the video scenes presented (𝜇 = 1.964 for the coherent group, 𝜇 = 3.793 for the incoherent group). The robot's expressions were more distracting for participants in the incoherent group (𝜇 = 3.000, 𝜎 = 1.15) than for those in the coherent group (𝜇 = 2.357, 𝜎 = 1.07). 
For other aspects, such as the social acceptability of the robot and its potential utility in communicating emotions, no significant differences were observed between the two groups (𝑝 &gt; 0.05), with both groups agreeing on the robot's usefulness.</p><p>Overall, these findings suggest that coherent emotional expressions in the robot enhance perceptions of it as more natural, aware, and non-intrusive, whereas incoherent expressions increase perceptions of discomfort, distraction, and incoherence. Although participants did not report feeling influenced by the robot, future studies will explore potential unconscious emotional changes in participants using emotion recognition models.</p></div>
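The group comparisons above use Student's t-test for independent samples. A minimal pure-Python sketch of the pooled-variance statistic (a hypothetical helper for illustration, not the authors' analysis code; in practice a library routine such as SciPy's `ttest_ind` would also return the p-value):

```python
import math

def students_t(a, b):
    """Student's t statistic for two independent samples under the
    equal-variance assumption, with a pooled variance estimate.
    Returns (t, degrees of freedom)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)   # sample variance of b
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2
```

The statistic is then compared against the t distribution with n_a + n_b - 2 degrees of freedom to obtain the reported p-values.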
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion</head><p>This preliminary study explores how robot facial expressions influence human emotional experiences in HRI. In the experiment, participants watched emotion-eliciting videos while interacting with a robot that displayed facial expressions either aligned or misaligned with the emotional content of the videos. The findings indicate that participants generally responded positively to interactions where the robot's expressions matched the emotional content of the videos, underscoring the potential of using facial expressions in SARs to enhance user engagement. This study lays a foundation for incorporating emotionally expressive robots in SAR applications across therapeutic, educational, and entertainment settings. Future research will delve further into the collected data to determine whether participants experienced unconscious emotional changes, advancing the understanding of how emotionally aware robots might foster behavioral change and enhance emotional connection in HRI.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Experimental settings. Example of a user wearing the BITalino biosignal platform while watching emotion-eliciting videos and interacting with the Furhat robot.</figDesc><graphic coords="3,184.82,65.62,225.58,146.45" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This research is supported by the Italian MUR and EU under the project ADVISOR (ADaptiVe leglble robotS for trustwORthy health coaching) -PRIN PNRR 2022 PE6 -Cod. P202277EJ2 and under the complementary actions to the NRRP "Fit4MedRob -Fit for Medical Robotics" Grant (# PNC0000007).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Socially assistive robotics</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Matarić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Scassellati</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Springer handbook of robotics</title>
		<imprint>
			<biblScope unit="page" from="1973" to="1994" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Sensebot: A wearable sensor enabled robotic system to support health and well-being</title>
		<author>
			<persName><forename type="first">L</forename><surname>D'Arco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">6th Collaborative European Research Conference</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="30" to="45" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Role of facial expressions in social interactions</title>
		<author>
			<persName><forename type="first">C</forename><surname>Frith</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Philosophical Transactions of the Royal Society B: Biological Sciences</title>
		<imprint>
			<biblScope unit="volume">364</biblScope>
			<biblScope unit="page" from="3453" to="3458" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Emotional contagion in human-robot interaction</title>
		<author>
			<persName><forename type="first">C.-E</forename><surname>Yu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">E-review of Tourism Research</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Enhancing affective robotics via human internal state monitoring</title>
		<author>
			<persName><forename type="first">M</forename><surname>Staffa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Rossi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Intern. Conf. on Robot and Human Interactive Communication (RO-MAN)</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="884" to="890" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Can i feel you? recognizing human&apos;s emotions during human-robot interaction</title>
		<author>
			<persName><forename type="first">L</forename><surname>Fiorini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">G</forename><surname>Loizzo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>D'Onofrio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sorrentino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Ciccone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Russo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Giuliani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Sancarlo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Cavallo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Social Robotics</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="511" to="521" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Towards the Evaluation of the Role of Embodiment in Emotions Elicitation</title>
		<author>
			<persName><forename type="first">S</forename><surname>Rossi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rossi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sangiovanni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), IEEE</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Unconscious facial reactions to emotional facial expressions</title>
		<author>
			<persName><forename type="first">U</forename><surname>Dimberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Thunberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Elmehed</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological science</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="86" to="89" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<orgName type="institution">Furhat Robotics</orgName>
		</author>
		<ptr target="www.furhatrobotics.com/" />
		<title level="m">Furhat robot</title>
				<imprint>
			<date type="published" when="2024-03-01">2024. 2024-03-01</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Al Moubayed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Beskow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Skantze</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Granström</surname></persName>
		</author>
		<title level="m">Furhat: A back-projected human-like robot head for multiparty human-machine interaction</title>
				<meeting><address><addrLine>Berlin Heidelberg; Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="114" to="130" />
		</imprint>
	</monogr>
	<note>Cognitive Behavioural Systems</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Personalized models for facial emotion recognition through transfer learning</title>
		<author>
			<persName><forename type="first">M</forename><surname>Rescigno</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Spezialetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Rossi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Multimedia Tools and Applications</title>
		<imprint>
			<biblScope unit="volume">79</biblScope>
			<biblScope unit="page" from="35811" to="35828" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Facial-video-based physiological signal measurement: Recent advances and affective applications</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zhao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Signal Processing Magazine</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="page" from="50" to="58" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<orgName type="institution">PLUX Biosignals</orgName>
		</author>
		<ptr target="www.pluxbiosignals.com/collections/bitalino" />
		<title level="m">Bitalino</title>
				<imprint>
			<date type="published" when="2024-03-01">2024. 2024-03-01</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Amigos: A dataset for affect, personality and mood research on individuals and groups</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Miranda-Correa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Abadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Sebe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Patras</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE transactions on affective computing</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="479" to="493" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Observations: Sam: the self-assessment manikin; an efficient cross-cultural measurement of emotional response</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Morris</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of advertising research</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="63" to="68" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Decaf: Meg-based multimodal database for decoding affective physiological responses</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Abadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Subramanian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Kia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Avesani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Patras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Sebe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Affective Computing</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="209" to="222" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Measuring empathy: reliability and validity of the empathy quotient</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">J</forename><surname>Lawrence</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Shaw</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Baker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Baron-Cohen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>David</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological medicine</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="page" from="911" to="920" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Determining what individual sus scores mean: Adding an adjective rating scale</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bangor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Kortum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Miller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of usability studies</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="114" to="123" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
