<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">The Effect of Emoji Type on Trust in AI Teammates</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Morgan</forename><forename type="middle">E</forename><surname>Bailey</surname></persName>
							<email>m.bailey.1@research.gla.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">School of Computer Science</orgName>
								<orgName type="institution">University of Glasgow</orgName>
								<address>
									<addrLine>Sir Alwyn Williams Building</addrLine>
									<postCode>G12 8RZ</postCode>
									<settlement>Glasgow</settlement>
									<country key="GB">Scotland</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">School of Psychology &amp; Neuroscience</orgName>
								<orgName type="institution">University of Glasgow</orgName>
								<address>
									<addrLine>62 Hillhead Street</addrLine>
									<postCode>G12 8QB</postCode>
									<settlement>Glasgow</settlement>
									<country key="GB">Scotland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Benjamin</forename><surname>Gancz</surname></persName>
							<email>benjamin.gancz@qumo.do</email>
							<affiliation key="aff2">
								<orgName type="institution">Qumodo Ltd</orgName>
								<address>
									<addrLine>7 Bell Yard</addrLine>
									<postCode>WC2A 2JR</postCode>
									<settlement>London</settlement>
									<country key="GB">United Kingdom</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Frank</forename><forename type="middle">E</forename><surname>Pollick</surname></persName>
							<email>frank.pollick@glasgow.ac.uk</email>
							<affiliation key="aff1">
								<orgName type="department">School of Psychology &amp; Neuroscience</orgName>
								<orgName type="institution">University of Glasgow</orgName>
								<address>
									<addrLine>62 Hillhead Street</addrLine>
									<postCode>G12 8QB</postCode>
									<settlement>Glasgow</settlement>
									<country key="GB">Scotland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">The Effect of Emoji Type on Trust in AI Teammates</title>
					</analytic>
					<monogr>
						<title level="m">MultiTTrust: 2nd Workshop on Multidisciplinary Perspectives on Human-AI Team Trust</title>
						<meeting><address><settlement>Gothenburg</settlement><country key="SE">Sweden</country></address></meeting>
						<imprint>
							<date type="published" when="2023-12-04">December 4, 2023</date>
						</imprint>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">B001AA5203A9A098F1754C31380B3510</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:36+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Human-AI Teams</term>
					<term>Human-AI Dynamic Team Trust</term>
					<term>Trust-Calibration</term>
					<term>Trust</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The rapid advancement of Artificial Intelligence (AI) has revolutionized various sectors, and the workplace is no exception. Collaborative efforts between humans and AI, known as Human-AI teams (HATs), have gained increasing attention. Trust plays a central role in shaping HAT dynamics: excessive trust can lead to over-reliance, while insufficient trust can hinder AI utilization. This study explores the potential of emojis to enhance Social Intelligence (SI) within HATs and influence trust calibration. Drawing on prior research indicating the role of emojis in conveying emotional states, the study implemented a mixed-methods design in which participants were divided into two groups by a between-groups factor: one group interacted with a highly reliable AI, the other with a less reliable AI. The within-groups factor was emoji type, with three conditions: Face Emojis (e.g., ☹), Icon Emojis, or No Emojis. Participants also had a human teammate who never used emojis and performed at the same level across all conditions. The task involved determining geographic locations with the help of teammates' responses, with the AI and human teammates often providing conflicting answers. The analysis revealed that neither the use of emojis in AI responses nor the reliability of AI teammates had a significant impact on trust or influence ratings. Furthermore, the type of emojis used did not affect trust calibration. The Trust in Automation Questionnaire results indicated that reliability significantly affected trust and familiarity, while emoji type did not. Despite the limited influence of emojis on trust calibration in HATs, the study sheds light on the complex dynamics at play. The specific nature of tasks in HATs, requiring precision and cognitive effort, may overshadow emotional cues conveyed by emojis. 
Nevertheless, the study identified that participants perceived highly reliable AI as less familiar, possibly due to anthropomorphic priming, which aligns with past research. Trust calibration strategies should consider AI's human-like performance. In conclusion, this research underscores the intricate nature of trust calibration in HATs and suggests that while emojis hold potential for enhancing human-computer interactions, their impact on trust may be more restrained in some contexts. Future studies should delve deeper into trust complexities in HATs and explore strategies beyond emojis to foster trust in HATs.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In recent decades, the rapid advancement of Artificial Intelligence (AI) has profoundly transformed various aspects of society. Within the workplace specifically, AI has proven to excel in tasks involving extensive data analysis, high precision, and sustained cognitive effort. Nevertheless, research consistently emphasizes the effectiveness of human-AI collaboration, often referred to as hybrid intelligence, in achieving optimal results <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b10">11]</ref>. This has sparked a growing interest in comprehending the dynamics of Human-AI teams (HATs) to implement AI effectively within the workforce.</p><p>Trust emerges as a pivotal factor in shaping the dynamics of HATs, as it underlies critical team interactions. Striking the right balance of trust is essential within HATs, where excessive trust can lead to over-reliance on AI systems, causing users to overlook mistakes and errors <ref type="bibr" target="#b9">[10]</ref>. Conversely, insufficient trust may result in team members underutilizing the capabilities of AI, ultimately leading to reduced team performance <ref type="bibr" target="#b4">[5]</ref>. Calibrating trust within HATs entails transitioning from black-box AI methods to explainable AI. Presenting AI outputs in a more human-friendly manner, integrating elements of Social Intelligence (SI) <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b10">11]</ref>, proves to be a valuable approach for explaining AI and facilitating trust calibration <ref type="bibr" target="#b10">[11]</ref>.</p><p>Previous research has indicated that emojis can play a significant role in enhancing SI within professional settings. 
Emojis offer a valuable means for AI to convey emotional states, thereby allowing AI systems to better interpret and respond to users' emotional cues and potentially calibrate trust successfully. Building on prior research, which has effectively employed emojis on platforms like Twitter to develop models for inferring affect from emoji usage patterns <ref type="bibr" target="#b0">[1]</ref>, the use of emojis can be extended to foster SI in HATs and could allow for mutual understanding of affective state between the human teammate and the AI teammate. Furthermore, in the domain of health-related applications, particularly those involving chatbots inquiring about participants' mental well-being, studies have demonstrated the positive impact of emojis. Chatbots that incorporate emojis have received higher ratings in terms of user enjoyment, attitude, and confidence <ref type="bibr" target="#b3">[4]</ref>. Research has also indicated that messages from chatbots featuring emojis were rated on par with those from human senders <ref type="bibr" target="#b2">[3]</ref>. Additionally, both human and AI senders who utilized emojis were perceived as significantly more socially appealing, credible, and competent at computer-mediated communication compared to senders who relied solely on verbal messages <ref type="bibr" target="#b2">[3]</ref>. The incorporation of emojis into AI-mediated communication not only enhances the ability to understand and express affective states but also fosters positive user experiences and perceptions, aligning with the goals of social intelligence within work environments. From the current literature we pose the following hypotheses:</p><p>H 1 : Use of Emojis in AI responses will influence the decision-making process when determining which teammate to trust.</p><p>H 2 : Type of Emojis in AI responses will influence the decision-making process when determining which teammate to trust.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Method</head><p>We used a mixed between-within subjects design (2×3) in which participants interacted with an AI teammate of either high (90%) or low (60%) reliability and a human teammate with 30% reliability. Within these groups, participants then interacted with three emoji conditions: face emojis, icon emojis, or no emojis. We determined a sample size of N = 44 for 85% power to detect a medium effect in a two-way ANOVA (α = .05).</p><p>The study employed a Wizard of Oz experimental method for development convenience and optimal control. Participants were led to believe they were collaborating with an AI and a human teammate when they were actually interacting with responses produced by ChatGPT. The task involved presenting participants with random locations extracted from Google Earth. Participants were tasked with determining the continent, country, and city associated with each location, with the final decision resting on the participant, who assumed the role of the 'team leader'. A time constraint of 120 seconds per location was enforced, meaning participants had to rely on their teammates' responses to submit the location in time. Notably, the AI and human teammates provided conflicting answers 90% of the time, requiring participants to discern which teammate they trusted more.</p><p>Each participant identified a total of 30 locations across three blocks of 10 trials each. Each block used either Face Emojis (e.g., ☹), Icon Emojis, or No Emojis. Following each trial, the correct answer was revealed, enabling participants to assess the performance of the human and AI teammates. 
On each trial participants rated which teammate influenced them most and which teammate they trusted most. At the end of each block participants completed the Trust in Automation Questionnaire <ref type="bibr" target="#b5">[6]</ref>, which has six sub-sections (Trust, Familiarity, Understanding, Intentions of developers, Reliability of AI, and Propensity to trust) measuring different elements of trust in the AI interacted with in the previous block; the questionnaires were slightly altered to fit the zero-embodiment scenario being explored. We collected the location responses from ChatGPT, a large language model, by inputting location descriptions and requesting versions with emojis distributed throughout them; minimal editing was needed to make the responses suitable. The AI's writing style mimicked humans, following previous successful approaches <ref type="bibr" target="#b1">[2]</ref>. We conducted the experiment using PsychoPy and hosted it on Pavlovia.</p></div>
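The block and trial structure described in the Method can be sketched as follows. This is a minimal illustration under assumed parameters, not the authors' actual PsychoPy implementation; the function name and the independent sampling of the conflict probability are our own simplifications.

```python
# Illustrative sketch of the trial schedule: 3 counterbalanced emoji blocks
# of 10 trials each; the AI answers correctly at the group's reliability
# level, the human teammate at 30%, and teammates conflict on ~90% of trials.
# All names and the exact sampling scheme are assumptions for illustration.
import random

def make_schedule(ai_reliability, seed=0):
    rng = random.Random(seed)
    blocks = ["face", "icon", "none"]
    rng.shuffle(blocks)  # counterbalance emoji conditions across the session
    schedule = []
    for block in blocks:
        for trial in range(10):
            schedule.append({
                "block": block,
                "trial": trial,
                "ai_correct": ai_reliability > rng.random(),
                "human_correct": 0.30 > rng.random(),
                "conflict": 0.90 > rng.random(),
            })
    return schedule

# High-reliability group: AI correct on ~90% of trials
sched = make_schedule(0.90)
```

Seeding the generator per participant keeps the schedule reproducible while still varying the positions of AI and human errors across sessions.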
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Results</head><p>A total of 42 participants from the University of Glasgow were recruited. The group consisted of 24 males and 18 females and had a mix of students (n = 27) and professionals (n = 17).</p><p>We conducted a two-way ANOVA with interactions to compare trust ratings of the AI. The analysis indicated that neither the type of emojis used in AI responses nor the reliability level of AI teammates had a significant impact on trust. Specifically, the main effects of emoji type (F(2, 4) = 0.647, p = 0.524) and reliability (F(1, 4) = 1.363, p = 0.243) were non-significant, as was the interaction effect between emoji type and reliability (F(2, 4) = 0.554, p = 0.575). We also conducted a two-way ANOVA with interactions to compare influence ratings of the AI. The analysis indicated that neither emoji type nor reliability had a statistically significant impact on influence. Specifically, the main effects of emoji type (F(2, 4) = 0.368, p = 0.692) and reliability (F(1, 4) = 0.010, p = 0.921), along with the interaction effect between emoji type and reliability (F(2, 4) = 1.493, p = 0.225), were found to be non-significant.</p><p>We also analyzed the Trust in Automation Questionnaire <ref type="bibr" target="#b5">[6]</ref> by completing two ANOVAs on the various subsections assessing different dimensions of trust. For the Trust subsection, the analysis demonstrated a statistically significant effect of Reliability (F(1, 2) = 69.133, p = 0.0142), while emoji type showed no significant impact. Post hoc comparisons via the Tukey method revealed significant differences across all emoji types, but only by reliability. For the Familiarity subsection, Reliability demonstrated a significant impact (F(1, 2) = 141.187, p = 0.007), while emoji type had no significant effect. Post hoc tests likewise indicated differences only by reliability, not between emoji types. 
In the Propensity to trust subsection, Reliability showed a significant effect (F(1, 2) = 30.990, p = 0.0308), while emoji type did not significantly influence trust. Tukey post hoc tests did not identify specific trust differences based on emoji type and reliability. In the Reliability of AI, Understanding, and Intentions of developers subsections, neither Reliability nor emoji type had a significant effect on trust, with both showing p-values above 0.05.</p></div>
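To make the analysis concrete, the variance decomposition behind a balanced two-way (reliability × emoji type) ANOVA of the kind reported above can be sketched in a few lines. The ratings below are fabricated purely for illustration and do not reproduce the study's data.

```python
# Minimal balanced two-way ANOVA (reliability x emoji type) from first
# principles. The ratings below are invented solely to illustrate the
# decomposition; they are not the study's data.
from itertools import product
from statistics import mean

data = {
    ("high", "face"): [5.1, 4.8, 5.3],
    ("high", "icon"): [4.9, 5.0, 5.2],
    ("high", "none"): [5.0, 4.7, 5.1],
    ("low", "face"): [4.2, 4.5, 4.1],
    ("low", "icon"): [4.4, 4.0, 4.3],
    ("low", "none"): [4.1, 4.4, 4.2],
}

def two_way_anova(data):
    levels_a = sorted({k[0] for k in data})  # reliability levels
    levels_b = sorted({k[1] for k in data})  # emoji conditions
    n = len(next(iter(data.values())))       # replicates per cell
    grand = mean(x for cell in data.values() for x in cell)
    mean_a = {a: mean(x for b in levels_b for x in data[(a, b)]) for a in levels_a}
    mean_b = {b: mean(x for a in levels_a for x in data[(a, b)]) for b in levels_b}
    # Sums of squares for the two main effects, interaction, and error
    ss_a = n * len(levels_b) * sum((mean_a[a] - grand) ** 2 for a in levels_a)
    ss_b = n * len(levels_a) * sum((mean_b[b] - grand) ** 2 for b in levels_b)
    ss_cells = n * sum((mean(data[(a, b)]) - grand) ** 2
                       for a, b in product(levels_a, levels_b))
    ss_ab = ss_cells - ss_a - ss_b
    ss_err = sum((x - mean(cell)) ** 2 for cell in data.values() for x in cell)
    df_a, df_b = len(levels_a) - 1, len(levels_b) - 1
    df_err = len(levels_a) * len(levels_b) * (n - 1)
    ms_err = ss_err / df_err
    return {
        "reliability": (ss_a / df_a) / ms_err,
        "emoji": (ss_b / df_b) / ms_err,
        "interaction": (ss_ab / (df_a * df_b)) / ms_err,
    }

f_stats = two_way_anova(data)
```

With these made-up numbers the reliability factor dominates while the emoji factor contributes almost nothing, mirroring the qualitative pattern of the reported results.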
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Discussion</head><p>The aim of this study was to investigate how emojis influence trust calibration within Human-AI teams (HATs) and what this means for team dynamics. While emojis have shown potential in improving human-computer interactions <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref>, our research revealed that their impact on trust calibration within HATs was not as significant as anticipated, and neither of our research hypotheses was fully supported.</p><p>Contrary to our expectations, integrating emojis into AI-mediated communication did not enhance trust calibration between human team members and AI. Despite emojis offering a more human-friendly and emotionally expressive interface, their effect on trust calibration in HATs seemed limited. Several factors may explain these outcomes. Trust in HATs appears to be influenced by multifaceted dynamics that go beyond emotional cues. Transparency of AI systems <ref type="bibr" target="#b6">[7]</ref>, their past performance <ref type="bibr" target="#b11">[12]</ref>, and the unique traits of human team members <ref type="bibr" target="#b7">[8]</ref> likely play crucial roles in trust development. Emojis, while enhancing emotional expressiveness, might not address these fundamental trust determinants in HATs.</p><p>Additionally, the specific nature of tasks in HATs, demanding precision, data analysis, and cognitive effort, might overshadow the emotional cues conveyed by emojis. The experimental task did not require any emotional engagement; in other situations where emojis have been found useful there is often a need for emotion, such as health care <ref type="bibr" target="#b3">[4]</ref>.</p><p>Our research did find significant results concerning participants' trust in and familiarity with the AI, though these results held only for reliability. Participants rated highly reliable AI as significantly less familiar compared to less reliable AI. 
This suggests that proficient AI that uses humanized behavior effectively might not be frequently encountered. These findings could explain why the high-reliability AI received lower trust scores on the TIA: previous research has shown that highly reliable AI with high humanness is less trustworthy than humanized low-reliability AI <ref type="bibr" target="#b1">[2]</ref>, possibly influenced by anthropomorphic priming <ref type="bibr" target="#b11">[12]</ref>. Although limited, because these trust results appeared only in the questionnaire and were not replicated in the experimental data, they support past research indicating that users find AI more trustworthy when it appears more human-like, especially when the AI's performance is not perfect <ref type="bibr" target="#b12">[13]</ref>.</p><p>In conclusion, our research highlights the intricate nature of trust calibration within HATs and indicates that while emojis have potential in enhancing human-computer interactions, their impact on trust might be more restrained in this specific context. Future studies should delve deeper into the complexities of trust in HATs and explore strategies beyond emojis that can effectively foster trust in the evolving realm of human-AI collaboration.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1.</head><label>1</label><figDesc>Ratings given on the different subsections of the TIA questionnaire. * indicates significance &lt;0.05; ** indicates significance &lt;0.01.</figDesc><graphic coords="3,86.20,72.00,446.50,274.75" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>Morgan Bailey is supported by the UKRI Centre for Doctoral Training in Socially Intelligent Artificial Agents, Grant Number EP/S02266X/1.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Directions in hybrid intelligence: complementing AI systems with human intelligence</title>
		<author>
			<persName><forename type="first">E</forename><surname>Kamar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI&apos;16)</title>
				<meeting>the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI&apos;16)<address><addrLine>New York, NY</addrLine></address></meeting>
		<imprint>
			<publisher>AAAI Press</publisher>
			<biblScope unit="page" from="4070" to="4073" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Supporting Artificial Social Intelligence With Theory of Mind</title>
		<author>
			<persName><forename type="first">J</forename><surname>Williams</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Fiore</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Jentsch</surname></persName>
		</author>
		<idno type="DOI">10.3389/frai.2022.750763</idno>
	</analytic>
	<monogr>
		<title level="j">Frontiers in Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page">750763</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Towards a Theory of Longitudinal Trust Calibration in Human-Robot Teams</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">J</forename><surname>Visser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M M</forename><surname>Peeters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">F</forename><surname>Jung</surname></persName>
		</author>
		<idno type="DOI">10.1007/s12369-019-00596-x</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Social Robotics</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="459" to="478" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Intelligence and its uses</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">L</forename><surname>Thorndike</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Harper&apos;s Magazine</title>
		<imprint>
			<biblScope unit="volume">140</biblScope>
			<biblScope unit="page" from="227" to="235" />
			<date type="published" when="1920">1920</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A multi-label emoji classification method using balanced pointwise mutual information-based feature selection</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Ahanin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Ismail</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.csl.2021.101330</idno>
	</analytic>
	<monogr>
		<title level="j">Computer Speech and Language</title>
		<imprint>
			<biblScope unit="volume">73</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">The effect of emojis when interacting with conversational interface assisted health coaching system</title>
		<author>
			<persName><forename type="first">A</forename><surname>Fadhil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Schiavo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">A</forename><surname>Yilma</surname></persName>
		</author>
		<idno type="DOI">10.1145/3240925.3240965</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 12th EAI International Conference on Pervasive Computing Technologies for Healthcare</title>
				<meeting>the 12th EAI International Conference on Pervasive Computing Technologies for Healthcare<address><addrLine>New York, NY</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">A Bot and a Smile: Interpersonal Impressions of Chatbots and Humans Using Emoji in Computer-mediated Communication</title>
		<author>
			<persName><forename type="first">A</forename><surname>Beattie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P</forename><surname>Edwards</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Edwards</surname></persName>
		</author>
		<idno type="DOI">10.1080/10510974.2020.1725082</idno>
	</analytic>
	<monogr>
		<title level="j">Communication Studies</title>
		<imprint>
			<biblScope unit="volume">71</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Theoretical considerations and development of a questionnaire to measure trust in automation</title>
		<author>
			<persName><forename type="first">M</forename><surname>Körber</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-96074-6_2</idno>
	</analytic>
	<monogr>
		<title level="m">Advances in Intelligent Systems and Computing</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Social Intelligence towards Human-AI Teambuilding</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Bailey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">E</forename><surname>Pollick</surname></persName>
		</author>
		<idno type="DOI">10.1609/aaai.v37i13.26940</idno>
		<ptr target="https://doi.org/10.1609/aaai.v37i13.26940" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AAAI Conference on Artificial Intelligence</title>
				<meeting>the AAAI Conference on Artificial Intelligence</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="16160" to="16161" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Transparency and trust in artificial intelligence systems</title>
		<author>
			<persName><forename type="first">P</forename><surname>Schmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Biessmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Teubner</surname></persName>
		</author>
		<idno type="DOI">10.1080/12460125.2020.1819094</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Decision Systems</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Priming anthropomorphism: Can the credibility of humanlike robots be transferred to non-humanlike robots?</title>
		<author>
			<persName><forename type="first">D</forename><surname>Zanatto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Patacchiola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Goslin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Cangelosi</surname></persName>
		</author>
		<idno type="DOI">10.1109/HRI.2016.7451847</idno>
	</analytic>
	<monogr>
		<title level="m">ACM/IEEE International Conference on Human-Robot Interaction</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">The effects of personality and locus of control on trust in humans versus artificial intelligence</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">N</forename><surname>Sharan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Romano</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.heliyon.2020.e04572</idno>
	</analytic>
	<monogr>
		<title level="j">Heliyon</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Investigating cooperation with robotic peers</title>
		<author>
			<persName><forename type="first">D</forename><surname>Zanatto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Patacchiola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Goslin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Cangelosi</surname></persName>
		</author>
		<idno type="DOI">10.1371/journal.pone.0225028</idno>
	</analytic>
	<monogr>
		<title level="j">PLoS ONE</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
