<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Stimulating Cognitive Engagement in Hybrid Decision-Making: Friction, Reliance and Biases (preface)</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Chiara</forename><surname>Natali</surname></persName>
							<email>chiara.natali@unimib.it</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Milano-Bicocca</orgName>
								<address>
									<addrLine>Viale Sarca 336</addrLine>
									<settlement>Milan</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Brett</forename><surname>Frischmann</surname></persName>
							<email>brett.frischmann@law.villanova.edu</email>
							<affiliation key="aff1">
								<orgName type="institution">Villanova University</orgName>
								<address>
									<addrLine>299 N. Spring Mill Rd</addrLine>
									<settlement>Villanova</settlement>
									<region>Pennsylvania</region>
									<country key="US">United States</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Federico</forename><surname>Cabitza</surname></persName>
							<email>federico.cabitza@unimib.it</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Milano-Bicocca</orgName>
								<address>
									<addrLine>Viale Sarca 336</addrLine>
									<settlement>Milan</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
							<affiliation key="aff2">
								<orgName type="institution">IRCCS Galeazzi Sant&apos;Ambrogio Hospital</orgName>
								<address>
									<addrLine>Via Cristina Belgioioso 173</addrLine>
									<settlement>Milan</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff3">
								<orgName type="department">Third International Conference on Hybrid Human-Artificial Intelligence (HHAI)</orgName>
								<address>
									<addrLine>June 10-14</addrLine>
									<postCode>2024</postCode>
									<settlement>Malmö</settlement>
									<country key="SE">Sweden</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Stimulating Cognitive Engagement in Hybrid Decision-Making: Friction, Reliance and Biases (preface)</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">5133786D457C48DD5398C4FD11B1EA39</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:12+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Human-AI Interaction</term>
					<term>Frictional AI</term>
					<term>Decision Support Systems</term>
					<term>Machine Learning</term>
					<term>Interaction protocols</term>
					<term>Usability</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This workshop critically examined the trend toward rapid and seamless human-AI interactions and considered alternative forms of prosocial engagement. We focused on the role of designers and developers in fostering user empowerment, skill development, and appropriate reliance on AI for responsible decision-making. Our discussions centered on friction-in-design and the core concepts of 'programmed inefficiencies' and 'frictional protocols': design elements intentionally included to promote cognitive engagement and thoughtful interaction with AI, even at the cost of speed. The workshop featured contributions on design principles that balance efficiency with engagement, methods for revealing and reducing biases in explainable AI systems, and considerations for a meaningful future with AI. This first edition set the stage for future research and community-building efforts around 'Frictional AI' to encourage more informed and reflective human-AI interactions.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>We are pleased to present the proceedings of the inaugural Frictional AI Workshop, held as a half-day event on June 11th, 2024 at HHAI2024 (Malmö, Sweden). This workshop marked a first milestone in the exploration of Frictional AI, a novel concept aimed at redefining the dynamics of Human-AI Interaction. By challenging the prevailing trends that favor seamless and rapid interaction with AI systems, the workshop sought to introduce and examine 'frictional protocols' <ref type="bibr" target="#b0">[1]</ref>: deliberate design choices that slow down interactions to foster greater cognitive engagement and more thoughtful decision-making.</p><p>The workshop brought together a diverse group of scholars, practitioners, and researchers who critically examined the role of AI designers and developers in shaping user reliance on AI systems. Moving beyond the conventional attribution of over-reliance to inherent cognitive biases, the discussions highlighted how intentional design can either exacerbate or mitigate such biases, ultimately influencing the quality of human knowledge work and decision-making.</p><p>The workshop's contributions can be broadly categorized into two core themes, encompassing both theory and practice: the theoretical exploration of biases in Human-AI interaction and the presentation of practical design applications.</p><p>The first theme focused on a theoretical examination of cognitive biases and their interaction with AI systems. The contributions included reflections on the psychological and philosophical factors influencing human reliance on AI, as well as philosophical accounts of a meaningful future with AI. This theoretical exploration provided a foundation for understanding how our knowledge of human biases, and our reflections on the future of Human-AI interaction, can be leveraged to improve decision-making quality through more reflective and deliberate interactions with AI.</p><p>The second core theme centered on the practical application of frictional design principles in real-world settings. Case studies and design frameworks were presented, illustrating how frictional protocols can be integrated into various AI systems to balance efficiency with cognitive engagement. Participants shared insights into how these principles could be applied across different domains, from seamful design for human-AI creative systems to adding friction to human-robot interaction, decision support systems for social media content regulation, and AI-mediated communication in medicine. The discussions in this theme provided clear examples of how frictional design can help prevent automation bias, encourage skill retention, and promote ethical AI development and use.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Organization</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Workshop Chairs</head><p>• Chiara Natali (University of Milano-Bicocca, Italy) • Brett M. Frischmann (Villanova University, USA) • Federico Cabitza (University of Milano-Bicocca, IRCCS Galeazzi Sant'Ambrogio Hospital, Italy)</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Programme Committee</head><p>The Programme Committee comprised a multidisciplinary team of experts from fields including Computer Science, Human-Centered Computing, Human-Computer Interaction, Psychology, Philosophy, Sociology, and Artificial Intelligence. Their collective expertise was instrumental in ensuring the rigorous evaluation of workshop submissions. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Summary of the workshop</head><p>The workshop included 8 accepted submissions, with authors from institutions in Italy, the United States of America, Germany, Portugal, and Sweden. The submissions were grouped according to their overarching themes into two presentation sessions: 'Human-AI Collaboration and Biases' and 'Frictional AI Applications'.</p><p>Each session included a reflection roundtable, where all the paper authors discussed the similarities and differences of their approaches and answered questions from the audience. Finally, we discussed future work to build the Frictional AI community.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Introductory talks</head><p>• Brett M. FRISCHMANN, Villanova University (USA) - "An Interdisciplinary Research Agenda for Prosocial Friction-in-Design" • Chiara NATALI, University of Milano-Bicocca (Italy) - "Frictional AI: Topics and Issues"</p><p>Brett M. Frischmann's talk, "An Interdisciplinary Research Agenda for Prosocial Friction-in-Design," drew from his 2018 book Re-Engineering Humanity, co-authored with Evan Selinger <ref type="bibr" target="#b1">[2]</ref>, and subsequent research on friction-in-design <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref>. He addressed the root of humanity's technosocial dilemma: the prevailing economic, social, and political logics that drive the design of AI systems toward goals like maximizing efficiency, minimizing transaction costs, and eliminating friction <ref type="bibr" target="#b2">[3]</ref>. Frischmann argued that these design principles, which prioritize speed, scale, and seamlessness, often undermine human autonomy and social welfare. To counter these tendencies, he called for prosocial "friction-in-design" principles and regulations that challenge the conventional wisdom perpetuating these logics. His proposed strategies include intentionally engineering friction, such as transaction costs and inefficiencies, into AI systems to resist the dominance of efficiency and productivity logics and to promote human flourishing through the exercise and development of human capabilities <ref type="bibr" target="#b1">[2]</ref>.</p><p>Chiara Natali followed with "Frictional AI: Topics and Issues," providing a comprehensive overview of the key areas and challenges in applying frictional design to AI systems, drawing parallels with slow design <ref type="bibr" target="#b4">[5]</ref>, microboundaries <ref type="bibr" target="#b5">[6]</ref>, and desirable and programmed inefficiencies <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref> for constructive distrust <ref type="bibr" target="#b8">[9]</ref>, as well as debiasing strategies against over-confidence <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref> and anchoring bias <ref type="bibr" target="#b11">[12]</ref>. Such strategies require new methodologies to assess over- and under-reliance <ref type="bibr" target="#b12">[13]</ref>, such as the Human-AI Interaction Assessment tool. Together, these talks set the stage for a deeper exploration of how frictional design can be strategically used to shape the social and ethical impacts of AI.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>First session: Human-AI Collaboration and Biases</head><p>The Human-AI Collaboration and Biases session examines different aspects of bias and interaction in human engagement with AI systems, emphasizing the need to rethink design and user practices to promote thoughtful and meaningful AI deployment. Regina de Brito Duarte and Joana Campos examine cognitive biases in AI-assisted decision-making, advocating for balanced friction to avoid both over-reliance and undue skepticism towards AI recommendations. Sebastiano Moruzzi, Filippo Ferrari, and Filippo Riscica Lizzo discuss how "epistemic filters" impact the outputs of, and user interactions with, XAI and Generative AI, and how understanding and adjusting them can address technical and cognitive biases. Christopher D. Quintana and Georg Theiner propose "technoamicitia," a design approach that goes beyond traditional usability metrics to foster deeper human engagement with AI: their approach aims to support psychological and moral development, and thus counter the prevailing view of AI as mere tools for efficiency and productivity. Scott Robbins builds upon the concept of friction by challenging the conventional focus on regulation and design controls as the sole means of achieving ethical AI deployment; he suggests that norms around the intentional use or restraint of AI can help preserve human autonomy and ensure that certain meaningful tasks remain within human control.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Second session: Frictional AI Applications</head><p>• Caterina FREGOSI, Federico CABITZA, University of Milano-Bicocca (Italy) - "A frictional design approach: towards Judicial AI and its possible applications"</p><p>• Ingar BRINCK, Samantha STEDTLER and Valentina FANTASIA, Lund University (Sweden) - "Exploring Frictional Design in Human-Robot Interaction: Delayed Movement in a Turn-taking Game" • Sarah INMAN and Sarah D'ANGELO, Google (USA) - "Enabling Creative Human-AI Systems with Seamful Design" (not included in the proceedings) • Evan SELINGER, Rochester Institute of Technology (USA) - "Balancing Empathy and Accountability: Exploring Friction-In-Design For AI-Mediated Doctor-Patient Communication"</p><p>The Frictional AI Applications session highlights diverse approaches to incorporating intentional friction in AI design to promote critical thinking, creativity, and ethical engagement. Caterina Fregosi and Federico Cabitza present "Judicial AI," a decision support system that offers two contrasting explanations to foster critical thinking and reduce automation bias. They explore how complex decision pathways can enhance user autonomy. Ingar Brinck, Samantha Stedtler, and Valentina Fantasia examine frictional design in human-robot interactions and demonstrate how deliberate delays in a turn-taking game can enhance cognitive engagement and foster deeper interaction with social robots. Sarah Inman and Sarah D'Angelo propose applying "seamful design" in software engineering to support creative problem-solving: they emphasize the value of exposing hidden processes to maintain control and foster creativity beyond mere productivity. Evan Selinger suggests using generative AI to enhance empathetic content in doctor-patient communication, addressing the issue of doctors often sounding robotic due to systemic pressures. To ensure this technology is used ethically and maintains trust, he advocates for incorporating friction, such as transparency measures and manual revisions, and establishing governance procedures to hold doctors accountable for how they integrate AI-generated content into their messages.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion and Remarks</head><p>The concept of Frictional AI draws heavily on the idea that some level of friction, or 'seamfulness,' is essential to prevent overreliance on AI and to maintain human agency in decision-making processes. As Frischmann and Selinger <ref type="bibr" target="#b1">[2]</ref> argued in Re-engineering Humanity, tolerating some friction in our interactions with technology is vital for sustaining environments that support human flourishing.</p><p>The Frictional AI Workshop has laid the groundwork for future research and collaboration on this new paradigm in Human-AI Interaction, one that values cognitive engagement and ethical responsibility as much as it does efficiency and performance.</p><p>Looking ahead, we are confident that the contributions contained in these proceedings will serve as a valuable resource for scholars and practitioners alike, providing both theoretical frameworks and practical guidance for integrating Frictional AI into a wide range of applications.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head></head><label></label><figDesc>• Regina DE BRITO DUARTE and Joana CAMPOS, INESC-ID, Instituto Superior Tecnico (Portugal) - "Looking for cognitive bias in Human-AI decision-making" • Sebastiano MORUZZI, Filippo FERRARI and Filippo RISCICA LIZZO, University of Bologna (Italy) - "Biases, Epistemic Filters, and Explainable Artificial Intelligence" • Christopher D. QUINTANA, Georg THEINER, Villanova University (USA) - "Make Friends, Not Tools: Designing AI for Technoamicitia" • Scott ROBBINS, University of Bonn (Germany) - "Beyond Regulation: How We Can Craft a Meaningful Future with AI"</figDesc><table /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>We extend our sincere gratitude to all the participants, speakers, and the HHAI conference organizers who contributed to the success of this workshop. Special thanks go to the members of the Programme Committee for their expertise and commitment.</p><p>C. Natali gratefully acknowledges the PhD grant awarded by the Fondazione Fratelli Confalonieri, which has been instrumental in facilitating her research pursuits. F. Cabitza acknowledges funding support provided by the Italian project PRIN PNRR 2022 InXAID -Interaction with eXplainable Artificial Intelligence in (medical) Decision making. CUP: H53D23008090001 funded by the European Union -Next Generation EU.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Never tell me the odds: Investigating pro-hoc explanations in medical decision making</title>
		<author>
			<persName><forename type="first">F</forename><surname>Cabitza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Natali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Famiglini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Campagner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Caccavella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Gallazzi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence in Medicine</title>
		<imprint>
			<biblScope unit="volume">150</biblScope>
			<biblScope unit="page">102819</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><surname>Frischmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Selinger</surname></persName>
		</author>
		<title level="m">Re-engineering humanity</title>
				<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Friction-in-design regulation as 21st century time, place, and manner restriction</title>
		<author>
			<persName><forename type="first">B</forename><surname>Frischmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Benesch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Yale JL &amp; Tech</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page">376</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Governance seams</title>
		<author>
			<persName><forename type="first">B</forename><surname>Frischmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Ohm</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Harvard Journal of Law &amp; Technology</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Slow design for meaningful interactions</title>
		<author>
			<persName><forename type="first">B</forename><surname>Grosse-Hering</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mason</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Aliakseyeu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bakker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Desmet</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the SIGCHI Conference on Human Factors in Computing Systems</title>
				<meeting>the SIGCHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="3431" to="3440" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Design frictions for mindful interactions: The case for microboundaries</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">L</forename><surname>Cox</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Gould</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Cecchinato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Iacovides</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Renfree</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2016 CHI conference extended abstracts on human factors in computing systems</title>
				<meeting>the 2016 CHI conference extended abstracts on human factors in computing systems</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1389" to="1397" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Desirable inefficiency</title>
		<author>
			<persName><forename type="first">P</forename><surname>Ohm</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Frankle</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Fla. L. Rev</title>
		<imprint>
			<biblScope unit="volume">70</biblScope>
			<biblScope unit="page">777</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Programmed inefficiencies in DSS-supported human decision making</title>
		<author>
			<persName><forename type="first">F</forename><surname>Cabitza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Campagner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ciucci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Seveso</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Modeling Decisions for Artificial Intelligence: 16th International Conference, MDAI 2019</title>
				<meeting><address><addrLine>Milan, Italy</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">September 4-6, 2019. 2019</date>
			<biblScope unit="page" from="201" to="212" />
		</imprint>
	</monogr>
	<note>Proceedings 16</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Privacy as protection of the incomputable self: From agnostic to agonistic machine learning</title>
		<author>
			<persName><forename type="first">M</forename><surname>Hildebrandt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Theoretical Inquiries in Law</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="page" from="83" to="121" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A review of possible effects of cognitive biases on interpretation of rule-based machine learning models</title>
		<author>
			<persName><forename type="first">T</forename><surname>Kliegr</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Š</forename><surname>Bahník</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fürnkranz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">295</biblScope>
			<biblScope unit="page">103458</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">How cognitive biases affect XAI-assisted decision-making: A systematic review</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bertrand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Belloum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Eagan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Maxwell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society</title>
				<meeting>the 2022 AAAI/ACM Conference on AI, Ethics, and Society</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="78" to="91" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">&quot;If I had all the time in the world&quot;: Ophthalmologists&apos; perceptions of anchoring bias mitigation in clinical AI support</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K P</forename><surname>Bach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">M</forename><surname>Nørgaard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Brok</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Van Berkel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems</title>
				<meeting>the 2023 CHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1" to="14" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">AI shall have no dominion: On how to measure technology dominance in AI-supported human decision-making</title>
		<author>
			<persName><forename type="first">F</forename><surname>Cabitza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Campagner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Angius</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Natali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Reverberi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CHI &apos;23: the Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note>to be published</note>
</biblStruct>


				</listBibl>
			</div>
		</back>
	</text>
</TEI>
