<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Looking For Cognitive Bias In AI-Assisted Decision-Making</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Regina</forename><surname>De Brito Duarte</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">INESC-ID</orgName>
								<orgName type="institution">Instituto Superior Técnico</orgName>
								<address>
									<settlement>Lisbon</settlement>
									<country key="PT">Portugal</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Joana</forename><surname>Campos</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">INESC-ID</orgName>
								<orgName type="institution">Instituto Superior Técnico</orgName>
								<address>
									<settlement>Lisbon</settlement>
									<country key="PT">Portugal</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Looking For Cognitive Bias In AI-Assisted Decision-Making</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">50353284484DE0F44613685C47B59115</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:13+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>AI-assisted decision-making</term>
					<term>Cognitive bias</term>
					<term>Human-AI interaction</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Artificial intelligence (AI) has been widely employed in decision-making contexts. However, AI-assisted decision-making continues to encounter several challenges, including prevalent patterns of over-reliance and under-reliance. This paper provides an analysis of the most common cognitive biases in AI-assisted decision-making, supported by multiple examples from the literature. Various solutions proposed in the literature to address the shortcomings of AI-assisted decision-making, such as Explainable AI techniques or cognitive forcing functions, may mitigate certain biases but potentially exacerbate others.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The rapid integration of Artificial Intelligence (AI) into society is driven by its remarkable capabilities, which enhance decision-making in fields such as law and healthcare <ref type="bibr" target="#b0">[1]</ref>. However, the full impact of AI recommendations on human decisions remains an area of ongoing investigation <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5]</ref>. To address this, eXplainable AI (XAI) has emerged with the goal of making AI predictions more understandable <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>. Despite this, the effectiveness of XAI faces challenges, such as overreliance, where users place excessive trust in AI <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10]</ref>.</p><p>To enhance AI-assisted decision-making, proposals include designing clearer explanations <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b12">13</ref>] and implementing cognitive forcing functions, techniques designed to increase user engagement in the decision process. These functions, such as decision checklists, delayed AI responses, or AI suggestions on demand, are intended to boost user attention <ref type="bibr" target="#b13">[14]</ref>. While these approaches address cognitive biases inherent in AI-assisted decision-making <ref type="bibr" target="#b14">[15]</ref>, the same strategies that mitigate certain cognitive biases can unintentionally trigger others.</p><p>This extended abstract identifies and discusses the most common cognitive biases in AI-assisted decision-making, along with their implications for the field. 
The aim is to highlight design considerations related to cognitive biases for XAI and human-AI interface designers and to provide a comprehensive perspective on how to approach cognitive biases in AI-assisted decision-making.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">The AI-assisted decision-making process</head><p>Hoffman et al. propose a three-stage model of the traditional decision-making process, comprising situation assessment, interpretation, and selection <ref type="bibr" target="#b15">[16]</ref>. The model begins with gathering and evaluating relevant information to define the problem and set goals, followed by analyzing this information to develop a plan of action, and concludes with selecting and committing to a specific course of action. These stages provide a structured approach that can vary depending on the decision task at hand. In AI-assisted decision-making, the traditional stages of situation assessment, interpretation, and selection/commitment are preserved. AI improves the interpretation stage by evaluating options, assessing their value, and considering potential outcomes <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b11">12]</ref>. It typically offers high-accuracy recommendations that can be further enhanced with confidence intervals and explanations. Despite these advantages, the assumption that AI-assisted decision-making is always more efficient than human decision-making is sometimes challenged <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b13">14]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>HHAI-WS</head><p>AI-assisted decision-making can be understood as a process involving three primary components: the human decision-maker, who is ultimately responsible for the final decision and its outcomes; the decision task with its specific characteristics; and the AI agent that provides recommendations to support the decision-making process. Each component can exhibit different characteristics that influence the decision-making process. For instance, a decision task may vary in complexity (task difficulty), be conducted in a high-stakes or low-stakes environment (risk), demand varying levels of cognitive effort from the user, and be designed in different ways (design). Similarly, on the human side, factors like expertise level and whether the decision is made by a group or an individual (number of decision-makers) can impact the process. For the AI agent, aspects such as the accuracy of its recommendations and the types of explanations provided to the user can also influence the final outcome.</p><p>In AI-assisted decision-making processes, focusing solely on task performance, such as efficacy, efficiency, and fairness, is not sufficient. It is also essential to consider the human-AI relationship, including whether the human decision-maker relies on the AI appropriately and comprehends the AI's recommendations, as these factors significantly impact task performance. By considering these three components and the related factors that influence task performance, we can develop a framework to understand how AI-assisted decision-making processes function and the dynamics among the various contributing factors. Figure <ref type="figure" target="#fig_0">1</ref> illustrates the AI-assisted decision-making framework with these three components and the key decision metrics for evaluating the process. 
In the following sections, this framework will provide a clear mental model for understanding where cognitive biases might affect the AI-assisted decision-making process.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Cognitive Bias in AI-assisted Decision-Making</head><p>The scientific community recognizes that the human mind operates within a dual-process system, where certain cognitive processes are rapid, effortless, and intuitive -generated by System 1 -while others are slower and require greater mental effort -generated by System 2 <ref type="bibr" target="#b16">[17]</ref>. This dual-process view is crucial for understanding human decision-making, which often occurs under uncertainty with incomplete information. In such situations, decision-makers rely on heuristics-simple, quick judgments-as proxies for unknown answers. These heuristics are typically generated by System 1 and can lead to cognitive biases if not scrutinized by System 2.</p><p>While cognitive biases in classical decision-making have been extensively studied, those arising in AI-assisted decision-making are only now gaining attention <ref type="bibr" target="#b16">[17]</ref>. This is due to the recent prominence of AI-assisted decision-making and the previously unchallenged belief that AI tools inherently enhance decision efficiency <ref type="bibr" target="#b11">[12]</ref>. However, recent studies suggest that AI can, on the one hand, mitigate and, on the other, reinforce cognitive biases in decision-making. This section analyzes the most common cognitive biases in AI-assisted decision-making and how they can impact the decision-making process, supported by various examples from the literature.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Confirmation Bias</head><p>Confirmation bias involves seeking information that confirms existing beliefs, disregarding contradictory data, and making decisions that reinforce initial beliefs <ref type="bibr" target="#b16">[17]</ref>. In AI-assisted decision-making, it occurs when AI suggestions align with preexisting beliefs, reducing critical thinking <ref type="bibr" target="#b10">[11]</ref>. Users may accept or reject recommendations solely on the basis of alignment, neglecting other factors. This bias is more common in lay users than in experts <ref type="bibr" target="#b17">[18]</ref>. Additionally, when looking at explanations, users may selectively focus on parts confirming their beliefs <ref type="bibr" target="#b18">[19]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Automation Bias</head><p>Automation bias is the tendency to favor decisions made by automated systems, even when they are prone to errors, leading to overreliance <ref type="bibr" target="#b19">[20]</ref>. In AI-assisted decision-making, this effect occurs especially when the cognitive load of the decision is high <ref type="bibr" target="#b20">[21,</ref><ref type="bibr" target="#b21">22]</ref> or when the expertise of the human decision-maker is low. Explainable AI <ref type="bibr" target="#b10">[11]</ref> and cognitive forcing functions <ref type="bibr" target="#b22">[23,</ref><ref type="bibr" target="#b13">14]</ref> are viewed as solutions that can mitigate this bias.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Algorithm Aversion Bias</head><p>In contrast to automation bias, algorithm aversion bias leads humans to dismiss algorithmic recommendations simply because they come from a machine <ref type="bibr" target="#b23">[24]</ref>. In AI-assisted decision-making, users may prefer human recommendations as they perceive them as easier to understand <ref type="bibr" target="#b24">[25]</ref>. In critical tasks, individuals may favor human discretion over algorithmic application of fairness principles, as humans can transcend these principles if necessary <ref type="bibr" target="#b25">[26]</ref>. This bias can lead to under-reliance and disuse of AI systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Anchoring Bias</head><p>The anchoring effect occurs when individuals estimating uncertain quantities are influenced by initial reference points, called anchors <ref type="bibr" target="#b26">[27]</ref>. These anchors, whether informative or randomly assigned, bias final estimates. This effect is prominent in various contexts, particularly in quantitative estimations like real estate pricing, where initial listing prices affect subsequent estimates <ref type="bibr" target="#b27">[28]</ref>. It also affects qualitative judgments, such as sentencing decisions in judicial settings, as evidenced by studies that show significant variations based on initial sentencing demands <ref type="bibr" target="#b28">[29]</ref>.</p><p>Cognitive biases that arise from the anchoring effect in AI-assisted decision-making stem from direct and indirect anchoring processes. The AI's suggestion acts as an immediate anchor, guiding decisions towards similar options while potentially causing other factors to be neglected. This can yield varied outcomes. If the AI system surpasses human capabilities, it improves decision accuracy <ref type="bibr" target="#b29">[30]</ref>. In contrast, reliance on less accurate AI recommendations can lead to overreliance <ref type="bibr" target="#b30">[31]</ref>. Additionally, when AI recommendations come after humans initiate decision-making, the original human estimate can act as an anchor. Two possibilities emerge: if the AI suggestion aligns with the initial estimate, confirmation bias may prompt immediate adoption, as previously discussed. Conversely, if the AI suggestion differs, individuals tend to stick to their original estimate and may not rely on the system.</p><p>Anchoring bias may manifest itself indirectly in situations involving ordering and framing effects. 
For example, when individuals receive accuracy information about an AI assistant, it can act as an anchor, reducing trust compared to scenarios without disclosure <ref type="bibr" target="#b31">[32]</ref>. Additionally, in repeated use of AI assistants, users may initially perceive high accuracy, leading to inflated trust and anchoring future assessments to this impression, increasing reliance on the system <ref type="bibr" target="#b32">[33]</ref>. The opposite scenario, in which an initially poor impression anchors persistently low trust, may also occur.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5.">Loss Aversion</head><p>One notable human behavioral trait is loss aversion, where losses hold more weight than equivalent gains <ref type="bibr" target="#b16">[17]</ref>. This bias can extend to AI-assisted decision-making. Humans may focus more on false positives than false negatives in AI errors <ref type="bibr" target="#b14">[15]</ref>, leading to algorithm aversion bias and under-reliance. Additionally, in risky decision tasks, individuals tend to trust their beliefs over AI, contributing to a lack of trust, which is challenging to mitigate <ref type="bibr" target="#b30">[31]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.6.">Availability Bias</head><p>Availability bias leads to an overestimation of event frequencies based on easily recalled instances <ref type="bibr" target="#b16">[17]</ref>. In human decision-making with AI recommendations, users can incorrectly estimate the frequency of AI suggestions due to memory recall <ref type="bibr" target="#b10">[11]</ref>, affecting AI reliance. <ref type="bibr">Wang et al.</ref> propose presenting base frequencies to mitigate this bias <ref type="bibr" target="#b10">[11]</ref>. Furthermore, explanations can also induce availability bias if users recall relevant knowledge. Users may perceive explanations as more or less plausible based on recalled knowledge, potentially reinforcing biased perceptions <ref type="bibr" target="#b18">[19]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.7.">The Effects of Cognitive Bias</head><p>The cognitive biases that arise and their effects can vary depending on the characteristics of the decision-making task. Within the AI-assisted decision-making framework described in Section 2, several biases may occur in relation to the three components. For example, an individual with limited knowledge of AI but high expertise in the relevant field may exhibit algorithm aversion, leading to a lack of trust in AI recommendations <ref type="bibr" target="#b23">[24]</ref>. Conversely, a lack of experience on the part of the human decision-maker can result in automation bias. In group decision-making scenarios, there is a tendency toward groupthink-a bias where individuals conform to the majority opinion, potentially increasing overreliance on the AI system <ref type="bibr" target="#b33">[34,</ref><ref type="bibr" target="#b34">35]</ref>.</p><p>The task's characteristics also play a significant role. High-stakes decisions are more prone to loss aversion bias, which may lead to under-reliance on the AI system <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b30">31]</ref>. Conversely, highly complex tasks may result in automation bias, as the human decision-maker might rely more on the AI system due to the task's difficulty <ref type="bibr" target="#b21">[22]</ref>. Even task design can influence the decision-making process and the emergence of specific biases. For instance, if the AI recommendation is presented at the outset, alongside the collection of all relevant information, it could trigger anchoring bias, where the AI recommendation serves as an anchor <ref type="bibr" target="#b29">[30]</ref>. 
However, a cognitive forcing function that delays showing the recommendation until after a certain period could lead to confirmation bias, where the human decision-maker has already formed an opinion, and the AI recommendation merely reinforces this decision, reducing critical thinking <ref type="bibr" target="#b17">[18]</ref>.</p><p>Finally, regarding the AI component, a high cognitive load required to interpret the explanations-or even just the presence of explanations-might induce automation bias in the human decision-maker <ref type="bibr" target="#b21">[22]</ref>.</p><p>Each decision-making task is defined by the unique characteristics of its components-human, task, and AI-and the interplay of these factors results in different cognitive biases for each task. Therefore, it is crucial to analyze various cognitive forcing function designs and explanations in each scenario. This analysis should identify not only the biases that need to be mitigated but also those that could potentially be introduced by the explanations or the new design.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion</head><p>The paper focuses on common cognitive biases in AI-assisted decision-making rather than covering all possible biases. Techniques such as XAI and cognitive forcing functions can help address some of these biases, but they can also unintentionally introduce new ones. For instance, explanations can trigger the mere exposure effect, leading to overreliance <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b13">14]</ref>. Additionally, complex explanations that aim for completeness <ref type="bibr" target="#b12">[13]</ref> or present arguments for and against each option <ref type="bibr" target="#b11">[12]</ref> can induce automation bias due to their high cognitive demands <ref type="bibr" target="#b21">[22]</ref>.</p><p>Cognitive forcing functions are designed to enhance user engagement in AI-assisted decision-making. Such techniques can also trigger cognitive biases similar to those caused by explanations. For example, introducing AI suggestions after the user's initial decision can lead to anchoring effects or algorithm aversion <ref type="bibr" target="#b29">[30]</ref>. Moreover, if not carefully designed, these functions can inadvertently reduce engagement by making the decision process overly complex. In conclusion, while these techniques offer valuable solutions, they also present challenges, requiring a nuanced approach to effectively manage potential cognitive biases for each AI-assisted decision-making task.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Simplified framework of the AI-assisted decision-making process. This framework includes the three main components that can affect the decision-making process: the Human Decider, Task Characteristics, and AI Agent. 
It also highlights the principal metrics used to evaluate the AI-assisted decision-making process, which can be influenced by all the components.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="2,98.80,65.60,397.68,206.40" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Acknowledgments This research was funded by INESC-ID (UIDB/50021/2020), as well as the projects CRAI C628696807-00454142 (IAPMEI/PRR) and TAILOR H2020-ICT-48-2020/952215 and HumanE AI Network H2020-ICT-48-2020/952026.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Regulators alarmed by doctors already using AI to diagnose patients</title>
		<author>
			<persName><forename type="first">J</forename><surname>Christian</surname></persName>
		</author>
		<ptr target="https://futurism.com/neoscope/doctors-using-ai" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Understanding the role of human intuition on reliance in human-AI decision-making with explanations</title>
		<author>
			<persName><forename type="first">V</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">V</forename><surname>Liao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Wortman</forename><surname>Vaughan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Bansal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ACM on Human-Computer Interaction</title>
				<meeting>the ACM on Human-Computer Interaction</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="1" to="32" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Human-AI complementarity in hybrid intelligence systems: A structured literature review</title>
		<author>
			<persName><forename type="first">P</forename><surname>Hemmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schemmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vössing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Kühl</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">PACIS</title>
		<imprint>
			<biblScope unit="page">78</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yin</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2401.05840</idno>
		<title level="m">Decoding ai&apos;s nudge: A unified framework to predict human behavior in ai-assisted decision making</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Towards a science of human-AI decision making: An overview of design space in empirical human-subject studies</title>
		<author>
			<persName><forename type="first">V</forename><surname>Lai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Smith-Renner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">V</forename><surname>Liao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Tan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency</title>
				<meeting>the 2023 ACM Conference on Fairness, Accountability, and Transparency</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1369" to="1385" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">&quot;Help me help the AI&quot;: Understanding how explainability can support human-AI interaction</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">A</forename><surname>Watkins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Russakovsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Fong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Monroy-Hernández</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems</title>
				<meeting>the 2023 CHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1" to="17" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Appropriate reliance on AI advice: Conceptualization and the effect of explanations</title>
		<author>
			<persName><forename type="first">M</forename><surname>Schemmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Kuehl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Benz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bartos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Satzger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 28th International Conference on Intelligent User Interfaces</title>
				<meeting>the 28th International Conference on Intelligent User Interfaces</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="410" to="422" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A meta-analysis of the utility of explainable artificial intelligence in human-AI decision-making</title>
		<author>
			<persName><forename type="first">M</forename><surname>Schemmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hemmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Nitsche</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Kühl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vössing</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society</title>
				<meeting>the 2022 AAAI/ACM Conference on AI, Ethics, and Society</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="617" to="626" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Does the whole exceed its parts? The effect of AI explanations on complementary team performance</title>
		<author>
			<persName><forename type="first">G</forename><surname>Bansal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Fok</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Nushi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Kamar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Ribeiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Weld</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 CHI conference on human factors in computing systems</title>
				<meeting>the 2021 CHI conference on human factors in computing systems</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1" to="16" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">The impact of placebic explanations on trust in intelligent systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Eiband</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Buschek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kremer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hussmann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Extended abstracts of the 2019 CHI conference on human factors in computing systems</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Designing theory-driven user-centric explainable AI</title>
		<author>
			<persName><forename type="first">D</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Abdul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">Y</forename><surname>Lim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 CHI conference on human factors in computing systems</title>
				<meeting>the 2019 CHI conference on human factors in computing systems</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="15" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Explainable ai is dead, long live explainable ai! hypothesis-driven decision support using evaluative ai</title>
		<author>
			<persName><forename type="first">T</forename><surname>Miller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency</title>
				<meeting>the 2023 ACM Conference on Fairness, Accountability, and Transparency</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="333" to="342" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Diagnosing ai explanation methods with folk concepts of behavior</title>
		<author>
			<persName><forename type="first">A</forename><surname>Jacovi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bastings</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gehrmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Goldberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Filippova</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Artificial Intelligence Research</title>
		<imprint>
			<biblScope unit="volume">78</biblScope>
			<biblScope unit="page" from="459" to="489" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">To trust or to think: cognitive forcing functions can reduce overreliance on ai in ai-assisted decision-making</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Buçinca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">B</forename><surname>Malaya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">Z</forename><surname>Gajos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ACM on Human-Computer Interaction</title>
				<meeting>the ACM on Human-Computer Interaction</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="1" to="21" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">How cognitive biases affect xai-assisted decisionmaking: A systematic review</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bertrand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Belloum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Eagan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Maxwell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society</title>
				<meeting>the 2022 AAAI/ACM Conference on AI, Ethics, and Society</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="78" to="91" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Decision making [human-centered computing]</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">R</forename><surname>Hoffman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Yates</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Intelligent Systems</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="page" from="76" to="83" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Thinking, Fast and Slow</title>
		<author>
			<persName><forename type="first">D</forename><surname>Kahneman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2011">2011</date>
			<publisher>Farrar, Straus and Giroux</publisher>
			<pubPlace>NY</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Visual, textual or hybrid: the effect of user expertise on different explanations</title>
		<author>
			<persName><forename type="first">M</forename><surname>Szymanski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Millecamp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Verbert</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">26th international conference on intelligent user interfaces</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="109" to="119" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">A review of possible effects of cognitive biases on interpretation of rule-based machine learning models</title>
		<author>
			<persName><forename type="first">T</forename><surname>Kliegr</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Š</forename><surname>Bahník</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fürnkranz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">295</biblScope>
			<biblScope unit="page">103458</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Automation bias: a systematic review of frequency, effect mediators, and mitigators</title>
		<author>
			<persName><forename type="first">K</forename><surname>Goddard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Roudsari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Wyatt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the American Medical Informatics Association</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page" from="121" to="127" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Automation bias and verification complexity: a systematic review</title>
		<author>
			<persName><forename type="first">D</forename><surname>Lyell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Coiera</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the American Medical Informatics Association</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="423" to="431" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Explanations can reduce overreliance on ai systems during decision-making</title>
		<author>
			<persName><forename type="first">H</forename><surname>Vasconcelos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Jörke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Grunde-Mclaughlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gerstenberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Bernstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Krishna</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the ACM on Human-Computer Interaction</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="1" to="38" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Do people engage cognitively with ai? impact of ai assistance on incidental learning</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">Z</forename><surname>Gajos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Mamykina</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">27th international conference on intelligent user interfaces</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="794" to="806" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<title level="m" type="main">Why are we averse towards algorithms? a comprehensive literature review on algorithm aversion</title>
		<author>
			<persName><forename type="first">E</forename><surname>Jussupow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Benbasat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Heinzl</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Making sense of recommendations</title>
		<author>
			<persName><forename type="first">M</forename><surname>Yeomans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mullainathan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kleinberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Behavioral Decision Making</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="403" to="414" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">People prefer moral discretion to algorithms: Algorithm aversion beyond intransparency</title>
		<author>
			<persName><forename type="first">J</forename><surname>Jauernig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Uhl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Walkowitz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Philosophy &amp; Technology</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page">2</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Judgment under uncertainty: Heuristics and biases: Biases in judgments reveal some heuristics of thinking under uncertainty</title>
		<author>
			<persName><forename type="first">A</forename><surname>Tversky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kahneman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Science</title>
		<imprint>
			<biblScope unit="volume">185</biblScope>
			<biblScope unit="page" from="1124" to="1131" />
			<date type="published" when="1974">1974</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Experts, amateurs, and real estate: An anchoring-and-adjustment perspective on property pricing decisions</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">B</forename><surname>Northcraft</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Neale</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Organizational behavior and human decision processes</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="page" from="84" to="97" />
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Playing dice with criminal sentences: The influence of irrelevant anchors on experts&apos; judicial decision making</title>
		<author>
			<persName><forename type="first">B</forename><surname>Englich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mussweiler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Strack</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Personality and Social Psychology Bulletin</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="188" to="200" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Rams, hounds and white boxes: Investigating human-ai collaboration protocols in medical diagnosis</title>
		<author>
			<persName><forename type="first">F</forename><surname>Cabitza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Campagner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ronzio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Cameli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">E</forename><surname>Mandoli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Pastore</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">M</forename><surname>Sconfienza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Folgado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Barandas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Gamboa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence in Medicine</title>
		<imprint>
			<biblScope unit="volume">138</biblScope>
			<biblScope unit="page">102506</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Ai trust: Can explainable ai enhance warranted trust?</title>
		<author>
			<persName><forename type="first">R</forename><surname>De Brito Duarte</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Correia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Arriaga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Paiva</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Human Behavior and Emerging Technologies</title>
		<imprint>
			<biblScope unit="page">2023</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">The effect of message framing and timing on the acceptance of artificial intelligence&apos;s suggestion</title>
		<author>
			<persName><forename type="first">T</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Song</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Anchoring bias affects mental model formation and user reliance in explainable ai systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Nourani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Roy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Block</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">R</forename><surname>Honeycutt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Rahman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ragan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Gogate</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">26th International Conference on Intelligent User Interfaces</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="340" to="350" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Are two heads better than one in ai-assisted decision making? comparing the behavior and performance of groups and individuals in human-ai collaborative recidivism risk assessment</title>
		<author>
			<persName><forename type="first">C.-W</forename><surname>Chiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems</title>
				<meeting>the 2023 CHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1" to="18" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Groupthink</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">L</forename><surname>Janis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Engineering Management Review</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="page">36</biblScope>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
