<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Cognitive Biases in AI-Assisted Decision-Making</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Regina de Brito Duarte</string-name>
          <email>reginaduarte@tecnico.ulisboa.pt</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Joana Campos</string-name>
          <email>joana.campos@tecnico.ulisboa.pt</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Workshop</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>INESC-ID, Instituto Superior Técnico</institution>
          ,
          <addr-line>Lisbon</addr-line>
          ,
          <country country="PT">Portugal</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <fpage>10</fpage>
      <lpage>14</lpage>
      <abstract>
        <p>Artificial intelligence (AI) has been widely employed in decision-making contexts. However, AI-assisted decision-making continues to encounter several challenges, including prevalent patterns of over-reliance and under-reliance. This paper provides an analysis of the most common cognitive biases in AI-assisted decision-making, supported by multiple examples from the literature. Various solutions proposed to address these shortcomings, such as Explainable AI techniques or cognitive forcing functions, may mitigate certain biases but potentially exacerbate others.</p>
      </abstract>
      <kwd-group>
        <kwd>AI-assisted decision-making</kwd>
        <kwd>Cognitive bias</kwd>
        <kwd>Human-AI interaction</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The rapid integration of Artificial Intelligence (AI) into society is driven by its remarkable capabilities,
which enhance decision-making in fields such as law and healthcare [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. However, the full impact of AI
recommendations on human decisions remains an area of ongoing investigation [
        <xref ref-type="bibr" rid="ref2 ref3 ref4 ref5">2, 3, 4, 5</xref>
        ]. To address
this, eXplainable AI (XAI) has emerged with the goal of making AI predictions more understandable
[
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ]. Despite this, the effectiveness of XAI faces challenges, such as overreliance, where users place
excessive trust in AI [
        <xref ref-type="bibr" rid="ref8 ref9 ref10">8, 9, 10</xref>
        ].
      </p>
      <p>To enhance AI-assisted decision-making, proposals include designing clearer explanations [11, 12, 13]
and implementing cognitive forcing functions—techniques designed to increase user engagement in
AI-assisted decision-making. These functions, such as decision checklists, delayed AI responses, or
AI suggestions on demand, are intended to boost user attention [14]. While these approaches address
cognitive biases inherent in AI-assisted decision-making [15], the same strategies that mitigate certain
cognitive biases can unintentionally trigger others.</p>
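      <p>To make the idea concrete, the Python sketch below shows how one such cognitive forcing function, an on-demand AI suggestion revealed only after a reflection delay, could be wired into a decision loop. This is a minimal illustration of the general technique, not the implementation used in the cited studies; all names and the choice of delay are hypothetical.</p>
      <preformat>
import time

# Hypothetical minimum reflection time before the AI advice becomes available.
REFLECTION_DELAY_SECONDS = 10.0


def assisted_decision(case: str, ai_model) -> str:
    """Collect an initial human judgment, then offer the AI suggestion on demand."""
    start = time.monotonic()
    initial = input(f"Your initial decision for {case}: ")

    # Cognitive forcing: withhold the AI suggestion until the user has had
    # a minimum amount of time to think, and show it only if asked for.
    remaining = REFLECTION_DELAY_SECONDS - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)
    if input("Show the AI suggestion? [y/N] ").strip().lower() == "y":
        print("AI suggests:", ai_model(case))

    final = input("Your final decision (press Enter to keep the initial one): ")
    return final or initial
      </preformat>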
      <p>This extended abstract explores the identification and discussion of the most common cognitive biases
in AI-assisted decision-making, along with their implications for the field. The aim is to highlight design
considerations related to cognitive biases for XAI and human-AI interface designers and to provide a
comprehensive perspective on how to approach cognitive biases in AI-assisted decision-making.</p>
    </sec>
    <sec id="sec-2">
      <title>2. The AI-assisted decision-making process</title>
      <p>Hoffman et al. propose a three-stage model to define the traditional decision-making process that
includes situation assessment, interpretation, and selection [16]. The model begins with gathering
and evaluating relevant information to define the problem and set goals, followed by analyzing this
information to develop a plan of action, and concludes with selecting and committing to a specific
course of action. These stages provide a structured approach that can vary depending on the decision
task at hand.</p>
      <p>
        In AI-assisted decision-making, the traditional decision-making stages of situation assessment,
interpretation, and selection/commitment are preserved. AI improves the interpretation stage by
evaluating options, assessing their value, and considering potential outcomes [16, 12]. It typically
offers high-accuracy recommendations that can be further enhanced with confidence intervals and
explanations. Despite these advantages, the assumption that AI-assisted decision-making is always
more efficient than human decision-making is sometimes challenged [
        <xref ref-type="bibr" rid="ref9 ref14">9, 14</xref>
        ].
      </p>
      <p>AI-assisted decision-making can be understood as a process involving three primary components:
the human decision-maker, who is ultimately responsible for the final decision and its outcomes; the
decision task with its specific characteristics; and the AI agent that provides recommendations to support
the decision-making process. Each component can exhibit different characteristics that influence the
decision-making process. For instance, a decision task may vary in complexity (task difficulty), be
conducted in a high-stakes or low-stakes environment (risk), demand varying levels of cognitive effort
from the user, and be designed in different ways (design). Similarly, on the human side, factors like
expertise level and whether the decision is made by a group or an individual (number of decision-makers)
can impact the process. For the AI agent, aspects such as the accuracy of its recommendations and the
types of explanations provided to the user can also influence the final outcome.</p>
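      <p>As a rough illustration, these three components and their characteristics can be written down as a simple data model. The Python sketch below uses hypothetical field names chosen to mirror the factors listed above (difficulty, risk, design, expertise, number of decision-makers, accuracy, explanation type); it is a reading aid, not a schema proposed by this paper.</p>
      <preformat>
from dataclasses import dataclass


@dataclass
class DecisionTask:
    difficulty: str           # task complexity
    risk: str                 # "high-stakes" or "low-stakes" environment
    design: str               # how the task and interface are laid out


@dataclass
class HumanDecisionMaker:
    expertise: str            # domain expertise level
    num_decision_makers: int  # 1 for an individual, more for a group


@dataclass
class AIAgent:
    accuracy: float           # accuracy of its recommendations
    explanation_type: str     # kind of explanation shown to the user


@dataclass
class AIAssistedDecisionProcess:
    human: HumanDecisionMaker
    task: DecisionTask
    ai: AIAgent
      </preformat>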
      <p>In AI-assisted decision-making processes, focusing solely on task performance — such as efficacy,
efficiency, and fairness — is not sufficient. It is also essential to consider the human-AI relationship,
including whether the human decision-maker relies on the AI appropriately and comprehends the AI’s
recommendations, as these factors significantly impact task performance. By considering these three
components and the related factors that influence task performance, we can develop a framework to
understand how AI-assisted decision-making processes function and the dynamics among the various
contributing factors. Figure 1 illustrates the AI-assisted decision-making framework with these three
components and the key decision metrics for evaluating the process. In the following sections, this
framework will provide a clear mental model for understanding where cognitive biases might affect the
AI-assisted decision-making process.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Cognitive biases in AI-assisted decision-making</title>
      <p>The scientific community recognizes that the human mind operates within a dual-process system, where
certain cognitive processes are rapid, effortless, and intuitive — generated by System 1 — while others
are slower and require greater mental effort — generated by System 2 [17]. This distinction is crucial
for understanding human decision-making, which often occurs under uncertainty with incomplete
information. In such situations, decision-makers rely on heuristics—simple, quick judgments—as proxies
for unknown answers. These heuristics are typically generated by System 1 and can lead to cognitive
biases if not scrutinized by System 2.</p>
      <p>While cognitive biases in classical decision-making have been extensively studied, those arising in
AI-assisted decision-making are only now gaining attention [17]. This is due to the recent prominence
of AI-assisted decision-making and the previously unchallenged belief that AI tools inherently enhance
decision efficiency [12]. However, recent studies suggest that AI can, on the one hand, mitigate and, on
the other, reinforce cognitive biases in decision-making. This section provides an analysis of each cognitive
bias in AI-assisted decision-making and how it can impact the decision-making process, supported by
various examples from the literature.</p>
      <sec id="sec-2-1">
        <title>3.1. Confirmation Bias</title>
        <p>Confirmation bias involves seeking information that confirms existing beliefs, disregarding contradictory
data, and making decisions that reinforce initial beliefs [17]. In AI-assisted decision-making, it occurs
when AI suggestions align with preexisting beliefs, reducing critical thinking [11]. Users may accept or
reject recommendations solely on the basis of alignment, neglecting other factors. This bias is more
common in lay users than in experts [18]. Additionally, when looking at explanations, users may
selectively focus on parts confirming their beliefs [19].</p>
      </sec>
      <sec id="sec-2-2">
        <title>3.2. Automation Bias</title>
        <p>Automation bias is the tendency to favor decisions made by automated systems, even when they are
prone to errors, leading to overreliance [20]. In AI-assisted decision-making, this cognitive effect occurs,
especially when the cognitive load of the decision is high [21, 22] or when the expertise of the human
decision-maker is low. Explainable AI [11] and cognitive forcing functions [23, 14] are viewed as solutions that
can mitigate this bias.</p>
      </sec>
      <sec id="sec-2-3">
        <title>3.3. Algorithm Aversion Bias</title>
        <p>In contrast to automation bias, algorithm aversion bias leads humans to dismiss algorithmic decisions
simply because they come from a machine [24]. In AI-assisted decision-making, users may prefer human recommendations
as they perceive them as easier to understand [25]. In critical tasks, individuals may favor human
discretion over algorithmic application of fairness principles, as humans can transcend these principles
if necessary [26]. This bias can lead to under-reliance and disuse of AI systems.</p>
      </sec>
      <sec id="sec-2-4">
        <title>3.4. Anchoring Bias</title>
        <p>The anchoring effect occurs when individuals estimating uncertain quantities are influenced by initial
reference points called anchors [27]. These anchors, whether informative or randomly assigned, bias final
estimates. This effect is prominent in various contexts, particularly in quantitative estimations like real
estate pricing, where initial listing prices affect subsequent estimates [28]. It also affects qualitative
judgments, such as sentencing decisions in judicial settings, as evidenced by studies that show significant
variations based on initial sentencing demands [29].</p>
        <p>Cognitive biases that arise from the anchoring effect in AI-assisted decision-making stem from direct
and indirect anchoring processes. An immediate anchor is the AI’s suggestion, influencing decisions
by guiding towards similar options and potentially neglecting other factors. This can yield varied
outcomes. If the AI system surpasses human capabilities, it improves decision accuracy [30]. In contrast,
reliance on less accurate AI recommendations can lead to overreliance [31]. Additionally, when AI
recommendations come after humans initiate decision-making, the original human estimate can act as
an anchor. Two possibilities emerge: if the AI suggestion aligns with the initial estimate, confirmation
bias may prompt immediate adoption, as previously discussed. Conversely, if the AI suggestion
differs, individuals tend to stick to their original estimate and may not rely on the system.</p>
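        <p>A schematic way to see the two orderings discussed above is as two interaction protocols that differ only in when the AI suggestion is revealed, in the spirit of the collaboration protocols investigated in [30]. The Python sketch below is illustrative only; the function names and signatures are hypothetical.</p>
        <preformat>
def ai_first(case, ai_model, ask_human):
    """AI-first protocol: the AI suggestion is shown up front,
    so it can act as the (direct) anchor for the human decision."""
    suggestion = ai_model(case)
    return ask_human(case, ai_suggestion=suggestion)


def human_first(case, ai_model, ask_human):
    """Human-first protocol: the human commits to an estimate first,
    so the initial human estimate can act as the anchor instead."""
    initial = ask_human(case, ai_suggestion=None)
    suggestion = ai_model(case)
    return ask_human(case, ai_suggestion=suggestion, own_estimate=initial)
        </preformat>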
        <p>Anchoring bias may manifest itself indirectly in situations involving ordering and framing effects.
For example, when individuals receive accuracy information about an AI assistant, it can act as an
anchor, reducing trust compared to scenarios without disclosure [32]. Additionally, in repeated use of
AI assistants, users may initially perceive high accuracy, leading to inflated trust and anchoring future
assessments to this impression, increasing reliance on the system [33]. The opposite scenario may also
occur.</p>
      </sec>
      <sec id="sec-2-5">
        <title>3.5. Loss Aversion</title>
        <p>One notable human behavioral trait is loss aversion, where losses hold more weight than equivalent
gains [17]. This bias can extend to AI-assisted decision-making. Humans may focus more on false
positives than false negatives in AI errors [15], leading to algorithm aversion bias and under-reliance.
Additionally, in risky decision tasks, individuals tend to trust their beliefs over AI, contributing to a
lack of trust, which is challenging to mitigate [31].</p>
      </sec>
      <sec id="sec-2-6">
        <title>3.6. Availability Bias</title>
        <p>Availability bias leads to an overestimation of event frequencies based on easily recalled instances [17].
In human decision-making with AI recommendations, users can incorrectly estimate the frequency of
AI suggestions due to memory recall [11], affecting AI reliance. Wang et al. propose presenting base
frequencies to mitigate this bias [11]. Furthermore, explanations can also induce availability bias if
users recall relevant knowledge. Users may perceive explanations as more or less plausible based on
recalled knowledge, potentially reinforcing biased perceptions [19].</p>
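        <p>Wang et al.’s suggestion of presenting base frequencies [11] can be illustrated with a few lines of code: compute the actual distribution of past AI suggestions and show it next to the current one, so users do not have to rely on memory recall. The sketch below is a hypothetical illustration of that idea, not code from the cited work.</p>
        <preformat>
from collections import Counter


def base_frequencies(past_suggestions):
    """Return the observed relative frequency of each AI suggestion label."""
    counts = Counter(past_suggestions)
    total = len(past_suggestions)
    return {label: count / total for label, count in counts.items()}


# Hypothetical usage: display true base rates alongside a new suggestion.
history = ["approve", "approve", "reject", "approve", "reject"]
print(base_frequencies(history))  # {'approve': 0.6, 'reject': 0.4}
        </preformat>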
      </sec>
      <sec id="sec-2-7">
        <title>3.7. The Efects of Cognitive Bias</title>
        <p>The cognitive biases that arise and their effects can vary depending on the characteristics of the
decision-making task. Within the AI-assisted decision-making framework described in section 2, several biases
may occur in relation to the three components. For example, an individual with limited knowledge of
AI but high expertise in the relevant field may exhibit algorithm aversion, leading to a lack of trust in AI
recommendations [24]. Conversely, a lack of experience on the part of the human decision-maker can
result in automation bias. In group decision-making scenarios, there is a tendency toward groupthink—a
bias where individuals conform to the majority opinion, potentially increasing overreliance on the AI
system [34, 35].</p>
        <p>The task’s characteristics also play a significant role. High-stakes decisions are more prone to loss
aversion bias, which may lead to under-reliance on the AI system [15, 31]. Conversely, highly complex
tasks may result in automation bias, as the human decision-maker might rely more on the AI system
due to the task’s difficulty [22]. Even task design can influence the decision-making process and
the emergence of specific biases. For instance, if the AI recommendation is presented at the outset,
alongside the collection of all relevant information, it could trigger anchoring bias, where the AI
recommendation serves as an anchor [30]. However, a cognitive forcing function that delays showing
the recommendation until after a certain period could lead to confirmation bias, where the human
decision-maker has already formed an opinion, and the AI recommendation merely reinforces this
decision, reducing critical thinking [18].</p>
        <p>Finally, regarding the AI component, a high cognitive load required to interpret the explanations—or
even just the presence of explanations—might induce automation bias in the human decision-maker
[22].</p>
        <p>Each decision-making task is defined by the unique characteristics of its components—human, task,
and AI—and the interplay of these factors results in different cognitive biases for each task. Therefore,
it is crucial to analyze various cognitive forcing function designs and explanations in each scenario.
This analysis should identify not only the biases that need to be mitigated but also those that could
potentially be introduced by the explanations or the new design.</p>
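        <p>Pulling the examples in this subsection together, one can imagine a simple screening helper that flags which biases deserve attention for a given configuration of the three components. The rules below merely restate the associations discussed above; the function, its inputs, and its thresholds are hypothetical, not validated heuristics.</p>
        <preformat>
def likely_biases(expertise: str, group: bool, risk: str,
                  complexity: str, ai_shown_first: bool) -> set:
    """Flag cognitive biases worth checking for a given task configuration,
    based on the associations discussed in this section (illustrative only)."""
    flags = set()
    if expertise == "low":
        flags.add("automation bias")     # inexperienced users may over-rely
    if expertise == "high":
        flags.add("algorithm aversion")  # domain experts may distrust AI [24]
    if group:
        flags.add("groupthink")          # conformity to the majority [34, 35]
    if risk == "high":
        flags.add("loss aversion")       # high stakes favor under-reliance [15, 31]
    if complexity == "high":
        flags.add("automation bias")     # hard tasks push reliance on AI [22]
    if ai_shown_first:
        flags.add("anchoring bias")      # the AI suggestion acts as an anchor [30]
    else:
        flags.add("confirmation bias")   # delayed advice confirms a formed view [18]
    return flags


print(sorted(likely_biases("low", group=True, risk="high",
                           complexity="high", ai_shown_first=True)))
        </preformat>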
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Conclusion</title>
      <p>The paper focuses on common cognitive biases in AI-assisted decision-making rather than covering all
possible biases. Techniques such as XAI and cognitive forcing functions can help address some of these
biases, but they can also unintentionally introduce new ones. For instance, explanations can trigger the
mere exposure effect, leading to overreliance [15, 14]. Additionally, complex explanations that aim for
completeness [13] or present arguments for and against each option [12] can induce automation bias
due to their high cognitive demands [22].</p>
      <p>Cognitive forcing functions are designed to enhance user engagement in AI-assisted decision-making.
These kinds of techniques can also trigger cognitive biases similar to those caused by explanations. For
example, introducing AI suggestions after the user’s initial decision can lead to anchoring effects or
algorithm aversion [30]. Moreover, if not carefully designed, these functions can inadvertently reduce
engagement by making the decision process overly complex. In conclusion, while these techniques offer
valuable solutions, they also present challenges, requiring a nuanced approach to effectively manage
potential cognitive biases in each AI-assisted decision-making task.</p>
      <p>Acknowledgments: This research was funded by INESC-ID (UIDB/50021/2020), as well as the projects
CRAI C628696807-00454142 (IAPMEI/PRR), TAILOR H2020-ICT-48-2020/952215, and HumanE AI
Network H2020-ICT-48-2020/952026.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Christian</surname>
          </string-name>
          ,
          <article-title>Regulators alarmed by doctors already using ai to diagnose patients, 2023</article-title>
          . URL: https://futurism.com/neoscope/doctors-using-ai.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>V.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Wortman</given-names>
            <surname>Vaughan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Bansal</surname>
          </string-name>
          ,
          <article-title>Understanding the role of human intuition on reliance in human-ai decision-making with explanations</article-title>
          ,
          <source>Proceedings of the ACM on Human-Computer Interaction</source>
          <volume>7</volume>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>32</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Hemmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schemmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vössing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kühl</surname>
          </string-name>
          ,
          <article-title>Human-ai complementarity in hybrid intelligence systems: A structured literature review</article-title>
          .,
          <source>PACIS</source>
          (
          <year>2021</year>
          )
          <fpage>78</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <article-title>Decoding ai's nudge: A unified framework to predict human behavior in ai-assisted decision making</article-title>
          ,
          <source>arXiv preprint arXiv:2401.05840</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>V.</given-names>
            <surname>Lai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Smith-Renner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <article-title>Towards a science of human-ai decision making: An overview of design space in empirical human-subject studies</article-title>
          ,
          <source>in: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>1369</fpage>
          -
          <lpage>1385</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. A.</given-names>
            <surname>Watkins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Russakovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Monroy-Hernández</surname>
          </string-name>
          ,
          <article-title>“Help me help the ai”: Understanding how explainability can support human-ai interaction</article-title>
          ,
          <source>in: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Schemmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kuehl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Benz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bartos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Satzger</surname>
          </string-name>
          ,
          <article-title>Appropriate reliance on ai advice: Conceptualization and the effect of explanations</article-title>
          ,
          <source>in: Proceedings of the 28th International Conference on Intelligent User Interfaces</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>410</fpage>
          -
          <lpage>422</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Schemmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hemmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nitsche</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kühl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vössing</surname>
          </string-name>
          ,
          <article-title>A meta-analysis of the utility of explainable artificial intelligence in human-ai decision-making</article-title>
          ,
          <source>in: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>617</fpage>
          -
          <lpage>626</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G.</given-names>
            <surname>Bansal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fok</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Nushi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Kamar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Weld</surname>
          </string-name>
          ,
          <article-title>Does the whole exceed its parts? The effect of ai explanations on complementary team performance</article-title>
          ,
          <source>in: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] M. Eiband, D. Buschek, A. Kremer, H. Hussmann, The impact of placebic explanations on trust in intelligent systems, in: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, 2019, pp. 1–6.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] D. Wang, Q. Yang, A. Abdul, B. Y. Lim, Designing theory-driven user-centric explainable ai, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019, pp. 1–15.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] T. Miller, Explainable ai is dead, long live explainable ai! hypothesis-driven decision support using evaluative ai, in: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 333–342.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] A. Jacovi, J. Bastings, S. Gehrmann, Y. Goldberg, K. Filippova, Diagnosing ai explanation methods with folk concepts of behavior, Journal of Artificial Intelligence Research 78 (2023) 459–489.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] Z. Buçinca, M. B. Malaya, K. Z. Gajos, To trust or to think: cognitive forcing functions can reduce overreliance on ai in ai-assisted decision-making, Proceedings of the ACM on Human-Computer Interaction 5 (2021) 1–21.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] A. Bertrand, R. Belloum, J. R. Eagan, W. Maxwell, How cognitive biases affect xai-assisted decision-making: A systematic review, in: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 2022, pp. 78–91.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] R. R. Hoffman, J. F. Yates, Decision making [human-centered computing], IEEE Intelligent Systems 20 (2005) 76–83.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] D. Kahneman, Thinking, Fast and Slow, Farrar, Straus and Giroux, New York, 2011.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] M. Szymanski, M. Millecamp, K. Verbert, Visual, textual or hybrid: the effect of user expertise on different explanations, in: 26th International Conference on Intelligent User Interfaces, 2021, pp. 109–119.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] T. Kliegr, Š. Bahník, J. Fürnkranz, A review of possible effects of cognitive biases on interpretation of rule-based machine learning models, Artificial Intelligence 295 (2021) 103458.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] K. Goddard, A. Roudsari, J. C. Wyatt, Automation bias: a systematic review of frequency, effect mediators, and mitigators, Journal of the American Medical Informatics Association 19 (2012) 121–127.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] D. Lyell, E. Coiera, Automation bias and verification complexity: a systematic review, Journal of the American Medical Informatics Association 24 (2017) 423–431.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] H. Vasconcelos, M. Jörke, M. Grunde-McLaughlin, T. Gerstenberg, M. S. Bernstein, R. Krishna, Explanations can reduce overreliance on ai systems during decision-making, Proceedings of the ACM on Human-Computer Interaction 7 (2023) 1–38.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] K. Z. Gajos, L. Mamykina, Do people engage cognitively with ai? impact of ai assistance on incidental learning, in: 27th International Conference on Intelligent User Interfaces, 2022, pp. 794–806.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] E. Jussupow, I. Benbasat, A. Heinzl, Why are we averse towards algorithms? a comprehensive literature review on algorithm aversion (2020).</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[25] M. Yeomans, A. Shah, S. Mullainathan, J. Kleinberg, Making sense of recommendations, Journal of Behavioral Decision Making 32 (2019) 403–414.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[26] J. Jauernig, M. Uhl, G. Walkowitz, People prefer moral discretion to algorithms: Algorithm aversion beyond intransparency, Philosophy &amp; Technology 35 (2022) 2.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>[27] A. Tversky, D. Kahneman, Judgment under uncertainty: Heuristics and biases: Biases in judgments reveal some heuristics of thinking under uncertainty, Science 185 (1974) 1124–1131.</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>[28] G. B. Northcraft, M. A. Neale, Experts, amateurs, and real estate: An anchoring-and-adjustment perspective on property pricing decisions, Organizational Behavior and Human Decision Processes 39 (1987) 84–97.</mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>[29] B. Englich, T. Mussweiler, F. Strack, Playing dice with criminal sentences: The influence of irrelevant anchors on experts’ judicial decision making, Personality and Social Psychology Bulletin 32 (2006) 188–200.</mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>[30] F. Cabitza, A. Campagner, L. Ronzio, M. Cameli, G. E. Mandoli, M. C. Pastore, L. M. Sconfienza, D. Folgado, M. Barandas, H. Gamboa, Rams, hounds and white boxes: Investigating human–ai collaboration protocols in medical diagnosis, Artificial Intelligence in Medicine 138 (2023) 102506.</mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>[31] R. de Brito Duarte, F. Correia, P. Arriaga, A. Paiva, et al., Ai trust: Can explainable ai enhance warranted trust?, Human Behavior and Emerging Technologies 2023 (2023).</mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>[32] T. Kim, H. Song, The effect of message framing and timing on the acceptance of artificial intelligence’s suggestion, in: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, 2020, pp. 1–8.</mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>[33] M. Nourani, C. Roy, J. E. Block, D. R. Honeycutt, T. Rahman, E. Ragan, V. Gogate, Anchoring bias affects mental model formation and user reliance in explainable ai systems, in: 26th International Conference on Intelligent User Interfaces, 2021, pp. 340–350.</mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>[34] C.-W. Chiang, Z. Lu, Z. Li, M. Yin, Are two heads better than one in ai-assisted decision making? comparing the behavior and performance of groups and individuals in human-ai collaborative recidivism risk assessment, in: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023, pp. 1–18.</mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>[35] I. L. Janis, Groupthink, IEEE Engineering Management Review 36 (2008) 36.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>