<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>BIASeD: Bringing Irrationality into Automated System Design</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aditya Gulati</string-name>
          <email>aditya@ellisalicante.org</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Miguel Angel Lozano</string-name>
          <email>malozano@ua.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bruno Lepri</string-name>
          <email>lepri@fbk.eu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nuria Oliver</string-name>
          <email>nuria@ellisalicante.org</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>Cognitive Biases, Heuristics, Human Machine Collaboration</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>2022, Westin Arlington Gateway in Arlington</institution>
          ,
          <addr-line>Virginia</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Center for Information and Communication Technology Via Sommarive</institution>
          ,
          <addr-line>18, I-38123 Povo, TN</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>ELLIS Alicante, Parque Científico de Alicante, Campus de San Vicente, s/n Universidad de Alicante - San Vicente del</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Human perception, memory and decision-making are impacted by tens of cognitive biases and heuristics that influence our actions and decisions. Despite the pervasiveness of such biases, they are generally not leveraged by today's Artificial Intelligence (AI) systems that model human behavior and interact with humans. In this theoretical paper, we claim that the future of human-machine collaboration will entail the development of AI systems that model, understand and possibly replicate human cognitive biases. We propose the need for a research agenda on the interplay between human cognitive biases and Artificial Intelligence. We categorize existing cognitive biases from the perspective of AI systems, identify three broad areas of interest and outline research directions for the design of AI systems that have a better understanding of our own biases.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        A cognitive bias is a systematic pattern of deviation from rationality that occurs when we process,
interpret or recall information from the world, and it affects the decisions and judgments
we make. Cognitive biases may lead to inaccurate judgments, illogical interpretations and
perceptual distortions. Thus, they are also referred to as irrational behavior [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ].
      </p>
      <p>Since the 1970s, scholars in social psychology, cognitive science, and behavioral economics
have carried out studies aimed at uncovering and understanding these apparently irrational
elements in human decision making. As a result, different theories have been proposed to
explain the source of our cognitive biases.</p>
      <p>
        In 1955, Simon proposed the theory of bounded rationality [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. It posits that human decision making is rational but limited by our computational abilities, which results in sub-optimal decisions because we are unable to accurately solve the utility function of all the options available at all times. Alternative theories include the dual process theory and prospect theory, both proposed by Kahneman [
        <xref ref-type="bibr" rid="ref1">4, 1</xref>
        ].
      </p>
      <p>Even though there is no unified theory of our cognitive biases, it is clear that we use multiple
shortcuts or heuristics1 to make decisions, which might lead to sub-optimal outcomes. However,
despite these limitations, cognitive biases and heuristics are a crucial part of our decision
making.</p>
      <p>
        In fact, cognitive biases have traditionally been commercially leveraged in different sectors
to manipulate human behavior. Examples include casinos [5], addictive apps [6], advertisement
and marketing strategies to drive consumption [
        <xref ref-type="bibr" rid="ref2">7, 2</xref>
        ] and social media campaigns to impact the
outcome of elections [8]. However, we advocate in this paper for a constructive and positive
use of cognitive biases in technology, moving from manipulation to collaboration. We propose
that considering our cognitive biases in AI systems could lead to more efficient human-AI
collaboration.
      </p>
      <p>Nonetheless, there has been limited research to date on the interaction between human biases
and AI systems, as recently highlighted by several authors [9, 10, 11, 12]. In this context, we
highlight the work by Akata et al. [13] who propose a research agenda for the design of AI
systems that collaborate with humans, going beyond a human-in-the-loop setting. They pose a
set of research questions related to how to design AI systems that collaborate with and adapt
to humans in a responsible and explainable way. In their work, they note the importance of
understanding humans and leveraging AI to mitigate biases in human decisions.</p>
      <p>In this paper, we build from previous work by proposing a taxonomy of cognitive biases that
is tailored to the design of AI systems. Furthermore, we identify a subset of 20 cognitive biases
that are suitable to be considered in the development of AI systems and outline three directions
of research to design cognitive bias-aware AI systems.</p>
    </sec>
    <sec id="sec-2">
      <title>2. A Taxonomy of Cognitive Biases</title>
      <p>Since the early studies in the 1950s, approximately 200 cognitive biases have been identified
and classified [14, 15].</p>
      <p>Several taxonomies of cognitive biases have been proposed in the literature, particularly in
specific domains, such as medical decision making [16, 17], tourism [18] or fire evacuation [19].
Alternative taxonomies classify biases based on their underlying phenomenon [20, 21, 22].
However, given that there is no widely accepted theory of the source of cognitive biases [23],
classifying them according to their hypothesized source might be misleading.</p>
      <p>Dimara et al. [24] report similar limitations with existing taxonomies and propose a new
taxonomy of cognitive biases based on the experimental setting where each bias was studied
and with a focus on visualization. While this taxonomy is of great value for visualization, our
focus is the interplay between AI and cognitive biases. Thus, we propose classifying biases
according to five stages in the human decision making cycle as depicted in Figure 1.</p>
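      <p>As an illustrative sketch (ours, not part of the published taxonomy), the five-stage classification can be expressed as a simple lookup structure; the bias lists below are the representative examples discussed in this paper and are not exhaustive:</p>
      <preformat>
```python
# Illustrative sketch (ours, not part of the paper): the five proposed
# stages of the decision-making cycle mapped to representative biases.
# The lists are not exhaustive.
TAXONOMY = {
    "presentation": ["decoy effect", "framing effect",
                     "anchoring effect", "pseudocertainty effect"],
    "interpretation": ["conjunction fallacy", "base rate fallacy",
                       "gambler's fallacy", "hyperbolic discounting"],
    "value attribution": ["halo effect", "IKEA effect",
                          "risk aversion bias", "social desirability bias"],
    "recall": ["false memory bias", "self-reference effect",
               "serial positioning effect"],
    "decision": ["status quo bias", "default heuristic",
                 "shared information bias"],
}

def category_of(bias):
    """Return the stage of the decision-making cycle a bias belongs to."""
    for stage, biases in TAXONOMY.items():
        if bias in biases:
            return stage
    return None

print(category_of("halo effect"))  # value attribution
```
      </preformat>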
      <p>1While a heuristic typically refers to a simplifying rule used to make a decision and a cognitive bias refers to a
consistent pattern of deviation in behavior, in this paper both terms are used interchangeably, as both impact human
decisions in a similar way.</p>
      <p>The left part of Figure 1 represents the physical world that we perceive, interpret and interact
with. The right part represents the internal models and memories that we create based on our
experience. As seen in Figure 1, we propose classifying biases according to five main stages in the
human perception, interpretation and decision making process: presentation biases, associated
with how information or facts are presented to humans; interpretation biases that arise due to
misinterpretations of information; value attribution biases that emerge when humans assign
values to objects or ideas that are not rational or based on an underlying factual reality; recall
biases associated with how we recall facts from our memory; and decision biases that have been
documented in the context of human decision making.</p>
      <p>Figure 1 also illustrates how AI systems (represented as an orange undirected graph) may
interact with humans in this context. First, AI systems could be entities in the external world
that humans perceive or interact with (e.g. chatbots, robots, apps...). Second, they may be active
participants and assist humans in their information processing and decision-making processes
(e.g. cognitive assistants, assistive technologies...). Finally, AI systems could be observers that
model our behavior and provide feedback without directly being involved in the decision making
process. Note that these three forms of interaction with AI systems may occur simultaneously.</p>
      <p>We also present four representative cognitive biases for each category. These biases were
chosen according to the amount of evidence in the literature about the existence of the bias and
their relevance for the design of AI systems. Tables 1 and 2 summarize the selected biases, their
description, supporting literature and relevance to AI.</p>
      <table-wrap id="tab1">
        <label>Table 1</label>
        <table>
          <thead>
            <tr>
              <th>Category</th>
              <th>Bias</th>
              <th>Description</th>
              <th>Relevance to AI</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Presentation</td>
              <td>Decoy effect [25, 26, 27, 28]</td>
              <td>Deliberately placing a worse alternative between two choices can reverse the user’s preference</td>
              <td>Could AI systems learn to place decoys effectively while presenting alternatives? Could AI systems learn to identify decoys? [29]</td>
            </tr>
            <tr>
              <td>Presentation</td>
              <td>Framing effect [30, 31, 32, 33]</td>
              <td>How a statement is framed can alter its perceived value</td>
              <td>Studies have shown that when humans are placed in human-AI teams, their decisions [34] and trust [35] are impacted by the framing effect. Could AI systems learn to frame explanations to make them more trustworthy?</td>
            </tr>
            <tr>
              <td>Presentation</td>
              <td>Anchoring effect [20, 36, 37]</td>
              <td>Human decision making is influenced by certain reference points or anchors</td>
              <td>The use of anchors to alter user preferences has been studied in marketing and recommender systems [38]. Could AI systems automatically identify anchors that humans might be subject to?</td>
            </tr>
            <tr>
              <td>Presentation</td>
              <td>Pseudocertainty effect [30, 39, 40]</td>
              <td>Humans incorrectly estimate the certainty of statements in a multi-stage decision making process</td>
              <td>Could AI systems identify situations where humans are likely to be unable to accurately compute the “complete picture”? Could this effect be leveraged by AI algorithms to learn effectively from smaller datasets?</td>
            </tr>
            <tr>
              <td>Interpretation</td>
              <td>Conjunction fallacy [41, 42, 43, 44]</td>
              <td>In certain situations, humans see the conjunction of two events as being more likely than any one event individually</td>
              <td>Could AI systems recognize situations where humans are likely to make such errors and provide alternate decisions?</td>
            </tr>
            <tr>
              <td>Interpretation</td>
              <td>Base rate fallacy [45, 46]</td>
              <td>Humans have a tendency to ignore the base rate information when making decisions</td>
              <td>Human reasoning does not follow Bayesian reasoning in certain settings. Could AI systems leverage these non-Bayesian computations effectively?</td>
            </tr>
            <tr>
              <td>Interpretation</td>
              <td>Gambler’s fallacy [20, 47, 48, 49]</td>
              <td>Humans tend to overvalue the impact of past events when predicting the outcome of independent future events</td>
              <td>Decision making systems that learn from human decision making –e.g. legal, college admissions or HR decision-making systems– learn from data that reflects the gambler’s fallacy. How could this bias be mitigated to design fairer AI-based decision-support systems?</td>
            </tr>
            <tr>
              <td>Interpretation</td>
              <td>Hyperbolic discounting [50, 51, 52]</td>
              <td>Humans tend to choose immediate rewards over rewards that come later in the future</td>
              <td>Studies have shown a link between high social media usage and hyperbolic discounting, leading to unhealthy behavior [53, 54]. Could AI systems recognize when we are impacted by this bias and help mitigate it?</td>
            </tr>
            <tr>
              <td>Value attribution</td>
              <td>Halo effect [55, 56, 57, 58]</td>
              <td>Positive attributes associated with a person in one setting carry over to other settings</td>
              <td>Could this effect be utilized to create systems that are easier to trust? Does the halo effect manifest itself when humans interact with chatbots or robots?</td>
            </tr>
            <tr>
              <td>Value attribution</td>
              <td>IKEA effect [59, 60, 61]</td>
              <td>Humans associate a higher value to their own creations than to those of others</td>
              <td>Could this effect be leveraged to provide explanations that users are more likely to accept?</td>
            </tr>
            <tr>
              <td>Value attribution</td>
              <td>Risk aversion bias [62, 63, 64, 65]</td>
              <td>We tend to avoid risky decisions even if they have a higher net expected utility than less risky options, especially if the uncertainty is high</td>
              <td>Could AI systems support human decision-making by counterbalancing the risk aversion bias?</td>
            </tr>
            <tr>
              <td>Value attribution</td>
              <td>Social desirability bias [66, 67, 68, 69]</td>
              <td>Humans tend to provide the answers to surveys or questions that they believe are expected from them</td>
              <td>Do people provide socially desirable answers even when they are interacting with or being evaluated by machines? If yes, could the social desirability bias be leveraged to nudge users to improve their behavior?</td>
            </tr>
          </tbody>
        </table>
      </table-wrap>
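      <p>As a numerical illustration of the base rate fallacy discussed above, consider a standard application of Bayes’ rule; the numbers below are hypothetical and chosen only to make the effect visible:</p>
      <preformat>
```python
# Hypothetical numbers illustrating the base rate fallacy: a test with
# 90% sensitivity and a 10% false positive rate, for a condition whose
# base rate is only 1%.
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test), by Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

p = posterior(prior=0.01, sensitivity=0.9, false_positive_rate=0.1)
# Ignoring the 1% base rate, people often guess close to 90%;
# the correct posterior is only about 8.3%.
print(round(p, 3))  # 0.083
```
      </preformat>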
    </sec>
    <sec id="sec-3">
      <title>3. Cognitive Biases and AI: Research Directions</title>
      <p>Given the ubiquity of AI-based systems in our daily lives –from recommender systems to
personal assistants and chatbots– and the pervasiveness of our cognitive biases, there is an
opportunity to leverage cognitive biases to build more eficient AI systems.</p>
      <p>In this section, we propose three research directions to further explore the interplay between
cognitive biases and AI: (1) Human-AI interaction, (2) Cognitive biases in AI algorithms and (3)
Computational modeling of cognitive biases.</p>
      <table-wrap id="tab2">
        <label>Table 2</label>
        <table>
          <thead>
            <tr>
              <th>Category</th>
              <th>Bias</th>
              <th>Description</th>
              <th>Relevance to AI</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Recall</td>
              <td>False memory bias [70, 71, 72]</td>
              <td>Humans incorrectly remember a past event depending on the questions they are asked about the event</td>
              <td>False memories impact how we make decisions. Positive false memories have been shown to have positive consequences [73]. Could AI systems use this effect to improve user experience?</td>
            </tr>
            <tr>
              <td>Recall</td>
              <td>Self-reference effect [74, 75]</td>
              <td>Events with a direct impact are more likely to be remembered</td>
              <td>Could AI systems leverage this effect to make explanations about their behavior more “memorable”?</td>
            </tr>
            <tr>
              <td>Recall</td>
              <td>Serial positioning effect [76, 77, 78]</td>
              <td>Items at the start and the end of a list are more memorable than those in the middle</td>
              <td>When providing explanations, could AI systems leverage this bias to have a more effective human interaction?</td>
            </tr>
          </tbody>
        </table>
      </table-wrap>
      <sec id="sec-3-1">
        <title>3.1. Area I. Human-AI Interaction</title>
        <p>Cognitive biases have been studied since the 1970s in experiments where human participants
interacted with other humans, animals or inanimate objects. However, as Hidalgo et al. [92]
note, we do not necessarily perceive, interact with and evaluate machines in the same way as
we do with humans, animals or objects. Thus, it is unclear today whether these cognitive biases
exist when humans interact with AI systems, and if so with which degree of intensity and under
what circumstances.</p>
        <p>This is especially the case with biases related to presentation and decision making.</p>
        <p>For example, according to the social desirability bias, users tend to respond to surveys with
socially expected answers which are not necessarily their honest responses. Would this bias
also emerge when users interact with a chatbot? Another suitable bias to study in this context
is the halo effect. Does it exist when humans interact with chatbots, robots or avatars? Are
positive traits associated with an AI system in one area carried forward to other areas as well,
or are machines viewed as tools with a single purpose and hence outside the scope of the
halo effect?</p>
        <p>In addition to verifying if cognitive biases exist in human-machine interactions, these biases
could be leveraged to design more human-like AI systems. While it has been reported that
humans rely on heuristics when deciding if they should trust AI decisions [96], would the existence
of cognitive biases in AI systems have an impact on their interpretability and trustworthiness?
Anthropomorphic agents have been shown to increase user satisfaction and acceptance
[97]. However, studies on anthropomorphism tend to focus on physical attributes. We propose
exploring anthropomorphism in the realm of cognitive biases. Exemplary biases that could be
studied include the framing effect, which could inform the fine-tuning of language models in
chatbots; the status quo bias, which could increase the trust in AI systems that suggest small
rather than major changes; or the halo effect, which could lead to humans trusting chatbots,
avatars or robots with certain appearances and attributes more than others, independently of
their actual performance.</p>
        <p>Another dimension worth exploring is the intersection between cognitive biases and the
explainability of AI systems (XAI). While there is a large body of previous work in XAI, only a
small subset of cognitive biases has been considered in this research area [98, 99], mainly, with
the objective of mitigating them. Buçinca et al. [100] note that many machine explanations
are not useful because users rely on heuristics about when to trust the machine and when not
to, rather than using the provided explanation to make such a decision. The authors propose
cognitive forcing functions to help users consider machine explanations carefully and show the
effectiveness of these functions in certain scenarios through user studies. We propose expanding
the research agenda to consider the inclusion of cognitive biases in XAI. Exemplary biases in
this context are the framing effect and the self-reference effect, which AI models could potentially
leverage to provide more trustworthy explanations.</p>
        <p>Finally, we postulate that it could be valuable to include knowledge about human cognitive
biases when designing AI systems that interact with users.</p>
        <p>Examples of biases that could be considered by AI systems include the gambler’s fallacy, the
anchoring effect or the framing effect, by presenting information to users in a manner that
would mitigate these biases; the default heuristic, by nudging users to consider
all options or highlighting the existence of alternative possibilities; and the shared information
bias, by performing a topic analysis on human conversations and providing hints on novel
topics to be discussed.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Area II. Cognitive Biases in AI Algorithms</title>
        <p>While cognitive biases and heuristics lead to sub-optimal decision making in certain situations,
they are undoubtedly useful decision making aids. These heuristics are at times as effective as
complex decision making rules while significantly reducing our cognitive load
[101]. Given that humans benefit greatly from the use of cognitive biases and heuristics, it is
worth exploring how AI could also benefit from them.</p>
        <p>Taniguchi et al. [102] work in this direction by building a modified Naive Bayes classifier
that leverages the symmetry [103] and the mutual exclusion [104] biases. The proposed model
is able to perform better than alternative, state-of-the-art methods on a spam classification task
when the dataset is small and biased. Taniguchi et al. [105] and Manome et al. [106] extend
this idea by incorporating the same biases in neural networks and learning vector quantization
respectively, for different tasks.</p>
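        <p>To make the general idea concrete, the following is a loose, simplified sketch of injecting a “symmetry”-like bias into a text classifier. It is our illustration, not the actual model of Taniguchi et al.: a word-count Naive Bayes in which the likelihood P(word | class) is geometrically blended with the reversed conditional P(class | word), mimicking the human tendency to infer “if q then p” from “if p then q”. All function names and the blending scheme are assumptions of this sketch.</p>
        <preformat>
```python
from collections import Counter

# Simplified illustration (not the model of Taniguchi et al.): a Naive
# Bayes classifier whose word likelihoods are blended with the reversed
# conditional, a crude analogue of the human "symmetry" bias.
def train(docs):
    """docs: list of (list_of_words, label) pairs."""
    word_class, class_count, word_count = Counter(), Counter(), Counter()
    for words, label in docs:
        class_count[label] += 1
        for w in words:
            word_class[(w, label)] += 1
            word_count[w] += 1
    return word_class, class_count, word_count

def score(words, label, model, alpha=1.0, blend=0.5):
    word_class, class_count, word_count = model
    s = class_count[label] / sum(class_count.values())  # class prior
    vocab = len(word_count)
    in_class = sum(c for (w, l), c in word_class.items() if l == label)
    for w in words:
        # standard (smoothed) likelihood P(word | class)
        forward = (word_class[(w, label)] + alpha) / (in_class + alpha * vocab)
        # "symmetric" reversed conditional P(class | word)
        reverse = (word_class[(w, label)] + alpha) / (word_count[w] + alpha * len(class_count))
        s *= forward ** (1 - blend) * reverse ** blend
    return s

def classify(words, model):
    return max(model[1], key=lambda label: score(words, label, model))

model = train([(["cheap", "pills"], "spam"), (["meeting", "notes"], "ham")])
print(classify(["cheap"], model))  # spam
```
        </preformat>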
        <p>Given these successful examples of incorporating two cognitive biases in the design of AI
systems, it is worth considering how other biases could be leveraged to help design AI systems
that would learn faster and from less data. Additional biases –beyond the symmetry and mutual
exclusion biases– that could be relevant for this purpose include the take-the-best heuristic;
naive allocation; the status quo bias, which humans use to make effective decisions in situations
with a high degree of uncertainty; and the fast-and-frugal heuristics [107], which have been
shown to be effective decision making tools in real world scenarios such as medical decision
making [108]. They could potentially be used in the design of AI systems, as noted in recent
work [91].</p>
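        <p>The take-the-best heuristic mentioned above can be sketched as follows. This is a textbook-style formulation with hypothetical cue data; the cue values are invented for illustration:</p>
        <preformat>
```python
# Take-the-best: to choose between two options, inspect cues in
# decreasing order of validity and decide on the first cue that
# discriminates; all remaining cues are ignored.
def take_the_best(option_a, option_b, cues):
    """cues: functions ordered by validity, each returning a truthy
    value when the cue favors an option (falsy when unknown)."""
    for cue in cues:
        a, b = cue(option_a), cue(option_b)
        if a and not b:
            return option_a
        if b and not a:
            return option_b
    return None  # no cue discriminates: guess or fall back

# Hypothetical example: which of two cities is larger?
cities = {
    "A": {"capital": True, "has_airport": True},
    "B": {"capital": False, "has_airport": True},
}
cues = [lambda c: cities[c]["capital"], lambda c: cities[c]["has_airport"]]
print(take_the_best("A", "B", cues))  # decides on the first cue: A
```
        </preformat>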
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Area III. Computational Modeling of Cognitive Biases</title>
        <p>The third research area addresses the computational modeling of cognitive biases. Hiatt et al.
[9] present a detailed survey of the computational approaches proposed to date to model human
behavior in human-machine interaction. While they highlight the importance of having models
that can account for “some basic understanding of human reasoning, fallacy and error”, none of
the approaches they present explicitly models cognitive biases, which we believe is crucial for
the design of AI systems going forward.</p>
        <p>Interestingly, in the past decade we have witnessed an exponential growth of research on
understanding and mitigating algorithmic biases [109, 110], which are different from human
cognitive biases. However, a failure to recognize the differences has led to misrepresentations
[111, 112].</p>
        <p>Previous work has focused on building models of the underlying cognitive process that
explain cognitive biases –such as bounded rationality [113, 114]. Alternatively, scholars have
focused on a subset of biases and have proposed a variety of computational models in a particular
task. One of the most promising modeling frameworks in this context is Bayesian modeling
[10, 115, 11, 116, 117, 118]. Beyond Bayesian modeling, Kang and Lerman [119] build on existing
generative models to predict the relevance of an item while accounting for the position bias
[120].</p>
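        <p>A minimal sketch of the kind of position-bias correction that underlies such models is the standard examination model, where the probability of a click factors into the probability of examining a position times the item’s relevance. This is a generic textbook formulation, not Kang and Lerman’s exact model, and the numbers are hypothetical:</p>
        <preformat>
```python
# Examination model of position bias:
#   P(click) = P(examined | position) * relevance.
# Given observed click rates and assumed examination probabilities,
# relevance is recovered by dividing out the position effect.
def debiased_relevance(click_rate, exam_prob):
    """Estimate item relevance from a click-through rate observed at a
    position with a known (assumed) examination probability."""
    return click_rate / exam_prob

# Hypothetical data: the same item shown at position 1 vs position 5.
exam = {1: 0.9, 5: 0.3}                   # assumed examination probabilities
print(debiased_relevance(0.45, exam[1]))  # 0.5
print(debiased_relevance(0.15, exam[5]))  # 0.5: same relevance once debiased
```
        </preformat>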
        <p>Additional work has been proposed in the medical field: Crowley et al. [121] define a set
of 8 biases as a sequence of events in computer-based pathology tasks. McShane et al. [122]
provide a tool to support doctors in their diagnoses while mitigating recall biases. Alternative
approaches are based on expert observations of subjects performing certain tasks [123, 124].
However, such methods are difficult to scale.</p>
        <p>Proposing a unifying, task-independent AI-based framework to automatically identify
cognitive biases from observed human behavior could have a profound impact on the design of AI
systems.</p>
        <p>Such a framework could provide a representation that makes it possible to mitigate cognitive
biases effectively. It could enable the development of personalized systems that support each
individual by providing the most suitable mitigation strategy for them.</p>
        <p>In the persuasive computing literature, it has been observed that different people respond
to different techniques to support them in modifying their behavior, from simple awareness
to social support or competition [125, 126]. Recent work by Kliegr et al. [12] has studied
20 cognitive biases that could potentially impact how humans interpret machine learning
models and proposed several debiasing techniques. While informing users about their biases is
certainly a useful first step, it is generally not enough to mitigate them [127]. Personalization
and persuasive computing methods could open new doors to more effective cognitive bias
mitigation strategies.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>Human perception, memory and decision-making are impacted by cognitive biases and heuristics
that influence our actions and decisions. Despite the pervasiveness of such biases, they are largely
disregarded by today’s AI systems that model human behavior and interact with humans.
However, given the increased prominence of AI-human collaboration, we believe that it would
be crucial for AI systems to consider this fundamental element of human cognition.</p>
      <p>In this theoretical paper, we have proposed a taxonomy of cognitive biases from the
perspective of AI-human collaboration and have selected four exemplary biases in each of the five key
dimensions of the proposed taxonomy. We have also proposed three broad research areas in
the intersection between AI and cognitive biases: First, human-AI interaction which focuses
on open questions, such as determining if human-AI interaction exhibits the same cognitive
biases as human-to-human interaction and exploring the potential value of including cognitive
biases in AI systems to make them more trustworthy and interpretable. Second, cognitive biases
in AI systems, focused on the potential of leveraging the mechanisms behind our cognitive
biases and heuristics to build more robust and efficient machine learning algorithms. Third, the
computational modeling of cognitive biases to achieve a unifying modeling framework which
could be used to design personalized mitigation strategies to support human decision making.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>Aditya Gulati and Nuria Oliver are supported by a nominal grant received at the ELLIS Unit
Alicante Foundation from the Regional Government of Valencia in Spain (Convenio Singular
signed with Generalitat Valenciana, Conselleria d’Innovació, Universitats, Ciència i Societat
Digital, Dirección General para el Avance de la Sociedad Digital). Aditya Gulati is also supported
by a grant by the Banc Sabadell Foundation.
[4] D. Kahneman, Thinking, fast and slow, Macmillan, 2011.
[5] N. D. Schüll, Addiction by design, Princeton University Press, 2012.
[6] N. Eyal, Hooked: How to build habit-forming products, Penguin, 2014.
[7] M. Petticrew, N. Maani, L. Pettigrew, H. Rutter, M. C. Van Schalkwyk, Dark nudges
and sludge in big alcohol: Behavioral economics, cognitive biases, and alcohol industry
corporate social responsibility, The Milbank Quarterly 98 (2020) 1290–1328. URL: https:
//doi.org/10.1111/1468-0009.12475.
[8] R. Epstein, R. E. Robertson, The search engine manipulation efect (SEME) and its possible
impact on the outcomes of elections, Proceedings of the National Academy of Sciences
112 (2015). URL: https://doi.org/10.1073/pnas.1419828112.
[9] L. M. Hiatt, C. Narber, E. Bekele, S. S. Khemlani, J. G. Trafton, Human modeling for
human–robot collaboration, The International Journal of Robotics Research (2017). URL:
https://doi.org/10.1177/0278364917690592.
[10] A. S. Rich, T. M. Gureckis, Lessons for artificial intelligence from the study of natural
stupidity, Nature Machine Intelligence 1 (2019) 174–180. URL: https://doi.org/10.1038/
s42256-019-0038-z.
[11] C. Rastogi, Y. Zhang, D. Wei, K. R. Varshney, A. Dhurandhar, R. Tomsett, Deciding
fast and slow: The role of cognitive biases in ai-assisted decision-making, 2020. URL:
https://arxiv.org/abs/2010.07938.
[12] T. Kliegr, Š. Bahník, J. Fürnkranz, A review of possible efects of cognitive biases on
interpretation of rule-based machine learning models, Artificial Intelligence 295 (2021).</p>
      <p>URL: https://doi.org/10.1016/j.artint.2021.103458.
[13] Z. Akata, et al., A research agenda for hybrid intelligence: Augmenting human intellect
with collaborative, adaptive, responsible, and explainable artificial intelligence, Computer
53 (2020). URL: https://doi.org/10.1109/mc.2020.2996587.
[14] B. Benson, Cognitive bias cheat sheet, 2016. URL: https://betterhumans.coach.me/
cognitive-bias-cheat-sheet-55a472476b18.
[15] T. L. Hubbard, The possibility of an impetus heuristic, Psychonomic Bulletin Review
(2022). URL: https://doi.org/10.3758/s13423-022-02130-z.
[16] J. S. Blumenthal-Barby, H. Krieger, Cognitive biases and heuristics in medical decision
making, Medical Decision Making 35 (2014) 539–557. URL: https://doi.org/10.1177/
0272989x14547740.
[17] G. Saposnik, D. Redelmeier, C. C. Ruf, et al., Cognitive biases associated with medical
decisions: a systematic review, BMC Medical Informatics and Decision Making (2016).</p>
      <p>URL: https://doi.org/10.1186/s12911-016-0377-1.
[18] W. Wattanacharoensil, D. La-ornual, A systematic review of cognitive biases in tourist
decisions, Tourism Management 75 (2019) 353–369. URL: https://doi.org/10.1016/j.tourman.
2019.06.006.
[19] M. J. Kinsey, S. M. V. Gwynne, E. D. Kuligowski, M. Kinateder, Cognitive biases within
decision making during fire evacuations, Fire Technology 55 (2018) 465–485. URL:
https://doi.org/10.1007/s10694-018-0708-0.
[20] A. Tversky, D. Kahneman, Judgment under uncertainty: Heuristics and biases, Science
185 (1974). URL: https://doi.org/10.1126/science.185.4157.1124.
[21] K. E. Stanovich, M. E. Toplak, R. F. West, The development of rational thought: A
taxonomy of heuristics and biases, in: Advances in Child Development and Behavior,
Elsevier, 2008, pp. 251–285. URL: https://doi.org/10.1016/s0065-2407(08)00006-2.
[22] D. Arnott, Cognitive biases and decision support systems development: a design science
approach, Information Systems Journal 16 (2006). URL: https://doi.org/10.1111/j.1365-2575.
2006.00208.x.
[23] R. Pohl, R. F. Pohl, Cognitive illusions: A handbook on fallacies and biases in thinking,
judgement and memory, Psychology Press, 2004.
[24] E. Dimara, S. Franconeri, C. Plaisant, A. Bezerianos, P. Dragicevic, A task-based taxonomy
of cognitive biases for information visualization, IEEE Transactions on Visualization and
Computer Graphics 26 (2020) 1413–1432. URL: https://doi.org/10.1109/tvcg.2018.2872577.
[25] J. Huber, J. W. Payne, C. Puto, Adding asymmetrically dominated alternatives: Violations
of regularity and the similarity hypothesis, Journal of Consumer Research 9 (1982) 90.</p>
      <p>URL: https://doi.org/10.1086/208899. doi:10.1086/208899.
[26] J. Hu, R. Yu, The neural correlates of the decoy efect in decisions, Frontiers
in Behavioral Neuroscience 8 (2014). URL: https://doi.org/10.3389/fnbeh.2014.00271.
doi:10.3389/fnbeh.2014.00271.
[27] Z. Wang, M. Jusup, L. Shi, et al., Exploiting a cognitive bias promotes cooperation in
social dilemma experiments, Nature Communications 9 (2018). URL: https://doi.org/10.
1038/s41467-018-05259-5. doi:10.1038/s41467-018-05259-5.
[28] B. M. Josiam, J. P. Hobson, Consumer choice in context: The decoy efect in travel
and tourism, Journal of Travel Research 34 (1995) 45–50. URL: https://doi.org/10.1177/
004728759503400106. doi:10.1177/004728759503400106.
[29] E. C. Teppan, A. Felfernig, Minimization of decoy efects in recommender result sets,
Web Intelligence and Agent Systems: An International Journal 10 (2012) 385–395. URL:
https://doi.org/10.3233/wia-2012-0253. doi:10.3233/wia-2012-0253.
[30] A. Tversky, D. Kahneman, The framing of decisions and the psychology of choice, Science
211 (1981). URL: https://www.science.org/doi/abs/10.1126/science.7455683. doi:10.1126/
science.7455683.
[31] S. Gächter, H. Orzen, E. Renner, C. Starmer, Are experimental economists prone to framing
efects? a natural field experiment, Journal of Economic Behavior &amp; Organization 70 (2009)
443–446. URL: https://doi.org/10.1016/j.jebo.2007.11.003. doi:10.1016/j.jebo.2007.11.
003.
[32] I. P. Levin, S. L. Schneider, G. J. Gaeth, All frames are not created equal: A typology
and critical analysis of framing effects, Organizational Behavior and Human Decision
Processes 76 (1998) 149–188. URL: https://doi.org/10.1006/obhd.1998.2804. doi:10.1006/
obhd.1998.2804.
[33] J. Gong, Y. Zhang, Z. Yang, Y. Huang, J. Feng, W. Zhang, The framing effect in medical
decision-making: a review of the literature, Psychology, Health &amp; Medicine 18 (2013)
645–653. URL: https://doi.org/10.1080/13548506.2013.766352. doi:10.1080/13548506.
2013.766352.
[34] P. E. Souza, C. P. C. Chanel, F. Dehais, S. Givigi, Towards human-robot interaction: A
framing effect experiment, in: 2016 IEEE International Conference on Systems, Man,
and Cybernetics (SMC), IEEE, 2016, pp. 001929–001934. URL: https://doi.org/10.1109/smc.
2016.7844521. doi:10.1109/smc.2016.7844521.
[35] T. Kim, H. Song, Communicating the limitations of AI: The effect of message framing
and ownership on trust in artificial intelligence, International Journal of
Human–Computer Interaction (2022) 1–11. URL: https://doi.org/10.1080/10447318.2022.2049134. doi:10.
1080/10447318.2022.2049134.
[36] F. Ni, D. Arnott, S. Gao, The anchoring effect in business intelligence supported
decision-making, Journal of Decision Systems 28 (2019) 67–81. URL: https://doi.org/10.1080/
12460125.2019.1620573. doi:10.1080/12460125.2019.1620573.
[37] T. Yasseri, J. Reher, Fooled by facts: quantifying anchoring bias through a large-scale
experiment, Journal of Computational Social Science 5 (2022) 1001–1021. URL: https:
//doi.org/10.1007/s42001-021-00158-0. doi:10.1007/s42001-021-00158-0.
[38] G. Adomavicius, J. C. Bockstedt, S. P. Curley, J. Zhang, Do recommender systems
manipulate consumer preferences? A study of anchoring effects, Information Systems Research (2013).
[39] B. K. Hayes, B. R. Newell, Induction with uncertain categories: When do people consider
the category alternatives?, Memory &amp; Cognition 37 (2009) 730–743. URL: https://doi.org/
10.3758/mc.37.6.730. doi:10.3758/mc.37.6.730.
[40] D. V. Burakov, Exogenous credit cycle: An experimental study, World Applied Sciences</p>
      <p>Journal 26 (2013). doi:10.5829/idosi.wasj.2013.26.06.13510.
[41] A. Tversky, D. Kahneman, Extensional versus intuitive reasoning: The conjunction
fallacy in probability judgment., Psychological Review 90 (1983) 293–315. URL: https:
//doi.org/10.1037/0033-295x.90.4.293.
[42] K. Tentori, N. Bonini, D. Osherson, The conjunction fallacy: a misunderstanding
about conjunction?, Cognitive Science 28 (2004) 467–477. URL: https://doi.org/10.1207/
s15516709cog2803_8. doi:10.1207/s15516709cog2803_8.
[43] D. H. Wedell, R. Moro, Testing boundary conditions for the conjunction fallacy: Effects
of response mode, conceptual focus, and problem type, Cognition 107 (2008). URL: https:
//doi.org/10.1016/j.cognition.2007.08.003. doi:10.1016/j.cognition.2007.08.003.
[44] Y. Lo, A. Sides, J. Rozelle, D. Osherson, Evidential diversity and premise probability
in young children's inductive judgment, Cognitive Science 26 (2002) 181–206. URL:
https://doi.org/10.1207/s15516709cog2602_2. doi:10.1207/s15516709cog2602_2.
[45] A. K. Barbey, S. A. Sloman, Base-rate respect: From ecological rationality to dual
processes, Behavioral and Brain Sciences 30 (2007) 241–254. URL: https://doi.org/10.1017/
s0140525x07001653. doi:10.1017/s0140525x07001653.
[46] M. Bar-Hillel, The base-rate fallacy in probability judgments, Acta Psychologica
44 (1980) 211–233. URL: https://doi.org/10.1016/0001-6918(80)90046-3. doi:10.1016/
0001-6918(80)90046-3.
[47] E. Gold, The gambler’s fallacy, Ph.D. thesis, Carnegie Mellon University, 1997. URL: https:
//www.proquest.com/dissertations-theses/gamblers-fallacy/docview/304364133/se-2.
[48] G. Barron, S. Leider, The role of experience in the gambler's fallacy, Journal of Behavioral
Decision Making 23 (2010) 117–129. URL: https://doi.org/10.1002/bdm.676. doi:10.1002/
bdm.676.
[49] D. L. Chen, T. J. Moskowitz, K. Shue, Decision making under the gambler’s fallacy:
Evidence from asylum judges, loan officers, and baseball umpires, The Quarterly Journal
of Economics 131 (2016) 1181–1242. URL: https://doi.org/10.1093/qje/qjw017. doi:10.
1093/qje/qjw017.
[50] R. Thaler, Some empirical evidence on dynamic inconsistency, Economics
Letters 8 (1981) 201–207. URL: https://doi.org/10.1016/0165-1765(81)90067-7. doi:10.1016/
0165-1765(81)90067-7.
[51] W. H. Hampton, N. Asadi, I. R. Olson, Good things for those who wait: Predictive
modeling highlights importance of delay discounting for income attainment, Frontiers in
Psychology 9 (2018). URL: https://doi.org/10.3389/fpsyg.2018.01545. doi:10.3389/fpsyg.
2018.01545.
[52] G. Ainslie, Specious reward: A behavioral theory of impulsiveness and impulse
control., Psychological Bulletin (1975). URL: https://doi.org/10.1037/h0076860. doi:10.1037/
h0076860.
[53] C. F. Kurz, A. N. König, Predicting time preference from social media behavior, Future
Generation Computer Systems 130 (2022) 155–163. URL: https://doi.org/10.1016/j.future.
2021.12.017. doi:10.1016/j.future.2021.12.017.
[54] T. S. van Endert, P. N. C. Mohr, Delay discounting of monetary and social media rewards:
Magnitude and trait effects, Frontiers in Psychology 13 (2022). URL: https://doi.org/10.
3389/fpsyg.2022.822505. doi:10.3389/fpsyg.2022.822505.
[55] K. Dion, E. Berscheid, E. Walster, What is beautiful is good., Journal of Personality
and Social Psychology 24 (1972) 285–290. URL: https://doi.org/10.1037/h0033731. doi:10.
1037/h0033731.
[56] R. E. Nisbett, T. D. Wilson, The halo effect: Evidence for unconscious alteration of
judgments., Journal of Personality and Social Psychology 35 (1977) 250–256. URL: https:
//doi.org/10.1037/0022-3514.35.4.250. doi:10.1037/0022-3514.35.4.250.
[57] J. L. Gibson, J. S. Gore, Is he a hero or a weirdo? How norm violations influence the halo
effect, Gender Issues 33 (2016) 299–310. URL: https://doi.org/10.1007/s12147-016-9173-6.
doi:10.1007/s12147-016-9173-6.
[58] D. Landy, H. Sigall, Beauty is talent: Task evaluation as a function of the performer's
physical attractiveness., Journal of Personality and Social Psychology 29 (1974). URL:
https://doi.org/10.1037/h0036018. doi:10.1037/h0036018.
[59] M. I. Norton, D. Mochon, D. Ariely, The IKEA effect: When labor leads to love, Journal of
Consumer Psychology 22 (2012) 453–460. URL: https://doi.org/10.1016/j.jcps.2011.08.002.
doi:10.1016/j.jcps.2011.08.002.
[60] T. Radtke, N. Liszewska, K. Horodyska, M. Boberska, K. Schenkel, A. Luszczynska,
Cooking together: The IKEA effect on family vegetable intake, British Journal of Health
Psychology 24 (2019) 896–912. URL: https://doi.org/10.1111/bjhp.12385. doi:10.1111/
bjhp.12385.
[61] F. Brunner, F. Gamm, W. Mill, MyPortfolio: The IKEA effect in financial investment
decisions, Journal of Banking &amp; Finance (2022) 106529. URL: https://doi.org/10.1016/j.
jbankfin.2022.106529. doi:10.1016/j.jbankfin.2022.106529.
[62] J. W. Pratt, Risk aversion in the small and in the large, in: Uncertainty in Economics,
Elsevier, 1978, pp. 59–79. URL: https://doi.org/10.1016/b978-0-12-214850-7.50010-3. doi:10.
1016/b978-0-12-214850-7.50010-3.
[63] K. E. Stanovich, Decision making and rationality in the modern world, New York, Oxford</p>
      <p>University Press, 2010.
[64] B. Fischhoff, P. Slovic, S. Lichtenstein, et al., How safe is safe enough? A psychometric
study of attitudes towards technological risks and benefits, Policy Sciences (1978). URL:
https://doi.org/10.1007/bf00143739. doi:10.1007/bf00143739.
[65] Y. Rottenstreich, C. K. Hsee, Money, kisses, and electric shocks: On the affective
psychology of risk, Psychological Science 12 (2001) 185–190. URL: https://doi.org/10.1111/
1467-9280.00334. doi:10.1111/1467-9280.00334.
[66] D. P. Crowne, D. Marlowe, A new scale of social desirability independent of
psychopathology., Journal of Consulting Psychology 24 (1960) 349–354. URL: https://doi.org/10.1037/
h0047358. doi:10.1037/h0047358.
[67] J. R. Hebert, L. Clemow, L. Pbert, I. S. Ockene, J. K. Ockene, Social desirability bias in
dietary self-report may compromise the validity of dietary intake measures, International
Journal of Epidemiology 24 (1995). URL: https://doi.org/10.1093/ije/24.2.389. doi:10.1093/
ije/24.2.389.
[68] L. Harrison, The validity of self-reported drug use in survey research: An overview and
critique of research methods. National Institute on Drug Abuse Monograph 167, 2006.
[69] G. S. Stuart, D. A. Grimes, Social desirability bias in family planning studies: a neglected
problem, Contraception 80 (2009) 108–112. URL: https://doi.org/10.1016/j.contraception.
2009.02.009. doi:10.1016/j.contraception.2009.02.009.
[70] E. F. Loftus, J. C. Palmer, Reconstruction of automobile destruction: An example of
the interaction between language and memory, Journal of Verbal Learning and Verbal
Behavior 13 (1974) 585–589. URL: https://doi.org/10.1016/s0022-5371(74)80011-3. doi:10.
1016/s0022-5371(74)80011-3.
[71] E. F. Loftus, Reconstructing memory: The incredible eyewitness, Jurimetrics Journal 15
(1975) 188–193. URL: http://www.jstor.org/stable/29761487.
[72] J. M. Lampinen, J. S. Neuschatz, D. G. Payne, Memory illusions and consciousness:
Examining the phenomenology of true and false memories, Current Psychology 16 (1997) 181–224.</p>
      <p>URL: https://doi.org/10.1007/s12144-997-1000-5. doi:10.1007/s12144-997-1000-5.
[73] D. M. Bernstein, E. F. Loftus, The consequences of false memories for food preferences
and choices, Perspectives on Psychological Science 4 (2009) 135–139. URL: https://doi.
org/10.1111/j.1745-6924.2009.01113.x. doi:10.1111/j.1745-6924.2009.01113.x.
[74] T. B. Rogers, N. A. Kuiper, W. S. Kirker, Self-reference and the encoding of personal
information., Journal of Personality and Social Psychology 35 (1977) 677–688. URL:
https://doi.org/10.1037/0022-3514.35.9.677. doi:10.1037/0022-3514.35.9.677.
[75] A. H. Gutchess, E. A. Kensinger, C. Yoon, D. L. Schacter, Ageing and the
self-reference effect in memory, Memory 15 (2007) 822–837. URL: https://doi.org/10.1080/
09658210701701394. doi:10.1080/09658210701701394.
[76] B. B. Murdock, The serial position effect of free recall., Journal of Experimental Psychology
64 (1962) 482–488. URL: https://doi.org/10.1037/h0045106. doi:10.1037/h0045106.
[77] B. Murdock, J. Metcalfe, Controlled rehearsal in single-trial free recall, Journal of
Verbal Learning and Verbal Behavior 17 (1978) 309–324. URL: https://doi.org/10.1016/
s0022-5371(78)90201-3. doi:10.1016/s0022-5371(78)90201-3.
[78] S. E. Asch, Forming impressions of personality., The Journal of Abnormal and Social</p>
      <p>Psychology 41 (1946). URL: https://doi.org/10.1037/h0055756. doi:10.1037/h0055756.
[79] D. Kahneman, B. L. Fredrickson, C. A. Schreiber, D. A. Redelmeier, When more pain is
preferred to less: Adding a better end, Psychological Science 4 (1993) 401–405. URL: https:
//doi.org/10.1111/j.1467-9280.1993.tb00589.x. doi:10.1111/j.1467-9280.1993.tb00589.
x.
[80] Z. Carmon, D. Kahneman, The experienced utility of queuing: real time affect and
retrospective evaluations of simulated queues, Duke University: Durham, NC, USA
(1996).
[81] P. De Maeyer, H. Estelami, Applying the peak-end rule to reference prices, Journal of</p>
      <p>Product &amp; Brand Management (2013).
[82] W. Samuelson, R. Zeckhauser, Status quo bias in decision making, Journal of Risk and</p>
      <p>Uncertainty 1 (1988). URL: https://doi.org/10.1007/bf00055564. doi:10.1007/bf00055564.
[83] D. Kahneman, J. L. Knetsch, R. H. Thaler, Anomalies: The endowment effect, loss
aversion, and status quo bias, Journal of Economic Perspectives 5 (1991). URL: https:
//doi.org/10.1257/jep.5.1.193. doi:10.1257/jep.5.1.193.
[84] D. R. Forsyth, Group dynamics, 1990.
[85] T. Postmes, R. Spears, S. Cihangir, Quality of decision making and group norms., Journal
of Personality and Social Psychology 80 (2001) 918–930. URL: https://doi.org/10.1037/
0022-3514.80.6.918. doi:10.1037/0022-3514.80.6.918.
[86] G. Stasser, D. Stewart, Discovery of hidden profiles by decision-making groups: Solving
a problem versus making a judgment., Journal of Personality and Social Psychology 63
(1992) 426–434. URL: https://doi.org/10.1037/0022-3514.63.3.426. doi:10.1037/0022-3514.
63.3.426.
[87] I. Simonson, The effect of purchase quantity and timing on variety-seeking behavior,
Journal of Marketing Research 27 (1990) 150. URL: https://doi.org/10.2307/3172842. doi:10.
2307/3172842.
[88] D. Read, G. Loewenstein, Diversification bias: Explaining the discrepancy in
variety seeking between combined and separated choices., Journal of Experimental
Psychology: Applied 1 (1995) 34–49. URL: https://doi.org/10.1037/1076-898x.1.1.34.
doi:10.1037/1076-898x.1.1.34.
[89] D. Kliger, M. J. van den Assem, R. C. Zwinkels, Empirical behavioral finance, Journal of
Economic Behavior &amp; Organization 107 (2014) 421–427. URL: https://doi.org/10.1016/j.
jebo.2014.10.012. doi:10.1016/j.jebo.2014.10.012.
[90] G. Gigerenzer, D. G. Goldstein, Reasoning the fast and frugal way: Models of bounded
rationality., Psychological Review 103 (1996) 650–669. URL: https://doi.org/10.1037/
0033-295x.103.4.650. doi:10.1037/0033-295x.103.4.650.
[91] Y. Wang, S. Luan, G. Gigerenzer, Modeling fast-and-frugal heuristics, PsyCh Journal 11
(2022) 600–611. URL: https://doi.org/10.1002/pchj.576. doi:10.1002/pchj.576.
[92] C. A. Hidalgo, et al., How humans judge machines, MIT Press, 2021.
[93] C. Hang, T. Ono, S. Yamada, Designing nudge agents that promote human altruism, 2021.</p>
      <p>URL: https://arxiv.org/abs/2110.00319.
[94] X. Dou, C.-F. Wu, Are we ready for “them” now? The relationship between human and
humanoid robots, in: Integrated Science, Springer International Publishing, Cham, 2021,
pp. 377–394. URL: https://doi.org/10.1007/978-3-030-65273-9_18.
[95] C. L. van Straten, J. Peter, R. Kühne, A. Barco, Transparency about a robot's lack of human
psychological capacities, ACM Transactions on Human-Robot Interaction 9 (2020) 1–22.</p>
      <p>URL: https://doi.org/10.1145/3365668.
[96] Z. Lu, M. Yin, Human reliance on machine learning models when performance feedback
is limited: Heuristics and risks, in: Proceedings of the 2021 CHI Conference on Human
Factors in Computing Systems, 2021, pp. 1–16.
[97] G. Pizzi, D. Scarpi, E. Pantano, Artificial intelligence and the new forms of interaction:
Who has the control when interacting with a chatbot?, Journal of Business Research 129
(2021) 878–890. URL: https://doi.org/10.1016/j.jbusres.2020.11.006.
[98] D. Wang, Q. Yang, A. Abdul, B. Y. Lim, Designing theory-driven user-centric explainable
AI, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems,
CHI ’19, Association for Computing Machinery, New York, NY, USA, 2019, p. 1–15. URL:
https://doi.org/10.1145/3290605.3300831. doi:10.1145/3290605.3300831.
[99] T. Miller, Explanation in artificial intelligence: Insights from the social sciences, Artificial</p>
      <p>Intelligence 267 (2019). URL: https://doi.org/10.1016/j.artint.2018.07.007.
[100] Z. Buçinca, M. B. Malaya, K. Z. Gajos, To trust or to think: Cognitive forcing functions
can reduce overreliance on AI in AI-assisted decision-making, Proceedings of the ACM on
Human-Computer Interaction 5 (2021). URL: https://doi.org/10.1145/3449287.
[101] G. Gigerenzer, H. Brighton, Homo heuristicus: Why biased minds make better inferences,
Topics in cognitive science 1 (2009) 107–143. URL: https://doi.org/10.1111/j.1756-8765.
2008.01006.x.
[102] H. Taniguchi, H. Sato, T. Shirakawa, Application of human cognitive mechanisms to
naïve Bayes text classifier, AIP Conference Proceedings 1863 (2017) 360016. doi:10.1063/
1.4992545.
[103] M. Sidman, R. Rauzin, R. Lazar, S. Cunningham, W. Tailby, P. Carrigan, A search for
symmetry in the conditional discriminations of rhesus monkeys, baboons, and children,
Journal of the Experimental Analysis of Behavior 37 (1982) 23–44. URL: https://doi.org/
10.1901/jeab.1982.37-23.
[104] W. E. Merriman, L. L. Bowman, B. MacWhinney, The mutual exclusivity bias in children's
word learning, Monographs of the Society for Research in Child Development 54 (1989) i.</p>
      <p>URL: https://doi.org/10.2307/1166130.
[105] H. Taniguchi, H. Sato, T. Shirakawa, Implementation of human cognitive bias on
neural network and its application to breast cancer diagnosis, SICE Journal of Control,
Measurement, and System Integration 12 (2019). URL: https://doi.org/10.9746/jcmsi.12.56.
[106] N. Manome, S. Shinohara, T. Takahashi, Y. Chen, U. il Chung, Self-incremental learning
vector quantization with human cognitive biases, Scientific Reports (2021). URL: https:
//doi.org/10.1038/s41598-021-83182-4.
[107] G. Gigerenzer, P. M. Todd, Simple heuristics that make us smart, Oxford University Press,</p>
      <p>USA, 1999.
[108] G. Gigerenzer, S. Kurzenhaeuser, Fast and frugal heuristics in medical decision making,</p>
      <p>Science and medicine in dialogue: Thinking through particulars and universals (2005).
[109] N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, A. Galstyan, A survey on bias and
fairness in machine learning, ACM Computing Surveys 54 (2021) 1–35. URL: https:
//doi.org/10.1145/3457607.
[110] A. Castelnovo, R. Crupi, G. Greco, D. Regoli, I. G. Penco, A. C. Cosentini, A clarification
of the nuances in the fairness metrics landscape, Scientific Reports (2022). URL: https:
//doi.org/10.1038/s41598-022-07939-1.
[111] P. Sen, D. Ganguly, Towards socially responsible AI: Cognitive bias-aware multi-objective
learning, Proceedings of the AAAI Conference on Artificial Intelligence 34 (2020)
2685–2692. URL: https://doi.org/10.1609/aaai.v34i03.5654.
[112] C. G. Harris, Mitigating cognitive biases in machine learning algorithms for decision
making, in: Companion Proceedings of the Web Conference 2020, WWW ’20, Association
for Computing Machinery, New York, NY, USA, 2020, p. 775–781. URL: https://doi.org/10.
1145/3366424.3383562. doi:10.1145/3366424.3383562.
[113] B. Munier, R. Selten, D. Bouyssou, P. Bourgine, R. Day, N. Harvey, D. Hilton, M. J. Machina,
P. Parker, J. Sterman, E. Weber, B. Wernerfelt, R. Wensley, Bounded rationality modeling,
Marketing Letters 10 (1999). URL: https://doi.org/10.1023/a:1008058417088.
[114] F. Leibfried, D. A. Braun, Bounded rational decision-making in feedforward neural
networks, 2016. URL: https://arxiv.org/abs/1602.08332.
[115] J. Tenenbaum, Bayesian modeling of human concept learning, Advances in neural
information processing systems 11 (1998).
[116] T. L. Griffiths, J. B. Tenenbaum, Optimal predictions in everyday cognition, Psychological</p>
      <p>Science 17 (2006). URL: https://doi.org/10.1111/j.1467-9280.2006.01780.x.
[117] N. Chater, J. B. Tenenbaum, A. Yuille, Probabilistic models of cognition: Conceptual
foundations, Trends in Cognitive Sciences 10 (2006) 287–291. URL: https://doi.org/10.
1016/j.tics.2006.05.007.
[118] R. A. Jansen, A. N. Rafferty, T. L. Griffiths, A rational model of the Dunning–Kruger effect
supports insensitivity to evidence in low performers, Nature Human Behaviour 5 (2021)
756–763. URL: https://doi.org/10.1038/s41562-021-01057-0.
[119] J.-H. Kang, K. Lerman, VIP: Incorporating human cognitive biases in a probabilistic
model of retweeting, in: Social Computing, Behavioral-Cultural Modeling, and
Prediction, Springer International Publishing, 2015, pp. 101–110. URL: https://doi.org/10.1007/
978-3-319-16268-3_11.
[120] N. J. Blunch, Position bias in multiple-choice questions, Journal of Marketing Research
21 (1984) 216–220. URL: https://doi.org/10.1177/002224378402100210.
[121] R. S. Crowley, E. Legowski, O. Medvedeva, et al., Automated detection of heuristics and
biases among pathologists in a computer-based system, Advances in Health Sciences
Education 18 (2012) 343–363. URL: https://doi.org/10.1007/s10459-012-9374-z.
[122] M. McShane, S. Nirenburg, B. Jarrell, Modeling decision-making biases, Biologically
Inspired Cognitive Architectures 3 (2013) 39–50. URL: https://doi.org/10.1016/j.bica.2012.
09.001.
[123] A. Nussbaumer, K. Verbert, E.-C. Hillemann, M. A. Bedek, D. Albert, A framework for
cognitive bias detection and feedback in a visual analytics environment, in: 2016 European
Intelligence and Security Informatics Conference (EISIC), IEEE, 2016, pp. 148–151. URL:
https://doi.org/10.1109/eisic.2016.038.
[124] M. A. Bedek, A. Nussbaumer, L. Huszar, D. Albert, Methods for discovering cognitive
biases in a visual analytics environment, in: Cognitive Biases in Visualizations, Springer
International Publishing, 2018, pp. 61–73. URL: https://doi.org/10.1007/978-3-319-95831-6_5.
[125] S. Michie, J. Thomas, M. Johnston, et al., The human behaviour-change project:
harnessing the power of artificial intelligence and machine learning for evidence synthesis
and interpretation, Implementation Science 12 (2017). URL: https://doi.org/10.1186/
s13012-017-0641-5.
[126] R. de Oliveira, M. Cherubini, N. Oliver, Movipill: Improving medication compliance for
elders using a mobile persuasive social game, in: Proceedings of the 12th ACM International
Conference on Ubiquitous Computing, UbiComp ’10, Association for Computing
Machinery, New York, NY, USA, 2010, p. 251–260. URL: https://doi.org/10.1145/1864349.1864371.
doi:10.1145/1864349.1864371.
[127] R. J. Heuer, Psychology of intelligence analysis, Center for the Study of Intelligence, 1999.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>Kahneman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tversky</surname>
          </string-name>
          ,
          <article-title>Prospect theory: An analysis of decision under risk</article-title>
          ,
          <source>Econometrica</source>
          <volume>47</volume>
          (
          <year>1979</year>
          )
<fpage>263</fpage>
          . URL: https://doi.org/10.2307/1914185.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Ariely</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jones</surname>
          </string-name>
          , Predictably irrational,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>H. A.</given-names>
            <surname>Simon</surname>
          </string-name>
          , A Behavioral Model of Rational Choice,
          <source>The Quarterly Journal of Economics</source>
          <volume>69</volume>
          (
          <year>1955</year>
          )
          <fpage>99</fpage>
          -
          <lpage>118</lpage>
          . URL: https://doi.org/10.2307/1884852.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>