<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>Turin, Italy</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Per Aspera ad Astra, or Flourishing via Friction: Stimulating Cognitive Activation by Design through Frictional Decision Support Systems</article-title>
      </title-group>
      <contrib-group>
<contrib contrib-type="author">
          <string-name>Chiara Natali</string-name>
          <email>chiara.natali@unimib.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Milano-Bicocca</institution>
          ,
          <addr-line>viale Sarca 336, Milan, 20126</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <abstract>
        <p>My research proposal explores the frictional design approach to stimulate cognitive engagement in Decision Support Systems, especially those that adopt eXplainable AI (XAI) techniques. The aim of these programmed inefficiencies is to mitigate human overreliance while promoting thoughtfulness and cognitive enhancement, ultimately improving the effectiveness of decision-making. Specifically, I will explore how frictional design principles can be applied to the development of AI decision support systems to create interfaces that encourage users to engage with the system actively and thoughtfully, rather than passively accepting its recommendations, in the form of cautious, comparative, judicial or adjunct support. By increasing transparency and promoting cognitive engagement, frictional design can help ensure that XAI decision support systems remain valuable tools for decision-makers, rather than becoming a crutch or a source of manipulation.</p>
      </abstract>
      <kwd-group>
        <kwd>Decision Support System (DSS)</kwd>
        <kwd>eXplainable AI (XAI)</kwd>
        <kwd>Cognitive friction</kwd>
        <kwd>Human-AI Interaction</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In these frantic months of 2023, we are witnessing the proliferation of generative and
conversational systems that have the capability to produce complex content and engage in human-like
interactions, fostering the illusion of high competence, reliability, and trustworthiness.
Although these systems may not possess sentience or awareness, they are becoming increasingly
persuasive, particularly in sensitive domains. The optimization of these systems aims to achieve
higher agreement with humans, reduce aversion, and foster trust by employing increasingly
effective persuasion strategies studied and classified by the psychological and design sciences. As
a result, decision makers may be more inclined to rely on machine advice than ever before,
prompted also by the increasing intuitiveness and user-friendliness of AI system interfaces that
minimize cognitive friction [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], following the UX design maxim “Don’t Make Me Think!” that
was popularized by a book of the same title [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. While these principles have largely remained
relevant, it is evident that some suggestions inadvertently led to designs that elicited immediate
and thoughtless reactions from users, as in the case of dark patterns [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] in design. Drawing on
the insights of Frischmann and Selinger [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], according to whom “Tolerating some congestion,
some friction, some inefficiency, even some transaction costs may be necessary to sustain an
underdetermined environment conducive to human flourishing”, my research as a first-year PhD
student under the supervision of Prof. Federico Cabitza has focused on the role of AI Decision
Support Systems (DSS) in the short-term decision-making process within the medical domain.
Specifically, I investigate the phenomena of automation bias and deskilling. My objective is to
explore whether the cognitive-forcing functions [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] embedded in ‘programmed inefficiencies’
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] or ‘frictional protocols’, that make the interpretation of the AI output less immediate and
promote the cognitive activation of users, can yield benefits and mitigate potential negative
impacts in terms of decision accuracy, usability and skill retention. Recognizing the significant
risks of overreliance and deskilling associated with the development of DSS, it is essential to
acknowledge that AI interventions encompass various design choices that become evident
in the presentation of the system’s output: it is the case of choosing between clear-cut
categories, probabilities, prioritized lists of alternatives, similar cases, or explanations, just to name
a few options. Additionally, lesser-explored design decisions include the order and level of
automation (e.g., AI as a first-opinion or second-opinion giver [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], AI as an autonomous agent
or case-mining tool [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]), optimization criteria (e.g., accuracy or utility), and adaptation to the
target population (e.g., novices or experts). These configurations collectively form a protocol of
human-AI interaction, which defines the “process schema that stipulates the use of AI tools by
competent practitioners to perform a certain task or job” [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Each specific protocol possesses a
distinct potential for influence and user reliance, leading to varying levels of dominance (i.e.
“the influence that an AI system can exert on human judgment and decisions” [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]) after proper
evaluation. Within the scope of my investigation, I propose and aim to evaluate empirically
four specific protocols aimed at safeguarding human agency: cautious protocols, comparative
protocols, judicial protocols, and adjunct protocols.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. The protocols</title>
      <p>
        Cautious protocol. This protocol involves presenting a set of potential answers (possibly also
with varying individual confidence scores), accompanied by a predetermined probability level
that the proposed set encompasses the correct answer, such as in conformal prediction [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
Consequently, the system may offer the option to abstain from providing an answer [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. In
cautious protocols, the limitations of the AI system and of its predictions are transparently
communicated to the user, adopting a similar approach to ‘seamful design’ [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], where knowledge
gaps, uncertainties, and discrepancies between the training data and the application domain are
deliberately revealed so as to empower human agency [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
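      <p>As a toy illustration of the set-valued, abstention-friendly output a cautious protocol could build on, the following sketch applies split conformal prediction to invented class probabilities; the calibration scores, class labels and coverage level are all hypothetical, not drawn from any of the cited studies.</p>

```python
import math

def conformal_set(cal_scores, test_probs, alpha=0.1):
    """Labels forming a set that covers the true answer with prob. >= 1 - alpha.

    cal_scores: nonconformity scores (1 - probability of the true class)
    computed on a held-out calibration set; test_probs: per-class
    probabilities for the new case. All values used below are toy numbers.
    """
    n = len(cal_scores)
    # Conformal quantile with the finite-sample (n + 1) correction.
    k = min(n, math.ceil((n + 1) * (1 - alpha)))
    qhat = sorted(cal_scores)[k - 1]
    # Keep every label whose nonconformity score is within the threshold.
    return {label for label, p in test_probs.items() if 1 - p <= qhat}

cal = [0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70]
pred = conformal_set(cal, {"benign": 0.55, "malignant": 0.40, "other": 0.05},
                     alpha=0.2)
# A set with more than one label communicates uncertainty to the user; a set
# containing all labels can be surfaced as the system abstaining altogether.
```

      <p>When the prediction set grows to cover every class, the protocol can present this as an explicit abstention rather than forcing a single verdict on the user.</p>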
      <p>
        Comparative protocol. In this protocol, the AI system provides users with the most similar
cases associated with either a ground truth or all available classes in relation to the case being
considered. By leveraging this approach, the system facilitates analogical reasoning, resembling
a transactive memory that taps into a repository of past cases and their corresponding correct
decisions [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. An illustrative example of a comparative protocol is the implementation of pro-hoc
explanations, which replace the AI’s decision support rather than being provided alongside it.
      </p>
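      <p>The case-retrieval step at the heart of a comparative protocol can be sketched in a few lines; the two-dimensional case features, the labels and the choice of Euclidean distance below are invented purely for illustration.</p>

```python
import math

def most_similar_cases(case, repository, k=3):
    """Rank archived (features, ground_truth) pairs by distance to `case`."""
    ranked = sorted(repository, key=lambda rec: math.dist(case, rec[0]))
    return ranked[:k]

# Toy repository of past cases with their recorded correct decisions.
repository = [((0.2, 0.9), "malignant"), ((0.8, 0.1), "benign"),
              ((0.3, 0.8), "malignant"), ((0.7, 0.2), "benign")]
support = most_similar_cases((0.25, 0.85), repository, k=3)
# The clinician reasons by analogy over `support` instead of over a verdict.
```

      <p>Presenting the retrieved cases with their ground truths, rather than a single recommendation, is what lets the user exercise analogical reasoning instead of merely accepting or rejecting an answer.</p>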
      <p>
        Judicial protocol. In a judicial protocol, the AI system provides arguments and explanations
that support multiple, conflicting decisions or interpretations. It is the example of perorative
explanations generated by opposing conversational agents [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] or even the incorporation of
two or more antagonistic machine learning models, belonging to different families, trained on
distinct representations, ground truths, and parametrizations [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Similarly, the introduction
of “conflicting rules/knowledge” has been explored as a debiasing technique against
overconfidence and underconfidence [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. While this approach does not entirely eliminate the risk of
manipulation by the more persuasive agent, it places the human decision maker at the forefront.
It fosters an appreciation for maintaining an impartial, focused, and responsible stance, while
unveiling the persuasive tactics employed by the AI systems.
      </p>
      <p>
        Adjunct protocol. First introduced in [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], the adjunct protocol differs from the previous
three protocols as it pertains not to the presentation of ‘desirably inefficient’ output modalities
but instead encompasses the design of the decision-making process itself through specialized
Human-AI Collaboration Protocols [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. The typical configuration is that of AI as a second-opinion
giver, as exemplified in the Human-first HAI-CP discussed in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Other process-based
cognitive forcing functions [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], such as checklists, diagnostic time-outs, withholding the AI
suggestion or incorporating longer waiting times (previous studies have indicated that slower
algorithms enhance user accuracy [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]) also contribute to this protocol. During the waiting
period for the AI’s suggestion, the user may formulate their own hypothesis regarding the
correct solution and subsequently evaluate the AI’s explanation to assess its alignment with
their own conception, possibly alleviating anchoring bias [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] and confirmation bias [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
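      <p>Schematically, the human-first ordering that characterizes an adjunct protocol can be captured as a simple control flow in which the AI suggestion is withheld until the user has committed to a hypothesis; the callables and the toy reconciliation record below stand in for a real interface and model, and are not taken from the cited protocols.</p>

```python
def human_first_session(ask_user, ai_suggest, reconcile):
    """Run one human-first trial: the user commits before advice is shown."""
    hypothesis = ask_user()      # step 1: the user answers unaided
    suggestion = ai_suggest()    # step 2: only now is the AI advice revealed
    # step 3: the user reconciles their hypothesis with the suggestion
    return reconcile(hypothesis, suggestion)

outcome = human_first_session(
    lambda: "benign",                                  # stand-in for the UI prompt
    lambda: "malignant",                               # stand-in for the model call
    lambda h, s: {"final": h, "flag_disagreement": h != s},
)
# The user keeps their own answer, but the disagreement is flagged for scrutiny.
```

      <p>The point of the ordering is that any anchoring runs from the human hypothesis toward the AI advice, not the other way around.</p>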
    </sec>
    <sec id="sec-3">
      <title>3. Research questions and methodology</title>
      <p>
        The research questions to be addressed in my doctoral thesis are the following:
• RQ1. Are frictional design principles conducive to more effective, or at least equally effective,
decision-making compared to traditional protocols (cf. non-inferiority trials)?
• RQ2a. Do frictional DSS mitigate the risk of automation bias and dominance (as
operationalized in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ])?
• RQ2b. Do frictional DSS mitigate the risk of deskilling as well as upskilling inhibition?
• RQ3. What is the usability of frictional design patterns, and does it change according to
user expertise and task complexity?
      </p>
      <p>
        To address these research questions, a series of user experiments involving medical
professionals were conducted [
        <xref ref-type="bibr" rid="ref20 ref6 ref7 ref8 ref9">20, 9, 8, 6, 7</xref>
        ] and more are in the works to employ and assess related
Human-AI Collaboration Protocols (HAI-CP). For the first and second research questions, I will
consider the concepts of technology benefit and reliance patterns presented and
operationalized in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] for the assessment of the impact of DSS on decision-making in terms of decision
effectiveness and cognitive biases, using the rates of decision change following the exposure to
AI advice as a proxy for the (positive or negative) influence of the DSS. The most ambitious
and complex research question focuses on the consequences and dynamics of long-term use
of frictional and non-frictional HAI-CP in terms of the deskilling or lack of acquisition of new
competences, to be addressed by consulting the relevant literature as it develops and designing
feasible empirical investigations. Finally, I will investigate the last research question (RQ3) by
referring to the state-of-the-art in usability questionnaires and possibly conducting qualitative
interviews following the grounded theory approach.
      </p>
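      <p>As a hypothetical sketch of the decision-change proxy described above, each trial can record the human answer before advice, the advice itself, and the final answer; the field names and example data are my own assumptions, not the operationalization used in the cited work.</p>

```python
def reliance_rates(trials):
    """Rates of decision change, and of positive (wrong -> right) and
    negative (right -> wrong) switches, over a list of trial records."""
    changed = [t for t in trials if t["initial"] != t["final"]]
    benefit = sum(1 for t in changed
                  if t["final"] == t["truth"] and t["initial"] != t["truth"])
    detriment = sum(1 for t in changed
                    if t["initial"] == t["truth"] and t["final"] != t["truth"])
    n = len(trials)
    return {"change": len(changed) / n,
            "benefit": benefit / n,
            "detriment": detriment / n}

trials = [
    {"initial": "A", "advice": "B", "final": "B", "truth": "B"},  # positive switch
    {"initial": "A", "advice": "B", "final": "B", "truth": "A"},  # negative switch
    {"initial": "A", "advice": "A", "final": "A", "truth": "A"},  # no change
    {"initial": "B", "advice": "A", "final": "B", "truth": "B"},  # resisted advice
]
rates = reliance_rates(trials)
```

      <p>A high rate of negative switches relative to positive ones would indicate detrimental reliance on the DSS, whereas the opposite pattern would indicate technology benefit.</p>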
    </sec>
    <sec id="sec-4">
      <title>4. Preliminary results and future work</title>
      <p>The laboratory I am affiliated with (MUDI Lab, University of Milano-Bicocca) conducted an
empirical user study in the field of radiology to investigate the use of pro-hoc explanations
within a human-first, comparative protocol. The study employed elements of an adjunct
protocol, utilizing a human-first interaction approach, and a comparative protocol via pro-hoc
explanations, as the AI intervention involved presenting the three most similar cases (to the
case at hand) retrieved from the available dataset. The study’s findings are currently being
prepared for submission to a medical informatics journal. Although the AI intervention led to
a non-significant improvement in accuracy (approximately 2%), physicians perceived the AI
support as significantly useful, especially so those with less clinical experience.</p>
      <p>Building upon these preliminary results, future studies will mainly focus on evaluating the
effects of cautious and judicial protocols. A collaborative experiment with the University
of Pavia is currently underway to investigate the application of the judicial protocol in the
radiological domain.</p>
      <p>Additionally, a conceptual emphasis will be placed on developing a taxonomy of frictional
design patterns. The objective is to create a repository of reusable design patterns that incorporate
frictional elements in several human-AI interaction protocols across diverse contexts.</p>
      <p>Finally, I will devote time to the study of ethical considerations connected to frictional support,
developing impact assessment checklists and working on governance issues: my aim will be to
develop guidelines and recommendations to ensure the responsible and ethical implementation
of programmed inefficiencies in various contexts.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>
        Instead of attributing the phenomenon of human over-reliance on AI systems to cognitive biases
and human limitations, my research proposes a paradigm shift by focusing on how technology
is designed and deployed, with designers and programmers responsible for the promotion of
user agency, responsibility and skill. While recognizing that increased friction may negatively
affect how users perceive system usability, I criticize the prioritization of efficiency and comfort
over the efficacy and integrity of our knowledge work, putting at the forefront the promotion of
responsible and thoughtful decision-making as well as the mitigation of risks of over-reliance,
technology dominance and deskilling. This approach aligns with the principles of slow design
[
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], which does not advocate for slowness in itself in a technophobic fashion, but rather aims to
facilitate users in engaging in the right actions at the appropriate time and pace. By embracing
frictional design of DSS, we can encourage users to better understand and reflect on their actions,
fostering more intentional and meaningful interactions with technology.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Cooper</surname>
          </string-name>
          ,
          <article-title>The inmates are running the asylum</article-title>
          , Springer,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Krug</surname>
          </string-name>
          ,
          <article-title>Don't make me think!</article-title>
          ,
          <source>Pearson</source>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Gray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Battles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hoggatt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Toombs</surname>
          </string-name>
          ,
          <article-title>The dark (patterns) side of ux design</article-title>
          ,
          <source>in: Proceedings of the 2018 CHI conference on human factors in computing systems</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Frischmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Selinger</surname>
          </string-name>
          ,
          <source>Re-engineering humanity</source>
          , Cambridge University Press,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Buçinca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. B.</given-names>
            <surname>Malaya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. Z.</given-names>
            <surname>Gajos</surname>
          </string-name>
          ,
          <article-title>To trust or to think: cognitive forcing functions can reduce overreliance on ai in ai-assisted decision-making</article-title>
          ,
          <source>Proceedings of the ACM on Human-Computer Interaction</source>
          <volume>5</volume>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>F.</given-names>
            <surname>Cabitza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Campagner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ciucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Seveso</surname>
          </string-name>
          ,
          <article-title>Programmed inefficiencies in dss-supported human decision making</article-title>
          ,
          <source>in: Modeling Decisions for Artificial Intelligence: 16th International Conference, Milan, Italy, September 4-6</source>
          ,
          <year>2019</year>
          , Springer,
          <year>2019</year>
          , pp.
          <fpage>201</fpage>
          -
          <lpage>212</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>F.</given-names>
            <surname>Cabitza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Campagner</surname>
          </string-name>
          , et al.,
          <article-title>Rams, hounds and white boxes: Investigating human-ai collaboration protocols in medical diagnosis</article-title>
          ,
          <source>Artificial Intelligence in Medicine</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>F.</given-names>
            <surname>Cabitza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Campagner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Simone</surname>
          </string-name>
          ,
          <article-title>The need to move away from agential-ai</article-title>
          ,
          <source>International Journal of Human-Computer Studies</source>
          <volume>155</volume>
          (
          <year>2021</year>
          )
          <fpage>102696</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>F.</given-names>
            <surname>Cabitza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Campagner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Angius</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Natali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Reverberi</surname>
          </string-name>
          ,
          <article-title>Ai shall have no dominion: on how to measure technology dominance in ai-supported human decision-making</article-title>
          ,
          <source>in: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G.</given-names>
            <surname>Shafer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vovk</surname>
          </string-name>
          ,
          <article-title>A tutorial on conformal prediction</article-title>
          ,
          <source>Journal of Machine Learning Research</source>
          <volume>9</volume>
          (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Campagner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cabitza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ciucci</surname>
          </string-name>
          ,
          <article-title>Three-way decision for handling uncertainty in machine learning: A narrative review</article-title>
          ,
          <source>in: IJCRS</source>
          <year>2020</year>
          , Havana, Cuba, June 29-July 3,
          <year>2020</year>
          , Proceedings, Springer,
          <year>2020</year>
          , pp.
          <fpage>137</fpage>
          -
          <lpage>152</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Chalmers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>MacColl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bell</surname>
          </string-name>
          ,
          <article-title>Seamful design: Showing the seams in wearable computing, in: 2003 IEE Eurowearable</article-title>
          , IET,
          <year>2003</year>
          , pp.
          <fpage>11</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Inman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ribes</surname>
          </string-name>
          ,
          <article-title>Beautiful seams: Strategic revelations and concealments</article-title>
          ,
          <source>in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>T.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>Explainable ai is dead, long live explainable ai! hypothesis-driven decision support</article-title>
          ,
          <source>arXiv preprint arXiv:2302.12389</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hildebrandt</surname>
          </string-name>
          ,
          <article-title>Algorithmic regulation and the rule of law</article-title>
          ,
          <source>Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences</source>
          <volume>376</volume>
          (
          <year>2018</year>
          )
          <fpage>20170355</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kliegr</surname>
          </string-name>
          , Š. Bahník,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fürnkranz</surname>
          </string-name>
          ,
          <article-title>A review of possible effects of cognitive biases on interpretation of rule-based machine learning models</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>295</volume>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>F.</given-names>
            <surname>Cabitza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Natali</surname>
          </string-name>
          ,
          <article-title>Open, multiple, adjunct. decision support at the time of relational ai, in: HHAI2022: Augmenting Human Intellect</article-title>
          , IOS Press,
          <year>2022</year>
          , pp.
          <fpage>243</fpage>
          -
          <lpage>245</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Barber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kirlik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Karahalios</surname>
          </string-name>
          ,
          <article-title>A slow algorithm improves users' assessments of the algorithm's accuracy</article-title>
          ,
          <source>Proceedings of the ACM on Human-Computer Interaction</source>
          <volume>3</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>C.</given-names>
            <surname>Rastogi</surname>
          </string-name>
          , et al.,
          <article-title>Deciding fast and slow: The role of cognitive biases in ai-assisted decision-making</article-title>
          ,
          <source>Proceedings of the ACM on Human-Computer Interaction</source>
          <volume>6</volume>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>22</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>F.</given-names>
            <surname>Cabitza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Campagner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Famiglini</surname>
          </string-name>
          , et al.,
          <article-title>Color shadows (part i): Exploratory usability evaluation of activation maps in radiological machine learning</article-title>
          ,
          <source>in: Machine Learning and Knowledge Extraction</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>31</fpage>
          -
          <lpage>50</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>B.</given-names>
            <surname>Grosse-Hering</surname>
          </string-name>
          , et al.,
          <article-title>Slow design for meaningful interactions</article-title>
          ,
          <source>in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>3431</fpage>
          -
          <lpage>3440</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>