<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Stimulating Cognitive Engagement in Hybrid Decision-Making: Friction, Reliance and Biases (preface)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Chiara Natali</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Brett Frischmann</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Federico Cabitza</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>IRCCS Galeazzi Sant'Ambrogio Hospital</institution>
          ,
          <addr-line>Via Cristina Belgioioso 173, Milan</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Milano-Bicocca</institution>
          ,
          <addr-line>Viale Sarca 336, Milan</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Villanova University</institution>
          ,
          <addr-line>299 N. Spring Mill Rd. Villanova, Pennsylvania</addr-line>
          ,
          <country country="US">United States</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This workshop critically examined the trend toward rapid and seamless human-AI interactions and considered alternative forms of prosocial engagement. We focused on the role of designers and developers in fostering user empowerment, skill development, and appropriate reliance on AI for responsible decision-making. Our discussions centered on friction-in-design and the core concepts of 'programmed inefficiencies' and 'frictional protocols': design elements intentionally included to promote cognitive engagement and thoughtful interaction with AI, even when this makes interactions slower. The workshop featured contributions on design principles that balance efficiency with engagement, methods for revealing and reducing biases in explainable AI systems, and considerations for a meaningful future with AI. This first edition set the stage for future research and community-building efforts around 'Frictional AI' to encourage more informed and reflective human-AI interactions.</p>
      </abstract>
      <kwd-group>
        <kwd>Human-AI Interaction</kwd>
        <kwd>Frictional AI</kwd>
        <kwd>Decision Support Systems</kwd>
        <kwd>Machine Learning</kwd>
        <kwd>Interaction protocols</kwd>
        <kwd>Usability</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The workshop brought together a diverse group of scholars, practitioners, and researchers
who critically examined the role of AI designers and developers in shaping user reliance on AI
systems. Moving beyond the conventional attribution of over-reliance to inherent cognitive
biases, the discussions highlighted how intentional design can either exacerbate or mitigate
such biases, ultimately influencing the quality of human knowledge work and decision-making.</p>
      <p>The workshop’s contributions can be broadly categorized into two core themes, encompassing
both theory and practice: the theoretical exploration of biases in Human-AI interaction and the
presentation of practical design applications.</p>
      <p>The first theme focused on a theoretical examination of cognitive biases and their interaction
with AI systems. The contributions involved reflections on the psychological and philosophical
factors influencing human reliance on AI, as well as philosophical accounts of a
meaningful future with AI. This theoretical exploration provided a foundation for understanding
how our knowledge of human biases and reflections on the future of Human-AI interaction
can be leveraged to improve decision-making quality through more reflective and deliberate
interactions with AI.</p>
      <p>The second core theme centered around the practical application of frictional design principles
in real-world settings. Case studies and design frameworks were presented, illustrating how
frictional protocols can be integrated into various AI systems to balance efficiency with cognitive
engagement. Participants shared insights into how these principles could be applied across
different domains, from seamful design for human-AI creative systems to adding friction to
human-robot interaction, decision support systems for social media content regulation, and
AI-mediated communication in medicine. The discussions in this theme provided clear examples of
how frictional design can help prevent automation bias, encourage skill retention, and promote
ethical AI development and use.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Organization</title>
      <sec id="sec-2-1">
        <title>2.1. Workshop Chairs</title>
        <p>• Chiara Natali (University of Milano-Bicocca, Italy)
• Brett M. Frischmann (Villanova University, USA)
• Federico Cabitza (University of Milano-Bicocca, IRCCS Galeazzi Sant’Ambrogio Hospital, Italy)</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Programme Committee</title>
        <p>The Programme Committee comprised a multidisciplinary team of experts from fields including
Computer Science, Human-Centered Computing, Human-Computer Interaction, Psychology,
Philosophy, Sociology, and Artificial Intelligence. Their collective expertise was instrumental in
ensuring the rigorous evaluation of workshop submissions.</p>
        <p>• Noah Apthorpe (Colgate University, USA, Computer Science)
• Niels van Berkel (Aalborg University, Denmark, Human-Centred Computing)
• Andrea Campagner (IRCCS Galeazzi Sant’Ambrogio Hospital, Italy, Artificial Intelligence)
• Marta E. Cecchinato (Northumbria University, UK, Human-Computer Interaction)
• Paolo Cherubini (University of Pavia, Italy, Psychology)
• Lewis L. Chuang (Chemnitz University of Technology, Germany, Neuroscience)
• Davide Ciucci (University of Milano-Bicocca, Italy, Computer Science)
• Vincenzo Crupi (University of Turin, Italy, Philosophy)
• Diletta Huyskes (University of Milan, Italy, Sociology)
• Jo Iacovides (University of York, UK, Human-Computer Interaction)
• Sarah Inman (Google, USA, Human-Centered Design)
• Tomáš Kliegr (Prague University of Economics, Czechia, Informatics)
• Tim Miller (University of Queensland, Australia, Artificial Intelligence)
• Mohammad Naiseh (Bournemouth University, England, Artificial Intelligence)
• Enea Parimbelli (University of Pavia, Italy, Engineering)
• Sarah Michele Rajtmajer (Pennsylvania State University, USA, Computer Science)
• Carlo Reverberi (University of Milano-Bicocca, Italy, Psychology)
• David Ribes (University of Washington, USA, Sociology)
• Scott Robbins (University of Bonn, Germany, Ethics of AI)
• Evan Selinger (Rochester Institute of Technology, USA, Philosophy)
• Yan Shvartzshnaider (York University, Canada, Computer Science)
• Alberto Termine (IDSIA USI-SUPSI, Switzerland, Artificial Intelligence)</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Summary of the workshop</title>
      <p>The workshop included 8 accepted submissions, with authors from institutions in Italy,
the United States of America, Germany, Portugal, and Sweden.</p>
      <p>The submissions were grouped by overarching theme into two presentation sessions:
Human-AI Collaboration and Biases, and Frictional AI Applications.</p>
      <p>Each session included a reflection roundtable, where all the paper authors discussed the
similarities and differences of their approaches and answered questions from the audience.
Finally, we discussed future work to build the frictional AI community.</p>
      <sec id="sec-3-1">
        <title>Introductory talks</title>
        <p>
          • Brett M. FRISCHMANN, Villanova University (USA) - "An Interdisciplinary Research
Agenda for Prosocial Friction-in-Design"
• Chiara NATALI, University of Milano-Bicocca (Italy) - "Frictional AI: Topics and Issues"
Brett M. Frischmann’s talk, "An Interdisciplinary Research Agenda for Prosocial
Friction-in-Design," drew from his 2018 book Re-Engineering Humanity with Evan Selinger [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] and
subsequent research on friction-in-design [
          <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
          ]. He addressed the root of humanity’s
technosocial dilemma: the prevailing economic, social, and political logics that drive the design
of AI systems toward goals like maximizing efficiency, minimizing transaction costs, and
eliminating friction [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. Frischmann argued that these design principles, which prioritize speed,
scale, and seamlessness, often undermine human autonomy and social welfare. To counter
these tendencies, he called for prosocial "friction-in-design" principles and regulations that
challenge the conventional wisdom perpetuating these logics. His proposed strategies include
intentionally engineering friction, such as transaction costs and inefficiencies, into AI systems
to resist the dominance of efficiency and productivity logics and to promote human flourishing
through the exercise and development of human capabilities [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. Chiara Natali followed with
"Frictional AI: Topics and Issues," providing a comprehensive overview of the key areas and
challenges in applying frictional design to AI systems, drawing parallels with slow design [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ],
microboundaries [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], desirable and programmed inefficiencies [
          <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
          ] for constructive distrust [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]
and debiasing strategies against over-confidence [
          <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
          ] and anchoring bias [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. This requires
new methodologies to assess over- and under-reliance [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], such as the Human-AI Interaction
Assessment tool. Together, these talks set the stage for a deeper exploration of how frictional
design can be strategically used to shape the social and ethical impacts of AI.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>First session: Human-AI Collaboration and Biases</title>
        <p>• Regina DE BRITO DUARTE and Joana CAMPOS, INESC-ID, Instituto Superior Tecnico
(Portugal) - "Looking for cognitive bias in Human-AI decision-making"
• Sebastiano MORUZZI, Filippo FERRARI and Filippo RISCICA LIZZO, University of
Bologna (Italy) - "Biases, Epistemic Filters, and Explainable Artificial Intelligence"
• Christopher D. QUINTANA, Georg THEINER, Villanova University (USA) - "Make Friends,
Not Tools: Designing AI for Technoamicitia"
• Scott ROBBINS, University of Bonn (Germany) - "Beyond Regulation: How We Can Craft
a Meaningful Future with AI"</p>
        <p>The Human-AI Collaboration and Biases session examines different aspects of bias and
interaction in human engagement with AI systems, emphasizing the need to rethink design and user
practices to promote thoughtful and meaningful AI deployment. Regina de Brito Duarte and
Joana Campos examine cognitive biases in AI-assisted decision-making, advocating for balanced
friction to avoid both over-reliance and undue skepticism towards AI recommendations.
Sebastiano Moruzzi, Filippo Ferrari, and Filippo Riscica discuss how "epistemic filters" impact the
outputs and user interactions with XAI and Generative AI, and how understanding and adjusting
them can address technical and cognitive biases. Christopher D. Quintana and Georg Theiner
propose "technoamicitia," a design approach that goes beyond traditional usability metrics to
foster deeper human engagement with AI: their approach aims to support psychological and
moral development, and thus counter the prevailing view of AI as mere tools for efficiency and
productivity. Scott Robbins builds upon the concept of friction by challenging the conventional
focus on regulation and design controls as sole means of achieving ethical AI deployment; he
suggests that norms around the intentional use or restraint of AI can help preserve human
autonomy and ensure that certain meaningful tasks remain within human control.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Second session: Frictional AI Applications</title>
        <p>• Caterina FREGOSI, Federico CABITZA, University of Milano-Bicocca (Italy) - "A frictional
design approach: towards Judicial AI and its possible applications"
• Ingar BRINCK, Samantha STEDTLER and Valentina FANTASIA, Lund University (Sweden)
- "Exploring Frictional Design in Human-Robot Interaction: Delayed Movement in a
Turn-taking Game"
• Sarah INMAN and Sarah D’ANGELO, Google (USA) - "Enabling Creative Human-AI
Systems with Seamful Design" (not included in the proceedings)
• Evan SELINGER, Rochester Institute of Technology (USA) - "Balancing Empathy and
Accountability: Exploring Friction-In-Design For AI-Mediated Doctor-Patient
Communication"</p>
        <p>The Frictional AI Applications session highlights diverse approaches to incorporating
intentional friction in AI design to promote critical thinking, creativity, and ethical engagement.
Caterina Fregosi and Federico Cabitza present "Judicial AI," a decision support system that
offers two contrasting explanations to foster critical thinking and reduce automation bias. They
explore how complex decision pathways can enhance user autonomy. Ingar Brinck, Samantha
Stedtler, and Valentina Fantasia examine frictional design in human-robot interactions and
demonstrate how deliberate delays in a turn-taking game can enhance cognitive engagement
and foster deeper interaction with social robots. Sarah Inman and Sarah D’Angelo propose
applying "seamful design" in software engineering to support creative problem-solving: they
emphasize the value of exposing hidden processes to maintain control and foster creativity
beyond mere productivity. Evan Selinger suggests using generative AI to enhance empathetic
content in doctor-patient communication, addressing the issue of doctors often sounding robotic
due to systemic pressures. To ensure this technology is used ethically and maintains trust,
he advocates for incorporating friction, such as transparency measures and manual revisions,
and establishing governance procedures to hold doctors accountable for how they integrate
AI-generated content into their messages.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion and Remarks</title>
      <p>
        The concept of Frictional AI draws heavily on the idea that some level of friction, or ’seamfulness,’
is essential to prevent overreliance on AI and to maintain human agency in decision-making
processes. As Frischmann and Selinger [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] argued in Re-engineering Humanity, tolerating some
friction in our interactions with technology is vital for sustaining environments that support
human flourishing.
      </p>
      <p>The Frictional AI Workshop has laid the groundwork for future research and collaboration
on this new paradigm in Human-AI Interaction—one that values cognitive engagement and
ethical responsibility as much as it does efficiency and performance.</p>
      <p>Looking ahead, we are confident that the contributions contained in these proceedings will
serve as a valuable resource for scholars and practitioners alike, providing both theoretical
frameworks and practical guidance for integrating Frictional AI into a wide range of applications.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>We extend our sincere gratitude to all the participants, speakers, and the HHAI conference
organizers who contributed to the success of this workshop. Special thanks go to the members
of the Programme Committee for their expertise and commitment.</p>
      <p>C. Natali gratefully acknowledges the PhD grant awarded by the Fondazione Fratelli
Confalonieri, which has been instrumental in facilitating her research pursuits.
F. Cabitza acknowledges funding support provided by the Italian project PRIN PNRR 2022
InXAID - Interaction with eXplainable Artificial Intelligence in (medical) Decision making. CUP:
H53D23008090001, funded by the European Union - Next Generation EU.</p>
    </sec>
    <sec id="sec-6">
      <title>A. Online Resources</title>
      <p>• Workshop website
• Human-AI Interaction Assessment Tool</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>F.</given-names>
            <surname>Cabitza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Natali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Famiglini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Campagner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Caccavella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Gallazzi</surname>
          </string-name>
          ,
          <article-title>Never tell me the odds: Investigating pro-hoc explanations in medical decision making</article-title>
          ,
          <source>Artificial Intelligence in Medicine</source>
          <volume>150</volume>
          (
          <year>2024</year>
          )
          <fpage>102819</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Frischmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Selinger</surname>
          </string-name>
          ,
          <source>Re-engineering humanity</source>
          , Cambridge University Press,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B.</given-names>
            <surname>Frischmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Benesch</surname>
          </string-name>
          ,
          <article-title>Friction-in-design regulation as 21st century time, place, and manner restriction</article-title>
          ,
          <source>Yale JL &amp; Tech.</source>
          <volume>25</volume>
          (
          <year>2023</year>
          )
          <fpage>376</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Frischmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ohm</surname>
          </string-name>
          , Governance seams,
          <source>Harvard Journal of Law &amp; Technology</source>
          <volume>37</volume>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B.</given-names>
            <surname>Grosse-Hering</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Mason</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Aliakseyeu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bakker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Desmet</surname>
          </string-name>
          ,
          <article-title>Slow design for meaningful interactions</article-title>
          ,
          <source>in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>3431</fpage>
          -
          <lpage>3440</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Cox</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Gould</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Cecchinato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Iacovides</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Renfree</surname>
          </string-name>
          ,
          <article-title>Design frictions for mindful interactions: The case for microboundaries</article-title>
          ,
          <source>in: Proceedings of the 2016 CHI conference extended abstracts on human factors in computing systems</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>1389</fpage>
          -
          <lpage>1397</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>P.</given-names>
            <surname>Ohm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Frankle</surname>
          </string-name>
          ,
          <article-title>Desirable inefficiency</article-title>
          ,
          <source>Fla. L. Rev.</source>
          <volume>70</volume>
          (
          <year>2018</year>
          )
          <fpage>777</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>F.</given-names>
            <surname>Cabitza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Campagner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ciucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Seveso</surname>
          </string-name>
          ,
          <article-title>Programmed inefficiencies in DSS-supported human decision making</article-title>
          ,
          <source>in: Modeling Decisions for Artificial Intelligence: 16th International Conference, MDAI 2019, Milan, Italy, September 4-6, 2019, Proceedings 16</source>
          , Springer,
          <year>2019</year>
          , pp.
          <fpage>201</fpage>
          -
          <lpage>212</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hildebrandt</surname>
          </string-name>
          ,
          <article-title>Privacy as protection of the incomputable self: From agnostic to agonistic machine learning</article-title>
          ,
          <source>Theoretical Inquiries in Law</source>
          <volume>20</volume>
          (
          <year>2019</year>
          )
          <fpage>83</fpage>
          -
          <lpage>121</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kliegr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Š.</given-names>
            <surname>Bahník</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fürnkranz</surname>
          </string-name>
          ,
          <article-title>A review of possible effects of cognitive biases on interpretation of rule-based machine learning models</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>295</volume>
          (
          <year>2021</year>
          )
          <fpage>103458</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bertrand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Belloum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Eagan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Maxwell</surname>
          </string-name>
          ,
          <article-title>How cognitive biases affect XAI-assisted decision-making: A systematic review</article-title>
          ,
          <source>in: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>78</fpage>
          -
          <lpage>91</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A. K. P.</given-names>
            <surname>Bach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. M.</given-names>
            <surname>Nørgaard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Brok</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>van Berkel</surname>
          </string-name>
          ,
          <article-title>“If I had all the time in the world”: Ophthalmologists' perceptions of anchoring bias mitigation in clinical AI support</article-title>
          ,
          <source>in: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>F.</given-names>
            <surname>Cabitza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Campagner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Angius</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Natali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Reverberi</surname>
          </string-name>
          ,
          <article-title>AI shall have no dominion: on how to measure technology dominance in AI-supported human decision-making</article-title>
          ,
          <source>in: CHI ’23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>