<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Balancing Empathy and Accountability: Exploring Friction-In-Design For AI-Mediated Doctor-Patient Communication</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Evan Selinger</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Rochester Institute of Technology</institution>
          ,
          <addr-line>One Lomb Memorial Drive, Rochester, NY</addr-line>
          ,
          <country country="US">United States of America</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Empathetic communication between doctor and patient is crucial for building trust. Unfortunately, doctors routinely sound robotic and fall short of the empathetic ideal. Given the systemic issues that give rise to this problem, we may want to consider a new approach: adding generative AI to patient portals. Responsible governance will be needed to deploy the technology ethically and effectively, including establishing procedures for holding doctors accountable for integrating AI-generated content into their messages. In this context, direct and indirect friction-in-design strategies are worth exploring.</p>
      </abstract>
      <kwd-group>
        <kwd>Empathy</kwd>
        <kwd>Friction-in-Design</kwd>
        <kwd>Generative AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>2. Innovating Empathic Communication: Help From Generative AI</title>
      <p>Emerging applications of artificial intelligence like generative AI offer new avenues for helping
doctors communicate more empathetically. To start assessing these possibilities, it is necessary
to understand what empathy is, at its core, and the extent to which generative AI can simulate
aspects of it.</p>
      <p>
        Human empathy has three main components [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].<xref ref-type="fn" rid="fn1"><sup>1</sup></xref> Imagine a doctor entering a waiting
room and seeing a patient pulling at their hair while looking down at the floor right before
a procedure. At a mere glance, the physician can tell the patient is upset. That is “cognitive
empathy,” our ability to identify someone else’s emotions. When doctors internalize someone
else’s feelings, like some of a patient’s worry, they experience “emotional empathy.”<xref ref-type="fn" rid="fn2"><sup>2</sup></xref> Finally, if
doctors feel moved to help patients, maybe say something reassuring to provide comfort, they
experience “motivational empathy.”
      </p>
      <p>Generative AI lacks emotional and motivational empathy. At most, it can only demonstrate
limited abilities associated with cognitive empathy.<xref ref-type="fn" rid="fn3"><sup>3</sup></xref> Given these limits, it is essential to
acknowledge that the technology cannot care about patients and their families. To believe
otherwise is to overestimate the technology: to impute abilities it lacks and, perhaps, to assign it
moral duties that it neither has nor can meet. Anyone who makes these mistakes has likely
fallen under the sway of the cognitive bias of anthropomorphism.</p>
      <p>Why, then, is generative AI so effective at producing contextually appropriate, empathetic
text like, “I can understand why this would worry you”? The answer is clear. Generative AI can
mimic empathy linguistically because it excels at detecting patterns in human language. The
data used to train tools like ChatGPT includes literature with empathetic characters and news
coverage of people experiencing hardship. Generative AI uses this information to predict which
reassuring phrases typically appear when people discuss difficult situations. Again, far from
demonstrating the ability to connect with others, this display is merely a simulation of care.
Nevertheless, the output is enough to help doctors, and they are the agents who can experience
all three components of empathy.</p>
      <p>
        Indeed, there are good reasons to believe doctors can use outputs that mimic empathy to
communicate more effectively with patients. Both anecdotal reporting and early scholarly
research support this hypothesis.
      </p>
    </sec>
    <sec id="sec-2">
      <title>3. AI Saves the Day in the ER</title>
      <p>Here is a real-life example of ChatGPT helping a doctor. Dr. Josh Tamayo-Sarver faced a
dilemma in the emergency room. He was treating a 96-year-old woman who had trouble
breathing because her lungs were filled with fluid. Her three children, all senior citizens, were
panicking. They followed the medical staff around, asking questions and making requests.</p>
      <p>Although the pestering was meant to be helpful, it slowed everyone down and made it hard
for Dr. Tamayo-Sarver to help all the vulnerable patients in his care. The worst part of the delay
was the siblings’ insistence that he administer an IV to their mother. This option was
potentially fatal.</p>
      <p>
        Dr. Tamayo-Sarver patiently explained his reasons. The siblings didn’t back down. Desperate,
he turned to ChatGPT. In seconds, the generative AI composed a clear, detailed, and empathetic
explanation, one so good at covering the appropriate treatment protocol that the
second-guessing stopped. The most astonishing thing is that the AI projected empathy and did not
sound robotic. It opened with, “I truly understand how much you care for your mother, and it’s
natural to feel concerned about her well-being.” [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]
      </p>
    </sec>
    <sec id="sec-3">
      <title>4. Online Applications</title>
      <p>The greatest potential for generative AI to help doctors convey empathy isn’t in face-to-face
situations. A better domain is online medical communication systems, like patient portals.
Online communication is increasing, and the volume is exacerbating physician burnout.</p>
      <p>Here is the type of scenario that I am envisioning. Doctors receive online notes from their
patients. They dictate their replies, and an AI attempts to make the replies sound more empathetic.
The physician reviews the updated message, edits it if necessary, and sends the response.</p>
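      <p>To make this scenario concrete, here is a minimal Python sketch of the portal-side workflow. It is illustrative only: the rewrite_empathetically function is a hypothetical stand-in for a call to a generative AI service, and all names and structures here are assumptions, not a description of any existing system.</p>
      <preformat preformat-type="code">
from dataclasses import dataclass


@dataclass
class DraftReply:
    patient_message: str
    physician_dictation: str
    ai_suggestion: str


def rewrite_empathetically(dictation: str) -> str:
    # Hypothetical stand-in for a generative AI call. A real deployment
    # would send the dictated text to a model with a prompt such as:
    # "Rewrite this reply in a warmer, more empathetic tone without
    # changing its clinical content."
    return "I understand this may be worrying. " + dictation


def compose_reply(patient_message: str, dictation: str) -> DraftReply:
    # The AI only proposes a rewording; nothing is sent automatically.
    return DraftReply(patient_message, dictation,
                      rewrite_empathetically(dictation))


def physician_finalize(draft: DraftReply, edits: str | None = None) -> str:
    # The physician reviews the suggestion, optionally edits it, and the
    # edited (or accepted) text becomes the message that is actually sent.
    return edits if edits is not None else draft.ai_suggestion
      </preformat>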
    </sec>
    <sec id="sec-4">
      <title>5. Mitigating the Risk of Overreliance</title>
      <p>Responsibly adopting generative AI in the manner detailed here will require a comprehensive
governance framework. For example, the following issues will need to be carefully addressed:
promoting choice for physicians and patients, maintaining transparency, promoting medical
accountability, creating and servicing an appropriate generative AI (e.g., effective,
privacy-preserving, etc.), and ensuring fair medical billing practices.</p>
      <p>To pick one of these dimensions, the only way to deploy generative AI effectively and
responsibly is for doctors to be held fully accountable for all their communications, including
those partially or wholly written by AI. Ironically, if the technology works well much
of the time, a problem can arise. Doctors risk falling under the sway of automation bias and
becoming complacent. Over time, they may grow disinclined to diligently review messages and
end up overlooking poor responses.</p>
      <p>Fortunately, proper safeguards can limit this risk. One promising approach is
“friction-in-design.”<xref ref-type="fn" rid="fn4"><sup>4</sup></xref> This technique intentionally adds elements to a product or service that make it more
time-consuming or challenging to use. For example, X (formerly Twitter) used friction-in-design
to ask users to pause before sharing articles they didn’t have enough time to read. The goal was
to reduce the spread of misinformation by slightly slowing people down so they would engage
in more deliberate sharing. Another widespread use of the technique is CAPTCHAs, challenges
like identifying objects in images that must be completed before accessing a website. This delay
is meant to prevent bad outcomes like bots scraping data for malicious purposes.</p>
      <p>Friction-in-design could be applied here in direct and indirect ways. Direct approaches
use code to limit the speed at which physicians can reply to patients.
By contrast, indirect approaches are reminders designed to motivate doctors to spend additional time
reviewing correspondence without providing enforcement mechanisms. To further clarify these
ideas, let us consider some of the possible direct and indirect ways of designing friction.</p>
    </sec>
    <sec id="sec-5">
      <title>6. Friction-In-Design: Direct Options</title>
      <p>One direct option is to require mandatory physician review. Before sending any AI-mediated
messages, doctors should be prompted to carefully review the content. Prompts could be
phrased in different ways, and they could include reminders of the professional responsibility
to remain accountable and ensure messages accurately reflect their intended communication.
Messages would be locked until doctors meet the review requirements. One possible
requirement is timed delays. The system could enforce a minimum amount of time doctors
must spend reviewing messages before they are permitted to be sent.</p>
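      <p>As a rough illustration of how such a timed lock might be enforced, consider the following Python sketch. The 30-second threshold and the class name are assumptions chosen for illustration; the right parameters would need empirical study.</p>
      <preformat preformat-type="code">
import time

MIN_REVIEW_SECONDS = 30  # illustrative threshold, not an evidence-based value


class ReviewGate:
    # Direct friction: the send action stays locked until the physician
    # has had the AI-drafted message open for a minimum review period.

    def __init__(self, message: str):
        self.message = message
        self.opened_at = None

    def open_for_review(self) -> str:
        # Record when the physician actually starts reading the draft.
        self.opened_at = time.monotonic()
        return self.message

    def may_send(self) -> bool:
        if self.opened_at is None:
            return False  # the draft was never opened, so sending stays locked
        return time.monotonic() - self.opened_at >= MIN_REVIEW_SECONDS
      </preformat>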
      <p>Yet another direct possibility is to offer occasional attention checks. These sporadic prompts
could require doctors to answer a brief question or two about the content of messages before
they are authorized to send them.</p>
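      <p>A sketch of such an attention check follows, again under illustrative assumptions: a 20% sampling rate and a caller-supplied question rather than one generated from the message itself.</p>
      <preformat preformat-type="code">
import random


def needs_attention_check(rate: float = 0.2) -> bool:
    # Direct friction: flag a random fraction of outgoing messages for a
    # comprehension check. The 20% sampling rate is an assumption.
    return random.random() &lt; rate


def passes_attention_check(question: str, expected: str) -> bool:
    # A production system would generate the question from the message
    # content itself; here it is supplied by the caller for simplicity.
    answer = input(question + " ")
    return answer.strip().lower() == expected.strip().lower()
      </preformat>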
    </sec>
    <sec id="sec-6">
      <title>7. Friction-In-Design: Indirect Options</title>
      <p>One indirect possibility is for software to highlight changes that draw physicians’ attention to
the specific areas that deserve review. In this scenario, doctors would not need to prove they
have examined the changes.</p>
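      <p>One way to implement such highlighting, sketched here with Python’s standard difflib module, is to diff the physician’s dictation against the AI draft and flag every span the AI added or altered. The **...** markup is an illustrative convention, not a requirement.</p>
      <preformat preformat-type="code">
import difflib


def highlight_ai_changes(dictated: str, ai_draft: str) -> str:
    # Indirect friction: mark the words the AI introduced or altered so
    # the physician's attention is drawn to exactly those spans.
    a, b = dictated.split(), ai_draft.split()
    pieces = []
    for tag, _i1, _i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if j1 == j2:
            continue  # a pure deletion leaves nothing to show in the draft
        span = " ".join(b[j1:j2])
        pieces.append(span if tag == "equal" else "**" + span + "**")
    return " ".join(pieces)
      </preformat>
      <p>For example, diffing “Your results look fine.” against an AI draft such as “I know waiting is stressful. Your results look fine.” would flag only the added opening sentence for review.</p>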
      <p>
        Another indirect possibility is to offer periodic reminders of the importance of carefully
reviewing AI-generated content and the risks of overreliance. Again, as an indirect option, the
mechanism is notice, not enforcement.
      </p>
    </sec>
    <sec id="sec-7">
      <title>8. Additional Research</title>
      <p>Which friction-in-design approach is best? To answer this question, we need additional research
on effectiveness (i.e., the optimal parameters for each approach and how the approaches
compare), user experience (i.e., how physicians perceive and judge each option), and unintended
consequences. This agenda requires interdisciplinary collaboration between experts from
medicine, human-computer interaction, and ethics. It is only by combining insights from these
diverse fields that we can create responsible, evidence-based guidelines and reliably foster a
deeper understanding of the ethical implications of AI in healthcare communication.</p>
    </sec>
  </body>
  <back>
    <fn-group>
      <fn id="fn1">
        <label>1</label>
        <p>For related arguments about the limits of AI and empathy, see [<xref ref-type="bibr" rid="ref2">2</xref>].</p>
      </fn>
      <fn id="fn2">
        <label>2</label>
        <p>The human glance can take in so much information and function as such a vital source of motivation that some philosophers argue it has ethical dimensions. See [<xref ref-type="bibr" rid="ref3">3</xref>].</p>
      </fn>
      <fn id="fn3">
        <label>3</label>
        <p>For a study of the abilities related to cognitive empathy conducted with an earlier version of ChatGPT, see [<xref ref-type="bibr" rid="ref4">4</xref>].</p>
      </fn>
      <fn id="fn4">
        <label>4</label>
        <p>For more on friction-in-design, see [<xref ref-type="bibr" rid="ref6">6</xref>].</p>
      </fn>
    </fn-group>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Montemayor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Halpern</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fairweather</surname>
          </string-name>
          ,
          <article-title>In principle obstacles for empathic AI: why we can't replace human empathy in healthcare</article-title>
          ,
          <source>AI &amp; Society 37</source>
          (
          <year>2022</year>
          )
          <fpage>1353</fpage>
          -
          <lpage>1359</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Perry</surname>
          </string-name>
          ,
          <article-title>AI will never convey the essence of human empathy</article-title>
          ,
          <source>Nature Human Behaviour</source>
          <volume>7</volume>
          (
          <year>2023</year>
          )
          <fpage>1808</fpage>
          -
          <lpage>1809</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E. S.</given-names>
            <surname>Casey</surname>
          </string-name>
          ,
          <source>The world at a glance</source>
          , Indiana University Press, Bloomington,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>V.</given-names>
            <surname>Sorin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Brin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Barash</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Konen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Charney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Nadkarni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Klang</surname>
          </string-name>
          ,
          <article-title>Large language models (LLMs) and empathy: a systematic review</article-title>
          ,
          <source>medRxiv</source>
          (
          <year>2023</year>
          )
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Tamayo-Sarver</surname>
          </string-name>
          ,
          <article-title>How a doctor uses ChatGPT to treat patients</article-title>
          , Fast Company (
          <year>2023</year>
          ). URL: https://www.fastcompany.com/90895618/how-a-doctor-uses-chat-gpt-to-treat-patients, accessed January 8, 2024.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B.</given-names>
            <surname>Frischmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Benesch</surname>
          </string-name>
          ,
          <article-title>Friction-in-design regulation as 21st century time, place, and manner restriction</article-title>
          ,
          <source>Yale JL &amp; Tech. 25</source>
          (
          <year>2023</year>
          )
          <fpage>376</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>