<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Beyond convenience: the ethical use of AI in everyday life</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Scott Robbins</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Bonn</institution>
          ,
          <addr-line>Regina-Pacis-Weg 3 53113, Bonn</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>While there is much scrutiny of the legal, policy, and design questions surrounding AI, little has been written about how individual users should incorporate AI into their everyday lives. The important work being done to constrain and design AI in ways consistent with human values does little to constrain the use of AI by individuals. The possibilities open to us are seemingly limitless. If we are to use these technologies in a way consistent with our good lives, we must know when some friction is necessary - for building skills, for enjoyment, or for keeping a sense of accomplishment and meaning. There is nothing convenient about delegating what is meaningful about being human to technology.</p>
      </abstract>
      <kwd-group>
        <kwd>Meaningful Human Control</kwd>
        <kwd>AI Ethics</kwd>
        <kwd>Frictional AI</kwd>
        <kwd>AI</kwd>
        <kwd>LLMs</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Meaningful Human Control (MHC) over Artificial Intelligence (AI) is becoming more
important with the rise of generative AI. LLMs like ChatGPT and Gemini are able to do more
with less oversight than ever before. Importantly, these tools are widely available - giving
more people than ever the ability to take advantage of the capabilities they afford.</p>
      <p>
        Much has been written about designing these tools to enhance human autonomy and
control. Others have proposed legislation to constrain the effects of these technologies.
These proposals are necessary and will hopefully one day be implemented. However,
implementing sensible design requirements and legislation will not suffice to
ensure that individuals understand how they should engage with these technologies,
or how to meaningfully exist in a world where these technologies are widespread. I have previously
argued [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] that meaningful human control is, among other things, about humans having
control over what counts as meaningful - and what counts as a meaningful human existence.
Technologies should serve to help us realize what we have decided is meaningful - not tell
us what is meaningful.
      </p>
      <p>Never have humans had the choice to delegate so much of their lives to technologies. We
outsourced ensuring correct spelling long ago - but now we are outsourcing the
creation of sentences and paragraphs - the writing of letters, emails, and more. We have the
ability to make our lives easier, more efficient, and more seamless. Much of the effort required to
perform any given task can simply be outsourced to AI: cooking, communicating,
coding, deciding what to do on holiday. We have not had a chance to stop and ask where
in our lives friction is more important than convenience - where friction gives us the chance
not only to develop and maintain skills we find important, but where friction is, though
intuitively or on the surface undesirable, desirable in and of itself - where friction imbues
the ‘output’ with value. That is, where should we intentionally not use technology, or use it
differently, so that we maintain the friction that is necessary for our good lives?</p>
      <p>
        This paper puts the spotlight on individuals - and argues that no matter how the
responsible regulation and design of technologies like LLMs shakes out, individuals
have an interest in using them in a way that is compatible with a meaningful life. We must
develop norms of use that keep us in control over what counts as meaningful (recognizing
the need for pluralism that Frischmann and Selinger [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] argue is important). We cannot
simply rely on necessary legislation and design choices to align with human values.
      </p>
      <p>In this paper I argue first that how individuals use AI (especially LLMs) has escaped
scrutiny in the literature. The focus has - so far - been on design choices and legislation.
Second, I show that the possibilities for using AI are nearly unlimited. We need guidance on
how to go forward. Finally, I point to some important issues that should drive our decisions
on whether we should use AI or not – that is, when some friction in our lives is necessary
rather than the seamless outsourcing of tasks to AI.</p>
    </sec>
    <sec id="sec-2">
      <title>2. The Missing Users</title>
      <p>
        If one wants advice about how to design AI – in terms of training data,
models, constraints, etc. – there are hundreds of papers offering it – see e.g. [
        <xref ref-type="bibr" rid="ref3 ref4">3,4</xref>
        ].
If governments want to understand what laws should be in place to ensure AI does not harm
society, there are also hundreds of papers to reference – see e.g. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. If organizations want to
better understand how to implement AI governance, again, there are plenty of papers – see
e.g. [
        <xref ref-type="bibr" rid="ref6 ref7">6,7</xref>
        ].
      </p>
      <p>Here, I do not want to minimize the importance of this work. Good policy, governance,
and design are all needed to ensure that human rights and values are respected. However,
we are missing guidance for how individuals should use these technologies. This is needed
no matter how the legal, policy, governance, and design debates get settled.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Unlimited Possibilities</title>
      <p>
        It is true that progress has been made in constraining AI. The EU AI Act, for example,
prohibits certain uses of AI. These include the use of AI to manipulate individuals through
subliminal techniques, classifying people based on their social behavior, predictive policing,
untargeted facial recognition, inferring emotions, etc. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] These developments are
important. However, for us as individuals, there is little guidance here.
      </p>
      <p>Most of us are not affected by these prohibitions. We are not looking to manipulate or
classify people. The possibilities for us to use AI remain limitless. AI can write, play, and
choose songs for us; find partners, be a partner, write love letters, organize dates; find jobs,
generate CVs; generate ideas, translate ideas into sentences and paragraphs, write emails,
texts, and books; monitor and tutor children; plan and manage diets, create recipes, diagnose
health issues, monitor our sleep; etc. There are few places in our lives where AI cannot be used.</p>
      <p>Technology has consistently thrust upon us new possibilities that do away with old
practices. Modern plumbing made it so that we did not have to gather at the water well.
Modern electricity made it so that we can stay up and work into the night. The internet
made it so that we could communicate with anyone around the world instantaneously. AI is
making it so that we don’t even have to write the messages we use to communicate.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Necessary Friction</title>
      <p>With all of the possibilities open to us (and many more to come), it could one day be
possible to automate all of our communication. Our social media posts, text messages, and
even our phone calls could be handled by our LLM avatar. This extreme case is (hopefully)
not considered desirable by most people. However, it is not clear how one should draw the
line that prevents one from overusing these technologies.</p>
      <p>It is not in the scope of this paper to draw such a line. However, I want to point out some
things we should be thinking about when we draw our own lines. First, there is the concern
that delegating so much to technology will cause practical and moral deskilling. The friction
of not delegating tasks to technology sometimes develops skills that we find independently
important. For example, we do not have our kids use calculators to solve their math
equations at school: we think it is important that they can calculate things in their heads.
We can now have LLMs write all our emails; however, writing emails forces us to translate
our thoughts into organized sentences and paragraphs. We can argue about whether this
skill is important or not; the point is that when delegating a task or a practice to
AI, we have to think about whether we are losing the development and exercise of an
important skill. We must keep in mind that while we may think such delegation is harmless for us
because we already have the skill in question, children’s access to these
conveniences may inhibit their development of that skill.</p>
      <p>Second, we should be aware of important activities and practices that are constitutive
of our good lives and that we should keep for ourselves. It may seem obvious not to delegate
tasks one enjoys to AI; however, the fear of being left behind, or the fear of something
going wrong, may cause us to give in and delegate these tasks to technology.</p>
      <p>
        Finally, some friction in our lives is necessary if we are to feel merit and fulfillment
regarding the outputs of some tasks [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Delegating work to, for example, LLMs risks
decreasing our sense of accomplishment, as well as diminishing our sense of ownership of
the output. We have to decide when it is important for us to feel ownership and have a sense
of accomplishment before we delegate tasks to LLMs.
      </p>
      <p>Thanks to the participants of the Frictional AI Workshop, which took place in Malmö,
Sweden on June 11, 2024. Thanks also to Inga Blundell for helpful discussions and
feedback on earlier versions of this paper.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] <string-name><surname>Robbins</surname> <given-names>S.</given-names></string-name> <article-title>The many meanings of meaningful human control</article-title>. <source>AI Ethics</source> [Internet]. <year>2023</year> [cited 2023 Sep 16]. Available from: https://doi.org/10.1007/s43681-023-00320-6</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] <string-name><surname>Frischmann</surname> <given-names>B</given-names></string-name>, <string-name><surname>Selinger</surname> <given-names>E.</given-names></string-name> <article-title>Why a Commitment to Pluralism Should Limit How Humanity Is Re-Engineered</article-title>. In: Werbach K, editor. <source>After the Digital Tornado: Networks, Algorithms, Humanity</source> [Internet]. Cambridge: Cambridge University Press; <year>2020</year> [cited 2024 Aug 21]. p. <fpage>155</fpage>-<lpage>73</lpage>. Available from: https://www.cambridge.org/core/books/after-the-digital-tornado/why-acommitment-to-pluralism-should-limit-how-humanity-isreengineered/AE64941488BD4012B4461FDACB7FB6AF</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] <string-name><surname>Floridi</surname> <given-names>L</given-names></string-name>, <string-name><surname>Cowls</surname> <given-names>J</given-names></string-name>, <string-name><surname>King</surname> <given-names>TC</given-names></string-name>, <string-name><surname>Taddeo</surname> <given-names>M.</given-names></string-name> <article-title>How to Design AI for Social Good: Seven Essential Factors</article-title>. In: Floridi L, editor. <source>Ethics, Governance, and Policies in Artificial Intelligence</source> [Internet]. Cham: Springer International Publishing; <year>2021</year> [cited 2024 Aug 21]. p. <fpage>125</fpage>-<lpage>51</lpage>. Available from: https://doi.org/10.1007/978-3-030-81907-1_9</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] <string-name><surname>Riedl</surname> <given-names>MO</given-names></string-name>. <article-title>Human-centered artificial intelligence and machine learning</article-title>. <source>Human Behavior and Emerging Technologies</source>. <year>2019</year>;<volume>1</volume>:<fpage>33</fpage>-<lpage>6</lpage>.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] <string-name><surname>Black</surname> <given-names>J</given-names></string-name>, <string-name><surname>Murray</surname> <given-names>AD</given-names></string-name>. <article-title>Regulating AI and Machine Learning: Setting the Regulatory Agenda</article-title>. <source>European Journal of Law and Technology</source> [Internet]. <year>2019</year> [cited 2024 Aug 21];<volume>10</volume>. Available from: https://www.ejlt.org/index.php/ejlt/article/view/722</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] <string-name><surname>Mäntymäki</surname> <given-names>M</given-names></string-name>, <string-name><surname>Minkkinen</surname> <given-names>M</given-names></string-name>, <string-name><surname>Birkstedt</surname> <given-names>T</given-names></string-name>, <string-name><surname>Viljanen</surname> <given-names>M.</given-names></string-name> <article-title>Defining organizational AI governance</article-title>. <source>AI Ethics</source>. <year>2022</year>;<volume>2</volume>:<fpage>603</fpage>-<lpage>9</lpage>.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] <string-name><surname>Taeihagh</surname> <given-names>A</given-names></string-name>. <article-title>Governance of artificial intelligence</article-title>. <source>Policy and Society</source>. <year>2021</year>;<volume>40</volume>:<fpage>137</fpage>-<lpage>57</lpage>.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] European Parliament. <article-title>Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance)</article-title> [Internet]. Jun 13, <year>2024</year>. Available from: http://data.europa.eu/eli/reg/2024/1689/oj/eng</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] <string-name><surname>Kobiella</surname> <given-names>C</given-names></string-name>, <string-name><surname>Flores López</surname> <given-names>YS</given-names></string-name>, <string-name><surname>Waltenberger</surname> <given-names>F</given-names></string-name>, <string-name><surname>Draxler</surname> <given-names>F</given-names></string-name>, <string-name><surname>Schmidt</surname> <given-names>A.</given-names></string-name> <article-title>“If the Machine Is As Good As Me, Then What Use Am I?” - How the Use of ChatGPT Changes Young Professionals’ Perception of Productivity and Accomplishment</article-title>. <source>Proceedings of the CHI Conference on Human Factors in Computing Systems</source> [Internet]. New York, NY, USA: Association for Computing Machinery; <year>2024</year> [cited 2024 Aug 30]. p. <fpage>1</fpage>-<lpage>16</lpage>. Available from: https://dl.acm.org/doi/10.1145/3613904.3641964</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>