Beyond convenience: the ethical use of AI in everyday life

Scott Robbins
University of Bonn, Regina-Pacis-Weg 3, 53113 Bonn, Germany

Abstract
While there is much scrutiny of the legal, policy, and design dimensions of AI, little has been written about how individual users should incorporate AI into their everyday lives. The important work being done to constrain and design AI in ways consistent with human values does little to constrain the use of AI by individuals. The possibilities open to us are seemingly limitless. If we are to use these technologies in a way consistent with our good lives, we must know when some friction is necessary - for building skills, for enjoyment, or for keeping a sense of accomplishment and meaning. There is nothing convenient about delegating what is meaningful about being human to technology.

Keywords
Meaningful Human Control, AI Ethics, Frictional AI, AI, LLMs

1. Introduction

Meaningful Human Control (MHC) over Artificial Intelligence (AI) is becoming more important with the rise of generative AI. LLMs like ChatGPT and Gemini are able to do more with less oversight than ever before. Importantly, these tools are widely available, giving more people than ever the ability to take advantage of the capabilities they afford. Much has been written about designing these tools to enhance human autonomy and control. Others have proposed legislation to constrain the effects of these technologies. These proposals are necessary and will hopefully one day be implemented. However, the implementation of sensible design requirements and legislation will not suffice to ensure that individuals understand how they should engage with these technologies - or meaningfully exist in a world where these technologies are widespread. I have previously written [1] that meaningful human control is, among other things, about humans having control over what counts as meaningful - and what counts as a meaningful human existence.
Technologies should serve to help us realize what we have decided is meaningful - not tell us what is meaningful.

HHAI-WS 2024: Workshops at the Third International Conference on Hybrid Human-Artificial Intelligence (HHAI), June 10-14, 2024, Malmö, Sweden.
Corresponding author: srobbins@uni-bonn.de (S. Robbins), ORCID 0000-0002-5338-295X.
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073.

Never have humans had the choice to delegate so much of their lives to technologies. We outsourced ensuring correct spelling long ago - but now we are outsourcing the creation of sentences and paragraphs, the writing of letters and emails, and more. We have the ability to make our lives easier, more efficient, and more seamless. So much of the effort required to perform any given task can simply be outsourced to AI - from cooking, to communicating, to coding, to deciding what to do on holiday. We have not had a chance to stop and ask where in our lives friction is more important than convenience: where friction gives us the chance not only to develop and maintain skills we find important, but where friction - though intuitively or on the surface undesirable - is desirable in and of itself, where it imbues the 'output' with value. That is, where should we intentionally not use technology, or use it differently, so that we maintain the friction that is necessary for our good lives? This paper puts the spotlight on individuals and argues that no matter how things shake out regarding the responsible regulation and design of technologies like LLMs, individuals have an interest in using them in a way that is compatible with a meaningful life. We must develop norms of use that keep us in control over what counts as meaningful (recognizing the need for pluralism that Frischmann and Selinger [2] argue is important).
We cannot simply rely on the necessary legislation and design choices to align AI with human values. In this paper I argue, first, that how individuals use AI (especially LLMs) has escaped scrutiny in the literature; the focus has so far been on design choices and legislation. Second, I show that the possibilities for using AI are nearly unlimited, and that we need guidance on how to go forward. Finally, I point to some important considerations that should drive our decisions about whether to use AI - that is, about when some friction in our lives is necessary rather than the seamless outsourcing of tasks to AI.

2. The Missing Users

If one wants advice about how to design AI - in terms of training data, models, constraints, etc. - there are hundreds of papers offering it; see e.g. [3,4]. If governments want to understand what laws should be in place to ensure AI does not harm society, there are also hundreds of papers to reference; see e.g. [5]. If organizations want to understand better how to implement AI governance, again, there are plenty of papers; see e.g. [6,7]. I do not want to minimize the importance of this work. Good policy, governance, and design are all needed to ensure that human rights and values are respected. However, we are missing guidance for how individuals should use these technologies. Such guidance is needed no matter how the legal, policy, governance, and design debates get settled.

3. Unlimited Possibilities

It is true that progress has been made in constraining AI. The EU AI Act, for example, prohibits certain uses of AI, including using AI to manipulate individuals through subliminal techniques, classifying people based on their social behavior, predictive policing, untargeted facial recognition, and inferring emotions [8]. These developments are important. However, for us as individuals, there is little guidance here. Most of us are not affected by these prohibitions.
We are not looking to manipulate or classify people. The possibilities for us to use AI remain limitless. AI can write, play, and choose songs for us; find partners, be a partner, write love letters, organize dates; find jobs, generate CVs; generate ideas, translate ideas into sentences and paragraphs, write emails, texts, and books; monitor and tutor children; plan and manage diets, create recipes, diagnose health issues, monitor our sleep; and so on. There are few places in our lives where AI cannot be used. Technology has consistently thrust new possibilities upon us that displace old practices. Modern plumbing made it so that we did not have to gather at the water well. Modern electricity made it so that we can stay up and work into the night. The internet made it so that we can communicate with anyone around the world instantaneously. AI is making it so that we do not even have to write the messages we communicate with.

4. Necessary Friction

With all of the possibilities open to us (and many more to come), it could one day be possible to automate all of our communication. Our social media posts, text messages, and even our phone calls could be handled by our LLM avatars. This extreme case is (hopefully) not considered desirable by most people. However, it is not clear how one should draw the line that prevents one from overusing these technologies. It is not within the scope of this paper to draw such a line; rather, I want to point out some things we should be thinking about when we draw our own lines. First, there is the concern that delegating so much to technology will cause practical and moral deskilling. The friction of not delegating tasks to technology sometimes develops skills that we find independently important. For example, we do not have our children use calculators to solve their math equations at school; we think it is important that they can calculate things in their heads.
We can now have LLMs write all our emails; however, writing emails forces us to translate our thoughts into organized sentences and paragraphs. We can argue about whether this skill is important or not; the point is that when delegating a task or a practice to AI, we have to consider whether we are losing the development and exercise of an important skill. We must keep in mind that while we may think this unimportant because we already have the skill in question, the ability of children to use these conveniences may inhibit their development of that skill. Second, we should be aware of important activities and practices that are constitutive of our good lives and that we should keep for ourselves. It may seem obvious not to delegate tasks one enjoys to AI; however, the fear of being left behind, or the fear of something going wrong, may cause us to give in and delegate these tasks to technology. Finally, some friction in our lives is necessary if we are to feel merit and fulfillment regarding the outputs of some tasks [9]. Delegating work to, for example, LLMs can decrease our sense of accomplishment, as well as diminish our sense of ownership of the output. We have to decide when it is important for us to feel ownership and a sense of accomplishment before we delegate tasks to LLMs.

Acknowledgements

Thanks to the participants of the Frictional AI Workshop, which took place in Malmö, Sweden on June 11, 2024. Thanks also to Inga Blundell for helpful discussions and feedback on earlier versions of this paper.

References

[1] Robbins S. The many meanings of meaningful human control. AI Ethics [Internet]. 2023 [cited 2023 Sep 16]. Available from: https://doi.org/10.1007/s43681-023-00320-6
[2] Frischmann B, Selinger E. Why a Commitment to Pluralism Should Limit How Humanity Is Re-Engineered. In: Werbach K, editor. After the Digital Tornado: Networks, Algorithms, Humanity [Internet].
Cambridge: Cambridge University Press; 2020 [cited 2024 Aug 21]. p. 155–73. Available from: https://www.cambridge.org/core/books/after-the-digital-tornado/why-a-commitment-to-pluralism-should-limit-how-humanity-is-reengineered/AE64941488BD4012B4461FDACB7FB6AF
[3] Floridi L, Cowls J, King TC, Taddeo M. How to Design AI for Social Good: Seven Essential Factors. In: Floridi L, editor. Ethics, Governance, and Policies in Artificial Intelligence [Internet]. Cham: Springer International Publishing; 2021 [cited 2024 Aug 21]. p. 125–51. Available from: https://doi.org/10.1007/978-3-030-81907-1_9
[4] Riedl MO. Human-centered artificial intelligence and machine learning. Human Behavior and Emerging Technologies. 2019;1:33–6.
[5] Black J, Murray AD. Regulating AI and Machine Learning: Setting the Regulatory Agenda. European Journal of Law and Technology [Internet]. 2019 [cited 2024 Aug 21];10. Available from: https://www.ejlt.org/index.php/ejlt/article/view/722
[6] Mäntymäki M, Minkkinen M, Birkstedt T, Viljanen M. Defining organizational AI governance. AI Ethics. 2022;2:603–9.
[7] Taeihagh A. Governance of artificial intelligence. Policy and Society. 2021;40:137–57.
[8] European Parliament. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance) [Internet]. Jun 13, 2024. Available from: http://data.europa.eu/eli/reg/2024/1689/oj/eng
[9] Kobiella C, Flores López YS, Waltenberger F, Draxler F, Schmidt A. "If the Machine Is As Good As Me, Then What Use Am I?" – How the Use of ChatGPT Changes Young Professionals' Perception of Productivity and Accomplishment. Proceedings of the CHI Conference on Human Factors in Computing Systems [Internet].
New York, NY, USA: Association for Computing Machinery; 2024 [cited 2024 Aug 30]. p. 1–16. Available from: https://dl.acm.org/doi/10.1145/3613904.3641964