=Paper=
{{Paper
|id=Vol-2659/dignum
|storemode=property
|title=How to center AI on humans
|pdfUrl=https://ceur-ws.org/Vol-2659/dignum.pdf
|volume=Vol-2659
|authors=Frank Dignum,Virginia Dignum
|dblpUrl=https://dblp.org/rec/conf/ecai/DignumD20
}}
==How to center AI on humans==
How to Center AI on Humans1 Frank Dignum and Virginia Dignum 2 Abstract. In this position paper we investigate what it means for AI natural language based on the input of a user. Robots decide on au- to be human-centered. Although many organisations and researchers tonomous behavior, which might be correct, efficient or stupid but by now have given requirements for human-centeredness, such as: not necessarily fair or unfair. transparancy, respect for human autonomy, fairness and accountabil- It seems we should not take the requirements as given by the EU ity, this does little to indicate how the AI techniques should be de- (or other organizations) too literal, but rather as guidelines about the signed in order to be human-centered. In this paper we argue that type of things that we should think about. Human-centered means human-centered AI involves a shift from AI emulating intelligent that a system should have the human partner always as part of the human tasks, to emulating human intelligence such that we capture focus for deliberation. This means that any task of the AI system enough social intelligence in order for the AI system to be able to should not be done in isolation, but the task should be done for some- center its activity and reasoning on its human users. one, in some context (place and time). And if the actions of the AI system affect people directly or indirectly it should be aware of this and take it into consideration when deliberating. Thus e.g. if a sys- 1 INTRODUCTION tem determines the best positions for windmills in a neighbourhood it should take into account the possible nuisance of the noise of these In the past year many people in Europe have argued that research windmills for people living close by. Thus the AI system should be in AI in Europe should be human-centered. This would fit well with socially aware. In 1942, J. Gambs [8] defined being socially aware the European culture and distinguishes our research in AI from that as: in the USA and China. Although this sounds intuitively correct and governments and the European commission have embraced this per- To know in every fibre of our body; to understand in its many spective, little is known about what human-centered AI should look ramifications and myriad applications the profound psycholog- like. Is it enough to clad AI techniques in a social layer? E.g. by ical principle that men and women have importance only as adding some natural language interface? The EU [14] gives a num- members of a group, that they can realize themselves only by ber of aspects that should be taken into account when developing AI giving themselves freely and generously to their group. systems in order to make them human-centered: This quotation shows in more powerful words that being human cen- tered means that everything one does should be for the benefit of the • Human agency and oversight humans involved. As the quotation is about human social awareness • Technical robustness and safety it can talk about self realization which is one of the primary drivers • Privacy and data governance of people. AI systems do not (necessarily) have this drive for self • Transparency realization and thus dependence of the group of people interacting • Diversity, non-discrimination and fairness with it. However, this aspect can be emulated by the designers of the • Societal and environmental wellbeing AI system by using a value based approach to create the system. I.e. • Accountability using the values of the group for which the AI system is designed as These seem also quite reasonable requirements. However, if e.g. I the starting point to determine what it should strive for (what should develop a natural dialogue interface (which is clearly an AI system) its goals be or what it should optimize). to a service of my organization, which of these requirements apply? In this paper we argue that human-centered AI entails a paradigm Let’s just look at the fifth requirement. shift in how AI techniques are developed and deployed. In the next We clearly should make this dialogue system respect diversity and section we discuss the specific social perspective that is needed. In be non-discriminatory and fair. But what does that mean? Address section 3 we discuss more on how this can lead to genuinely human- people based on their background to respect diversity? Or would centered AI. In section 4 we discuss how human-centered also means this be discriminatory? And how would we define a fair dialogue? humanity centered and leads to what is nowadays is called ”AI for It is clear that these requirements are created mainly with a type of good”. We finish with some conclusions. machine learning systems as AI system in mind. Systems that learn classifications from lots of data can make unfair decisions if a partic- 2 SOCIAL AI ular exceptional situation did not occur before, or did not occur often The vision of human-centered AI, requires that AI systems are so- enough to warrant a correct decision. However, not all AI systems cial. What does this mean and how to realise social AI is however make decisions as their major outcome. Dialogue systems produce a much less clear issue. Several authors, e.g. [13, 4], have argued 1 Copyright c 2020 for this paper by its authors. Use permitted under Cre- that agents should become more aware of the social context in which ative Commons License Attribution 4.0 International (CC BY 4.0). they operate. This awareness is not included in the standard AI mod- 2 UmeåUniversity, Sweden, email: {dignum,virginia}@cs.umu.se els of reasoning, such as the BDI model of agents, which focus on the goals and plans of an individual agent. What these authors argue need to further maximize once utility gets beyond some reason- for is a more social science based approach to the basic deliberation ably achievable threshold. of AI systems. Although one can argue that this is not necessary in • Ability to pursue seemingly incompatible goals concurrently, e.g. order to build an AI system that behaves as if it is social, it will make a simultaneous aim for comfort and sustainability. it a lot easier. Let us try to explain this more in depth. If we talk about human-centered AI, we assume that the AI sys- Our claim is that human-centered AI requires new types of archi- tem’s functions are directed and synchronized with the humans it tectures that are not primarily goal or utility driven, but are instead interacts with. But how is this done? First we need to have at least situation or (social) context based in order fulfil the above charac- some model of human behaviour that is good enough to predict what teristics. In the architecture sketched in Figure 1 a first step into the a human would expect from the AI system. This model can be fairly direction of these social agents is given. The context management simple if the AI system is a mere classification or pattern recogni- of the agent filters the (social) context to lead to standard behaviour tion tool for the human. In these cases the only thing that one should appropriate for that context. Whenever the context is uncertain, not know about the human is the optimization criteria that are used to recognized or not standard a second process of deliberation is started determine the optimal decision of the human given the output of the based on the motives and values of the agent and the current concrete system. E.g. if the system is used to determine whether a suspect of goals. After the performance of each behaviour there is a feedback a crime should get out on bail or not, we should know what is the loop that is used to adapt all the elements of the agent based on the acceptable chance that such a person skips bail or commits a crime rate of success or failure of the behaviour in that particular context. again. However, when the judge subsequentially wants to know how However, there is also an input to the context management from the the AI system got to its classification and thus wants an explanation, internal drives of the agent. I.e. the agent will actively search for a the AI system should start functioning as a partner of the judge. Thus context to satisfy some of its needs if it can. E.g. if one feels lonely the explanation it gives should involve a more complex model of the then one will actively search for a situation in which one meets with judge. Is this a more conservative judge that would put the threshold friends and/or family. Thus context management is not just passively for bail higher? Or is the judge someone that looks more in depth filtering the environment, but also directing focus on parts of a con- at the personal circumstances of the suspect and thus might feel that text or seeking it to get the right context. Sociality-based agents are some input for the system is lacking? Based on a model of the judge fundamental to the new generations of intelligent devices, and in- the explanation should be geared towards one or the other element. teractive characters in smart environments. These agents need to be The above is still a simple example, but it is illustrative for the fact fundamentally pro-active, reactive and adaptive to their social con- that maintaining a kind of BDI or utility based model of the human text, because basically the social context with people is not a static is not sufficient. Most decisions people make are not based on these given situation, but is actively created and maintained based on mu- kinds of rational models. People have basic values that drive their tual satisfaction of motives, values and needs. Thus the agents not decisions, they relate to other people, which makes them sometimes only must build (partial) social models about the humans they inter- follow the lead of someone else, they have personal needs and mo- act with, but also need to take social roles in a mixed human/digital tives that they want to satisfy which influence their decisions as well reality and start co-creating the social reality in which they operate. and finally people keep to habits and practices just in order to keep More work is needed to test and validate social agent architectures life simple (see [12]). such as the exemplary one suggested in Figure 1. If an AI system is human-centered it should interact appropriate An interesting feature of the architecture in Figure 1 is that it is with the human and thus have some awareness of these more com- not just depicting a single AI system, but concerns the shaping of plex (and social) aspects of human deliberation in order to support a AI ecosystems comprising autonomous and collaborative, assistive user to achieve the right optimum. technology in ways that express shared moral values and ethical and In recent years, several researchers in both ABM and MAS, legal principles as expressed in e.g. binding codes such as universal [13, 4, 15], recognise the need for new models of deliberation that human rights and national regulations. This requires the understand- bring together formalization and computational efficiency, with plan- ing, developing, and evaluating AI applications through the lense of ning techniques, and expertise on empirical validation and on adapt- an artificial autonomous system that interacts with others in a given ing and integrating social sciences theories into a unified set of as- environment. sumptions [1]. In particular, these models need to describe how be- It is important to be able to extend this line of research to under- haviour derives from both personal drives such as identities, emo- stand and model the ethical dilemmas that arise from the need to tions, motives, and personal values as well as from social sources combine multiple norms, preferences and interpretations, from dif- such as social practices, norms, organizations [3]. Main characteris- ferent agents, cultures, and situations. In the next two sections we tics of sociality-based reasoning are [5]: will discuss the consequences of a human-centered approach. • Ability to hold and deal with inconsistent beliefs for the sake of 3 HUMAN-CENTERED AI coherence with identity and cultural background. • Ability to combine innate, designed, preferences with behaviour To understand the societal impact of AI one needs to realise that AI learned from observation of interactions. In fact, preferences are systems are more than just the sum of their software components. not only a cause for action but also a result of action, and can AI systems are fundamentally socio-technical, including the social change significantly over time. context where it is developed, used, and acted upon, with its variety • Capability to combine reasoning and learning based on perceived of stakeholders, institutions, cultures, norms and spaces. That is, it situation. Action decisions are not only geared to the optimization is fundamental to recognise that, when considering effects and the of own wealth, but often motivated by altruism, justice, or by an governance of AI technology, or the artefact that embeds that tech- attempt to prevent regret at a later stage. nology, the technical component cannot be separated from the socio- • Pragmatic, context-based, reasoning capabilities. Often there is no technical system (Dignum, 2019). This system includes people and Social Social Social agent agent agent Figure 1. Sketch of a Social System Architecture organisations in many different roles (e.g. developer, manufacturer, of those concepts how they interpret the requirements as mentioned user, bystander, policymaker, etc), their interactions, and the proce- in the introduction. E.g. if ”safety” is the primary value when de- dures and processes that organise these interactions. veloping the software of a self driving car, then the requirement of At the same time, it is as important to understand the properties of transparency might be interpreted as explaining why a certain action AI technology, as determined by the advances in computation tech- of the vehicle was safer than a default expected action. Thus trans- niques and data analytics. AI technology is an artefact, a software parency in this case would not include giving the whole causal chain system (possibly embedded in hardware) designed by humans that, of reasoning that led to the current action, but only that part that is given a complex goal, are able to take a decision based on a process relevant for safety. Moreover, there might be cases where a car pro- of perception, interpretation and reasoning based on data collected ducer does not want to give full transparency of the system as it could about that environment. In many case this process is considered ‘au- lead to exploitation of some particular preferences of the system with tonomous’ (by which it is meant that there may be limited need for adverse effects. E.g. if it is known that any moving object that comes human intervention after the setting of the goals), ‘adaptive’ (mean- closer than 1.5 meter from the vehicle will cause the car to stop, peo- ing that the system is able to update its behaviour to changes in the ple might use this to get right of way on the car preventing it from environment), and ‘interactive’ (given that it acts in a physical or ever turning on a road. digital dimension where people and other systems co-exist). Even From this example we can see two fundamental issues: though many AI systems currently only exhibit one of these proper- 1. The AI techniques used in the AI system should be amenable to ties, it is their combination that is at the basis of the current interest the ethical requirements such as transparency. I.e. it should be pos- on and results of AI, and fuels public’s fears and expectations [6]. sible to explain (or to show) how the system got to a certain deci- Guidelines, principles and strategies must be directed to these sion or behavior. socio-technical systems. It is not the AI artefact that is ethical, trust- 2. It should be possible to adjust the implementation of the require- worthy, or responsible. Rather, it is the social component of the socio- ment such as transparency based on the context in which the sys- technical system that can and should take responsibility and act in tem is used. I.e. requirements such as transparency should not have consideration of an ethical framework such that the overall system one fixed definition for all AI systems, but rather be defined based can be trusted by the society. The ethics of AI is not, as some may on how the AI system is used. claim, a way to give machines some kind of ‘responsibility’ for their actions and decisions, and in the process, discharge people and or- The second statement seems to indicate that we could make any con- ganisations of their responsibility. On the contrary, AI ethics requires crete definition of the requirements ourselves in a way that suits more responsibility and more accountability from the people and or- us best. However, this is not the intention. In order to make this ganisations involved: for the decisions and actions of the AI applica- more precise we could require that any concrete description of e.g. tions, and for their own decision of using AI on a given application transparancy for a specific case should counts-as transparency in the context. sense as given by Grossi [10]. In this work the counts-as relation is This also means that requirements for trustworthy AI, such as defined such that when A counts-as B then A should at least con- those discussed in the introduction, are necessary but not sufficient tain the core of the meaning of B, but might have extra features in to develop human-centered AI. The development of human-centered its penumbra. Thus one could state that a drivers licence (in some AI systems should focus on more fundamental aspects of human re- context) counts-as a valid ID, but club membership card (without a sponsibility such as values and norms. By starting from these funda- photo) would not counts-as an ID. The club membership card misses mental social concepts the designers will be forced to define in terms some of the core features. So, there is freedom in specifying what counts-as a concept, but not unlimited. In a similar vein one could state that the concrete implementation of the transparency require- a system, but should design AI systems in a value based way, tak- ment should be such that one can prove afterward that this imple- ing into account the social context in which the AI system is used. mentation counts-as transparency. This also means that we have to have an eye for ethical dilemmas We conclude that a truly human-centered AI system will exhibit where optimality for humanity (or a larger group) can be different such properties as emergent features from its design, but the mere than for an individual. Making AI systems aware of their social con- adherence to these properties in a mechanical way does not make an text entails that they should be aware of the consequences of their AI system human-centered. actions for the humans they interact with. This means the AI systems should start using more realistic human models to predict expected behavior in the interactions. These models should at least incopro- 4 HUMANITY-CENTERED AI rate social concepts like social practices, norms, values, etc. Given Finally, in this context, it is important to discuss humanity- this social context of human-centered AI it makes sense to develop centeredness. In the previous section, we have mostly discussed the AI systems that are themselves based on social deliberation mecha- interaction with AI systems and its users, and how social awareness nisms. We have provided a first sketch of how such systems might can improve this interaction and ensure trust in the system and its look. But, of course, much work needs to be done in this direction actions. Humanity can either mean an attitude, or moral sentiment before thise type of systems can be fully utilized. of good-will towards fellow humans, or the collective existence of all humans [2]. Both definitions have been studied extensivly in psy- ACKNOWLEDGEMENTS chology and the social sciences, which describe that humanity is nec- essary for our collective existence. However, the interests of individ- This work was partially supported by the Wallenberg Al, Au- ual humans and of humanity as a whole are not always aligned. In tonomous Systems and Software Program (WASP) funded by the fact, individual solutions to shared problems may create a modern Knut and Alice Wallenberg Foundation. tragedy of the commons. For example, climatic changes, population growth, and economic scarcity create shared problems that can be REFERENCES tackled effectively through cooperation and coordination, but indi- vidual solutions to shared problems, such as privatized healthcare or [1] S. Chai, Choosing an Identity: A General Model of Preference and Be- lief Formation, University of Michigan Press, 2001. retirement planning, can lead to inefficient resource allocations and [2] Robin M. Coupland, ‘The humanity of humans: Philosophy, science, coordination failure [9]. health, or rights?’, Health and Human Rights, 7(1), 159–166, (2003). From an ethical perspective, the main issue often is the balance or [3] F. Dignum, V. Dignum, R. Prada, and C.M. Jonker, ‘A conceptual archi- dilemma between the good of the community and that of the individ- tecture for social deliberation in multi-agent organizations’, Multiagent and Grid Systems, 11(3), 147–166, (2015). ual. Social institutions are often the means to offer guidance in these [4] F. Dignum, R. Prada, and G.J. Hofstede, ‘From autistic to social aspects. The last few years have seen a proliferation of guidelines and agents’, in AAMAS 2014, (May 2014). principles for AI as a means to ensure that AI systems are designed [5] Virginia Dignum, ‘Social agents: Bridging simulation and engineering’, and used both for the benefit of individuals and of society, [7, 11]. Communications of the ACM, 60(11), (2017). When AI systems are designed for humanity, requirements of in- [6] Virginia Dignum, Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way, Springer International Publishing, clusion, diversity, bias and well-being become leading. To serve hu- 2019. manity’s best interests is the top priority of such AI systems, pos- [7] Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice sibly leading to decisions that are less optimal for a given person Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo or group. For example, AI systems aiming to solve the climate cri- Pagallo, Francesca Rossi, et al., ‘Ai4people—an ethical framework for a good ai society: Opportunities, risks, principles, and recommenda- sis may propose solutions that lower the living comfort levels that tions’, Minds and Machines, 28(4), 689–707, (2018). many are used to in the global North. Ways must be found for peo- [8] John S. Gambs, ‘What does it mean to be socially aware?’, Childhood ple around the world to come to common understandings and agree- Education, 19(2), 51–51, (1942). ments - to join forces to facilitate the innovation of widely accepted [9] Jörg Gross and Carsten K.W. De Dreu, ‘Individual solutions to shared approaches aimed at tackling wicked problems and maintaining con- problems create a modern tragedy of the commons’, Science Advances, 5(4), (2019). trol over complex human-digital networks. AI for Humanity is often [10] D. Grossi, Designing invisible handcuffs, formal investigation in insti- equated with AI for Good, which promotes projects that have a posi- tutions and organizations for multi-agent systems, SIKS Dissertation tive impact on communities and humanitarian issues such as disaster series, Utrecht University, 2007. management, agriculture, the environment, climate change, or pro- [11] Anna Jobin, Marcello Ienca, and Effy Vayena, ‘The global landscape of ai ethics guidelines’, Nature Machine Intelligence, 1(9), 389–399, moting diversity and inclusion. As technology, AI has both the poten- (2019). tial to contribute to solving or inhibit humanity’s main challenges, as [12] D. Kahneman, Thinking, fast and slow, Farrar, Straus & Giroux, 2011. defined by the United Nations in the Sustainable Development Goals [13] G. Kaminka, ‘Curing robot autism: A challenge’, in AAMAS 2013, pp. [16]. 801–804, (May 2013). [14] EU-HLEG on AI, Ethics Guidelines for Trustworthy AI, 2019. [15] B. Silverman, D. Pietrocola, B. Nye, N. Weyer, O. Osin, D. Johnson, 5 CONCLUSIONS and R. Weaver, ‘Rich socio-cognitive agents for immersive training en- vironments: case of nonkin village’, Journal of Autonomous Agents and In this position paper we have argued that human-centered AI en- Multi-Agent Systems, 24(2), 312–343, (March 2012). [16] Ricardo Vinuesa, Hossein Azizpour, Iolanda Leite, Madeline Balaam, tails more than adding some social capabilities, such as explanation Virginia Dignum, Sami Domisch, Anna Felländer, Simone Daniela facilities, to AI systems. It is also not enough to give more precise Langhans, Max Tegmark, and Francesco Fuso Nerini, ‘The role of arti- or concrete definitions of concepts such as fairness and transparancy. ficial intelligence in achieving the sustainable development goals’, Na- The requirements for human-centered AI are the result of the com- ture Communications, 11(1), 1–10, (2020). bination of humans and AI system. Therefore, in order to have these properties emerge we cannot just impose some fairness condition on