The Challenges for Fairness and Well-being - How Fair is Fair? Achieving Well-being AI -

Takashi Kido
Teikyo University, Advanced Comprehensive Research Organization
kido.takashi@gmail.com

Keiki Takadama
The University of Electro-Communications, Department of Informatics
keiki@inf.uec.ac.jp

Abstract

In the AAAI Spring Symposium 2022, we discussed fairness and well-being in the context of well-being AI. The first important keyword is "well-being." We define "well-being AI" as Artificial Intelligence that promotes psychological well-being (i.e., happiness) and maximizes human potential. Well-being AI helps us understand how our digital experience affects our emotions and quality of life, and how to design a better well-being system that puts humans at the center. The second important keyword is "fairness." AI can potentially assist humans in making fair decisions. However, we must tackle the "bias" problem in AI (and in humans) to achieve fairness. Although statistical machine learning predicts the future based on past data, several types of data bias may lead an AI-based system to make incorrect predictions. For AI to be deployed safely, these systems must be well understood, and we need to understand "how fair is fair" to achieve well-being AI. This paper describes the motivation, scope of interest, and research questions of this symposium.

Motivation

What are the ultimate goals and outcomes of AI? Although AI has incredible potential to help make humans happy, it can also cause unintentional harm. This symposium aims to combine humanity perspectives with technical AI issues and to discover new success metrics for well-being AI, rather than for productive AI measured by exponential growth or economic and financial supremacy.

Especially in the COVID world, people's lives are transforming on an unprecedented scale. It is therefore important to investigate how people's mindsets are shifting and what desirable human-AI partnerships would look like. COVID-19 may change human-AI collaborations by easing people's concerns about technology. For example, the number of people working from home has increased and business trips have almost disappeared. Meetings are held online, and virtual ceremonies are held using AI bots. The COVID-19 prevention measures promoted digital transformation, generating enormous amounts of data. Therefore, the need for AI has increased, as shown in the race to find a COVID-19 vaccine through global collaborations.

We call for AI-related challenges in new human-AI collaboration and discuss desirable human-AI partnerships for providing meaningful solutions to social problems from humanity's perspective. This challenge is inspired by the "AI for social good" movement, which pursues the positive social impacts of using AI and supports the Sustainable Development Goals (SDGs), a set of 17 objectives for the world to be more equitable, prosperous, and sustainable. In particular, we focused on two perspectives: well-being and fairness.

The first is "well-being." We define "well-being AI" as Artificial Intelligence that aims to promote psychological well-being (that is, happiness) and maximize human potential. Our environment escalates stress, provides unlimited caffeine, distributes nutrition-free "fast" food, and encourages unhealthy sleep behavior. To address these issues, well-being AI provides a way to understand how our digital experience affects our emotions and quality of life, and how to design a better well-being system that puts humans at the center.

The second perspective is "fairness." AI has the potential to assist humans in making fair decisions. However, we must tackle the "bias" problem in AI (and in humans) to achieve fairness. As big data becomes increasingly personal, AI technologies that manipulate our inherent cognitive biases have evolved, such as social media (Twitter and Facebook) and commercial recommendation systems. The "echo chamber effect" is known to make it easy for people with the same opinions to cluster into closed communities. Recently, there has also been a movement to exploit cognitive biases in the political world. Advances in big data and machine learning should not overlook these new threats to enlightenment thought.

This symposium called for work on the technical and philosophical issues of achieving well-being and fairness in the design and implementation of ethics, machine-learning software, robotics, and social media (but not limited to these). For example, interpretable forecasts, sound social media, helpful robotics, fighting loneliness with AI/VR, and promoting good health are important aspects of our discussions.

Our Scope of Interests

This symposium discussed important interdisciplinary challenges for guiding future advances in fairness and well-being in AI. We had the following scope of interest in this symposium:

(1) How can we define and measure the well-being of humans?
To discover new success metrics for well-being AI, rather than for productive AI measured by exponential growth or economic and financial supremacy, this symposium called for basic research to define human well-being, which provides inspiration for such metrics. The topics included interdisciplinary research such as positive psychology, positive computing, predictive medicine, human well-being, economics beyond GDP, social computing for understanding AI job replacement and disparity, the neuroscience of happiness and pleasure, multi-agent social simulations, cultural algorithms, flourishing environments, and cross-cultural analyses of well-being values.

⚫ Well-being AI: Machine Learning and other advanced analyses for Health & Wellness
Advanced machine learning technologies, such as deep learning and other quantitative methods, need to be explored in the health and wellness domains. We called for theoretical and empirical research on well-being AI, as well as discussions evaluating the possibilities and limitations of current technologies.
The topics included deep learning, data mining, knowledge modeling for wellness, collective intelligence/knowledge, life log analysis (e.g., vital data analyses, Twitter-based analysis), data visualization, human computation, biomedical informatics, and personalized medicine.
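As a small illustration of what "life log analysis" can look like in this context, the sketch below computes a seven-day rolling average of self-logged sleep duration and flags short-sleep days. The data, column names, and the 7-hour threshold are illustrative assumptions on our part, not part of the symposium call.

```python
import pandas as pd

# Hypothetical life log: one row per day with self-reported sleep hours.
log = pd.DataFrame({
    "date": pd.date_range("2022-03-01", periods=10, freq="D"),
    "sleep_hours": [7.5, 6.0, 5.5, 8.0, 6.5, 7.0, 5.0, 6.0, 7.5, 8.0],
}).set_index("date")

# A seven-day rolling average smooths day-to-day noise in the log.
log["sleep_7d_avg"] = log["sleep_hours"].rolling(window=7, min_periods=1).mean()

# Flag days below an assumed 7-hour threshold, e.g., to trigger a gentle nudge.
log["short_sleep"] = log["sleep_hours"] < 7.0

print(log)
```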
⚫ Better Well-being systems design
To explore empirical and technical research on improving well-being system design, the topics included social data analyses and social relation design, mood analyses, human-computer interaction, health care communication systems, natural language dialog systems, personal behavior discovery, Kansei, zone and creativity, compassion, calming technology, Kansei engineering, gamification, assistive technologies, Ambient Assisted Living (AAL) technology, medical recommendation systems, care support systems for older adults, web services for personal wellness, games for health and happiness, life log applications, disease improvement experiments (e.g., metabolic syndrome, diabetes), sleep improvement experiments, healthcare and disability support systems, and community computing platforms.

(2) How can we define and measure Fairness?
To explore basic research that defines "fairness" for human-in-the-loop computational systems and provides inspiration for new success metrics for fair AI, this symposium called for interdisciplinary research such as bias and fairness in machine learning, fairness criteria and metrics, responsible AI, trust in AI, social computing for trusting human-in-the-loop computational systems, multi-agent simulations on fairness, and game-theory-based analyses of fairness.
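As a minimal sketch of what "fairness criteria and metrics" can mean in practice, the code below computes two widely used group-fairness measures, the demographic parity difference and the equalized odds difference, from binary predictions and a binary sensitive attribute. The function names and toy arrays are our own illustrative choices; many other criteria, and trade-offs between them, exist.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Gap in positive-prediction rates between group 0 and group 1."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_diff(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rates across the groups."""
    gaps = []
    for label in (1, 0):  # label 1 -> TPR gap, label 0 -> FPR gap
        mask = y_true == label
        r0 = y_pred[mask & (group == 0)].mean()
        r1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)

# Toy predictions for two demographic groups (group 0 and group 1).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("Demographic parity difference:", demographic_parity_diff(y_pred, group))
print("Equalized odds difference:", equalized_odds_diff(y_true, y_pred, group))
```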
⚫ Interpretable AI
Interpretable AI is artificial intelligence whose derived results can be easily understood by humans. For example, we need to develop powerful tools to understand exactly what deep neural networks and other quantitative methods are doing. To address this issue, we called for theoretical and empirical research on the possibilities and limitations of current AI/ML technologies for interpretable AI. The topics included human bias vs. computational (data) bias, interpretability of machine learning systems, accountability of black-box prediction models, interpretable AI for precision medicine, interpretability in human-robot communication, bias analysis on social media, political orientation analyses, accuracy and efficiency issues in health, economics, and other fields, causal inference to reason about fairness, and actionable recommendations based on causal inference.
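As one minimal, model-agnostic sketch of the kind of tool referred to above for understanding what a trained model is doing, the code below estimates permutation feature importance: the drop in held-out accuracy when a single input column is shuffled. It assumes scikit-learn and NumPy are available; the dataset and model are arbitrary illustrative choices, not ones prescribed by the symposium.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fit an opaque model on a standard dataset.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
baseline = model.score(X_te, y_te)

# Permutation importance: how much accuracy drops when one feature is shuffled.
rng = np.random.default_rng(0)
importances = []
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    rng.shuffle(X_perm[:, j])  # destroy the information in column j only
    importances.append(baseline - model.score(X_perm, y_te))

top = int(np.argmax(importances))
print(f"Most influential feature index: {top}, accuracy drop: {importances[top]:.3f}")
```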
⚫ Better Fairness systems design
To explore empirical and technical research on the design of better fairness systems, the topics included criteria and metrics for fairness in robotics, machine learning software, social media, human-in-the-loop systems, collective systems, recommendation systems, and personalized search engines.

(3) Ethical Issues on "AI and Humanity": desirable human-AI partnerships
To explore ethical and philosophical discussions on desirable human-AI partnerships, the topics included "machine intelligence vs. human intelligence," how AI affects our society and ways of thinking, issues around basic income, issues around infodemics (e.g., fake news) on social media, and personal identity. More technically, we need to deepen our understanding of the possibilities and limitations of machine learning and other advanced analyses for health and wellness.

Conclusion

In this paper, we have described the motivation and the technical and philosophical challenges related to "AI fairness and well-being" as proposers and organizers of the AAAI 2022 Spring Symposium. The symposium aimed to share the latest progress, current challenges, and potential of well-being AI applications, and it discussed the evaluation of digital experience and the understanding of human well-being.

Acknowledgments

We thank the program committee members of this symposium for their valuable support.

In T. Kido, K. Takadama (Eds.), Proceedings of the AAAI 2022 Spring Symposium "How Fair is Fair? Achieving Wellbeing AI", Stanford University, Palo Alto, California, USA, March 21–23, 2022. Copyright © 2022 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).