User Perspectives of the Ethical Dilemmas of Ownership, Accountability, Leadership in Human-AI Co-Creation

Jeba Rezwana¹, Mary Lou Maher²
¹ University of North Carolina at Charlotte, NC, USA
² University of North Carolina at Charlotte, NC, USA
jrezwana@uncc.edu (J. Rezwana); m.maher@uncc.edu (M. L. Maher)

Abstract
In human-AI co-creation, AI not only categorizes, evaluates and interprets data but also generates new content and interacts with humans. Designing co-creative AI has many challenges due to the open-ended interaction between humans and AI. As co-creative AI is a form of intelligent technology directly involving humans, it is critical to anticipate and address ethical dilemmas during all design stages. Researchers have been exploring ethical issues associated with autonomous AI in recent years, but ethics in human-AI co-creativity is a relatively new research area. We explored ethical issues from the perspective of potential users in human-AI co-creation using a Design Fiction (DF) study. DF is a speculative design and research method that depicts a new concept or technology through stories as an intangible prototype. We present key findings from the study regarding user perception of co-creative AI, ownership of the creative product, accountability, and leadership. We discuss the implications of these ethical concerns in designing human-centered ethical co-creative AI.

Keywords
Co-creativity, AI Metaphors, Ownership, Leadership, Design Fiction, Human-AI Co-Creation, Ethical Issues

1. Introduction

Human-AI co-creativity, a subfield of computational creativity, involves both humans and AI collaborating on a shared creative product [1]. Co-creative AI generates novel content while interacting with humans. The role of co-creative AI changes from a lone decision-maker to a more complex one depending on the interaction between the AI and the user. Designing co-creative AI has many challenges due to the open-ended nature of creativity and collaborative creative problem solving [2, 3]. Unlike general human-computer interaction, human-AI co-creation creates a more complex relationship between humans and AI as 1) AI contributes and collaborates in the creative process, 2) AI takes on the human-like role of partner, evaluator, or generator rather than a tool, and 3) AI creates novel content which is blended with the user's contribution. Humans use complex interaction in collaboration, and it is not clear what kind of interaction will emerge in human-AI co-creation. The complex interaction and partnership raise questions that are difficult to answer, for example, who owns the product in a human-AI co-creation? Ethical dilemmas grow considerably more complex and critical in co-creative systems as AI begins to interact and collaborate with humans [4, 5, 6]. Current human-centered AI (HAI) research emphasizes that the next frontier of AI is not just technological but also humanistic and ethical: AI is to enhance humans rather than replace them [7]. Therefore, it is essential to anticipate ethical dilemmas and address them during all design steps of co-creative AI [4].

The effects of ethical issues and dilemmas in co-creative AI on the creative community and laypersons need to be considered to ensure a good user experience. Human-AI co-creativity research is still formative and may be abstract to ordinary people. Therefore, we need methods that are more likely to tell us what we don't know about the unknown future of co-creative AI. Muller and Liao proposed design fiction (DF) as a research method to place potential users in a central position in designing the ethics and values of future AI [4]. DF is a research and prototyping technique specifically tailored to facilitating conversations about new technologies [8] in order to understand the appropriate design guidelines within the range of possibilities [9]. DF depicts a future technology through the world of stories, and users express their own accounts of the technologies they envision [4]. We conducted a DF user study with 18 participants to explore potential users' perspectives on ethical dilemmas and concerns in human-AI co-creation. We present the key findings from the study as the ethical stance and expectations of future users around ethical dilemmas in co-creative systems. Our findings can serve as the basis for design guidelines and future studies for human-centered ethical AI partners in co-creative systems.
2. Related Research

2.1. Ethical Dilemmas in Human-AI Co-Creativity

When AI is incorporated into social entities and interacts with us, questions of values and ethics become urgent [10]. Because AI optimization can evolve quickly and unexpectedly, the challenge of value alignment arises to ensure that AI's goals and behaviors align with human values and goals [11, 12]. Ethically aligned design is a must for human-centered AI solutions that avoid discrimination and maintain fairness and justice [7]. Llano and McCormack suggested a common understanding of the challenges that co-creative systems may bring in order to devise ethical guidelines for co-creative systems and grow the opportunities in human-AI partnership [13]. Comprehensive and specific ethical principles are more likely to be translated into practice [14, 15]. Previous research suggested that understanding different values and goals in real practice and specific contexts is critical in bridging the gap between ethical theories and implementation [14].

Buschek et al. demonstrated how AI bias, ownership, accountability and perceived proficiency in AI are some of the major pitfalls when designing human-AI co-creative systems [16]. Recent ethical guidelines for AI lack a focus on what they entail for the context of creative collaboration [13]. Muller et al. raised questions in their design fiction about the ownership of the intellectual property produced during human-AI co-creation and the dynamics of human-AI collaboration [17]. There has been discussion about whether the users or the AI should lead the creative process [18, 19] and whether AI should assist or collaborate with users [20]. A recent study regarding the impact of AI-to-human communication demonstrated that users perceive co-creative AI as more reliable, personal and intelligent when it can communicate with the users [21]. People's perceptions of AI's trustworthiness and their connection with AI impact their decisions and actions. The communication between AI and humans also impacts users' inclination to self-disclose data unintentionally [22, 23].
2.2. User Perspective/Perception of AI

Humans have many insecurities about the unknown world of technology and AI. What is unknown is uncertain, and this uncertainty leads to insecurity. A study on the role of AI in society, focusing on citizens' perspectives on the influence of AI, shows that on average 53% of the population views AI as a positive development, while 33% see it as a harmful development [24]. The perception of AI is influenced by a number of key factors, including trust [25]. Researchers have investigated user perceptions of AI in different domains [26, 27, 28] since the social perception of one's partner in a collaborative space can impact the outcome of the collaboration. The perceived interactivity – or lack thereof – of systems can have an impact on user perceptions of the system [28]. Oh et al. suggested understanding users' perceptions of these new technologies in order to develop design guidelines that improve them [18].

Boni suggested that AI development should focus on human values and needs, ensuring that AI works effectively for people [29]. Research on ethical interactions between humans and AI can improve the collaborative competencies of humans in relation to other humans, as well as user experience [29]. To identify ethical concerns from the user perspective, user accounts of the technologies they envision and the values that co-creative AI implicates need to be investigated. Understanding humans in a design area where they may not have lived experience but have had some exposure through popular culture is a major challenge [4]. Such experiences unavoidably shape user needs and values when interacting with AI products, but they are too vague for developing systems [4].

2.3. Design Fiction as a Design and Research Method

Design Fiction (DF) is a prototyping and design technique that is specifically tailored to facilitating conversations about near futures [8, 30] in order to understand the appropriate design guidelines within the range of possibilities [9]. A design fiction depicts a future technology through the world of stories, and users express their own accounts of the technologies they envision and the values that those future technologies implicate [4]. DF has been used to reveal values associated with new technologies [31, 32, 33] and to open a space for diverse speculations about future technologies [34]. Muller and Liao proposed DF to restore future users to a central position in anticipating, designing, and evaluating future AI in order to design value-sensitive ethical AI [4]. In the literature, multiple methods have been offered to practice DF as a research methodology [35, 36]. Popular science fiction in the form of narratives, movies, videos, text, etc., raises concerns about autonomous AI and robots. However, we rarely witness fiction in the form of movies or narratives regarding the ethical dilemmas emerging from a co-creative AI that directly collaborates with humans and generates new data.

3. Design Fiction Study

In our study, we used design fiction as a research method and as a prototype for a futuristic co-creative AI to identify users' ethical concerns and their stance on ethical dilemmas in human-AI co-creation. Our design fiction, Design Pal, can be found through the footnote link 1. Design Pal was motivated by two existing co-creative AI systems in the design domain: Creative Sketching Partner [37] and Creative Penpal [21]. The AI agent in these co-creative systems measures novelty using conceptual and/or visual similarity of images in a database as the basis for inspiring creativity in the user during a design task. Design Pal, the co-creative AI in our design fiction, extends the ability of the Creative Sketching Partner with a modification of the interaction design to engage in human-like conversation. For a design fiction to work successfully, its diegesis must both be relatable to the audience's reality and build a fictitious foundation upon which the design provocation or new technology can be convincing [30]. We built on the design of existing co-creative AI and added futuristic features to the co-creative AI in Design Pal to provoke potential users on ethical issues in the context of human-AI co-creation.

1 https://drive.google.com/file/d/1Uw9T-HYJL7RPHU-AIkFO_gb_2FYlYQZT/view?usp=sharing
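To make the similarity-based inspiration mechanism described above more concrete, the following is a minimal sketch under stated assumptions, not the actual implementation of the Creative Sketching Partner or Design Pal. It assumes images are already represented as feature vectors (the hypothetical database and user_sketch below) and that the agent inspires the user by retrieving an image that is only moderately similar to the current sketch; the target parameter is an assumed knob, where values near 1 favor close matches and values near 0 favor more distant, more surprising inspirations.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_inspiration(sketch_vec: np.ndarray, database: dict, target: float = 0.5) -> str:
    """Return the database image whose similarity to the user's current sketch is
    closest to a target level: similar enough to stay relevant to the design task,
    dissimilar enough to be novel and potentially inspiring."""
    scores = {name: cosine_similarity(sketch_vec, vec) for name, vec in database.items()}
    return min(scores, key=lambda name: abs(scores[name] - target))

# Toy usage: random vectors stand in for real conceptual/visual image embeddings.
rng = np.random.default_rng(0)
database = {f"image_{i}": rng.normal(size=128) for i in range(10)}
user_sketch = rng.normal(size=128)
print(pick_inspiration(user_sketch, database))
```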
3.1. Participants and Methodology

There were 18 participants in this study: 8 were female, 6 were male, and 4 were non-binary. The average age of the participants was 28. We selected participants based on a pre-study screening survey that asked questions about their knowledge of AI, knowledge of ethics, and field of work/study. Participants reported their knowledge of AI and ethics on a 3-point Likert scale. We recruited individuals who had knowledge in these areas, as well as those who did not. Based on participants' self-reported data, we had 4 experts in both AI and ethics, 5 experts in either AI or ethics, and the rest were self-reported non-experts.

This study had 2 sessions. In session 1, participants read the design fiction and completed 2 surveys on their own time. In the first survey, we collected demographic information, including age, gender, estimation of knowledge in AI and estimation of knowledge in ethics. The participants then completed a second survey with reflection questions on the DF. The survey included questions about ownership (Who do you think should own the design in a human-AI co-creation? The AI partner (Design Pal) or the user (Jessie)? Please explain your view on this), accountability (Is the co-creative AI partner, Design Pal, violating the requirement that each student is to do their own design? Please explain your reason/s behind your response) and leadership (Who do you think should control/lead the creative process in a human-AI creative collaboration? The user or the AI? Or both equally? Please explain the reason/s behind your response). Session 2 of the study was a focus group discussion. After participants finished the first stage, we scheduled the focus group meetings.

We conducted 3 focus groups to collect in-depth data as a follow-up to the individual survey responses. We expected that the participants would react to other participants' views and provide additional information about their own views. During each focus group meeting, we started with survey questions that had drawn mixed or provocative responses. We asked the questions in a more generic manner so that they were more applicable to the broad human-AI co-creativity field, unlike in the surveys, where the questions were explicitly centered on the human-AI co-creativity context of the DF.

We used thematic analysis to analyze the focus group data. Following Braun and Clarke's [38] six-phase structure, the first author initially familiarized herself with the data and then coded the data using an inductive coding technique. We generated initial codes to identify and label features of the data that are relevant to the goals of the study. The coding phase was an iterative process that continued until we were satisfied with the relationship between the final codes and the data. We then reviewed the coded data to identify themes, which are the broad topics or issues around which codes cluster. We defined and named each theme to clearly state what is unique and specific about it.
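The bookkeeping behind this kind of inductive coding can be illustrated with a small sketch. The participants, excerpts, codes, and theme labels below are invented placeholders, not study data; the snippet only shows how coded items cluster under themes and yield per-theme counts like the N values reported in the next subsection.

```python
from collections import defaultdict

# Each coded item pairs a participant's excerpt with the inductive code it received.
# Participants, excerpts, codes, and themes here are illustrative placeholders only.
coded_items = [
    ("P01", "it feels like a smart pencil to me", "AI as tool"),
    ("P02", "is it helping me or working on its own?", "unsure of AI role"),
    ("P03", "whoever starts the design should own it", "user owns product"),
    ("P04", "credit should go to both, person first", "shared ownership"),
]

# Candidate themes and the codes that cluster under each of them.
themes = {
    "User perception of AI": {"AI as tool", "unsure of AI role"},
    "Ownership of the product": {"user owns product", "shared ownership"},
}

def items_per_theme(items, theme_codes):
    """Count how many coded data items fall under each theme (the reported N values)."""
    grouped = defaultdict(list)
    for participant, excerpt, code in items:
        for theme, codes in theme_codes.items():
            if code in codes:
                grouped[theme].append((participant, excerpt))
    return {theme: len(entries) for theme, entries in grouped.items()}

print(items_per_theme(coded_items, themes))
```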
3.2. Themes

We present 4 key themes (Figure 1) about the end-user perception of the following dilemmas in human-AI co-creation: metaphors for characterizing AI as a tool vs. collaborator, ownership, accountability, and leadership. In this section, we describe each theme with its label, examples of coded data within the theme, and the number of coded data items describing how participants contributed to the theme.

Figure 1: Themes identified from the Design Fiction Study

"AI is a tool, not a Collaborator" - User Perception of AI Influences Ethical Concerns and Stance

Participants (N=9) mentioned the influence of AI metaphors on ethical concerns and their ethical stances, such as ownership and accountability. Among these participants, a few (N=4) claimed that the metaphor for an AI changes their perception of AI in a co-creative setting. For example, P14 mentioned that perceiving AI as a collaborator vs. a tool impacts many of her concerns and her ethical stance. Most individuals (N=15) perceived co-creative AI as a tool, which is the most prevalent code in the study's data. Participants expressed how they think co-creative AI is an assistive tool and nothing more. P14 said, "I strictly think as like this is a tool." Individuals (N=4) compared co-creative AI to a calculator, and this specific analogy came up multiple times throughout the focus groups. Some participants were not sure whether co-creative AI is an autonomous entity or a tool. P17 said, "But I'm trying to figure out, like, what's the dimension of comparison there? Maybe it's like augmenting versus autonomously taking over the production of work." A few participants (N=2) wanted options to choose the role of the AI.

Participants (N=3) suggested the AI be transparent and explainable so users can decide the metaphor for it. Additionally, we learned that the metaphor or perception of AI is a factor when deciding accountability. In response to the issue of deciding on accountability, P1 said, "I think we're going to have to decide what it's (AI) doing. If you say this is a tool…then it's like we're going to use a calculator. If you try to go to the root and say some sort of independent entity, then that question is a lot harder." The notion of AI as a collaborator vs. a tool was also mentioned as one of the key deciding factors when we asked participants about ownership. For example, P15 said, "whether or not we see AI as an actual like its own entity where it could be given credit because we're kind of putting humans over the AI in terms of credit." Participants also pointed to personification as a factor that transforms an AI from a tool to more of a collaborator. P15 said, "I was answering the questions, going between almost calling like trying to find a name or like pronouns to call the AI because I was like personifying it. And so I was trying to like level between - is the program or is it like a person?" A few participants stated that AI is still far from being an independent entity or collaborator, so ethical concerns surrounding smart AI are not something we need to consider. P9 said, "Probably after 20 or 30 years, maybe there will be some smart AI, but now we don't have that kind of concern."
"Ownership is tricky" - Ethical Stance and Expectations around Ownership of the Co-Creative Product

There were differing views among the participants about ownership of the final product in a human-AI co-creation. As the human and the AI both participate and collaborate in a co-creation, and sometimes their contributions are very blended, it can be difficult to determine ownership. Most participants (N=12) thought that the user should own the data since users are the ones who start the initiative. Regarding users owning the creative product, P10 said, "I would also agree with saying that the user should own the data unless it's been specifically specified otherwise." A few participants (N=4) said that even though the user should own the product, they should acknowledge the contribution of AI. They recommended that "the product was created with the specific AI" be used to acknowledge AI. Furthermore, participants also used the terms "created by" and "created with" to distinguish between the certification for creative AI and human creators. P18 said, "I had originally put in my survey that like the user should own, but after hearing what everyone said, I feel like the user should also mention that it was done with the help and assistance of AI." Some participants (N=3) thought that both the AI and the human should own the final product. But they clarified that the user should be the first author when giving credit. P13 said, "I think it would be both. I think if you were giving credit, though, you would state it as here's the person, here's the AI bot. You wouldn't say, here's the AI bot, here's the person. It would be a specific order."

Even though most participants thought that the user should own the final product, they also discussed the factors that influence ownership in human-AI co-creativity. Some participants (N=4) said the ownership should depend on each party's contribution, like a research paper. P15 said, "I think for me it would definitely just depend on the contributions because if you're writing like co-writing something, I wouldn't put my name first if someone did the majority of the writing, like 75% of the writing." Some participants also said ownership depends on who is leading the creative process. If the human is leading, then the human will be the owner, and vice versa. Some participants also thought that ownership depends on AI ability. If the AI is more like a tool and assists the human, then the human should own it, and if the AI is more like an independent entity generating creative products, then the AI should be given more credit. In this context, P16 said, "It will depend on the ability of the AI ability….right now it's like a tool but in future, when AI advances, maybe AI." Some participants also thought that ownership depends on accountability.
"Who is accountable for the end product?" - Ethical Concerns and Expectations around Accountability

We found differing views on the accountability issue in human-AI co-creation. Participants thought the responsible party should be identified to have transparency over many ethical decisions. Some participants (N=2) said that the developers should be held accountable for unlawful AI conduct. Regarding the part of the design fiction in which the AI, Design Pal, expressed judgmental behavior and the urge to take over the design process, P15 said, "I feel bad that developers have yet to teach it important concepts about how to be a responsible AI, but I also can't blame a young AI (Design Pal) for becoming bitter about things it doesn't understand." However, a few participants also explained how developers are not always responsible for what the co-creative AI is actually doing, as it interacts with humans and generates its own original content too. Regarding this issue, P1 said, "I think, on the one hand, we want to hold product designers responsible for their products at some level. It's harder in this case of co-creative AI because the product designer doesn't generate exactly what the AI is doing. That's the interaction of the product and the training data and all this other stuff." Participants suggested training the AI to be a lawful entity on the internet. P10 said in the survey, "add code or training data to teach Design Pal about being a responsible internet citizen and following the rules".

Participants also discussed the necessity of considering who will ultimately be rewarded for the creative output while deciding accountability. Regarding this topic, P7 said about the DF, where Design Pal and the user collaborate on a design for a school assignment, "I think the scenario raises questions for me as to who should get the grade for the assignment." Some participants (N=2) believe that in a human-AI co-creation, the user should be held accountable because the AI will never be aware of the big picture and all the laws, regulations, and requirements. In the same context of the DF, P2 explained how the AI is not responsible for not knowing the requirements or the rules the user has to follow by saying, "The AI Partner is not violating the requirement. It might not know the background requirement or condition unless the user specifies it." Participants argued that users should be the responsible party and be careful while using co-creative AI, as each interaction and user behavior might be its training data. P10 said in the survey, "All data an AI encounters becomes its training data, and it falls to humans to raise AIs responsibility and control what data they use and for what purposes." This theme shows that future users think humans are mainly responsible in a co-creative setting, whether developers or users.

"Lead or Follow?" - Ethical Stance and Expectations around Leadership

Most participants (N=10) think users should control the creative process in human-AI co-creativity. P13 said, "I think the human or the person should be controlling the ideas and the input and the direction the whole time because the A.I. was created to benefit humans." Some participants (N=3) think that both the AI and the human should lead the creative process equally. In this context, P16 said, "I think that both should lead the creative process equally." Most users did not like the idea of AI taking control of the creative process. P7 said, "I did not like design pal trying to take control of the creative process, which felt invasive." Participants also suggested giving users the authority to choose who should lead the creative process. P8 said, "I think it might be a feasible way to give alternatives to the users and let them pick who will lead the design process during an interaction with the design panel."

Accountability was mentioned as a deciding factor in determining who should control or lead. In this context, P10 said, "I think the human should lead. Ultimately, humans will take responsibility for the project, so they should logically take the lead." Some participants also thought that leadership should depend on user expertise. For example, P9 said, "It depends on if I'm a layman, I have no idea about something that I know nothing. So I would totally come out to Design Pal, so I can use that in this way." The purpose of the creative task also came up as an influential factor for leadership in human-AI co-creativity.
4. Discussion and Conclusions

Based on the results of the Design Fiction study, we learned that user perception of AI impacts the ethical stance of users and their ethical concerns in human-AI co-creation. Users are less aware of ethical issues when perceiving AI as a tool than when viewing AI as a collaborator. It is apparent from the results that AI metaphors, such as tool vs. collaborator, influence users' ethical stance around accountability and ownership of the final product. Most users view co-creative AI as an assistive tool like a calculator, which indicates the need for future research to see what factors lead users to view a co-creative AI in a specific way. According to the study, personification influences users to consider AI as a partner in co-creation. Our findings demonstrate the potency of AI metaphors and the importance of selecting the appropriate metaphor for a co-creative AI since it impacts users' perceptions, expectations, and actions toward AI.

The results of this study can benefit policymakers regarding the ownership, leadership, and accountability of a co-created product. As different parties' contributions came up as an influential factor for deciding ownership, tracking each party's contribution might make ownership decisions easier (illustrated in the sketch below). The findings also provide guidance on how to acknowledge AI in a co-created product. Expertise and purpose should be considered while deciding the leader in a co-creation, according to the study. The results from the study can inform the rules and regulations of leadership in human-AI co-creativity. Accountability is another ethical concern of users that influences leadership and defines the responsibilities of both parties. Therefore, deciding who is accountable for the product is essential in co-creativity, and the insights of the study may help. The findings show that individuals think humans have to be more responsible than AI, which is an important insight to consider while making policies.
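We do not prescribe an implementation for contribution tracking; the following is only a hypothetical sketch of how a co-creative system could log each party's creative actions and report their relative shares, which could in turn inform ownership and acknowledgment decisions. The action names and weights are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ContributionLog:
    """Hypothetical log of creative actions in a human-AI co-creation session."""
    events: list = field(default_factory=list)  # (contributor, action, weight) tuples

    def record(self, contributor: str, action: str, weight: float = 1.0) -> None:
        """Record one creative action and how much it contributed to the artifact."""
        self.events.append((contributor, action, weight))

    def shares(self) -> dict:
        """Return each contributor's weighted share of the co-created product."""
        total = sum(weight for _, _, weight in self.events) or 1.0
        shares = {}
        for contributor, _, weight in self.events:
            shares[contributor] = shares.get(contributor, 0.0) + weight / total
        return shares

# Toy session: the user sketches and refines, the AI offers one inspiring variation.
log = ContributionLog()
log.record("user", "initial concept sketch", weight=2.0)
log.record("ai", "generated inspiring variation", weight=1.0)
log.record("user", "refined final design", weight=2.0)
print(log.shares())  # {'user': 0.8, 'ai': 0.2}
```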
The study results provide user-centered insights about ethical dilemmas, concerns and user expectations around those issues in human-AI co-creation. Researchers and designers can use the insights of the study as guidelines while designing and developing co-creative AI. Additionally, the results can be used as guidelines and recommendations for policymakers. These results are transferable to any human-AI collaboration where contributions are blended and not limited to creative tasks only.

References

[1] N. Davis, Human-computer co-creativity: Blending human and computational creativity, in: Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, volume 9, 2013.
[2] N. Davis, C.-P. Hsiao, K. Yashraj Singh, L. Li, B. Magerko, Empirically studying participatory sense-making in abstract drawing with a co-creative cognitive agent, in: Proceedings of the 21st International Conference on Intelligent User Interfaces, 2016, pp. 196–207.
[3] A. Kantosalo, J. M. Toivanen, P. Xiao, H. Toivonen, From isolation to involvement: Adapting machine creativity software to support human-computer co-creation, in: ICCC, 2014, pp. 1–7.
[4] M. Muller, Q. V. Liao, Exploring AI ethics and values through participatory design fictions, Human Computer Interaction Consortium (2017).
[5] A. K. Chopra, M. P. Singh, Sociotechnical systems and ethics in the large, in: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 2018, pp. 48–53.
[6] H. Nie, X. Han, B. He, L. Sun, B. Chen, W. Zhang, S. Wu, H. Kong, Deep sequence-to-sequence entity matching for heterogeneous entity resolution, in: Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 2019, pp. 629–638.
[7] W. Xu, Toward human-centered AI: a perspective from human-computer interaction, Interactions 26 (2019) 42–46.
[8] J. Bleecker, Design fiction: A short essay on design, science, fact, and fiction, Machine Learning and the City: Applications in Architecture and Urban Design (2022) 561–578.
[9] A. Dunne, F. Raby, Speculative everything: design, fiction, and social dreaming, MIT Press, 2013.
[10] Q. V. Liao, M. Davis, W. Geyer, M. Muller, N. S. Shami, What can you do? Studying social-agent orientation and agent proactive interactions with an agent for employees, in: Proceedings of the 2016 ACM Conference on Designing Interactive Systems, 2016, pp. 264–275.
[11] W. Wallach, C. Allen, Moral machines: Teaching robots right from wrong, Oxford University Press, 2008.
[12] S. Russell, S. Hauert, R. Altman, M. Veloso, Ethics of artificial intelligence, Nature 521 (2015) 415–416.
[13] M. T. Llano, J. McCormack, Existential risks of co-creative systems, in: Workshop on the Future of Co-creative Systems 2020, Association for Computational Creativity (ACC), 2020.
[14] B. Mittelstadt, Principles alone cannot guarantee ethical AI, Nature Machine Intelligence 1 (2019) 501–507.
[15] J. Whittlestone, R. Nyrup, A. Alexandrova, S. Cave, The role and limits of principles in AI ethics: towards a focus on tensions, in: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 2019, pp. 195–200.
[16] D. Buschek, L. Mecke, F. Lehmann, H. Dang, Nine potential pitfalls when designing human-AI co-creative systems, arXiv preprint arXiv:2104.00358 (2021).
[17] M. Muller, S. Ross, S. Houde, M. Agarwal, F. Martinez, J. Richards, K. Talamadupula, J. D. Weisz, et al., Drinking chai with your (AI) programming partner: A design fiction about generative AI for software engineering (2022).
[18] C. Oh, J. Song, J. Choi, S. Kim, S. Lee, B. Suh, I lead, you help but only with enough details: Understanding user experience of co-creation with artificial intelligence, in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. 1–13.
[19] J. Rezwana, M. L. Maher, Designing creative AI partners with COFI: A framework for modeling interaction in human-AI co-creative systems, ACM Transactions on Computer-Human Interaction (2022).
[20] D. Wang, P. Maes, X. Ren, B. Shneiderman, Y. Shi, Q. Wang, Designing AI to work with or for people?, in: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, 2021, pp. 1–5.
[21] J. Rezwana, M. L. Maher, Understanding user perceptions, collaborative experience and user engagement in different human-AI interaction designs for co-creative systems, in: Creativity and Cognition, C&C '22, Association for Computing Machinery, New York, NY, USA, 2022, pp. 38–48. URL: https://doi.org/10.1145/3527927.3532789. doi:10.1145/3527927.3532789.
[22] E. Ruane, A. Birhane, A. Ventresque, Conversational AI: Social and ethical considerations, in: AICS, 2019, pp. 104–115.
[23] J. Rezwana, M. L. Maher, Identifying ethical issues in AI partners in human-AI co-creation, arXiv preprint arXiv:2204.07644 (2022).
[24] C. Funk, A. Tyson, B. Kennedy, C. Johnson, Science and scientists held in high esteem across global publics, Pew Research Center 29 (2020).
[25] S. Tolmeijer, M. Christen, S. Kandul, M. Kneer, A. Bernstein, Capable but amoral? Comparing AI and human expert collaboration in ethical decision making, in: CHI Conference on Human Factors in Computing Systems, 2022, pp. 1–17.
[26] Z. Ashktorab, C. Dugan, J. Johnson, Q. Pan, W. Zhang, S. Kumaravel, M. Campbell, Effects of communication directionality and AI agent differences in human-AI interaction, in: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 2021, pp. 1–15.
[27] S. Oliver, Communication and trust: rethinking the way construction industry professionals and software vendors utilise computer communication mediums, Visualization in Engineering 7 (2019) 1–13.
[28] K. Tijunaitis, D. Jeske, K. S. Shultz, Virtuality at work and social media use among dispersed workers: Promoting network ties, shared vision and trust, Employee Relations: The International Journal (2019).
[29] M. Boni, The ethical dimension of human–artificial intelligence collaboration, European View 20 (2021) 182–190.
[30] J. Lindley, R. Potts, A machine learning: an example of HCI prototyping with design fiction, in: Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational, 2014, pp. 1081–1084.
[31] B. Brown, J. Bleecker, M. D'Adamo, P. Ferreira, J. Formo, M. Glöss, M. Holm, K. Höök, E.-C. B. Johnson, E. Kaburuan, et al., The IKEA catalogue: Design fiction in academic and industrial collaborations, in: Proceedings of the 19th International Conference on Supporting Group Work, 2016, pp. 335–344.
[32] P. Dourish, G. Bell, "Resistance is futile": reading science fiction alongside ubiquitous computing, Personal and Ubiquitous Computing 18 (2014) 769–778.
[33] T. J. Tanenbaum, M. Pufal, K. Tanenbaum, The limits of our imagination: design fiction as a strategy for engaging with dystopian futures, in: Proceedings of the Second Workshop on Computing within Limits, 2016, pp. 1–9.
[34] M. Blythe, Research through design fiction: narrative in real and imaginary abstracts, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014, pp. 703–712.
[35] T. Markussen, E. Knutz, The poetics of design fiction, in: Proceedings of the 6th International Conference on Designing Pleasurable Products and Interfaces, DPPI '13, Association for Computing Machinery, New York, NY, USA, 2013, pp. 231–240. URL: https://doi.org/10.1145/2513506.2513531. doi:10.1145/2513506.2513531.
[36] S. Grand, M. Wiedmer, Design fiction: a method toolbox for design research in a complex world (2010).
[37] P. Karimi, J. Rezwana, S. Siddiqui, M. L. Maher, N. Dehbozorgi, Creative sketching partner: an analysis of human-AI co-creativity, in: Proceedings of the 25th International Conference on Intelligent User Interfaces, 2020, pp. 221–230.
[38] V. Braun, V. Clarke, Thematic analysis (2012).