Governance of Artificial Intelligence – A Framework Towards Ethical AI Applications

Jens F. Lachenmaier 2, Maximilian Werling 1 and Dominik Morar 1
1 Ferdinand-Steinbeis-Institut, Filderhauptstr. 142, Stuttgart, Germany
2 University of Stuttgart – Chair of Information Systems 1, Keplerstr. 17, Stuttgart, Germany

Abstract
Artificial intelligence (AI) has extensive potential to change businesses. Various applications have been identified that are either already implemented or under development. However, many enterprises, especially small and medium-sized ones, struggle with the potential problems that AI might cause. Leaders and managers are often willing to implement AI in their companies but are looking for guidance on how to ensure that the AI will have no negative impact on customers, employees, or their business. To address this area of conflict, a governance framework is presented that guides the development of AI solutions and addresses potential ethical challenges. The framework is rooted in the body of knowledge of the information systems discipline, especially in general IT governance frameworks and other proposed governance structures considering AI, and its content has been adapted specifically to ethical issues in AI development and usage based on experts' insights.

Keywords
Governance, Ethics, Artificial Intelligence

1. Introduction, problem, and motivation

Artificial Intelligence (AI) is increasingly used by companies and public authorities. Multiple studies predict massive market growth in the number of applications and the related profits in the near future [1-2]. Moreover, AI has even been described as a game changer, since it enables solutions that address problems with an accuracy and efficiency that were not possible a few years ago [3]. At the same time, cases of unethical AI decisions have become public. Famous cases of unethical AI behavior that made the news include a racist chatbot, a biased recruitment system, and offensive image classification algorithms [4-7]. Such cases have damaged companies' images or could potentially affect stock markets. Even though there have been no consequences for the affected organizations directly linked to these issues, the cases have caused concern among decision makers in the private sector, as they react to customers' perception of their brands [7] and may eventually have to pay fines. In the public domain, this has led to calls for ethical AI, which are reflected in law-making processes and other initiatives [8-9]. However, since laws cannot prevent all potential pitfalls of AI in advance and laws cover only a limited subset of ethics in general, it remains the responsibility of the companies that develop and run AI to ensure that its behavior stays within boundaries that are acceptable from the standpoint of society, their customers, or the public domain [10]. Our first task will therefore be to identify these boundaries and to define ethical AI behavior. In the field of digital ethics and corporate digital responsibility, it is argued that there is a trade-off between innovation based on digitalization and ethics [11-12]. Companies therefore need to position themselves and create structures to address this topic internally.
In the literature, it is assumed that the concept of governance can be used to mitigate the potential conflicts between innovation and ethics [13]. In this paper, we therefore present a framework that specifically addresses this challenge.

2. Research design

The goal of our research is to design a governance framework that is able to identify and mitigate the ethical problems potentially arising from the use of AI applications, depending on the specific situation and use cases of the individual legal entity. The question that we aim to answer in this article, as part of our overall goal, is: "What constitutes a governance framework that can be implemented by organizations to ensure that their AI applications do not face ethical challenges?" This implies a design-oriented approach, since the artifact is a framework and this framework needs to be developed in iterations with increasing detail and the continuous addition of ideas [14]. Our research design is therefore based on design-oriented information systems research and the properties of design science by Hevner et al., addressing both relevance and scientific rigor [15-16]. Over the last decade, sub-genres of design research have been identified and classified, which allow for a more precise description of the intentions of the research [17]. Since we involve and address companies directly and aim to build an applicable solution, our approach can more specifically be classified as dual scientific research [18].

Figure 1: Research Design (analysis of the problem and state of research, design of the governance framework, and evaluation of the governance framework)

Our research design encompasses the phases of analysis, design, and evaluation as outlined by Österle et al. Our results are based on structured literature reviews and qualitative, guideline-based expert interviews as methods of data collection [19-20]. An overview is given in Fig. 1. In this section, we give an overview of the applied methods in all phases. All literature reviews are in line with Levy & Ellis, Brendel et al., and vom Brocke et al. and consist of the steps of searching, filtering, content analysis, and structured output [21-23]. The search process is documented according to PRISMA 2020 [24]. The details will be reported in the respective sections below. The qualitative expert interviews, which were conducted in 2021, had the goal of collecting recommendations regarding AI governance that will have an impact on AI ethics. The experts come from various fields in order to address the topic from different perspectives (cf. Table 1). We used an interview guideline with three versions depending on the participant's field of expertise.
The three versions emphasized AI vendors, AI users, and ethics. Each interview lasted about one and a half hours and was held online using video conferencing solutions. Most of the interviews (9 out of 11) were recorded and transcribed for further content analysis; during the remaining two, the researchers took notes. To avoid misinterpretation or bias, three to four researchers took part in each interview, and the results that we extracted from the interviewees' answers were cross-checked by another author.

Table 1
List of expert interviews
Expert's role | Field of expertise | Type
Lead AI manager, responsible for ethics | AI ethics | AI user
Senior data scientist, trainer on data science in industrial company | AI projects in finance, insurance, and production | AI user
Senior researcher on big data | Big data | AI user
Lead developer of AI solutions | Privacy | AI vendor
Managing director of AI development company | Various AI projects, applied governance | AI vendor
Researcher on ethics | Ethics | Ethics of AI
Senior researcher on the social impact of AI | AI and society | Ethics of AI
Consultant and managing director of AI consultancy | Combination of AI and ethics | Governance and ethics expert
Lawyer on GDPR | Jurisdiction | Governance expert
Managing director of business intelligence & analytics consultancy | Business intelligence governance | Governance expert, AI vendor
Project leader on AI ethics standardization | AI ethics | No classification

3. Definition of ethical AI

To be able to define the scope of our research, we need to delimit the phenomenon of ethical AI or ethical AI applications. To achieve this, we need a common and applicable understanding of the combination of AI and ethics. Artificial intelligence was first defined as a technology that is able to solve problems that require functions of a human brain, without involving humans [25]. Nowadays, the central capability of AI is machine learning, which is often used as a synonym for AI since it is its main component [26-28]. Ethics is a broad concept that has its origin in the social sciences and covers many different aspects of human life and interactions. Again, it is necessary to deal with ethics in a way that is feasible and allows the generation of recommendations regarding governance structures. Therefore, we choose to focus on ethical values and principles that are relevant in combination with AI or the development of IT solutions [12, 29]. As a next step, we searched the AIS eLibrary, Business Source Premier, and IEEE Xplore for literature on the keywords "AI AND Ethics" in 2020, when we started our research. At that time, we were able to identify three extensive meta-studies regarding ethical AI [30-31]. Due to their high citation counts, we expect them to represent the mainstream of research. These meta-studies ranked the values mentioned in the context of AI by their number of appearances in practice and in science. They all came to the conclusion that AI is most often discussed in relation to the principles of "privacy", "transparency", "non-maleficence", "fairness", and "accountability". These five ethical values are named most often by far; therefore, we consider an AI application to be ethical when it adheres to these five principles. This was confirmed by the experts in our interviews. We asked them an open question about which ethical challenges they expect to come up in the realm of AI, and they named the same ones as those that we found in the literature.
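As a purely illustrative aid for readers who want to operationalize this definition, the following minimal sketch captures the five principles as a simple checklist. The guiding questions and the helper function are hypothetical additions and are not part of the framework itself.

```python
# Illustrative sketch: the five ethical AI principles identified in the
# meta-studies [30-31] as a simple checklist. The guiding questions and the
# assessment logic are hypothetical examples, not part of the original framework.

ETHICAL_AI_PRINCIPLES = {
    "privacy": "Is personal data minimized, pseudonymized, or anonymized?",
    "transparency": "Can the AI's decisions be explained to affected users?",
    "non-maleficence": "Are foreseeable harms identified and mitigated?",
    "fairness": "Has the system been checked for bias against groups of people?",
    "accountability": "Is it clear who is responsible for the AI's behavior?",
}

def open_ethics_issues(assessment: dict[str, bool]) -> list[str]:
    """Return the principles whose guiding question is not yet answered with 'yes'."""
    return [p for p in ETHICAL_AI_PRINCIPLES if not assessment.get(p, False)]

if __name__ == "__main__":
    # Example: a hypothetical recruitment AI that has not been checked for bias.
    assessment = {
        "privacy": True,
        "transparency": True,
        "non-maleficence": True,
        "fairness": False,
        "accountability": True,
    }
    print(open_ethics_issues(assessment))  # -> ['fairness']
```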
4. State of research – related work

To establish the research gap and to incorporate insights from the available body of knowledge, we conducted a literature review on AI governance. The keywords "AI AND Ethics AND Governance" were used to search for related articles in databases. We selected the AIS eLibrary, Business Source Premier (EBSCOhost), and Web of Science, as these databases cover publications from business administration and information systems, where we expected results on governance. Within the last few years, there have been numerous publications on AI and ethics [32], which posed a challenge for the search process. The keyword search produced many entries in the databases (more than 3,000 on Web of Science alone); the search was therefore limited to the abstracts of publications to narrow the results down to the most promising articles focusing on AI governance. Figure 2 depicts the literature search process. In total, 90 records were identified that matched the keywords. 22 of those records were removed before screening since the full text was not available. The remaining 68 records were screened further. Overall, 53 reports were excluded for the following reasons: 1) 44 publications focused on aspects not relevant to the scope of this paper, in most cases because they did not develop or address governance frameworks, 2) 7 publications were either research in progress, editorial pieces, or panel summaries, and 3) 2 publications were not in English. Thus, the final 15 publications [33-47] were read by at least one researcher, discussed, and analyzed in detail.

Figure 2: Search process to identify related work

In the next step, we classified the relevant publications according to concepts that are within the scope of our paper. We analyzed the relevance of the 15 publications based on four categories: 1) which role ethics and values play for the guidelines of AI (value alignment of AI), 2) whether a governance framework was presented, 3) whether the research questions were relevant in the context of or applicable to small and medium-sized enterprises (SMEs), and 4) how adaptable the guidelines were to different AI use cases.

Figure 3: Result classification of the literature review

The review of the literature shows that values and ethics are frequently discussed as a basis for necessary regulation of AI, which underlines the importance of the topic in research. Some papers described guidelines; however, they do not operationalize these ethical guidelines into applicable governance frameworks. Examples in the literature were either very high-level approaches or abstract models. The SME context in particular, which we want to address with our tailoring, was mostly missing from the literature. Since the discussed frameworks are high-level and abstract, they are mostly versatile and applicable to a broad range of AI use cases. Based on our findings, there is still a gap regarding applicable and adjustable frameworks that provide direct guidance when companies try to implement governance structures.
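For convenience, the screening flow reported above (Fig. 2) can be restated as a short consistency check. The following sketch is purely illustrative; the variable names are hypothetical and only the numbers come from the text.

```python
# Illustrative restatement of the screening flow from Figure 2 (documented
# according to PRISMA 2020 [24]). Numbers are taken from the text above;
# variable names are hypothetical.

identified = 90          # records matching the keywords
not_retrieved = 22       # removed before screening (full text unavailable)
screened = identified - not_retrieved                 # 68 records screened

excluded = {
    "out of scope (no governance framework)": 44,
    "research in progress / editorial / panel": 7,
    "not in English": 2,
}
included = screened - sum(excluded.values())          # 15 publications analyzed

assert screened == 68 and included == 15
```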
5. The governance framework and the design factors

5.1. How the framework and the design factors were derived

In this section, we explain and document how we arrived at the framework, its content, and the design factors that can be used to adjust the framework to a specific organization. In addition to the input from the literature review on the current state of research, we conducted a further literature review to identify well-established governance frameworks from analogous domains. We chose only well-established frameworks because they have been implemented in various companies and refined over time, which means that they are stable and incorporate a lot of experience. To find such frameworks that are also related to AI, we limited our search to books on IT governance, data governance, and business analytics governance. In a southern German library catalog, we identified nine existing frameworks, including COBIT, DMBOK, and frameworks presented by research groups, such as [48-52]. We analyzed these existing frameworks and critically evaluated the relevance of each component with regard to AI and ethics. As a result, we excluded aspects such as architecture or tool selection, as these do not influence AI ethics. Afterwards, we added the results from the related work – especially [44], whose framework is specific to health care – and came up with a total of twelve governance areas. In parallel, during the expert interviews, we asked the interviewees to provide input regarding governance rules and structures that may be relevant for ethical AI. They came up with a total of 78 recommendations. The recommendations received during the interviews were then reflected upon regarding their potential, revised in terms of wording, cleared of duplicates, and assigned to the twelve governance areas. Finally, we made sure to address the challenges related to the ethical values, which means that transparency, privacy, non-discrimination, and accountability are represented in the framework. To identify the design factors that can be used to tailor the framework to specific companies and needs, we selected an initial set from the literature (e.g., [53]) and verified it through critical discussion and against a list of AI applications in Germany [54].

5.2. The AI governance framework for ethical AI applications

Our governance framework consists of twelve governance areas and six design factors, which are listed below (cf. Fig. 4). Each component of the framework is briefly explained, and a reason is given why it matters. This reason usually links the component to the values that render it necessary. In addition, some examples of governance mechanisms (rules, processes, roles & responsibilities) that are mapped to each area are given (cf. Table 2).
Figure 4: The governance framework (twelve governance areas) and the six initial design factors (DF1: Industry, DF2: Sourcing, DF3: Personal data, DF4: Criticality, DF5: Focal object, DF6: Impact)

Table 2
Details on the framework
Governance Area | Reason / Link to ethics | Example
Data Privacy | Privacy | Process for pseudonymization and anonymization
Compliance and Monitoring | Internal structures to report misconduct | Establish an internal ombudsperson
Risk Management | Identify potential ethical challenges; non-maleficence, fairness, and transparency | Define acceptable/unacceptable risks
Build and Run AI Solutions | Ensure ethical behavior during development and also in the longer term | Train data scientists on awareness; define contingency plans
Potential and Innovation Management | Identify new possibilities that impact AI solutions (e.g., new approaches in explainable AI to increase transparency) | Establish partnerships with universities and researchers
Suppliers and external partners | Transparency; fairness | Request certifications from suppliers, such as Trust AI labels
IT Security | Non-maleficence | Ensure continuity of the AI solution
User perspective on AI usage | Avoid misuse or misinterpretation; non-maleficence | Involve experts on user interface design and provide training
Enterprise Knowledge Management | Learn from mistakes and identify gaps regarding rules and governance; all ethical values | Document AI projects with a standardized report to identify best practices and lessons learned
Stakeholders | Ethical challenges cannot be solved by a company alone; they need to be discussed with the affected people | Identify and involve stakeholders at an early stage
Strategy | Address the potential trade-offs between ethical challenges and benefits of AI | Define a code of conduct based on company policy
Accountability | Accountability | Define a RACI matrix on topics and assign roles

The six design factors are:
• Industry: Certain industries, such as banking or health care, have traits that require more intensive governance due to external demands.
• Sourcing: Depending on the existing degree of expertise, some governance areas might be less relevant, such as external partners or knowledge management.
• Personal data: Whether or not personal data is involved has a direct influence on privacy issues.
• Focal object: The focal object of an AI solution could, for example, be a machine, a person, or a process. Depending on this, stakeholders will demand higher levels of governance.
• Criticality: An AI application can vary in its criticality. For example, when it is used in court or when medical decisions are made based on its results, as in cancer recognition, higher reliability and adherence to values (non-maleficence) will be needed.
• Impact: A company that is very public and open about its activities may be more exposed to image loss or fines than others. This also depends on the users of the AI. When the AI is publicly available, the risks of infringement and detection are higher than when it is limited to a certain audience.
A sketch of such a tailoring is given below.
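To illustrate how the governance areas, their example mechanisms, and the design factors could be operationalized (for instance in a selection tool such as the one mentioned in Section 7), the following sketch encodes a small excerpt of Table 2 together with a simple rule-based tailoring step. The data structures, identifiers, and selection rules are illustrative assumptions only and do not represent the actual tool implementation.

```python
# Hedged sketch: a small excerpt of the framework (Table 2) as data plus a
# rule-based tailoring step driven by the design factors (Fig. 4).
# The rules and identifiers are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class GovernanceArea:
    name: str
    ethics_link: str          # which ethical value(s) the area supports
    example_mechanisms: list[str] = field(default_factory=list)

FRAMEWORK_EXCERPT = [
    GovernanceArea("Data Privacy", "privacy",
                   ["Process for pseudonymization and anonymization"]),
    GovernanceArea("Risk Management", "non-maleficence, fairness, transparency",
                   ["Define acceptable/unacceptable risks"]),
    GovernanceArea("Suppliers and external partners", "transparency, fairness",
                   ["Request certifications from suppliers"]),
    GovernanceArea("Accountability", "accountability",
                   ["Define RACI matrix on topics and assign roles"]),
]

@dataclass
class DesignFactors:
    industry: str             # DF1, e.g. "banking", "health care", "manufacturing"
    sourcing: str             # DF2, e.g. "in-house", "external"
    personal_data: bool       # DF3
    criticality: str          # DF4, e.g. "low", "high"
    focal_object: str         # DF5, e.g. "person", "machine", "process"
    impact: str               # DF6, e.g. "internal", "public"

def tailor(framework: list[GovernanceArea], df: DesignFactors) -> list[GovernanceArea]:
    """Select governance areas based on the design factors (illustrative rules)."""
    selected = []
    for area in framework:
        if area.name == "Data Privacy" and not df.personal_data:
            continue                   # DF3: privacy less relevant without personal data
        if area.name == "Suppliers and external partners" and df.sourcing == "in-house":
            continue                   # DF2: no external partners to govern
        selected.append(area)
    return selected

if __name__ == "__main__":
    profile = DesignFactors(industry="manufacturing", sourcing="in-house",
                            personal_data=False, criticality="low",
                            focal_object="machine", impact="internal")
    print([a.name for a in tailor(FRAMEWORK_EXCERPT, profile)])
    # -> ['Risk Management', 'Accountability']
```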
6. Results from evaluation, discussion and limitations

The framework was evaluated during a workshop with five experts. Three of them had already participated in the expert interviews; two additional experts were included for the purpose of adding new insights. The two additional experts were industry practitioners responsible for coordinating AI activities at their companies, which means they are in positions that are expected to implement governance structures in their respective departments and companies. The goal of the evaluation was for the experts to assess the applicability of the framework in practice, the relevance of the proposed measures, the plausibility of our definition of ethical AI, as well as the feasibility of a tailoring based on the design factors. During the workshop, three AI solutions and the characteristics of the design factors were presented, and the experts were tasked with selecting governance mechanisms fitting the design factors and the solutions.
• Applicability of the framework: The experts were able to fulfil their task without missing information, without asking for further details, and without adding additional governance mechanisms.
• Relevance of the measures: The experts selected specific measures to implement in a given scenario. They agreed that these measures will help to ensure ethical AI usage. However, the recommendations need to be more specific to the situation in order to provide guidance on how to implement the governance structures in a company.
• Plausibility of the ethical AI definition: The experts agreed that the selected values matter when building or using AI solutions. They were able to understand the link between values and governance recommendations.
• Feasibility of a tailoring based on the design factors: The design factors were explicitly discussed, and they are sufficient to describe the situation of a company that is willing to introduce a governance framework.
The impact of the research can be estimated based on the number of SMEs that consider AI but hesitate because they fear problems and loss of reputation. This happens in all domains and industries. We hope to reduce their doubts by providing means of handling and avoiding potential issues. One limitation is the small number of experts involved. We have not yet involved experts from very small companies, which means that the fit of the framework for this case has not been verified. In addition, there is no real-world implementation of the framework so far. Finally, we cannot know whether the recommendations – once they are implemented – will be able to prevent every case of unethical AI usage, which might, for example, even be caused by intentional misconduct.

7. Contribution and next steps

The presented framework is a possible building block in the effort to ensure that AI applications behave in an ethical manner. It needs to be adjusted to each company and to its specific applications. The evaluation is promising, and we will continue with our research. Our contribution is an inclusive governance framework that is derived from the literature and expert interviews, focuses on ethics, and is adjustable to various situations. Currently, we are building a web tool that will select governance measures based on the input of a user who needs support in designing their specific governance. The user will be asked about the design factors and will receive specific and detailed instructions. We will also extend the content of our framework further, based on additional expert interviews, and provide more detailed guidance on how to implement the suggestions in a real-world environment.

Acknowledgements
The research is funded by the Baden-Württemberg Stiftung.

References
[1] Zhang, D., Maslej, N., Brynjolfsson, E., Etchemendy, J., Lyons, T., Manyika, J., Ngo, H., Niebles, J. C., Sellitto, M., Sakhaee, E., Shoham, Y., Clark, J. & Perrault, R. (2022) The AI Index 2022 Annual Report. Stanford.
[2] Tata Consultancy Services & Bitkom Research (2020) Deutschland lernt KI.
[3] Rao, A. S. & Verweij, G. (2021) Sizing the prize.
[4] Dastin, J. (2018) Amazon scraps secret AI recruiting tool that showed bias against women, 2018. Available online:
[5] Noble, S. U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism, New York University Press.
[6] Schwartz, O. (2019) In 2016, Microsoft's Racist Chatbot Revealed the Dangers of Online Conversation. IEEE Spectrum.
[7] Pazzanese, C. (2020) Great promise but potential for peril. The Harvard Gazette.
[8] European Commission (2019) Ethics guidelines for trustworthy AI.
[9] European Commission (2021) Proposal for a Regulation of the European Parliament and of the Council Laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts. Brussels: European Commission.
[10] Bird, E., Fox-Skelly, J., Jenner, N., Larbey, R., Weitkamp, E. & Winfield, A. (2020) The ethics of artificial intelligence: Issues and initiatives. European Parliamentary Research Service.
[11] Lobschat, L., Mueller, B., Eggers, F., Brandimarte, L., Diefenbach, S., Kroschke, M. & Wirtz, J. (2021) Corporate digital responsibility. Journal of Business Research, 122, 875-888.
[12] Spiekermann, S. (2015) Ethical IT Innovation: A Value-Based System Design Approach.
[13] The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (2019) Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. IEEE.
[14] Vaishnavi, V. & Kuechler, B. (2021) Design Science Research in Information Systems, AIS.
[15] Hevner, A. R., March, S. T., Park, J. & Ram, S. (2004) Design Science in Information Systems Research. MIS Quarterly, 28(1), 75-105.
[16] Österle, H., Becker, J., Frank, U., Hess, T., Karagiannis, D., Krcmar, H., Loos, P., Mertens, P., Oberweis, A. & Sinz, E. J. (2011) Memorandum on design-oriented information systems research. European Journal of Information Systems, 20(1), 7-10.
[17] Peffers, K., Tuunanen, T. & Niehaves, B. (2018) Design science research genres: introduction to the special issue on exemplars and criteria for applicable design science research. European Journal of Information Systems, 27(2), 129-139.
[18] Weber, P., Hiller, S. & Lasi, H. (2021) Dual Scientific Research Framework – Generating Real World Impact and Scientific Progress in Internet of Things Ecosystems, PACIS.
[19] Denzin, N. K. (2017) Sociological Methods: A Sourcebook. Oxon, New York: Routledge.
[20] Yin, R. K. (2018) Case Study Research and Applications: Design and Methods, 6th ed. Los Angeles, London, et al.: Sage.
[21] Brendel, A. B., Trang, S., Marrone, M., Lichtenberg, S. & Kolbe, L. M. (2020) What to do for a Literature Review? – A Synthesis of Literature Review Practices, AMCIS.
[22] vom Brocke, J., Simons, A., Niehaves, B., Riemer, K., Plattfaut, R. & Cleven, A. (2009) Reconstructing the giant: On the importance of rigour in documenting the literature search process, ECIS.
[23] Levy, Y. & Ellis, T. J. (2006) A Systems Approach to Conduct an Effective Literature Review in Support of Information Systems Research. Informing Science: The International Journal of an Emerging Transdiscipline, 9, 181-212.
[24] Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., McGuinness, L. A., Stewart, L. A., Thomas, J., Tricco, A. C., Welch, V. A., Whiting, P. & Moher, D. (2021) The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Systematic Reviews, 10(1), 89.
[25] Minsky, M. (1968) Semantic Information Processing. Cambridge, London: MIT Press.
[26] Aggarwal, C. C. (2021) An Introduction to Artificial Intelligence, in Aggarwal, C. C. (ed), Artificial Intelligence: A Textbook. Cham: Springer International Publishing, 1-34.
[27] Choi, R. Y., Coyner, A. S., Kalpathy-Cramer, J., Chiang, M. F. & Campbell, J. P. (2020) Introduction to Machine Learning, Neural Networks, and Deep Learning. Translational Vision Science & Technology, 9(2).
[28] Joshi, A. V. (2020) Introduction to AI and ML, in Joshi, A. V. (ed), Machine Learning and Artificial Intelligence. Cham: Springer International Publishing, 3-7.
[29] Hallensleben, S., Hustedt, C., Fetic, L., Fleischer, T., Grünke, P., Hagendorff, T., Hauer, M., Hauschke, A., Heesen, J., Herrmann, M., Hillerbrand, R., Hubig, C., Kaminski, A., Krafft, T., Loh, W., Otto, P. & Puntschuh, M. (2020) From Principles to Practice: An interdisciplinary framework to operationalize AI ethics. Gütersloh: Bertelsmann Stiftung.
[30] Hagendorff, T. (2020) The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30(1), 99-120.
[31] Jobin, A., Ienca, M. & Vayena, E. (2019) The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.
[32] Borenstein, J., Grodzinsky, F. S., Howard, A., Miller, K. W. & Wolf, M. J. (2021) AI Ethics: A Long History and a Recent Burst of Attention. Computer, 54(1), 96-102.
[33] Almeida, P., Santos, C. & Farias, J. S. (2020) Artificial intelligence regulation: A meta-framework for formulation and governance, in Proceedings of the 53rd Hawaii International Conference on System Sciences.
[34] Almeida, P. G. R. d., Santos, C. D. d. & Farias, J. S. (2021) Artificial intelligence regulation: A framework for governance. Ethics and Information Technology, 23(3), 505-525.
[35] Ashok, M., Madan, R., Joha, A. & Sivarajah, U. (2022) Ethical framework for Artificial Intelligence and Digital technologies. International Journal of Information Management, 62.
[36] Hickman, E. & Petrin, M. (2021) Trustworthy AI and Corporate Governance: the EU's ethics guidelines for trustworthy artificial intelligence from a company law perspective. European Business Organization Law Review, 22(4), 593-625.
[37] Ho, C. W. L., Soon, D., Caals, K. & Kapur, J. (2019) Governance of automated image analysis and artificial intelligence analytics in healthcare. Clinical Radiology, 74(5), 329-337.
[38] Ibáñez, J. C. & Olmeda, M. V. (2021) Operationalising AI ethics: how are companies bridging the gap between practice and principles? An exploratory study. AI & SOCIETY, 1-25.
[39] Jantunen, M., Halme, E., Vakkuri, V., Kemell, K.-K., Rousi, R., Mikkonen, T., Nguyen Duc, A. & Abrahamsson, P. (2021) Building a Maturity Model for Developing Ethically Aligned AI Systems. Selected Papers of the IRIS (12).
[40] Larsen, B. C. (2021) A Framework for Understanding AI-Induced Field Change: How AI Technologies are Legitimized and Institutionalized, in Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. New York, NY: Association for Computing Machinery, 683-694.
[41] Larsson, S. (2020) On the governance of artificial intelligence through ethics guidelines. Asian Journal of Law and Society, 7(3), 437-451.
[42] Munoko, I., Brown-Liburd, H. L. & Vasarhelyi, M. (2020) The ethical implications of using artificial intelligence in auditing. Journal of Business Ethics, 167(2), 209-234.
[43] Orr, W. & Davis, J. L. (2020) Attributions of ethical responsibility by Artificial Intelligence practitioners. Information, Communication & Society, 23(5), 719-735.
[44] Reddy, S., Allan, S., Coghlan, S. & Cooper, P. (2020) A governance model for the application of AI in health care. Journal of the American Medical Informatics Association, 27(3), 491-497.
[45] Seppälä, A., Birkstedt, T. & Mäntymäki, M. (2021) From Ethical AI Principles to Governed AI, in Proceedings of the 42nd International Conference on Information Systems (ICIS).
[46] Wang, Y., Xiong, M. & Olya, H. (2020) Toward an understanding of responsible artificial intelligence practices, in Proceedings of the 53rd Hawaii International Conference on System Sciences, 4962-4971.
[47] Wu, W., Huang, T. & Gong, K. (2020) Ethical principles and governance technology development of AI in China. Engineering, 6(3), 302-309.
[48] Baars, H. & Kemper, H.-G. (2021) Entwicklung und Betrieb integrierter BIA-Lösungen, in Baars, H. & Kemper, H.-G. (eds), Business Intelligence & Analytics – Grundlagen und praktische Anwendungen: Ansätze der IT-basierten Entscheidungsunterstützung. Wiesbaden: Springer Fachmedien Wiesbaden, 323-388.
[49] DAMA International (2017) DAMA-DMBOK: Data Management Body of Knowledge, 2nd ed. Technics Publications.
[50] Gluchowski, P. (2020) Data Governance. Heidelberg: dpunkt.
[51] ISACA (2018) COBIT 2019 Framework: Introduction & Methodology. Schaumburg.
[52] Weill, P. & Ross, J. (2004) IT Governance: How Top Performers Manage IT Decision Rights for Superior Results.
[53] DIN/DKE (2020) German Standardization Roadmap on Artificial Intelligence.
[54] Plattform Lernende Systeme (2021) Applications, 2021. Available online: https://www.plattform-lernende-systeme.de/map-on-ai-map.html.