<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Governance of Artificial Intelligence - A Framework Towards Ethical AI Applications</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jens F. Lachenmaier</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maximilian Werling</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dominik Morar</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ferdinand-Steinbeis-Institut</institution>
          ,
<addr-line>Filderhauptstr. 142, Stuttgart</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Stuttgart - Chair of Information Systems 1</institution>
          ,
<addr-line>Keplerstr. 17, Stuttgart</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Artificial intelligence (AI) has extensive potential to change businesses. Various applications have been identified that are either already implemented or under development. However, many - especially small and medium-sized - enterprises struggle with the potential problems that AI might cause. Leaders and managers are often willing to implement AI in their companies, but are looking for guidance on how to ensure that the AI will have no negative impact on customers, employees, or their business. To address this area of conflict, a governance framework is presented that guides the development of AI solutions to address potential ethical challenges. The framework is rooted in the body of knowledge of the information systems discipline - especially in general IT governance frameworks and other proposed governance structures considering AI - and its content has been adapted specifically to ethical issues in AI development and usage based on experts' insights.</p>
      </abstract>
      <kwd-group>
<kwd>Governance</kwd>
        <kwd>Ethics</kwd>
<kwd>Artificial Intelligence</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction, problem, and motivation</title>
<p>Artificial Intelligence (AI) is increasingly used by companies and public authorities. Multiple studies
predict massive market growth in the number of applications and the related profits in the near future.
[1-2] Moreover, AI has even been described as a game changer, since it enables solutions that address
problems with an accuracy and efficiency that were not possible a few years ago. [3]</p>
<p>At the same time, cases of unethical AI decisions have become public. Famous cases of unethical
AI behavior that made the news include a racist chatbot, a biased recruitment system, and offensive
image classification algorithms. [4-7] Such cases have damaged companies’ images or could potentially
affect stock markets. Even though there have been no consequences for the affected organizations
directly linked to these issues, the cases have caused concerns amongst decision makers in the private
sector, as they react to the customers’ perception of their brands [7] and may eventually have to pay
fines.</p>
<p>In the public domain, this led to the call for ethical AI, which is reflected in law-making processes
and other initiatives. [8-9] However, since laws cannot prohibit all potential pitfalls of AI in advance,
and laws are only a limited extract from ethics in general, it remains the responsibility of the companies
that develop and run the AI to ensure that its behavior stays within boundaries that are acceptable from
the standpoint of society, their customers, or the public domain. [10] Our first task will therefore be to
identify these boundaries and to define ethical AI behavior.</p>
      <p>In the field of digital ethics and corporate digital responsibility, it is argued that there is a tradeoff
between innovation based on digitalization and ethics. [11-12] Companies therefore need to position
themselves and create structures to address this topic internally. In the literature, it is assumed that the
concept of governance could be used to mitigate the potential conflicts between innovation and ethics
[13]. Therefore, in this paper, we present a framework that should specifically address this challenge.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Research design</title>
<p>The goal of our research is to design a governance framework that is able to identify and mitigate
the ethical problems that potentially arise from the use of AI applications, depending on the specific
situation and use cases of the individual legal entity. The question that we are aiming to answer in this
article – as a part of our overall goal – is: “What constitutes a governance framework that can be
implemented by organizations to ensure that their AI applications do not face ethical challenges?”</p>
<p>This implies a design-oriented approach, since the artifact is a framework that needs to be developed
in iterations with increasing detail and the continuous addition of ideas [14]. Our research design is
therefore based on design-oriented information systems research and the properties of design science
by Hevner et al., addressing both relevance and scientific rigor. [15-16] Over the last decade,
subgenres of design research have been identified and classified, which allow for a more precise
description of the intentions of the research [17]. Since we involve and address companies directly
and aim to build an applicable solution, our approach can more specifically be classified as dual
scientific research [18].</p>
      <p>[Fig. 1: Overview of the research design. Analysis of the problem and state of research: analysis of initial literature to identify the problem; literature review to identify the research gap; expert interviews to confirm ethical challenges with AI. Design of the governance framework. Evaluation of the governance framework.]</p>
      <p>Our research design encompasses the phases of analysis, design, and evaluation as outlined by
Österle et al. Our results are based on structured literature reviews and qualitative expert interviews
based on interview guidelines as methods of data collection [19-20]. An overview is given in Fig. 1.</p>
      <p>In this section, we will give an overview regarding the applied methods in all phases. All literature
reviews are in line with Levy &amp; Ellis, Brendel et al. and vom Brocke et al. and consist of the steps
search, filtering, content analysis, and structured output [21-23]. The search process is documented
according to PRISMA 2020 [24]. The details will be reported in the respective sections below.</p>
<p>The qualitative expert interviews, which were conducted in 2021, aimed to collect
recommendations regarding AI governance that will have an impact on AI ethics. The experts come from
various fields to address the topic from different perspectives (cf. Table 1). We used an
interview guideline with three versions depending on the participant’s field of expertise; the three
versions emphasized AI vendors, AI users, and ethics. Each interview lasted about one and
a half hours, and the interviews were held online using video conferencing solutions. The majority of the
interviews (9 out of 11) were recorded and transcribed for further content analysis; for the remaining
two, the researchers took notes. To avoid misinterpretations or bias, three to four researchers took
part in each interview, and the results we extracted from the interviewees’ answers were
cross-checked by one other author.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Definition of ethical AI</title>
<p>To define the scope of our research, we need to delineate the phenomenon of ethical AI, or ethical
AI applications. To achieve this, we need a common and applicable understanding of the combination
of AI and ethics. Artificial intelligence was first defined as a technology that is able to solve
problems that require functions of a human brain, without involving humans. [25] Nowadays, the central
capability of AI is machine learning, which can also be used as a synonym for AI as it is its main
component. [26-28] Ethics is a broad concept that has its origin in the social sciences, covering many
different aspects of human life and interactions. Again, it is necessary to deal with ethics in a way that
is feasible and that allows the generation of recommendations regarding governance structures.
Therefore, we choose to focus on ethical values and principles that are relevant in combination with AI
or the development of IT solutions. [12; 29]</p>
<p>As a next step, we searched the AIS eLibrary, Business Source Premier, and IEEE Xplore for
literature on the keywords “AI AND Ethics”, already in 2020, when we started our research. At that
time, we were able to identify three extensive meta studies regarding ethical AI [30-31]. Due to their
high citation counts, we expect them to represent the mainstream of research. These meta studies ranked
the values mentioned in the context of AI by number of appearances in practice and in science. The
extensive meta studies all came to the conclusion that AI is most often discussed in relation to the
principles of “privacy”, “transparency”, “non-maleficence”, “fairness”, and “accountability”. These
five ethical values are named most often by far; therefore, we consider an AI application to be
ethical when it adheres to these five principles.</p>
      <p>This was confirmed by the experts in our interviews. We asked them an open question about which
ethical challenges they expect to come up in the realm of AI and they named the same ones as those
that we found in the literature.</p>
    </sec>
    <sec id="sec-4">
      <title>4. State of research – related work</title>
<p>To establish the research gap and to incorporate insights from the available body of knowledge, we
conducted a literature review on AI governance. The keywords “AI AND Ethics AND Governance”
were used to search for related articles in databases. We selected the AIS eLibrary, Business Source
Premier (EBSCOhost), and Web of Science, as these databases cover publications from business
administration and information systems, where we expect results on governance. Within the last few
years, there have been numerous publications on AI and ethics [32], which posed a challenge for the
search process. The keyword search produced many entries in the databases (more
than 3,000 entries on Web of Science alone); the search was therefore limited to the abstracts of
publications to narrow the results down to the most promising articles that focus on AI governance.
Figure 2 depicts the literature search process.</p>
      <p>In total, 90 records were identified that matched the keywords. 22 of those records were removed
before screening since the full text was not available. The remaining 68 records were screened further.
Overall, 53 reports were excluded for the following reasons: 1) 44 publications focused on
aspects outside the scope of this paper, in most cases because they did not develop or address
governance frameworks, 2) 7 publications were research in progress, editorial pieces, or panel
summaries, and 3) 2 publications were not in English. Thus, the final 15 publications [33-47]
were read by at least one researcher, discussed, and analyzed in detail.</p>
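      <p>The record counts of this PRISMA 2020 flow can be double-checked arithmetically; a minimal sketch, using only the counts stated in the text:</p>
      <p>
```python
# Record counts from the PRISMA 2020 flow described in the text.
identified = 90
removed_before_screening = 22          # full text not available
screened = identified - removed_before_screening
excluded = {"out of scope": 44,
            "research in progress / editorial / panel": 7,
            "not in English": 2}
included = screened - sum(excluded.values())
print(screened, included)  # 68 screened, 15 included
```
</p>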
<p>In the next step, we classified the relevant publications according to concepts that are within the
scope of our paper. We analyzed the relevance of the 15 publications based on four categories: 1) which
role ethics and values play for AI guidelines (value-alignment of AI), 2) whether a governance
framework was presented, 3) whether the research questions were relevant in the context of, or applicable
to, small and medium-sized enterprises (SME), and 4) how adaptable the guidelines were to different AI
use cases.</p>
<p>The review of the literature shows that values and ethics were frequently discussed as a basis for
necessary regulation of AI, which underlines the importance of the topic in research. Some papers
described guidelines; however, they do not operationalize these ethical guidelines into applicable
governance frameworks. Examples in the literature were either very high-level approaches or abstract
models. The SME context in particular, which we want to address with our tailoring, was mostly
missing from the literature. Since the discussed frameworks were high-level and abstract, they were
mostly versatile and applicable to a broad range of AI use cases. Based on our findings, there is still a
gap regarding applicable and adjustable frameworks that provide direct guidance when companies try to
implement governance structures.</p>
    </sec>
    <sec id="sec-5">
      <title>5. The governance framework and the design factors</title>
      <sec id="sec-5-1">
        <title>5.1. How the framework and the design factors were derived</title>
<p>In this section, we explain and document how we came up with the framework, its content, and the
design factors that can be used to adjust the framework to a specific organization.</p>
        <p>Besides the input from the literature review regarding the current state of research, we additionally
used a literature review to identify well-established governance frameworks from analogous domains.
We chose only well-established frameworks because they have been implemented in various
companies and have been refined over time, which means that they are stable and incorporate a lot of
experience. To find such frameworks that also relate to AI, we limited our search to books on
IT governance, data governance, and business analytics governance. In the library network of southern
Germany, we identified nine existing frameworks, including COBIT, DMBOK, and frameworks presented by
research groups, such as [48-52]. We critically analyzed these existing frameworks and evaluated the
relevance of each component regarding AI and ethics. On this basis, we excluded aspects such as
architecture or tool selection, as these do not influence AI ethics. Afterwards, we added the results from
the related work – especially [44], whose framework is specific to health care – and came up with a
total of 12 governance areas.</p>
<p>In parallel, during the expert interviews, we asked the interviewees to provide input on
governance rules and structures that may be relevant for ethical AI. They came up with a total of 78
recommendations. These recommendations were afterwards reviewed regarding their potential,
revised in terms of wording, cleared of duplicates, and then
assigned to the twelve governance areas. We finally made sure to address the challenges that are related
to the ethical values, which means that transparency, privacy, discrimination, and accountability are
represented in the framework.</p>
        <p>To identify the design factors that can be used to tailor the framework to specific companies and
needs, the authors selected an initial set from the literature (e.g., [53]) and verified it based on a critical
discussion and a list of AI applications in Germany [54].</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. The AI governance framework for ethical AI applications</title>
<p>Our governance framework consists of 12 governance areas and six design factors, which are listed
below (cf. Fig. 4). Each component of the framework is briefly explained, and a reason is given why it
matters. This reason usually links the component to the values that render the component necessary. In
addition, some examples of governance mechanisms (rules, processes, roles &amp; responsibilities) are
given that are mapped to the area (cf. Table 2).</p>
        <p>[Fig. 4: The AI governance framework. Design factors: DF1: Industry, DF2: Sourcing, DF3: Personal data, DF4: Criticality, DF5: Focal object, DF6: Impact. Governance areas: Strategy, Data Privacy, Build and Run AI solutions, IT Security, Stakeholders, Compliance and Monitoring, Risk Management, Potential and Innovation Management, Suppliers and external partners, User perspective on AI usage, Enterprise Knowledge Management, Accountability.]</p>
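        <p>To illustrate how the six design factors could tailor the framework to a specific company, a minimal sketch follows; all mechanism names, factor values, and matching rules here are hypothetical illustrations, not the authors' implementation:</p>
        <p>
```python
# Hypothetical sketch: tailoring the governance framework by filtering
# governance mechanisms against a company profile described by the six
# design factors (DF1-DF6). All entries below are illustrative assumptions.
mechanisms = [
    {"area": "Data Privacy",
     "mechanism": "Process for pseudonymization and anonymization",
     "requires": {"DF3: Personal data": True}},
    {"area": "Risk Management",
     "mechanism": "Define acceptable/unacceptable risks",
     "requires": {"DF4: Criticality": "high"}},
]

def tailor(profile, mechanisms):
    """Return the mechanisms whose design-factor constraints match the profile."""
    return [m for m in mechanisms
            if all(profile.get(k) == v for k, v in m["requires"].items())]

# Example company profile: processes personal data, low criticality.
profile = {"DF1: Industry": "manufacturing",
           "DF3: Personal data": True,
           "DF4: Criticality": "low"}
selected = tailor(profile, mechanisms)
```
</p>
        <p>In this sketch, only the Data Privacy mechanism is selected, since the profile processes personal data but is not highly critical.</p>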
        <p>[Table 2 (excerpt): governance areas with their reason / link to ethics and example mechanisms, including: privacy - process for pseudonymization and anonymization; internal structures to report misconduct - establish an internal ombudsperson; identify potential ethical challenges (non-maleficence, fairness, and transparency) - define acceptable/unacceptable risks; ensure ethical behavior during development and in the longer term - train data scientists on awareness, define contingency plans; innovation: identify new possibilities that impact AI solutions (e.g., new approaches in explainable AI to increase transparency) - establish partnerships with universities and researchers; transparency and fairness; knowledge: learn from mistakes and identify gaps regarding rules and governance (all ethical values). Areas covered include Suppliers and external partners, IT Security, User perspective on AI usage, and Enterprise Knowledge Management.]</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Results from evaluation, discussion and limitations</title>
<p>The framework was evaluated during a workshop with five experts. Three of them had already
participated in the expert interviews; two additional experts were included to add new insights. These
two were industry experts with responsibilities for coordinating AI activities at their companies,
which means they hold positions that are asked to implement governance structures in their respective
departments and companies. The evaluation goal was for the experts to assess the applicability of the
framework in practice, the relevance of the proposed measures, the plausibility of our ethical AI
definition, as well as the feasibility of a tailoring based on the design factors. During the workshop,
three AI solutions and the characteristics of the design factors were presented, and the experts were
tasked to select governance mechanisms fitting the design factors and the solutions.</p>
<p>• Applicability of the framework: The experts were able to fulfil their task without missing
information or the need to ask for further details or to add additional governance mechanisms.
• Relevance of the measures: The experts selected specific measures to implement in a given
scenario. They agreed that these measures will help to ensure ethical AI usage. However, the
recommendations need to be more specific to the situation in order to provide guidance on how to
implement the governance structures in a company.
• Plausibility of the ethical AI definition: The experts agreed that the selected values matter
when building or using AI solutions. They were able to understand the link between values and
governance recommendations.
• Feasibility of a tailoring based on the design factors: The design factors were explicitly
discussed, and they are sufficient to describe the situation of a company that is willing to introduce
a governance framework.</p>
      <p>The impact of the research can be estimated based on the number of SMEs that consider AI, but are
hesitating because they fear problems and loss of reputation. This happens in all domains and industries.
We hope to reduce their doubts by providing means of handling and avoiding potential issues.</p>
<p>A limitation is the low number of experts involved. We have not yet involved experts from very
small companies, which means that the fit of the framework to this case has not been verified. In
addition, there is no real-world implementation of the framework so far. Finally, we cannot know whether
the recommendations – once they are implemented – will be able to stop every case of unethical AI
usage, which might, for example, even be caused by intentional misconduct.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Contribution and next steps</title>
<p>The presented framework is one building block in the effort to ensure that AI applications behave in
an ethical manner. It needs to be adjusted to each company and to the specific applications. The
evaluation is promising, and we will continue our research. Our contribution is an inclusive
governance framework that is derived from established frameworks and expert interviews, focuses on
ethics, and is adjustable to various situations.</p>
<p>Currently, we are building a web tool that will select governance measures based on the input of a
user who needs support in designing their specific governance. The user will be asked about the design
factors and will receive specific and detailed instructions.</p>
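      <p>The planned tool's logic of asking about the design factors and returning matching measures can be sketched as a simple rule lookup; the questions, rules, and measure texts below are hypothetical illustrations, not the tool itself:</p>
      <p>
```python
# Hypothetical sketch of the planned web tool's selection logic:
# answers about the design factors drive which governance measures
# are recommended. Questions, rules, and measures are illustrative.
QUESTIONS = {
    "sourcing": "Is the AI solution bought or built in-house?",
    "personal_data": "Does the solution process personal data?",
}

RULES = [
    (lambda a: a["personal_data"],
     "Establish a process for pseudonymization and anonymization"),
    (lambda a: a["sourcing"] == "bought",
     "Add ethics clauses to supplier contracts"),
]

def recommend(answers):
    """Return the measures whose conditions hold for the given answers."""
    return [measure for condition, measure in RULES if condition(answers)]

measures = recommend({"sourcing": "in-house", "personal_data": True})
```
</p>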
<p>We will also extend the content of our framework further, based on more expert interviews, and
provide more detailed guidance on how to implement the suggestions in a real-world environment.</p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgements</title>
<p>The research is funded by the Baden-Württemberg Stiftung.</p>
      <p>[13] The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (2019) Ethically
Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent
Systems. IEEE.
[14] Vaishnavi, V. &amp; Kuechler, B. (2021) Design Science Research in Information Systems, AIS.
[15] Hevner, A. R., March, S. T., Park, J. &amp; Ram, S. (2004) Design Science in Information Systems</p>
      <p>
        Research. MIS Quarterly, 28(
        <xref ref-type="bibr" rid="ref1">1</xref>
        ), 75-105.
[16] Österle, H., Becker, J., Frank, U., Hess, T., Karagiannis, D., Krcmar, H., Loos, P., Mertens, P.,
Oberweis, A. &amp; Sinz, E. J. (2011) Memorandum on design-oriented information systems research.
      </p>
      <p>
        European Journal of Information Systems, 20(
        <xref ref-type="bibr" rid="ref1">1</xref>
        ), 7-10.
[17] Peffers, K., Tuunanen, T. &amp; Niehaves, B. (2018) Design science research genres: introduction to
the special issue on exemplars and criteria for applicable design science research. European Journal
of Information Systems, 27(
        <xref ref-type="bibr" rid="ref2">2</xref>
        ), 129-139.
[18] Weber, P., Hiller, S. &amp; Heiner, L. (2021) Dual Scientific Research Framework – Generating Real
      </p>
      <p>World Impact and Scientific Progress in Internet of Things Ecosystems, PACIS.
[19] Denzin, N. K. (2017) Sociological Methods: A Sourcebook. Oxon, New York: Routledge.
[20] Yin, R. K. (2018) Case Study Research and Applications: Design and Methods, 6th. Los Angeles,</p>
      <p>
        London, et al.: Sage.
[21] Brendel, A. B. T., Simon; Marrone, Mauricio; Lichtenberg, Sascha; Kolbe, Lutz M. (2020) What
to do for a Literature Review? – A Synthesis of Literature Review Practices, AMCIS.
[22] Brocke, J. v. S., Alexander; Niehaves, Bjoern; Niehaves, Bjorn; Reimer, Kai; Plattfaut, Ralf;
Cleven, Anne (2009) Reconstructing the giant: On the importance of rigour in documenting the
literature search process, ECIS.
[23] Levy, Y. &amp; Ellis, T. J. (2006) A Systems Approach to Conduct an Effective Literature Review in
Support of Information Systems Research. Informing Sci. Int. J. an Emerg. Transdiscipl., 9,
181-212.
[24] Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D.,
Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M.,
Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., McGuinness,
L. A., Stewart, L. A., Thomas, J., Tricco, A. C., Welch, V. A., Whiting, P. &amp; Moher, D. (2021)
The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Systematic
Reviews, 10(
        <xref ref-type="bibr" rid="ref1">1</xref>
        ), 89.
[25] Minsky, M. (1968) Semantic Information Processing. Cambridge, London: MIT Press.
[26] Aggarwal, C. C. (2021) An Introduction to Artificial Intelligence, in Aggarwal, C. C. (ed),
      </p>
      <p>
        Artificial Intelligence: A Textbook. Cham: Springer International Publishing, 1-34.
[27] Choi, R. Y., Coyner, A. S., Kalpathy-Cramer, J., Chiang, M. F. &amp; Campbell, J. P. (2020)
Introduction to Machine Learning, Neural Networks, and Deep Learning. Translational Vision
Science &amp; Technology, 9(
        <xref ref-type="bibr" rid="ref2">2</xref>
        ).
[28] Joshi, A. V. (2020) Introduction to AI and ML, in Joshi, A. V. (ed), Machine Learning and
      </p>
      <p>
        Artificial Intelligence. Cham: Springer International Publishing, 3-7.
[29] Hallensleben, S., Hustedt, C., Fetic, L., Fleischer, T., Grünke, P., Hagendorff, T., Hauer, M.,
Hauschke, A., Heesen, J., Herrmann, M., Hillerbrand, R., Hubig, C., Kaminski, A., Krafft, T., Loh,
W., Otto, P. &amp; Puntschuh, M. (2020) From Principles to Practice: An interdisciplinary framework
to operationalize AI ethics. Gütersloh: Bertelsmann Stiftung.
[30] Hagendorff, T. (2020) The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines,
30(
        <xref ref-type="bibr" rid="ref1">1</xref>
        ), 99-120.
[31] Jobin, A., Ienca, M. &amp; Vayena, E. (2019) The global landscape of AI ethics guidelines. Nature
      </p>
      <p>
        Machine Intelligence, 1(
        <xref ref-type="bibr" rid="ref9">9</xref>
        ), 389-399.
[32] Borenstein, J., Grodzinsky, F. S., Howard, A., Miller, K. W. &amp; Wolf, M. J. (2021) AI Ethics: A
      </p>
      <p>
        Long History and a Recent Burst of Attention. Computer, 54(
        <xref ref-type="bibr" rid="ref1">1</xref>
        ), 96-102.
[33] Almeida, P., Santos, C. &amp; Farias, J. S. (2020) Artificial intelligence regulation: A meta-framework
for formulation and governance, in Association for Information, S. (ed), Proceedings of the 53rd
Hawaii International Conference on System Sciences.
[34] Almeida, P. G. R. d., Santos, C. D. d. &amp; Farias, J. S. (2021) Artificial intelligence regulation: A
framework for governance. Ethics and Information Technology, 23(
        <xref ref-type="bibr" rid="ref3">3</xref>
        ), 505–525.
[35] Ashok, M., Madan, R., Joha, A. &amp; Sivarajah, U. (2022) Ethical framework for Artificial
      </p>
      <p>
        Intelligence and Digital technologies. International Journal of Information Management, 62.
[36] Hickman, E. &amp; Petrin, M. (2021) Trustworthy AI and Corporate Governance: the EU’s ethics
guidelines for trustworthy artificial intelligence from a company law perspective. European
Business Organization Law Review, 22(
        <xref ref-type="bibr" rid="ref4">4</xref>
        ), 593–625.
[37] Ho, C. W. L., Soon, D., Caals, K. &amp; Kapur, J. (2019) Governance of automated image analysis
and artificial intelligence analytics in healthcare. Clinical radiology, 74(
        <xref ref-type="bibr" rid="ref5">5</xref>
        ), 329–337.
[38] Ibáñez, J. C. &amp; Olmeda, M. V. (2021) Operationalising AI ethics: how are companies bridging the
gap between practice and principles? An exploratory study. AI &amp; SOCIETY, 1–25.
[39] Jantunen, M., Halme, E., Vakkuri, V., Kemell, K.-K., Rebekah, R., Mikkonen, T., Nguyen Duc,
A. &amp; Abrahamsson, P. (2021) Building a Maturity Model for Developing Ethically Aligned AI
Systems. Selected Papers of the IRIS (
        <xref ref-type="bibr" rid="ref12">12</xref>
        ).
[40] Larsen, B. C. (2021) A Framework for Understanding AI-Induced Field Change: How AI
Technologies are Legitimized and Institutionalized, in Association for Computing, M. (ed),
Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. ACM Digital
Library. New York, NY, United States: Association for Computing Machinery, 683–694.
[41] Larsson, S. (2020) On the governance of artificial intelligence through ethics guidelines. Asian
      </p>
      <p>
        Journal of Law and Society, 7(
        <xref ref-type="bibr" rid="ref3">3</xref>
        ), 437–451.
[42] Munoko, I., Brown-Liburd, H. L. &amp; Vasarhelyi, M. (2020) The ethical implications of using
artificial intelligence in auditing. Journal of Business Ethics, 167(
        <xref ref-type="bibr" rid="ref2">2</xref>
        ), 209–234.
[43] Orr, W. &amp; Davis, J. L. (2020) Attributions of ethical responsibility by Artificial Intelligence
practitioners. Information, Communication &amp; Society, 23(
        <xref ref-type="bibr" rid="ref5">5</xref>
        ), 719–735.
[44] Reddy, S., Allan, S., Coghlan, S. &amp; Cooper, P. (2020) A governance model for the application of
      </p>
      <p>
        AI in health care. Journal of the American Medical Informatics Association, 27(
        <xref ref-type="bibr" rid="ref3">3</xref>
        ), 491–497.
[45] Seppälä, A., Birkstedt, T. &amp; Mäntymäki, M. (2021) From Ethical AI Principles to Governed AI,
in Association for Information, S. (ed), Proceedings of the 42nd International Conference on
Information Systems (ICIS).
[46] Wang, Y., Xiong, M. &amp; Olya, H. (2020) Toward an understanding of responsible artificial
intelligence practices, Proceedings of the 53rd Hawaii international conference on system sciences,
4962–4971.
[47] Wu, W., Huang, T. &amp; Gong, K. (2020) Ethical principles and governance technology development
of AI in China. Engineering, 6(
        <xref ref-type="bibr" rid="ref3">3</xref>
        ), 302–309.
[48] Baars, H. &amp; Kemper, H.-G. (2021) Entwicklung und Betrieb integrierter BIA-Lösungen, in Baars,
H. &amp; Kemper, H.-G. (eds), Business Intelligence &amp; Analytics – Grundlagen und praktische
Anwendungen: Ansätze der IT-basierten Entscheidungsunterstützung. Wiesbaden: Springer
Fachmedien Wiesbaden, 323-388.
[49] DAMA International (2017) DAMA-DMBOK, 2nd ed. Technics Publications.
[50] Gluchowski, P. (2020) Data Governance. Heidelberg: dpunkt.
[51] ISACA (2018) COBIT 2019 Framework: Introduction &amp; Methodology. Schaumburg.
[52] Weill, P. &amp; Ross, J. (2004) IT Governance: How Top Performers Manage IT Decision Rights for
      </p>
      <p>Superior Results.
[53] DIN/DKE (2020) German Standardization Roadmap on Artificial Intelligence.
[54] Plattform Lernende Systeme (2021) Applications, 2021. Available online:
https://www.plattformlernende-systeme.de/map-on-ai-map.html.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name><surname>Zhang</surname>, <given-names>D.</given-names></string-name>
          ,
          <string-name><surname>Maslej</surname>, <given-names>N.</given-names></string-name>
          ,
          <string-name><surname>Brynjolfsson</surname>, <given-names>E.</given-names></string-name>
          ,
          <string-name><surname>Etchemendy</surname>, <given-names>J.</given-names></string-name>
          ,
          <string-name><surname>Lyons</surname>, <given-names>T.</given-names></string-name>
          ,
          <string-name><surname>Manyika</surname>, <given-names>J.</given-names></string-name>
          ,
          <string-name><surname>Ngo</surname>, <given-names>H.</given-names></string-name>
          ,
          <string-name><surname>Niebles</surname>, <given-names>J. C.</given-names></string-name>
          ,
          <string-name><surname>Sellitto</surname>, <given-names>M.</given-names></string-name>
          ,
          <string-name><surname>Sakhaee</surname>, <given-names>E.</given-names></string-name>
          ,
          <string-name><surname>Shoham</surname>, <given-names>Y.</given-names></string-name>
          ,
          <string-name><surname>Clark</surname>, <given-names>J.</given-names></string-name>
          &amp;
          <string-name><surname>Perrault</surname>, <given-names>R.</given-names></string-name>
          (
          <year>2022</year>
          )
          <article-title>The AI Index 2022 Annual Report</article-title>
          . Stanford.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] Tata Consultancy Services &amp; Bitkom Research
          (
          <year>2020</year>
          )
          <article-title>Deutschland lernt KI</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Rao</surname>
            ,
            <given-names>A. S.</given-names>
          </string-name>
          &amp;
          <string-name>
<surname>Verweij</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2021</year>
          )
          <article-title>Sizing the prize</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Dastin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2018</year>
          )
          <article-title>Amazon scraps secret AI recruiting tool that showed bias against women</article-title>
          ,
          <year>2018</year>
          . Available online:
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Noble</surname>
            ,
            <given-names>S. U.</given-names>
          </string-name>
          (
          <year>2018</year>
          )
          <article-title>Algorithms of Oppression: How Search Engines Reinforce Racism</article-title>
          , New York University Press.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Schwartz</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          (
          <year>2019</year>
          )
          <article-title>In 2016, Microsoft's Racist Chatbot Revealed the Dangers of Online Conversation</article-title>
          . IEEE Spectrum.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Pazzanese</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2020</year>
          )
          <article-title>Great promise but potential for peril</article-title>
          .
          <source>The Harvard Gazette.</source>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8] European Commission
          (
          <year>2019</year>
          )
          <article-title>Ethics guidelines for trustworthy AI</article-title>
          .
        </mixed-citation>
      </ref>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9] European Commission
          (
          <year>2021</year>
          )
          <article-title>Proposal for a Regulation of the European parliament and of the council Laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts</article-title>
          . Brussels: European Commission.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Bird</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fox-Skelly</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jenner</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Larbey</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weitkamp</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Winfield</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2020</year>
          )
          <article-title>The ethics of artificial intelligence: Issues and initiatives</article-title>
          . European Parliamentary Research Service.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Lobschat</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mueller</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eggers</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brandimarte</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Diefenbach</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kroschke</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Wirtz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2021</year>
          )
          <article-title>Corporate digital responsibility</article-title>
          .
          <source>Journal of Business Research</source>
          ,
          <volume>122</volume>
          ,
          <fpage>875</fpage>
          -
          <lpage>888</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Spiekermann</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2015</year>
          )
          <article-title>Ethical IT Innovation: A Value-Based System Design Approach</article-title>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>