<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <article-id pub-id-type="doi">10.1145/3383219.3383220</article-id>
      <title-group>
        <article-title>Time for AI (Ethics) Maturity Model Is Now</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ville Vakkuri</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marianna Jantunen</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Erika Halme</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kai-Kristian Kemell</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anh Nguyen-Duc</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tommi Mikkonen</string-name>
<email>tommi.mikkonen@helsinki.fi</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pekka Abrahamsson</string-name>
<email>pekka.abrahamsson@jyu.fi</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Helsinki, Department of Computer Science</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Jyväskylä, Faculty of Information Technology</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of South-Eastern Norway, School of Business</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <fpage>1</fpage>
      <lpage>10</lpage>
      <abstract>
        <p>There appears to be a common agreement that ethical concerns are of high importance when it comes to systems equipped with some sort of Artificial Intelligence (AI). Demands for ethical AI are declared from all directions. As a response, in recent years, public bodies, governments, and universities have rushed in to provide a set of principles to be considered when AI-based systems are designed and used. We have learned, however, that high-level principles do not turn easily into actionable advice for practitioners. Hence, companies are also publishing their own ethical guidelines to guide their AI development. This paper argues that AI software is still software and needs to be approached from the software development perspective. The software engineering paradigm has introduced maturity model thinking, which provides a roadmap for companies to improve their performance from selected viewpoints, known as key capabilities. We voice a call for action for the development of a maturity model for AI software. We wish to discuss whether the focus should be on AI ethics specifically or, more broadly, on the quality of an AI system, i.e., a maturity model for the development of AI systems.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The ethics of Artificial Intelligence (AI) have been an
emerging topic in the field of AI development
        <xref ref-type="bibr" rid="ref3">(Jobin, Ienca,
and Vayena 2019)</xref>
        , and the ethical consequences of AI
systems have been researched extensively in recent years
        <xref ref-type="bibr" rid="ref12 ref16">(Ryan and Stahl 2020)</xref>
        . Now that AI has become
prevalent in many decision-making processes that can
directly or indirectly impact or alter lives, in fields
such as healthcare (Panesar 2019) and transportation
        <xref ref-type="bibr" rid="ref13">(Sadek
2007)</xref>
        , many have voiced concerns regarding the currently existing
and hypothetical ethical impacts of AI systems. With AI systems
becoming pervasive, there is an increasing need for guidance
in creating AI systems that align with our perception of
ethical behavior.
      </p>
      <p>As Jobin, Ienca, and Vayena (2019) suggest, there is
an apparent agreement that AI should be ethical. Still, the
details of what constitutes “ethical AI” and “which
ethical requirements, technical standards and best practices are
needed for its realization” are up for debate. The ethics of
AI systems appear open for initiatives, or as Greene,
Hoffmann, and Stark (2019) put it, ‘up for grabs’. These
initiatives offer goals and definitions for what is expected of
ethical AI systems. As stated in Ethically Aligned Design: A
Vision for Prioritizing Human Well-being with Autonomous
and Intelligent Systems, First Edition, regardless of the
ethical framework we follow, the systems should be expected
to honor holistic definitions of societal prosperity, not
pursuing one-dimensional goals such as increased
productivity or gross domestic product. Awad et al. (2018) proposed
that we were entering an era where intelligent systems can
be tasked “not only to promote well-being and
minimize harm, but also to distribute the well-being they
create, and the harm they cannot eliminate”. Societal and policy
guidelines should be established to ensure that such systems remain
human-centric, serving humanity’s values and ethical
principles (Ethically Aligned Design: A Vision for Prioritizing
Human Well-being with Autonomous and Intelligent
Systems, First Edition).</p>
      <p>Coming up with policies and enforcing them within an
organization might seem challenging and unrewarding.
Demands for ethical AI are declared from all directions, but
the rewards and consequences of making or not making
ethical initiatives and commitments seem unclear. When
companies and research institutions make “ethically motivated
‘self-commitments’” in the AI industry, efforts to
formulate a binding legal framework are discouraged, and any
demands for AI ethics laws remain relatively vague and
superficial (Hagendorff 2020). As Greene, Hoffmann, and Stark
(2019) suggest, many high-profile companies, organizations,
and communities have signaled their commitment to ethics,
but the resulting articulated value statements “prompt more
questions than answers”. A problem may also emerge in the
situation where – as presented by Hagendorff (2020) – AI
ethics, like ethics in general, “lacks mechanisms to reinforce
its own normative claims”. It might be that the consequences
of not enforcing and applying ethical principles in AI
development are not severe enough to motivate companies to
follow through.</p>
      <p>
        Despite these challenges, many organizations have
reacted to ethical concerns about AI, for example, by
forming ad-hoc expert committees to draft policy documents
        <xref ref-type="bibr" rid="ref3">(Jobin, Ienca, and Vayena 2019)</xref>
        and producing statements
that describe ethical principles, values and other abstract
requirements for AI development and deployment (Mittelstadt
2019). At least 84 public-private AI ethics principles and
values initiatives were identified by Mittelstadt (2019), and
the topic evolves dynamically through new initiatives and
their iterations. Such initiatives can “help focus public
debate on a common set of issues and principles, and raise
awareness among the public, developers and institutions of
the ethical challenges that accompany AI”
        <xref ref-type="bibr" rid="ref21">(Mittelstadt 2019;
Whittaker et al. 2018)</xref>
        .
      </p>
      <p>
        So far, these principles and values used to form various
guidelines for implementing AI ethics have been the primary
tools intended to help companies develop ethical AI systems
(as we discuss in detail in the next section). However, as
already noted in existing literature (Mittelstadt 2019), these
guidelines alone cannot guarantee ethical AI systems, and
seem to suffer from a lack of industry adoption
        <xref ref-type="bibr" rid="ref20">(Vakkuri
et al. 2020)</xref>
        . What, then, should be done instead? In this
paper, we look at the issue from the point of view of Software
Engineering (SE).
      </p>
      <p>One approach to tackling this issue, from the point of view
of SE, would be to focus on methods, practices, and tools for
AI ethics, in order to make these principles and values more
tangible to the developers working on these AI systems.
Some already exist, as discussed by Morley et al. (2019),
although they are mostly technical ones focused specifically
on, e.g., managing some aspects of machine learning.
Another approach, on which we focus here, is the development
of a maturity model. Maturity models, which we discuss
further in the third section, are used in SE to evaluate the
maturity level of organizational processes related to software
development. Could a maturity model for AI ethics help
organizations develop ethical AI?</p>
    </sec>
    <sec id="sec-2">
      <title>AI Ethics Guidelines</title>
      <p>To respond to the concerns and discussions around the
ethical and societal impacts of intelligent technology, guidelines
for ethical AI development have been published in the
recent years by a variety of organizations ranging from
corporations to governmental and research institutions. Still,
there appears to be no single acknowledged standard in
the field; rather, the guidelines often consist of either single
“keyword” principles, such as accountability or transparency
(Ethically Aligned Design: A Vision for Prioritizing Human
Well-being with Autonomous and Intelligent Systems, First
Edition), or descriptive sentences that present the
organization’s approach, such as “We want to develop safe, robust,
and explainable AI products” (Bolle 2020). The guidelines
may serve different purposes for each organization – a
corporation’s motivation for publishing a set of ethical
guidelines can be expected to differ from that of a
research institution.</p>
      <p>
        Tending to the need for standards, several organizations
have stepped in to publish their own guidelines. As phrased
by Fjeld et al. (2020), “seemingly every organization with
a connection to technology policy has authored or endorsed
a set of principles for AI”. As an example of the
aforementioned policy forming committees
        <xref ref-type="bibr" rid="ref3">(Jobin, Ienca, and Vayena
2019)</xref>
        , some major publications from influential institutions,
such as The IEEE Ethically Aligned Design: A Vision for
Prioritizing Human Well-being with Autonomous and
Intelligent Systems, First Edition; and Ethics Guidelines for
Trustworthy AI by the High-Level Expert Group appointed
by the European Commission, have introduced practical
design approaches and suggested standards and principles for
ethical AI development and implementation. Research
institutions are only the tip of the iceberg, however; a variety
of other institutions, such as governments and corporations,
have stepped in to publish their own AI ethics guidelines, as
discovered by, for example, Jobin, Ienca, and Vayena (2019).
Even the Vatican has published its initiative, teaming up
with IBM and Microsoft to draft a call for AI ethics
        <xref ref-type="bibr" rid="ref15">(Stotler
2020)</xref>
        .
      </p>
      <p>
        While not legally binding, the effort invested in such
guidelines by multiple stakeholders in the field is
noteworthy and influential
        <xref ref-type="bibr" rid="ref3">(Jobin, Ienca, and Vayena 2019)</xref>
        ,
contributing to the discussion of AI ethics. Guidelines can be
seen as “part of a broader debate over how, where, and why
these technologies are integrated into political, economic,
and social structures” (Greene, Hoffmann, and Stark 2019)
(p. 2122). We can witness how guidelines have contributed
positively to the development of AI ethics discussion by
observing the number of organizations that published their sets
of guidelines. Based on the number of organizations that
use the common vocabulary of “keyworded” guidelines,
discussing transparency, fairness, and other such principles, it
seems as though guidelines may have developed into a type
of “common language” for AI ethics discussion; a familiar
format that is easy to adopt and quick to communicate.
      </p>
      <p>
        Researchers have conducted reviews on AI ethics
guidelines, considering their implications (e.g.
        <xref ref-type="bibr" rid="ref12">Ryan and Stahl
(2020)</xref>
        ) and looking for unanimity among them (e.g. Jobin,
Ienca, and Vayena (2019); Hagendorff (2020)). In the light
of these reviews, certain prevalent guidelines have emerged.
For example, Jobin, Ienca, and Vayena (2019) identified
a “global convergence emerging around five ethical
principles”, namely transparency, justice and fairness,
nonmaleficence, responsibility and privacy.
      </p>
      <p>However, guidelines alone do not cater to the whole
spectrum of AI ethics challenges. Firstly, although some
similarities emerge between sources and studies, there is no
guarantee of unanimity of their application; even if each
organization were to adhere to the exact same set of guidelines, their
practical application is not guaranteed to be synchronized.
There may be questions related to, for example,
interpretation, emphasis, and level of commitment that organizations
need to answer for themselves.</p>
      <p>In particular, when considering organizations employing
guidelines in their AI product development, the guidelines
often provide us with the answer to the question “what” is
done, but not “how”. This concept seems supported by
Morley et al. (2020) when they discuss the same effect on the
mainstream ethical debate on AI. Another problem
following from reliance on guidelines is that their impact on
human decision-making is not guaranteed, and they may remain
ineffective (Hagendorff 2020).</p>
      <p>
        As reported by
        <xref ref-type="bibr" rid="ref19">Vakkuri et al. (2019)</xref>
        , there appears to
be a gap between research and practice in the field of AI
ethics when it comes to the procedures of companies, as the
academic discussions have not carried over to industry;
developers consider ethics important in principle, but perceive
them to be distant from the issues they face in their work.
In a survey of industry practices covering 211 companies,
106 of which develop AI products, it was found that
companies have mixed levels of maturity in implementing AI ethics
        <xref ref-type="bibr" rid="ref20">(Vakkuri et al. 2020)</xref>
        . In terms of guidelines, the survey
discovered that the various AI ethics guidelines had not, in fact,
had a notable effect on industry practices, confirming the
suspicions of Mittelstadt (2019).
      </p>
      <p>
        The high variety in both industry practices and AI ethics
guidelines may make it difficult to assess AI systems
development, especially aspects such as trustworthiness or other
ethics-related topics. To answer the need for standardized
evaluation practices, we propose a look into maturity models
and their utility in evaluating software development
practices. Maturity models or maturity practices for AI with
different emphases have already been introduced, such as the
AI-RFX Procurement Framework by The Institute for
Ethical AI and Machine Learning
        <xref ref-type="bibr" rid="ref12 ref15 ref16 ref18">(The Institute for Ethical AI
and Machine Learning 2020)</xref>
        and The AI Maturity
Framework
        <xref ref-type="bibr" rid="ref11">(Ramakrishnan et al. 2020)</xref>
        . Next, we discuss maturity
models in general, before discussing them further in the
specific context of AI and AI ethics in the fourth section.
      </p>
    </sec>
    <sec id="sec-3">
      <title>What are Maturity Models?</title>
      <p>
        Maturity models are intended to help companies appraise
their process maturity and develop it. They serve as points
of reference for different stages of maturity in an area. In the
context of SE, they are intended to help organizations move
from ad hoc processes to mature and disciplined software
processes
        <xref ref-type="bibr" rid="ref2">(Herbsleb et al. 1997)</xref>
        . Since the Software
Engineering Institute launched the Capability Maturity Model
(CMM) almost thirty years ago (Paulk et al. 1993),
hundreds of maturity models have been proposed by researchers
and practitioners across multiple domains, providing
frameworks to assess the current effectiveness of an organization
and to figure out which capabilities it needs to acquire
next in order to improve its performance.
      </p>
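      <p>To make the staged idea concrete, the sketch below encodes CMMI’s five maturity levels in a minimal data structure. The level names follow CMMI, but the helper function is our own hypothetical illustration of how a maturity model points an organization to its next improvement target; it is not part of any CMMI artifact.</p>
      <preformat>
from enum import IntEnum

class MaturityLevel(IntEnum):
    """Staged maturity levels; the names follow CMMI's five stages."""
    INITIAL = 1                 # ad hoc, unpredictable processes
    MANAGED = 2                 # processes planned and tracked per project
    DEFINED = 3                 # organization-wide standard processes
    QUANTITATIVELY_MANAGED = 4  # processes measured and controlled
    OPTIMIZING = 5              # continuous process improvement

def next_target(current: MaturityLevel) -> MaturityLevel:
    """Hypothetical helper: a maturity model names the next level to pursue."""
    if current is MaturityLevel.OPTIMIZING:
        return current
    return MaturityLevel(current + 1)

print(next_target(MaturityLevel.INITIAL).name)  # MANAGED
      </preformat>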
      <p>
        Though maturity models in SE are numerous, the Scaled
Agile Framework (SAFe) and Capability Maturity Model
Integration (CMMI) are two typical high-profile
examples. SAFe is a
mixture of different software development practices and
focuses mainly on scaling agile development in larger
organizations. CMMI, on the other hand, focuses on
improvements related to software development processes. In
general, Software Process Improvement tools are rooted in
Shewhart-Deming’s plan-do-check-act (PDCA) paradigm,
where CMMI, for example, represents a prescriptive
framework in which the improvements are based on best practices
        <xref ref-type="bibr" rid="ref4 ref9">(Pernsta˚l et al. 2019)</xref>
        .
      </p>
      <p>
        Maturity models have been studied in academic research
as well. Studies have focused on both their benefits and
the potential drawbacks. For example, a past version of the
CMMI has been criticized for creating processes too heavy
for organizations to handle
        <xref ref-type="bibr" rid="ref14 ref5">(Sony 2019; Meyer 2013)</xref>
        ,
and in general being resource-intensive to adopt for smaller
organizations (O’Connor and Coleman 2009). SAFe, on the
other hand, has been criticized for adding bureaucracy to
Agile (Ebert and Paasivaara 2017), leaning towards the
waterfall approach.
      </p>
      <p>Nonetheless, these models are widely used in the
industry, either independently or in conjunction with other
frameworks, tools, or methods. SAFe, for example, has been
adopted by 70 of the Forbes 100 companies. CMMI has even
been adopted in fields other than software development.
Academic studies aside, companies seem to have taken a
liking to maturity models in the context of software.</p>
      <p>Indeed, this apparent popularity of these models in
the field has, in part, motivated us to write this early
proposal for maturity models in the context of AI ethics as well.
In an area where we struggle with a gap between research
and practice, we argue that looking at frameworks,
models, and other tools that are actively used in the field
is a good starting point for further steps. Thus far, guidelines
have been used to make AI ethics principles more tangible,
but further steps are still needed, and a maturity model could
be one such step.</p>
    </sec>
    <sec id="sec-4">
      <title>What about an AI Ethics Maturity model or an AI Maturity Model?</title>
      <p>Despite the criticism discussed above, maturity
models are widely used in the industry.
Conversely, the AI ethics guidelines that have been somewhat
well-received in academia seem not to have attracted much
interest in the field. We thus propose that an AI
development maturity model might take us closer to
standardizable and ethically sound AI development practices.</p>
      <p>AI systems are particularly software-intensive systems.
Only a small fraction of a typical industrial AI system is
composed of Machine Learning (ML) or AI code. The rest
consists of computing infrastructure, data, process
management tools, etc. However, considering the overall analytic
capability of AI systems, we need to have code for the
ML model itself, visualization of the ML model outcome,
data management, and integration of ML into other software
modules. This code is hardly trivial and requires proper
engineering principles and practices (Carleton et al. 2020). This
lends support to the idea of an AI maturity model.</p>
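      <p>As a deliberately simplified sketch of this composition (every function below is a hypothetical stand-in we invented for illustration, not code from any cited system), note how the ML model call is a single step among the data management, data quality, and reporting code that surrounds it:</p>
      <preformat>
# Hypothetical, minimal pipeline: the ML model is one line among
# the data and reporting code that surrounds it.

def validate_schema(records):
    # data management: keep only records that carry the expected field
    return [r for r in records if "value" in r]

def impute_missing(records):
    # data quality: replace missing values with a default
    return [{**r, "value": r["value"] if r["value"] is not None else 0.0}
            for r in records]

def predict(record):
    # the actual "AI code": a stub standing in for a trained model
    return record["value"] > 0.5

def report(predictions):
    # monitoring / visualization stand-in
    print(f"positive rate: {sum(predictions) / len(predictions):.2f}")

def run_pipeline(raw):
    records = impute_missing(validate_schema(raw))
    report([predict(r) for r in records])

run_pipeline([{"value": 0.9}, {"value": None}, {"other": 1}])  # positive rate: 0.50
      </preformat>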
      <p>
        Seeing as there are already numerous software maturity
models, a question worth asking is whether they would
already solve this issue. That is, do we really need an AI ethics
maturity model? In comparison to traditional non-AI
software, AI systems are sensitive to some special quality
attributes, such as technical debt, due to various AI-specific
issues. While traditional software is deterministic, with a
pre-defined test oracle, AI/ML models are probabilistic.
ML models learn from data, and model quality attributes,
such as accuracy, change throughout the process of
experimenting. Moreover, ethical requirements, or attributes such
as fairness, trustworthiness, transparency, and explainability
        <xref ref-type="bibr" rid="ref3">(Jobin, Ienca, and Vayena 2019)</xref>
        , have unique meanings in
the context of AI, and they are not sufficiently addressed in
existing software models. Furthermore, data is a central
component of the engineering process, bringing new problems
such as dealing with missing values, data granularity, the
design and management of databases and data lakes, and the
quality of the training data in comparison to real-world data.
These differences complicate applying traditional software
models to AI.
      </p>
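      <p>The contrast in testing can be illustrated with a small, hypothetical sketch: a traditional function is checked against an exact oracle, whereas an ML-style component is checked against an accuracy threshold over a sample. The 10% error rate and the 0.85 threshold below are invented values, chosen only to make the difference visible:</p>
      <preformat>
import random

def add(a, b):
    return a + b

def test_traditional():
    # traditional software: a pre-defined oracle gives the one right answer
    assert add(2, 3) == 5

def noisy_classifier(x):
    # stand-in for an ML model: right ~90% of the time, wrong otherwise
    correct = x > 0.5
    return correct if random.random() > 0.1 else not correct

def test_ml_component():
    # probabilistic software: assert a quality attribute (accuracy)
    # over a sample, not an exact output
    samples = [random.random() for _ in range(1000)]
    accuracy = sum(noisy_classifier(x) == (x > 0.5) for x in samples) / 1000
    assert accuracy >= 0.85  # threshold, not an oracle

test_traditional()
test_ml_component()
      </preformat>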
      <p>
        Several AI-specific models have been published, for
example, a Microsoft nine-step pipeline (Amershi et al. 2019),
a five-step “stairway to heaven” AI model
        <xref ref-type="bibr" rid="ref4">(Lwakatare et al.
2019)</xref>
        , and a maturity framework for AI processes
        <xref ref-type="bibr" rid="ref1">(Akkiraju et al. 2020)</xref>
        . However, they are not particularly
focused on the quality or ethical aspects of developing AI
systems. Besides, while these models reflect processes in
particular organizational contexts, there is currently no general
model that could be adopted in SMEs and startup
companies (Nguyen-Duc et al. 2020). Hence, a generic AI (ethics)
maturity model is still needed to benchmark and promote
the proper engineering practices and processes to plan, to
implement, and to integrate ethical requirements. Moreover,
this model should facilitate standardizing and disseminating
best practices to developers, scientists and organizations.
      </p>
      <p>In devising a maturity model for this area, one important
question is whether such a model should be an AI Ethics
Maturity Model or simply an AI Maturity Model. Both
approaches, we argue, would have their own potential benefits
and drawbacks.</p>
      <p>First, an AI Ethics Maturity Model. Being a field-specific
model, an AI ethics maturity model would address the
numerous AI ethics needs discussed in academic literature
and public discussion alike. Such a maturity model could
be devised so that it would directly complement the
ongoing principle and guideline discussion, and help bring it
into practice. Moreover, focusing on ethics over SE would
make it potentially suitable for any organization regardless
of their chosen development approach, although one should
still keep in mind its suitability for iterative development
approaches.</p>
      <p>On the other hand, were the model too focused on AI
ethics issues or design-level issues, the practical SE side
could be lacking. This could result in a situation where the
maturity model would still face the issue of being
impractical, much like the existing guidelines. In general, the model
might risk being detached from industry practice.
Companies should be closely involved when devising such a model
in order to mitigate these potential drawbacks.</p>
      <p>Second, an AI Maturity Model: an approach where the
focus is not on AI ethics as such. An AI Maturity Model
would arguably be more technical, speaking the language of
the developers. This would likely make the
maturity model more attractive from the point of view of industry.
AI Ethics could (or would) still be present, but be embedded
into the more practice-focused model as simply one aspect
of the model. Moreover, such a model could advance the AI
maturity discussion as a whole and not only from the point
of view of ethics.</p>
      <p>On the other hand, this approach would force us to
examine in more detail whether the existing AI maturity
models work, and if not, why not? If they do not, what
approach should the new model take to tackle the existing
issues? Moreover, how would this model relate to existing
software maturity models, and why would those not be
applicable in the AI context? Additionally, how would the
development effort be communicated to those already involved
with existing software maturity models? Would the model be
competing with existing ones or be a complementary one to
be used in conjunction?</p>
      <p>Whichever approach is chosen, this would be a large
endeavor, as we discuss further in the next section. The
discussion on AI ethics has come a long way in the past decade.
Though this discussion is still on-going in terms of
principles, the time to act is now when it comes to bringing this
discussion into practice. Whether or not AI ethics is a part of
AI development, AI systems will become increasingly
common, and thus it is important to already make further efforts
at bridging the gap. Choices have to be made on which AI
ethics principles and issues to focus on in such models.</p>
    </sec>
    <sec id="sec-5">
      <title>Call for Action</title>
      <p>In a nutshell, we propose the development of an AI (ethics)
maturity model to cover the entire sphere of technical and
ethical quality requirements. Such a maturity model would
help the field move from ad hoc implementation of ethics
(or total negligence) to a more mature process level, and
ultimately, if possible, automation (Figure 1). Furthermore,
we argue that this model should not be an effort for a single
researcher or research group, but a multidisciplinary project
that builds on a combination of theoretical models and
empirical results.</p>
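      <p>To illustrate the automation end of that progression, the sketch below shows what an automated ethics check in a build pipeline might look like. Demographic parity is a standard fairness measure, but the gate function, its threshold, and the group labels are our hypothetical examples rather than part of any published model:</p>
      <preformat>
def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rates between groups "a" and "b"."""
    def rate(g):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(preds) / len(preds)
    return abs(rate("a") - rate("b"))

def ethics_gate(predictions, groups, max_gap=0.1):
    """Hypothetical pipeline step: fail the build when the gap is too wide."""
    gap = demographic_parity_gap(predictions, groups)
    if gap > max_gap:
        raise SystemExit(f"fairness gate failed: gap {gap:.2f} exceeds {max_gap}")
    print(f"fairness gate passed: gap {gap:.2f}")

ethics_gate([1, 0, 1, 1, 0, 1], ["a", "a", "a", "b", "b", "b"])  # gap 0.00
      </preformat>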
      <p>The first step in creating an AI (ethics) maturity model
would be the formulation of requirements for different
aspects of AI (ethics) maturity. We may require different types
of commonly acknowledged agreements on issues that AI
maturity entails. We also need to refine a topic still shrouded
in vagueness to some extent, AI ethics, into solid,
universally applicable requirements.</p>
      <p>In this paper, we have introduced numerous challenges related
to the variety of practices and motivations that stakeholders
involved in AI systems development face, and this
nonconformity can pose challenges in making an AI maturity model
applicable universally, as much as that can be realistically
striven for. In order to improve the universal applicability of
a maturity model, we should look into ways to form
agreements, preferably ones accepted as universally as
possible, to avoid unnecessarily limiting the model’s use.</p>
      <p>
        In addition to the numerous AI ethics guidelines and
the principles presented in them
        <xref ref-type="bibr" rid="ref3">(Jobin, Ienca, and Vayena
2019)</xref>
        , we should also consider looking into standards as a
starting point for agreements in this context. As suggested
by Cihon (2019), AI presents “novel policy challenges” that
require a globally coordinated response, and standards
developed by international standards bodies can support the
governance of AI development. Such widely acknowledged
agreements could be harnessed to build unity and alignment
in defining maturity in AI systems development. AI-related
standards might answer the problem of vagueness and
disagreement when setting up requirements for what ethical AI
maturity should look like.
      </p>
      <p>
        Several organizations have already published, discussed,
or suggested standards, so the work is underway and
there are existing standards to utilize. Standards to
consider regarding ethical AI might include, for example:
• ISO/IEC JTC 1/SC 42 [1], the standardization committee
for Artificial Intelligence, created in 2017, an ongoing effort
that includes some published ISO standards and several
more under development, and
• the standards under IEEE P7000 [2], the Standard for Model
Process for Addressing Ethical Concerns During System
Design, which includes several standards relevant to AI
systems, for example IEEE P7001, Standards for
Transparency of Autonomous Systems, and IEEE P7006,
Standard for Personal Data Artificial Intelligence (AI) Agent.
      </p>
      <p>
        Requirements set for AI systems by internationally
accepted standards, together with guidelines that have reached
a consensus across different domains of business and
research, can perhaps be used as building blocks in forming
an ethically aligned AI maturity model. The numerous AI
ethics guidelines should also help in this regard. While the
existing AI ethics guidelines, as guidelines, have faced the
issue of not being widely adopted in the field
        <xref ref-type="bibr" rid="ref18">(Vakkuri
et al. 2020)</xref>
        , the principles in them are still relevant.
Incorporating those principles into a more practical form – a
maturity model being one – is what this call for action is ultimately
about when it comes to AI ethics.
      </p>
      <p>
        In distilling the discussion on AI ethics principles, IEEE’s
Ethically Aligned Design (Ethically Aligned Design: A
Vision for Prioritizing Human Well-being with Autonomous
and Intelligent Systems, First Edition) presents an
extensive set of guidelines. The EU has also produced a
report that has tried to make these principles more actionable
through checklists of questions to be asked during
development (Ethics Guidelines for Trustworthy AI). The ECCOLA
method is another attempt at making these principles
more actionable
        <xref ref-type="bibr" rid="ref20">(Vakkuri, Kemell, and Abrahamsson 2020)</xref>
        .
      </p>
      <p>While the discussion on these AI ethics principles is
ongoing, decisions are needed to fight vagueness and to incite
action. In this regard, we could bring up the Agile
Manifesto [3]. A product of its time, it was a declaration of what
Agile software development should be like, written by a small
group of people. However, it has helped define what Agile
software development is and has encouraged organizations
to discuss maturity in the context of Agile.</p>
      <p>[1] https://www.iso.org/committee/6794475.html
[2] https://standards.ieee.org/initiatives/artificial-intelligence-systems/standards.html
[3] http://agilemanifesto.org/</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>Akkiraju, R.; Sinha, V.; Xu, A.; Mahmud, J.; Gundecha, P.; Liu, Z.; Liu, X.; and Schumacher, J. 2020. Characterizing Machine Learning Processes: A Maturity Framework.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>Amershi, S.; Begel, A.; Bird, C.; DeLine, R.; Gall, H.; Kamar, E.; Nagappan, N.; Nushi, B.; and Zimmermann, T. 2019. Software engineering for machine learning: A case study. In 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), 291-300. IEEE.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>Awad, E.; Dsouza, S.; Kim, R.; Schulz, J.; Henrich, J.; Shariff, A.; Bonnefon, J.-F.; and Rahwan, I. 2018. The moral machine experiment. Nature 563(7729): 59-64.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>Bolle, M. 2020. Code of ethics for AI. Technical report, Robert Bosch GmbH. URL https://www.bosch.com/stories/ethical-guidelines-for-artificial-intelligence/.</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>Carleton, A. D.; Harper, E.; Menzies, T.; Xie, T.; Eldh, S.; and Lyu, M. R. 2020. The AI Effect: Working at the Intersection of AI and SE. IEEE Software 37(4): 26-35.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>Cihon, P. 2019. Standards for AI governance: international standards to enable global coordination in AI research &amp; development. Future of Humanity Institute, University of Oxford.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>Ebert, C.; and Paasivaara, M. 2017. Scaling agile. IEEE Software 34(6): 98-103.</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition. 2019. URL https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html.</mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>Ethics Guidelines for Trustworthy AI. 2019. URL https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.</mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>Fjeld, J.; Achten, N.; Hilligoss, H.; Nagy, A.; and Srikumar, M. 2020. Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication 2020-1.</mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>Greene, D.; Hoffmann, A. L.; and Stark, L. 2019. Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences.</mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>Hagendorff, T. 2020. The ethics of AI ethics: An evaluation of guidelines. Minds and Machines 1-22.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>Herbsleb, J.; Zubrow, D.; Goldenson, D.; Hayes, W.; and Paulk, M. 1997. Software quality and the capability maturity model. Communications of the ACM 40(6): 30-40.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>Jobin, A.; Ienca, M.; and Vayena, E. 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1(9): 389-399.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>Lwakatare, L. E.; Raj, A.; Bosch, J.; Olsson, H. H.; and Crnkovic, I. 2019. A Taxonomy of Software Engineering Challenges for Machine Learning Systems: An Empirical Investigation. In Kruchten, P.; Fraser, S.; and Coallier, F., eds., Agile Processes in Software Engineering and Extreme Programming, 227-243. Springer International Publishing.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>Meyer, B. 2013. What is wrong with CMMI. URL https://bertrandmeyer.com/2013/05/12/what-is-wrong-with-cmmi/.</mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>Mittelstadt, B. 2019. Principles alone cannot guarantee ethical AI. Nature Machine Intelligence 1-7.</mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>Morley, J.; Floridi, L.; Kinsey, L.; and Elhalal, A. 2019. From What to How. An Overview of AI Ethics Tools, Methods and Research to Translate Principles into Practices. arXiv preprint arXiv:1905.06876.</mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>Morley, J.; Floridi, L.; Kinsey, L.; and Elhalal, A. 2020. From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics 26(4): 2141-2168.</mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>O'Connor, R.; and Coleman, G. 2009. Ignoring "best practice": Why Irish software SMEs are rejecting CMMI and ISO 9000. Australasian Journal of Information Systems 16. doi:10.3127/ajis.v16i1.557.</mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>Panesar, A. 2019. Machine Learning and AI for Healthcare. Springer.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>Paulk, M. C.; Curtis, B.; Chrissis, M. B.; and Weber, C. V. 1993. Capability Maturity Model for Software (Version 1.1). Technical Report CMU/SEI-93-TR-024, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA. URL http://resources.sei.cmu.edu/library/asset-view.cfm?AssetID=11955.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>Pernstål, J.; Feldt, R.; Gorschek, T.; and Florén, D. 2019. FLEX-RCA: a lean-based method for root cause analysis in software process improvement. Software Quality Journal 27(1): 389-428.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>Ramakrishnan, K.; Salveson, C.; Abuhamad, G.; Chantry, C.; Diamond, S.-P.; Donelson, P.; Ebert, L.; Koleilat, W.; Marble, A.; Ok, F.; Ryan, M.; Sobolewski, C.; Suen, P.; Truong, S.; and Zurof, R. 2020. The AI Maturity Framework. Technical report, Element AI.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>Ryan, M.; and Stahl, B. C. 2020. Artificial intelligence ethics guidelines for developers and users: clarifying their content and normative implications. Journal of Information, Communication and Ethics in Society.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>Sadek, A. W. 2007. Artificial intelligence applications in transportation. Transportation Research Circular 1-7.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>Sony, M. 2019. Implementing sustainable operational excellence in organizations: an integrative viewpoint. Production &amp; Manufacturing Research 7(1): 67-87.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>Stotler, L. 2020. The Vatican Teams with Microsoft and IBM to Call for AI Ethics. URL https://www.futureofworknews.com/topics/futureofwork/articles/444670-vatican-teams-with-microsoft-ibm-call-ai-ethics.htm.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>The Institute for Ethical AI and Machine Learning. 2020. AI-RFX Procurement Framework v1.0. URL https://ethical.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>Vakkuri, V.; Kemell, K.-K.; Kultanen, J.; and Abrahamsson, P. 2020. The Current State of Industrial Practice in Artificial Intelligence Ethics. IEEE Software 37(4): 50-57.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>Vakkuri, V.; Kemell, K.-K.; Kultanen, J.; Siponen, M. T.; and Abrahamsson, P. 2019. Ethically Aligned Design of Autonomous Systems: Industry viewpoint and an empirical study. arXiv preprint arXiv:1906.07946.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>Vakkuri, V.; Kemell, K.-K.; and Abrahamsson, P. 2020. ECCOLA: a Method for Implementing Ethically Aligned AI Systems. In Proceedings of the 46th Euromicro Conference on Software Engineering and Advanced Applications (SEAA 2020), 195-204. IEEE. doi:10.1109/seaa51224.2020.00043.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>Whittaker, M.; Crawford, K.; Dobbe, R.; Fried, G.; Kaziunas, E.; Mathur, V.; West, S. M.; Richardson, R.; Schultz, J.; and Schwartz, O. 2018. AI now report 2018. AI Now Institute at New York University, New York.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>