<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>AI Ethics in Industry: A Research Framework</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Jyväskylä</institution>
          ,
          <addr-line>PO Box 35, FI-40014 Jyväskylä</addr-line>
          ,
          <country country="FI">Finland</country>
        </aff>
      </contrib-group>
      <fpage>49</fpage>
      <lpage>60</lpage>
      <abstract>
        <p>Artificial Intelligence (AI) systems exert a growing influence on our society. As they become more ubiquitous, their potential negative impacts also become evident through various real-world incidents. Following such early incidents, academic and public discussion on AI ethics has highlighted the need for implementing ethics in AI system development. However, little currently exists in the way of frameworks for understanding the practical implementation of AI ethics. In this paper, we discuss a research framework for implementing AI ethics in industrial settings. The framework presents a starting point for empirical studies into AI ethics but is still being developed further based on its practical utilization.</p>
      </abstract>
      <kwd-group>
        <kwd>Artificial intelligence</kwd>
        <kwd>AI ethics</kwd>
        <kwd>AI development</kwd>
        <kwd>Responsibility</kwd>
        <kwd>Accountability</kwd>
        <kwd>Transparency</kwd>
        <kwd>Research framework</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Artificial Intelligence (AI) and Autonomous Systems (AS) have become
increasingly prevalent in software development endeavors, changing the role of ethics in
software development. One key difference between conventional software systems and AI
systems is that the notion of active users is questionable in the context of AI systems.
More often than not, individuals are simply objects that AI systems either
perform actions upon or use for data collection purposes. The users of AI
systems, on the other hand, are typically organizations rather than individuals. This is problematic in
terms of consent, not least because individuals may not even be aware of being used for data
collection purposes by an AI.</p>
      <p>
        To this end, existing studies have argued that developing AI/AS is a
multi-disciplinary endeavor rather than a simple software engineering one
        <xref ref-type="bibr" rid="ref6">(Charisi et al. 2017)</xref>
        .
Developers of these systems should be aware of the ethical issues involved in order
to mitigate their potential negative impacts. While academic discussion
on AI ethics has been active in recent years, various public
voices have also expressed concern over AI/AS following recent real-world incidents
(e.g. in relation to unfair systems
        <xref ref-type="bibr" rid="ref12">(Flores, Bechtel &amp; Lowenkamp 2016)</xref>
        ).
      </p>
      <p>
        However, despite the increasing activity in the area of AI ethics, there is currently a
gap between research and practice. Few empirical studies on the topic exist, and the
state of practice remains largely unknown. The authors of the IEEE Ethically Aligned Design
guidelines have themselves suggested that the guidelines have not been widely adopted by practitioners.
Additionally, in a past study, we have presented preliminary results supporting the notion of a
gap in the area
        <xref ref-type="bibr" rid="ref29 ref30">(Vakkuri, Kemell, Kultanen, Siponen, &amp; Abrahamsson 2019b)</xref>
        . Other
past studies have shown that developers are not well-informed on ethics in general
(McNamara, Smith &amp; Murphy-Hill 2018). This gap points towards a need for tooling
and methods in the area, as well as a need for further empirical studies on the topic.
      </p>
      <p>To provide a starting point for bridging the gap between research and practice in
terms of empirical research, we present a framework for AI ethics in practice. The
framework is built around extant conceptual research in the area of AI ethics and is intended
to serve as a basis for empirical studies into AI ethics. The framework has been
utilized in practice to collect empirical data, and based on this utilization we discuss the
framework in this paper.</p>
      <p>The rest of this paper is organized as follows. In Section 2, we discuss the theoretical
background of the study by going over existing research in the area. Then, in Section 3,
we present the research framework discussed in this paper. In Section 4, we go over the
results of an empirical study in which the framework was utilized. In Section 5, we
discuss the framework and its implications. Section 6 concludes the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>Background: The Current State of AI Ethics</title>
      <p>
        The academic discussion on AI ethics has thus far largely focused on defining the
area through central constructs and principles. Thus far, the focus has been on four main
principles for AI ethics: transparency
        <xref ref-type="bibr" rid="ref26 ref27 ref7">(Dignum, 2017; The IEEE Global Initiative on
Ethics of Autonomous and Intelligent Systems 2019; Turilli &amp; Floridi 2009)</xref>
        ,
accountability
        <xref ref-type="bibr" rid="ref26 ref7">(Dignum 2017; The IEEE Global Initiative on Ethics of Autonomous and
Intelligent Systems 2019)</xref>
        , responsibility
        <xref ref-type="bibr" rid="ref7">(Dignum 2017)</xref>
        , and fairness (e.g.
        <xref ref-type="bibr" rid="ref12">(Flores et al.
2016)</xref>
        ). However, not all four of these values are universally agreed to form the core of
AI ethics (e.g.
        <xref ref-type="bibr" rid="ref22">(Morley, Floridi, Kinsey &amp; Elhalal 2019)</xref>
        ) and the effectiveness of using
values or principles to approach AI ethics has itself been criticized
(Mittelstadt 2019).
      </p>
      <p>
        Various real-world incidents out on the field (e.g.,
        <xref ref-type="bibr" rid="ref3">(Reuters 2019)</xref>
        ) have recently
begun to spark public discussion on AI ethics. This has led to governments,
standardization institutions, and practitioner organizations reacting by producing their own
demands and guidelines for incorporating ethics into AI development, with many guidelines
and regulations in the works. Countries such as France
        <xref ref-type="bibr" rid="ref31">(Villani et al., 2018)</xref>
        , Germany
        <xref ref-type="bibr" rid="ref10 ref9">(Ethics commission's complete report on automated and connected driving 2017)</xref>
        and
Finland
        <xref ref-type="bibr" rid="ref10 ref9">(Finland’s age of artificial intelligence report 2017)</xref>
        have emphasized the role
of ethics in AI/AS. On an international level, the EU began to draft its own AI ethics
guidelines which were presented in April 2019
        <xref ref-type="bibr" rid="ref2">(AI HLEG 2019)</xref>
        . Moreover, the IEEE has founded its
P7000™ Standards Working Groups, and ISO has founded its own standardization
subcommittee (ISO/IEC JTC 1/SC 42 Artificial intelligence). Finally, larger practitioner
organizations have also presented their own guidelines concerning ethics in AI (e.g., Google
guidelines
        <xref ref-type="bibr" rid="ref23">(Pichai 2018)</xref>
        , Intel’s recommendations for public policy principles on AI
        <xref ref-type="bibr" rid="ref24">(Rao 2017)</xref>
        , and Microsoft’s guidelines for conversational bots
        <xref ref-type="bibr" rid="ref21">(Microsoft, 2018)</xref>
        ).
      </p>
      <p>
        Attempts to bring this on-going academic discussion out on the field have been
primarily made in the form of guidelines and principles lacking practices to implement
them
        <xref ref-type="bibr" rid="ref22">(Morley et al. 2019)</xref>
        . Out of these guidelines, perhaps the most prominent ones up
until now have been the IEEE guidelines for Ethically Aligned Design
        <xref ref-type="bibr" rid="ref26">(The IEEE
Global Initiative on Ethics of Autonomous and Intelligent Systems 2019)</xref>
        , born from
the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems alongside
its IEEE P7000™ Standards Working Groups, which were branded under the concept
of EAD
        <xref ref-type="bibr" rid="ref26">(The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems,
2019)</xref>
        .
      </p>
      <p>
        These guidelines, however, are unlikely to see large-scale industry adoption based
on what we already know about ethical guidelines in IT. In their study on the effects of
the ACM ethical guidelines
        <xref ref-type="bibr" rid="ref15">(Gotterbarn et al. 2018)</xref>
        , McNamara et al. (2018)
discovered that the guidelines had had little impact on developer behavior. The IEEE EAD
        <xref ref-type="bibr" rid="ref26">(The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019)</xref>
        guidelines already suggest that this is likely to be the case in AI ethics as well, although
there currently exists no empirical data to confirm this assumption.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Research Model</title>
      <p>
        Academic literature has discussed various principles as a way to address ethics as a
part of the development of AI and AI-based systems. Currently, four constructs are
considered central in AI ethics: Transparency
        <xref ref-type="bibr" rid="ref26 ref27 ref7">(Dignum, 2017; The IEEE Global
Initiative on Ethics of Autonomous and Intelligent Systems., 2019; Turilli &amp; Floridi,
2009)</xref>
        , Accountability
        <xref ref-type="bibr" rid="ref27 ref7">(Dignum, 2017; Turilli &amp; Floridi, 2009)</xref>
        , Responsibility
        <xref ref-type="bibr" rid="ref7">(Dignum, 2017)</xref>
        , and Fairness e.g.
        <xref ref-type="bibr" rid="ref12">(Flores et al., 2016)</xref>
        . Perhaps notably, a recent EU report
        <xref ref-type="bibr" rid="ref2 ref26">(High-Level Expert Group on Artificial Intelligence, 2019)</xref>
        also discussed
Trustworthiness as a key construct, a value that, according to the report, all systems should aim for.
Morley et al. (2019) presented an entirely new set of more abstract constructs intended
to summarize the existing discussion and the plethora of principles discussed so far in
addition to the ones mentioned here. They presented five constructs:
Beneficence, Non-maleficence, Autonomy, Justice, and Explicability.
      </p>
      <p>
        To categorize the field of AI ethics, three categories have been presented: (1) Ethics
by Design (integrating ethics into system behavior); (2) Ethics in Design (software
development methods etc. supporting implementation of ethics); and (3) Ethics for Design
(standards etc. that ensure the integrity of developers and users)
        <xref ref-type="bibr" rid="ref8">(Dignum, 2018)</xref>
        . In
this model, we focus on the latter two categories.
      </p>
      <p>Out of the aforementioned four principles that have been proposed to form the basis
of ethical development of AI systems, we consider accountability, responsibility, and
transparency (the so-called ART principles (Dignum 2017)) a starting point for
understanding the involvement of ethics in ICT projects. These three constructs form the
basis of ethical AI, and the framework attempts to identify their possible relations, as well as the
relations of other constructs that may be involved in the process.</p>
      <p>
        To make these principles tangible, a subset of constructs in the form of actions (Fig.
1 (1.1-3.5)), discussed in detail in subsection 3.1, was formed under each key concept.
These actions were outlined based on the IEEE guidelines for EAD
        <xref ref-type="bibr" rid="ref26">(The IEEE Global
Initiative on Ethics of Autonomous and Intelligent Systems, 2019)</xref>
        . The actions were
split into two categories, Ethics in Design and Ethics for Design(ers), based on
Dignum’s (2018) typology of AI ethics.
      </p>
      <sec id="sec-3-1">
        <title>The ART Model</title>
        <p>
          Transparency is a key ethical construct that is related to understanding AI systems.
Dignum (2017) discusses transparency as the transparency of AI systems, specifically of the
algorithms and data used. Arguably, transparency is a pro-ethical circumstance that
makes it possible to implement AI ethics in the first place
          <xref ref-type="bibr" rid="ref27">(Turilli and Floridi 2009)</xref>
          .
Without understanding how the system works, it is impossible to understand why it
malfunctioned and consequently to establish who is responsible. Additionally, both the
EU AI Ethics guidelines
          <xref ref-type="bibr" rid="ref2">(AI HLEG 2019)</xref>
          and EAD guidelines
          <xref ref-type="bibr" rid="ref26">(The IEEE Global
Initiative on Ethics of Autonomous and Intelligent Systems 2019)</xref>
          consider transparency
an important ethical principle.
        </p>
        <p>
          In the research framework presented in this paper, we consider transparency not only
in relation to AI systems but also in relation to AI systems development. That is, we also
consider it important that we understand what decisions were made, by whom, and why
during development. Different practices support this type of transparency (e.g. audits
and code documentation
          <xref ref-type="bibr" rid="ref29 ref30">(Vakkuri, Kemell, &amp; Abrahamsson 2019a)</xref>
          ).
        </p>
        <p>For the system to be considered transparent (line 1.a), feature traceability (1.1) (EAD
Principle 5) should be present, and the system should be predictable in its behavior (1.2)
(EAD Principles 5 and 6). For development to be considered transparent (line 1.c), the
decision-making strategies of the endeavor should be clear (1.4) (EAD Principles 5 and
6), and decisions should be traceable back to individual developers (1.3) (EAD
Principles 1, 5, and 6). As a pro-ethical circumstance, transparency also produces the
possibility to assess accountability and responsibility (line 1.b) in relation to both
development and the system.</p>
        <p>Accountability refers to determining who is accountable or liable for the decisions
made by the AI. Dignum (2017) defines accountability as the explanation and
justification of one’s decisions and actions to the relevant stakeholders. In the context
of this research framework, accountability is used not only in the context of systems,
but also in a more general sense. We consider, for example, how various accountability
issues (legal, social) were taken into consideration during the development.</p>
        <p>
          As mentioned earlier, transparency is the pro-ethical condition here that makes
accountability possible (denoted by line 1.b). We must understand how the system works
in order to establish accountability. Similarly, we should be able to determine why it
works that way by understanding what decisions made during development led to the
system working that way. We consider accountability in a broad sense, thus including
also legal and social concerns related to the system. Much like transparency,
accountability is also considered a key construct in AI ethics
          <xref ref-type="bibr" rid="ref26">(The IEEE Global Initiative on
Ethics of Autonomous and Intelligent Systems 2019)</xref>
          and it holds an important role in
preventing misuse of AI systems and supporting wellbeing through AI systems.
        </p>
        <p>In our research model, accountability is perceived through the concrete actions of
the developers concerning the system itself: 2.1 preparing for anything unexpected
(actions that are taken to prevent or control unexpected situations) (EAD Principle 8);
2.2 preparing for misuse/error scenarios (actions that are taken to prevent or control
misuse/error scenarios) (EAD Principles 7 and 8); 2.3 error handling (practices to deal
with errors in software) (EAD Principles 4 and 7); and 2.4 data security (actions taken
to ensure the cyber security of the system and secure handling of data) (EAD Principle 3).</p>
        <p>
          Finally, Dignum (2017) considers responsibility a chain of responsibility that links
the actions of the system to all the decisions made by its stakeholders. We do not
consider this definition to be actionable and instead draw from the EAD guidelines
          <xref ref-type="bibr" rid="ref26">(The
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019)</xref>
          to
consider responsibility as an attitude or moral obligation to act ethically. It is thus internally
motivated, in contrast to externally motivated accountability (e.g. legal responsibility).
        </p>
        <p>While accountability relates to the connection between one’s decisions and the
stakeholders of the system, responsibility is more focused on the internal processes of the
developers, not necessarily directly related to any one action. In order to act responsibly,
one needs to understand the meaning of one’s actions. Therefore, in the research
framework, responsibility is perceived through the actions of the developers concerning: 3.1
perception of responsibility (developers have a sense of responsibility and a perception of
what responsibility means in software development) (EAD Principles 2, 4 and 6); 3.2
distribution of responsibility (who is seen as responsible, e.g. for any harm caused by the
system) (EAD Principle 6); 3.3 encountered problems (how errors and error scenarios are
tackled and who is responsible for tackling them) (EAD Principles 7 and 8); 3.4 feelings
of concern (developers are concerned about issues related to their software); and 3.5
data sensitivity (developers’ attitudes toward data privacy and data security) (EAD
Principles 2 and 3).</p>
      </sec>
      <sec id="sec-3-2">
        <title>Operationalizing the Research Framework</title>
        <p>
          The commitment net model of Abrahamsson (2002) was utilized to analyze the data
gathered using this research framework
          <xref ref-type="bibr" rid="ref29 ref30">(Vakkuri et al. 2019a)</xref>
          . This was done to have
an existing theoretical framework to analyze the data with, and especially one aimed at
the context of software development.
        </p>
        <p>
          From this commitment net model, we focused on concerns, which were analyzed to
understand what ethical issues were of interest to the developers. Actions were then
studied to understand how these concerns were actually tackled, or whether they were
tackled at all. In the commitment net model, actions are connected to concerns because,
when actions are taken, they are always driven by concerns
          <xref ref-type="bibr" rid="ref1">(Abrahamsson, 2002)</xref>
          .
However, concerns can also exist without any actions taken to address them, although
this points to a lack of commitment on the matter.
        </p>
        <p>The dynamic between actions and concerns was considered a tangible way to
approach the topic of practices for implementing AI ethics. Actions were directly likened
to (software development) practices in this context. On the other hand, concerns were
considered to be of interest in understanding e.g. whether the developers perhaps
wanted to implement ethics but were unable to do so.</p>
        <p>In this fashion, existing theories can be used in conjunction with the framework,
either to make it more actionable for implementing ethics or to help analyze or gather
data using the framework.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Empirical Utilization of the Framework</title>
      <p>
        The framework was utilized successfully in a recent study
        <xref ref-type="bibr" rid="ref29 ref30">(Vakkuri et al. 2019a)</xref>
        .
The empirical portion of the focal paper is summarized briefly in this section in order
to demonstrate how to benefit from the framework. However, the focus of this paper is
on the research framework itself rather than these empirical results.
      </p>
      <sec id="sec-4-1">
        <title>Study Design</title>
        <p>The research framework was utilized to carry out a multiple case study of three case
companies. Each company was a software company developing AI solutions for the
healthcare industry. More specifically, the case studies focused on one specific project
inside each of the case companies.</p>
        <table-wrap id="tab1">
          <table>
            <thead>
              <tr><th>Case</th><th>Case Description</th><th>Respondent [Reference]</th></tr>
            </thead>
            <tbody>
              <tr><td>A</td><td>Statistical tool for detecting marginalization</td><td>Data analyst [R1], Consultant [R2], Project coordinator [R3]</td></tr>
              <tr><td>B</td><td>Voice and NLP based tool for diagnostics</td><td>Developer [R4], Developer [R5], Project manager [R6]</td></tr>
              <tr><td>C</td><td>NLP based tool for indoor navigation</td><td>Developer [R7], Developer [R8]</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>
          Data from the cases were gathered using semi-structured interviews, for which the
strategy was prepared according to the guidelines of Galletta (2013). The research
framework, described in the preceding section, was utilized to construct the research
instrument with which the data was collected. The questions prepared for the
semi-structured interviews focused on the components of the framework. The interviews
were recorded and the transcripts were analyzed for the empirical study. The transcripts
were analyzed using a grounded theory
          <xref ref-type="bibr" rid="ref16 ref25">(Strauss and Corbin 1998 and later Heath 2004)</xref>
          inspired approach. Each transcript was first analyzed separately, after which the results
of the analysis were compared across cases to find similarities.
          <xref ref-type="bibr" rid="ref29 ref30">(Vakkuri et al. 2019a)</xref>
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>Empirical Results</title>
        <p>The findings of the empirical study conducted using this framework were
summarized into four Primary Empirical Conclusions (PECs). The PECs were communicated
as follows:</p>
        <list list-type="bullet">
          <list-item><p>PEC1: Responsibility of developers and development is under-discussed.</p></list-item>
          <list-item><p>PEC2: Developers recognize transparency as a goal, but it is not formally pursued.</p></list-item>
          <list-item><p>PEC3: Developers feel accountable for error handling on the programming level and have the means to deal with it.</p></list-item>
          <list-item><p>PEC4: While the developers speculate about potential socioethical impacts of the resulting system, they do not have means to address them.</p></list-item>
        </list>
        <p>
          These results served to further underline the gap between research and practice in
the area. Whereas developers were to some extent aware of some of the goals of the AI
ethics principles, these were seldom formally pursued in any fashion.
          <xref ref-type="bibr" rid="ref29 ref30">(Vakkuri et al.
2019a)</xref>
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Discussion</title>
      <p>
        Rather than discussing the implications of the empirical findings of the study
utilizing this framework (as was already done in
        <xref ref-type="bibr" rid="ref29 ref30">(Vakkuri et al. 2019a)</xref>
        ), we discuss the
research framework and its implications. As extant studies on AI ethics have been largely
conceptual, and e.g. the IEEE EAD guidelines have remarked that much work was still
needed to bridge the gap between research and practice in the area
        <xref ref-type="bibr" rid="ref26">(IEEE 2019)</xref>
        , this
framework provides an initial step towards bridging the gap in this area. It provides a
starting point for empirical studies in the area by highlighting important constructs and
themes to e.g. discuss in interviews.
      </p>
      <p>However, as this framework was constructed in late 2017, it is now two years old.
Since its inception, much has happened in the field of AI ethics and AI in general. The
discussion has progressed, and whereas in 2017 the ART model was the current topic
of discussion and Fairness an emerging construct, now Fairness has also become a
central construct in the AI ethics discussion (e.g. the ACM Conference on Fairness,
Accountability, and Transparency, https://fatconference.org/), especially in the discourse of the
United States and the Anglosphere.</p>
      <p>
        Moreover, a recent EU report on Trustworthy AI systems
        <xref ref-type="bibr" rid="ref2">(AI HLEG 2019)</xref>
        discussed
Trustworthiness as a goal for AI systems, presenting another potentially important
construct for the field. However, trustworthiness differs from the existing constructs in that
it is not objective and is even more difficult to build into a system. Whereas transparency
is a tangible attribute of a system or a project that can be evaluated, trustworthiness is
ultimately attributed to a system (and its socio-economic context) by an external
stakeholder. For example, a member of the general public may trust or distrust a system, thereby
deeming it trustworthy or not.
      </p>
      <p>The discussion on principles in the field continues to be active. Morley et al. (2019)
recently proposed a new set of constructs intended to summarize the discussion thus
far. Only time will tell whether this novel set of constructs becomes as widely used as
the existing constructs such as transparency.</p>
      <p>
        Yet, we maintain that it is pivotal that attempts such as this are made to bring
empiricism into this otherwise highly theoretical discussion. Although the field is still
evolving, the industry is not waiting for the discussion to finish. AI systems are developed
with or without the involvement of AI ethics. To this end, even if academia does
not act, governments and other national and supranational organizations are drafting
their own guidelines
        <xref ref-type="bibr" rid="ref2">(E.g. AI HLEG, 2019)</xref>
        and regulations (e.g.
https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html) for AI systems.
Academia should aim to participate in this discussion and these actions, even
without a unified consensus on the key constructs and principles for AI ethics. As such, the
framework presents one way to approach this area of research through an empirical
lens.
      </p>
      <p>
        The framework nonetheless does require further development. Aside from including
constructs such as fairness, we argue that it should be essentialized. Essentializing
refers to a process discussed by Jacobson, Lawson, Ng, McMahon &amp; Goedicke (2017) in
the context of the Essence Theory of Software Engineering
        <xref ref-type="bibr" rid="ref18">(Jacobson, Ng, McMahon,
Spence &amp; Lidman 2012)</xref>
        where a Software Engineering (SE) practice is essentialized.
Essentialization, according to Jacobson et al. (2017), is the process of distilling
e.g. a software engineering practice into its essential components in order to
communicate it clearly and in a unified fashion, expressed in the Essence language.
      </p>
      <p>In the context of this framework, we see essentialization as one way to make it more
understandable for industry experts. Essentializing a practice, a method, or a
framework has three steps, according to Jacobson et al. (2017):</p>
      <list list-type="order">
        <list-item><p>Identifying the elements. This is primarily identifying a list of elements that make up a practice. The output is essentially a diagram.</p></list-item>
        <list-item><p>Drafting the relationships between the elements and the outline of each element. At this point, the cards are created.</p></list-item>
        <list-item><p>Providing further details. Usually, the cards will be supplemented with additional guidelines, hints and tips, examples, and references to other resources, such as articles and books.</p></list-item>
      </list>
      <p>
        In this fashion, the framework could be essentialized by e.g. making Essence alphas
out of principles such as transparency. Alphas, in the context of Essence, are things
to work with which are measured in order to see progress on the endeavor
        <xref ref-type="bibr" rid="ref18">(Jacobson et
al. 2012)</xref>
        . One could thus consider them goals. The framework could then be extended
by practices which seek to help an organization progress in achieving these ethical
principles.
      </p>
      <p>As it stands, the framework can be utilized for empirical studies in the area of AI
ethics. It presents a practice-focused view of AI ethics. However, it does not cover all
the aspects of the AI ethics discussion in 2019 (and beyond). Depending on the context,
one may wish to extend it to include fairness as the fourth key principle for AI ethics,
and/or trustworthiness.
</p>
    </sec>
    <sec id="sec-6">
      <title>Conclusions &amp; Future work</title>
      <p>
        In this paper, we have presented a framework for approaching AI ethics through
practice. Having conducted an empirical study using the framework
        <xref ref-type="bibr" rid="ref29 ref30">(Vakkuri et al.
2019a)</xref>
        , we have discussed in this paper the implications of the framework and how it should be
developed further. Though the framework, as is, can be utilized for empirical studies,
it should be complemented by the inclusion of some of the more recent AI ethics
constructs such as fairness and trustworthiness to make it more current. Given that the
framework was originally devised in late 2017, the discussion in the field of AI ethics
has since moved forward.
      </p>
      <p>
        We seek to develop the framework further ourselves. We have utilized a similar
framework in another study
        <xref ref-type="bibr" rid="ref29 ref30">(Vakkuri et al. 2019b)</xref>
        . Aside from simply expanding the
framework to include fairness and trustworthiness, we have plans to essentialize the
framework by utilizing the Essence Theory of Software Engineering
        <xref ref-type="bibr" rid="ref18 ref19">(Jacobson et al.
2012, Jacobson et al. 2017)</xref>
        in order to make it more relevant to practitioners.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Abrahamsson</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2002</year>
          ).
          <article-title>Commitment nets in software process improvement</article-title>
          .
          <source>Annals of Software Engineering</source>
          ,
          <volume>14</volume>
          (
          <issue>1</issue>
          ),
          <fpage>407</fpage>
          -
          <lpage>438</lpage>
          . doi:10.1023/A:1020526329708
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <source>AI HLEG (High-Level Expert Group on Artificial Intelligence)</source>
          . (
          <year>2019</year>
          ).
          <article-title>Ethics guidelines for trustworthy AI</article-title>
          .
          Retrieved from https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Reuters</surname>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Amazon scraps secret AI recruiting tool that showed bias against women</article-title>
          .
          <source>Reuters</source>
          . Retrieved from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Ananny</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Crawford</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability</article-title>
          .
          <source>New Media &amp; Society</source>
          ,
          <volume>20</volume>
          (
          <issue>3</issue>
          ),
          <fpage>973</fpage>
          -
          <lpage>989</lpage>
          . doi:10.1177/1461444816676645
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Bryson</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Standardizing ethical design for artificial intelligence and autonomous systems</article-title>
          .
          <source>Computer</source>
          ,
          <volume>50</volume>
          (
          <issue>5</issue>
          ),
          <fpage>116</fpage>
          -
          <lpage>119</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Charisi</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dennis</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lieck</surname>
            ,
            <given-names>M. F. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Matthias</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sombetzki</surname>
            ,
            <given-names>M. S. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Winfield</surname>
            ,
            <given-names>A. F.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Yampolskiy</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Towards moral autonomous systems</article-title>
          . https://arxiv.org/abs/1703.04741
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Dignum</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Responsible autonomy</article-title>
          . https://arxiv.org/abs/1706.02513
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Dignum</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Ethics in artificial intelligence: Introduction to the special issue</article-title>
          .
          <source>Ethics and Information Technology</source>
          ,
          <volume>20</volume>
          (
          <issue>1</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>3</lpage>
          . doi:10.1007/s10676-018-9450-z
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <source>Ethics commission's complete report on automated and connected driving</source>
          . (
          <year>2017</year>
          ). BMVI. Retrieved from https://www.bmvi.de/goto?id=354980
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <source>Finland's age of artificial intelligence report</source>
          . (
          <year>2017</year>
          ). Retrieved from https://www.tekoalyaika.fi/en/reports/finlands-age-of-artificial-intelligence/
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Fitzgerald</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hartnett</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Conboy</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Customizing agile methods to software practices at Intel Shannon</article-title>
          .
          <source>European Journal of Information Systems</source>
          ,
          <volume>15</volume>
          (
          <issue>2</issue>
          ),
          <fpage>200</fpage>
          -
          <lpage>213</lpage>
          . doi:10.1057/palgrave.ejis.3000605
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Flores</surname>
            ,
            <given-names>A. W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bechtel</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Lowenkamp</surname>
            ,
            <given-names>C. T.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>False positives, false negatives, and false analyses: A rejoinder to "machine bias: There's software used across the country to predict future criminals, and it's biased against blacks"</article-title>
          .
          <source>Federal Probation</source>
          ,
          <volume>80</volume>
          (
          <issue>2</issue>
          ),
          <fpage>38</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Galletta</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>Mastering the semi-structured interview and beyond</article-title>
          . New York: NYU Press. Retrieved from https://www.jstor.org/stable/j.ctt9qgh5x
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Gotel</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cleland-Huang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hayes</surname>
            ,
            <given-names>J. H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zisman</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Egyed</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grunbacher</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mäder</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2012</year>
          ).
          <article-title>Traceability fundamentals</article-title>
          .
          <source>Software and systems traceability</source>
          (pp.
          <fpage>3</fpage>
          -
          <lpage>22</lpage>
          ) Springer.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Gotterbarn</surname>
            ,
            <given-names>D. W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brinkman</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Flick</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kirkpatrick</surname>
            ,
            <given-names>M. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vazansky</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Wolf</surname>
            ,
            <given-names>M. J.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>ACM code of ethics and professional conduct</article-title>
          . Retrieved from https://www.acm.org/code-of-ethics
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Heath</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          (
          <year>2004</year>
          ).
          <article-title>Developing a grounded theory approach: A comparison of Glaser and Strauss</article-title>
          .
          <source>International Journal of Nursing Studies</source>
          ,
          <volume>41</volume>
          (
          <issue>2</issue>
          ),
          <fpage>141</fpage>
          -
          <lpage>150</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <source>ISO/IEC JTC 1/SC 42 Artificial intelligence</source>
          . Retrieved from https://www.iso.org/committee/6794475.html
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Jacobson</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ng</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McMahon</surname>
            ,
            <given-names>P. E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Spence</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Lidman</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2012</year>
          ).
          <article-title>The Essence of Software Engineering: The SEMAT Kernel</article-title>
          .
          <source>ACM Queue</source>
          ,
          <volume>10</volume>
          (
          <issue>10</issue>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Jacobson</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lawson</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ng</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McMahon</surname>
            ,
            <given-names>P. E.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Goedicke</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Software Engineering Essentialized</article-title>
          . Pre-print book draft. Retrieved from http://semat.org/documents/400812/405173/Part1-2017-12-17_draft_3.pdf/
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>McNamara</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Murphy-Hill</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Does ACM's code of ethics change ethical decision making in software development?</article-title>
          Paper presented at the 26th
          <source>ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE)</source>
          . doi:10.1145/3236024.3264833
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Microsoft</surname>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Responsible bots: 10 guidelines for developers of conversational AI</article-title>
          . Retrieved from https://www.microsoft.com/en-us/research/uploads/prod/2018/11/Bot_Guidelines_Nov_2018.pdf
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Morley</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Floridi</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kinsey</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Elhalal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>From what to how: An overview of AI ethics tools, methods and research to translate principles into practices</article-title>
          . https://arxiv.org/abs/1905.06876
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Pichai</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>AI at google: Our principles</article-title>
          . Retrieved from https://www.blog.google/technology/ai/ai-principles/
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Rao</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Intel recommendations for public policy principles on AI</article-title>
          . Retrieved from https://blogs.intel.com/policy/2017/10/18/naveen-rao-announces-intel-ai-public-policy/#gs.8qnx16
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Strauss</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Corbin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>1998</year>
          ).
          <article-title>Basics of qualitative research: Techniques and procedures for developing grounded theory</article-title>
          , 2nd ed. Thousand Oaks, CA, US: Sage Publications, Inc.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <source>The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems</source>
          . (
          <year>2019</year>
          ).
          <article-title>Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems, first edition</article-title>
          . Retrieved from https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Turilli</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Floridi</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>The ethics of information transparency</article-title>
          .
          <source>Ethics and Information Technology</source>
          ,
          <volume>11</volume>
          (
          <issue>2</issue>
          ),
          <fpage>105</fpage>
          -
          <lpage>112</lpage>
          . doi:10.1007/s10676-009-9187-9
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Unterkalmsteiner</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gorschek</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Islam</surname>
            ,
            <given-names>A. K. M. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cheng</surname>
            ,
            <given-names>C. K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Permadi</surname>
            ,
            <given-names>R. B.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Feldt</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Evaluation and measurement of software process improvement: A systematic literature review</article-title>
          .
          <source>IEEE Transactions on Software Engineering</source>
          ,
          <volume>38</volume>
          (
          <issue>2</issue>
          ),
          <fpage>398</fpage>
          -
          <lpage>424</lpage>
          . doi:10.1109/TSE.2011.26
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Vakkuri</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kemell</surname>
            ,
            <given-names>KK.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Abrahamsson</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2019a</year>
          ).
          <article-title>Implementing Ethics in AI: Initial results of an industrial multiple case study</article-title>
          . To be published in
          <source>the proceedings of the 20th International Conference on Product-Focused Software Process Improvement (PROFES2019)</source>
          . https://arxiv.org/abs/1906.12307
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Vakkuri</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kemell</surname>
            ,
            <given-names>KK.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kultanen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Siponen</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Abrahamsson</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2019b</year>
          ).
          <article-title>Ethically Aligned Design of Autonomous Systems: Industry viewpoint and an empirical study</article-title>
          . To be published in
          <source>the proceedings of the 8th Transport Research Arena (TRA2020) Conference</source>
          . https://arxiv.org/abs/1906.07946
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Villani</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bonnet</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , Schoenauer, M., Berthet, C., Levin, F., Cornut, A. C., …
          <string-name>
            <surname>Rondepierre</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>For a meaningful artificial intelligence: Towards a French and European strategy</article-title>
          . Conseil national du numérique. Retrieved from https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>