<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Process to Product Point of View</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Kai-Kristian Kemell</string-name>
          <email>kai-kristian.kemell@helsinki.fi</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ville Vakkuri</string-name>
          <email>ville.vakkuri@uwasa.fi</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fahad Sohrab</string-name>
          <email>fahad.sohrab@tuni.fi</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>Conference on Technology Ethics - Tethics</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Tampere University</institution>
          ,
          <country country="FI">Finland</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Helsinki</institution>
          ,
          <country country="FI">Finland</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>University of Vaasa</institution>
          ,
          <country country="FI">Finland</country>
        </aff>
      </contrib-group>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>Discussing the potential negative impacts of AI systems, and how to address them, has recently become the core focus of AI ethics. Based on this discussion, various principles summarizing and categorizing ethical issues have been proposed. To bring these principles into practice, it has been common to repackage them into guidelines for AI ethics. The impact of these guidelines seems to remain small, however, which is considered to be a result of a lack of interest in them. To remedy this issue, other ways of implementing these principles have also been proposed. In this paper, we wish to motivate more discussion on the role of the product in AI ethics. While the lack of adoption of these guidelines and their principles is an issue, we argue that there are also issues with the principles themselves. The principles overlap and conflict with one another, and they commonly involve issues that seem distant from practice. Given the lack of empirical studies in AI ethics, we wish to motivate further empirical studies by highlighting current gaps in the research area.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In the past decade, AI systems have become increasingly ubiquitous across highly varied use
contexts. As these systems exert more and more influence in society, it also becomes increasingly
important to minimize any negative impacts they might have. Minimizing the negative
impact of AI systems is particularly vital in the case of systems that impact the general public,
such as autonomous vehicles and medical systems.</p>
      <p>
        Discussing the potential negative impacts of AI systems, and how to address them, has
recently become the core focus of AI ethics. In the past, AI ethics was primarily focused on
discussing future scenarios, but following technological progress, these future scenarios are
increasingly becoming a reality alongside other issues. Numerous conceptual AI ethics studies
have highlighted potential issues in different types of AI systems, and discussed ways to begin
addressing them. This discussion has resulted in a number of AI ethics principles [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. However,
applying ethical principles into practice is a recurring challenge in computer ethics in general
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and is arguably one in AI ethics as well [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        To bring AI ethics into practice, guidelines have been the primary way of approaching
AI ethics. Numerous guidelines have been built around the AI ethics principles proposed
in conceptual papers (and by companies and other organizations) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The impact of these
guidelines remains small, however [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In addition to guidelines, some tools have
been developed to help with the technical implementation of principles [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. However, these
are individual, precise technical tools [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] rather than development approaches. For example, recent
technical tools include “tactics for fair clustering and producing intersectionally fair rankings,
as well as testing the probabilistic fairness of pre-trained logistic classifiers, assessing
‘leave-one-out unfairness’ or measuring robustness bias among many others.” [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] Some Software
Engineering (SE) methods to implement AI ethics have also recently been proposed, including
the RE4AI Ethical Guide [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and ECCOLA method [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        Practice remains a key issue in AI ethics. Only a small number of empirical studies on AI
ethics have been published. These studies have primarily focused on the current state of practice
in AI ethics by looking at AI development practices (e.g., [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]). The two SE methods
we are aware of [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] have also been developed using empirical data. Studies looking at the
successful implementation of AI ethics, in particular, are lacking [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>Even though empirical studies in AI ethics in general remain scarce, we feel it is nonetheless
time to broaden our focus from the development process to the product itself. While any further
studies on developing ethical AI are certainly still sorely needed, we feel that discussing the role
of the product is relevant at this stage. As some guidelines, methods, and other artifacts that
(are claimed to) help develop ethical AI exist – and are hopefully also being applied in practice
– we should look at their impact not only on the development process but on the product or
service itself.</p>
      <p>
        Yet we are not aware of any studies looking at the impacts of these ethical tools, guidelines,
and methods on operational AI/ML systems that have been developed using them. Even if
the development process is changed to support the development of ethical AI, by, e.g., using
AI ethics guidelines (which in and of itself is a recurring challenge [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]), what kind of impact
would this have on the resulting system? If we were to take the numerous AI ethics principles
currently considered important and implement them into a system to make it ethical, what
would the system look like?
      </p>
      <p>
        In this paper, we discuss what we consider a gap in AI ethics research: the point of view
of the product. Thus far, much emphasis has been placed on the development and design of
ethical AI, with guidelines seeking to direct high-level design and development decisions –
although their practical relevance has been questioned [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] – and individual, precise
technical tools [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. In comparison, little attention has been paid to the finished systems and
their operational life. Consequently, little is known about whether these principles really even
could be implemented by the book. Measuring adherence to these principles is similarly a
challenge to be tackled, although not one we can begin to address in this paper. To illustrate
our points regarding principles, we look at a number of high-profile AI ethics principles and
evaluate, through a hypothetical AI/ML system, what they could mean in practice if a system
were to be developed by fully implementing them into practice. Based on this thought exercise,
we discuss potential issues within these guidelines and principles and propose future avenues
for research.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Background: AI system development</title>
      <p>
        When discussing AI systems in this paper, we refer to Machine Learning (ML) systems specifically.
To illustrate the process of creating ML models, we turn to Mikkonen et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], who present
one way of conceptualizing the process in their paper. This process is depicted in Figure 1. In
order to train ML model(s), training data is needed to optimize the model(s)' parameters. This
data needs to be bought, collected, or otherwise acquired. Depending on how the data was
acquired, it may need to be cleaned and otherwise processed before training the model. During
training, the model(s) learn the unknown function that maps the input data to the output values.
Much depends on the ML approach being used, as, for example, supervised learning where
one has “access to the data and to the ‘right answer’ often called a label, e.g., a photo and the
objects in the photo” [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] requires highly refined data. In contrast, in unsupervised learning,
the unknown mapping function is identified without leveraging the labels of data. The aims of
the system also need to already be clear during the training stage so that the right type of data
can be decided on.
      </p>
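To make the supervised/unsupervised contrast above concrete, it can be sketched in a few lines of plain Python. The toy data, labels, and decision rules below are our own illustrative assumptions, not part of the process model of Mikkonen et al.:

```python
# Toy illustration: the same data with and without labels ("right answers").
# Pure Python; the dataset and rules are invented for illustration only.

data = [1.0, 1.2, 0.9, 5.1, 4.8, 5.3]                   # a measured feature
labels = ["low", "low", "low", "high", "high", "high"]  # the "right answers"

# Supervised learning: use the labels to fit a decision rule (here, a threshold
# halfway between the class means -- a stand-in for optimizing model parameters).
def train_supervised(xs, ys):
    lo = [x for x, y in zip(xs, ys) if y == "low"]
    hi = [x for x, y in zip(xs, ys) if y == "high"]
    threshold = (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2
    return lambda x: "high" if x > threshold else "low"

# Unsupervised learning: no labels available, so the structure (two clusters)
# must be identified from the data alone (a crude 1-D 2-means).
def train_unsupervised(xs):
    c0, c1 = min(xs), max(xs)                    # crude initial centroids
    for _ in range(10):                          # refine centroids iteratively
        a = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        b = [x for x in xs if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(a) / len(a), sum(b) / len(b)
    return lambda x: 0 if abs(x - c0) <= abs(x - c1) else 1

classify = train_supervised(data, labels)
cluster = train_unsupervised(data)
```

The supervised rule can name its outputs ("low"/"high") because labels were available; the unsupervised one can only report cluster membership.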
      <p>The training process is iterative, as ML models are iteratively trained, adjusted, and validated
until they are able to produce satisfactory results in terms of, e.g., prediction accuracy or any
other metric(s) of choice. Validation can be carried out with different data.</p>
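The iterative train-adjust-validate cycle can likewise be sketched. The trivial threshold "model", the invented data, and the accuracy target are all assumptions for illustration:

```python
# Sketch of the iterative train/adjust/validate cycle: candidate models are
# fitted on training data and validated on different, held-out data until a
# chosen metric (here, accuracy) is satisfactory. All data are invented.

train_data = [(0.5, 0), (1.5, 0), (2.5, 1), (3.5, 1)]
valid_data = [(1.0, 0), (3.0, 1)]      # validation uses different data

def accuracy(threshold, data):
    hits = sum(1 for x, y in data if (x > threshold) == bool(y))
    return hits / len(data)

def fit_until_satisfactory(target=1.0):
    best_t, best_acc = None, -1.0
    # Candidate parameters are proposed from the training data; each candidate
    # is then validated on the held-out data, iterating until satisfactory.
    for x, _ in sorted(train_data):
        t = x                                  # candidate decision threshold
        if accuracy(t, train_data) < 0.75:     # train step: reject poor fits
            continue
        acc = accuracy(t, valid_data)          # validate on different data
        if acc > best_acc:
            best_t, best_acc = t, acc
        if best_acc >= target:                 # stop once results satisfy us
            break
    return best_t, best_acc

threshold, acc = fit_until_satisfactory()
```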
      <p>
        This process is at the core of any ML model development. However, as DevOps (a portmanteau
of Development and Operations), and continuous SE in general, continue to be state-of-the-art
in SE, the idea of continuous SE is also prevalent in ML development in the form of MLOps [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]
(a portmanteau of ML and Operations). Consequently, various papers discussing ML
development also discuss operations in the context of ML and involve issues related to (continuous)
deployment and monitoring of ML systems. In their paper seeking to better define MLOps, John
et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] highlight three pipelines for MLOps: Data Pipeline, Modeling Pipeline, and Release
Pipeline, each of which also includes a number of sub-processes. Compared to the process seen
in Figure 1, for example, this conceptual model includes the third release pipeline that deals with the
deployment and operational life of the models in addition to their creation.
      </p>
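As a rough sketch of this three-pipeline conceptualization, the stages could be chained as below. The pipeline names follow the summary above, while every stage body is an invented placeholder rather than anything prescribed by John et al.:

```python
# Sketch of the three MLOps pipelines named by John et al. (data, modeling,
# release). Stage contents are invented placeholders for illustration.

def data_pipeline(raw):
    """Collect and prepare data (cleaning, feature preparation)."""
    return [x for x in raw if x is not None]          # e.g., drop missing rows

def modeling_pipeline(clean):
    """Train and validate a model on the prepared data."""
    mean = sum(clean) / len(clean)                    # stand-in for training
    return {"model": mean, "validated": True}

def release_pipeline(model):
    """Deploy the model and set up monitoring for its operational life."""
    if not model["validated"]:
        raise RuntimeError("refuse to deploy an unvalidated model")
    return {"deployed": True, "monitoring": "enabled", **model}

deployed = release_pipeline(modeling_pipeline(data_pipeline([1.0, None, 3.0])))
```

The point of the third pipeline is visible in the guard clause: deployment and monitoring are part of the process, not an afterthought.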
      <p>
        These processes are a part of the software development process in general. ML components
are only one part of the entire system, while much of the work is conventional SE [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. In
practice, ML is often seen as a separate process that takes place alongside the conventional
software development process. There are thus two processes running in parallel: (1) ML model
building and (2) software development. Simplifying to some extent, ML models are built and
evaluated by the data science team, while the rest of the development is carried out by the
development (and operations) team(s). This is also highlighted by Lwakatare et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] as
they discuss the relationship between ML and SE: ”increasingly there is a need to integrate the
development workflow of ML system into existing SE processes and methods, such as agile
methods and continuous integration (CI) practice.”
      </p>
      <p>As we argue for a more product-focused point of view on AI ethics in this paper, we should
also briefly discuss what ’product’ means in this context. As established above,
we discuss ML systems specifically when speaking of AI systems. Additionally, when speaking
of products in the context of ML systems, we specifically refer to systems that are operational.
As opposed to looking at requirements or design decisions, which has often been the point
of view in AI ethics, we want to look at the finished systems and how they are or should be
affected by AI ethics. We, therefore, refer to ML systems in the deployment and/or monitoring
stages.</p>
    </sec>
    <sec id="sec-3">
      <title>3. AI ethics: from guidelines to requirements and practice</title>
      <p>Having discussed ML development in general in the preceding section, we now discuss AI ethics in
this context. In the first subsection, we provide an introduction to AI ethics principles. In the
second subsection, we discuss the practical implementation of these principles.</p>
      <sec id="sec-3-1">
        <title>3.1. AI ethics principles: what should an ethical AI look like?</title>
        <p>
          Currently, AI ethics is commonly approached through various principles. These AI ethics
principles act as a way of categorizing AI ethics issues and are commonly distilled into guidelines
containing multiple principles each. Such guidelines are produced by companies, researchers,
as well as (supra)national actors such as the EU [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ][
          <xref ref-type="bibr" rid="ref1">1</xref>
          ][
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. Jobin et al. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] and Hagendorff [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]
have conducted reviews of a large number of these guidelines in their papers.
        </p>
        <p>
          Various principles have guided discussions on AI ethics and continue to do so. Despite a
large number of principles that have been proposed over time, the discussion on AI ethics
principles has recently begun to converge on a set of recurring principles whose definitions
are also starting to unify to some extent [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. In Table 1, we list the most common AI ethics
principles based on three widely cited AI ethics papers discussing AI ethics principles.
        </p>
        <p>
          The first paper in Table 1 is the AI ethics guidelines review of Jobin et al. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], which provides
an extensive review and summary of the most commonly utilized principles. The authors also
offer synthesized definitions for each of the principles included in the paper. The second paper is
the guideline review of Hagendorff [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. This review also looks at a large number of guidelines
but is more focused on simply quantifying the principles in terms of how frequently they appear
in guidelines, while also providing some meta-analysis of the authors of the guidelines. The
third paper is that of Morley et al. [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] where the authors compare AI ethics guidelines and
methods (here ’method’ refers to highly specific ML methods as opposed to SE methods such as
Scrum), and, based on the results, propose their own requirements for ethical AI in the form of
principles.
        </p>
        <p>
          Out of these three papers, we utilize the paper of Jobin et al. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] as the framework for our
principle review in the following section. This paper is utilized due to its extensiveness and
due to the extensive descriptions of each principle proposed by the authors. These definitions
provide us with a way of utilizing these principles without having to focus on arguing about their
definitions in this paper, as they are based on synthesis in a scientific publication. We focus on
exploring the implementation of the principles as opposed to defining them.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Implementing AI ethics in practice through principles</title>
        <p>AI ethics research ultimately aims to facilitate the creation of ethical AI systems. For the time
being, principles have been the main way of working towards this goal. By distilling principles
into guidelines, principles have been intended to serve as a way of implementing AI ethics in
practice.</p>
        <p>
          However, implementing principles directly into practice is challenging, even with the help
of guidelines [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. Indeed, perhaps the largest problem typically associated with
guidelines and principles in AI ethics, or computer ethics in general, is that it is difficult to make
principles actionable for developers. In more detail, Mittelstadt [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] argues that there are four
challenges in using principles in AI ethics, as AI development lacks “(1) common aims and fiduciary duties,
(2) professional history and norms, (3) proven methods to translate principles into practice,
and (4) robust legal and professional accountability mechanisms.” From the point of view of SE
research, the second and third challenges are the most relevant ones out of these four.
        </p>
        <p>
          The first challenge is related to the fact that software organizations have no obligation to
do good and are, in fact, more responsible to their board and shareholders than the general
public [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. Principles and guidelines are only useful if there is an intent (or obligation) to
use them. Aside from legal obligations to do so, incentives for implementing AI ethics are
largely personal moral convictions or fear of reputational risk. The lack of legal obligations to
implement ethics is the fourth challenge. For companies, it is often ’enough’ to simply adhere to
laws and regulations, and consequently, these laws and regulations (e.g., GDPR) are important
in setting the bar for the bare minimum in ethics [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ].
        </p>
        <p>
          As for the second challenge, professional history and norms, the challenge is that there is far
less tradition for being a ’good’ developer, for example, than there is for being a ’good’ doctor
[
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. Some attempts to establish such a tradition have been made, e.g., in the form of the ACM
Code of Ethics [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. Much like the first challenge, this is primarily related to (the lack of) intent
to implement AI ethics, particularly from the point of view of personal moral convictions.
        </p>
        <p>
          The third challenge, the lack of proven methods to translate principles into practice, is
interesting from the point of view of SE and is something that has been discussed in existing
papers. Various studies (e.g., [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]) argue that principles need to be made more
actionable by converting them into SE practices or methods or otherwise repackaging them
into some other form. Doing so remains a key challenge in AI ethics, though some steps have
nonetheless been taken toward operationalizing these principles.
        </p>
        <p>
          Morley et al. [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] list various precise technical tools related to the implementation of AI ethics.
Ryan &amp; Stahl [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] call for studies to further associate and classify such tools according to the
relevant AI ethics principles so as to aid their implementation. While such tools are required to
bring AI ethics into ML, AI ethics also needs to be a part of SE [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], given that ML is only a part
of the entire software development process [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. To provide some basis for doing so, we are
aware of two methods that have been proposed for implementing AI ethics [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], although
they are still arguably far from being ’proven’ ways of doing so despite having some empirical
support. While various methods for implementing ethics in general exist (e.g., the Value Sensitive
Design methodology), these are not aimed at the specific context of AI ethics and may
consequently fail to account for AI/ML-related ethical issues [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. Other ways of implementing
AI ethics could involve ethical user stories [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], for example, but such ideas warrant further
study as well.
        </p>
        <p>
          To summarize, all four of these challenges ultimately relate to the lack of adoption of
these guidelines, as each describes a reason why guidelines and principles
see little use in practice. Empirical studies also point towards these guidelines and principles
having had little impact on practice thus far [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. This is an issue given the importance
of principles in recent AI ethics discussions. Principles have served as a way of categorizing
various AI ethics-related issues (e.g., bias is a common practical issue that falls under the
umbrella of the fairness principle). Principles, as established, have also become the primary
way of attempting to bring AI ethics into practice.
        </p>
        <p>In the next section, we assume a hypothetical scenario where these principles are implemented
into a system by the book and review the principles in this context. We aim to understand to
what extent doing so would even be possible when using these principles as they are presented
in AI ethics guidelines.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. AI ethics as a part of the product</title>
      <p>
        In this section, we explore the implementation of AI ethics principles by reviewing them one
principle at a time. We assume the point of view of a finished product: what would a product
that adheres to each of these principles actually look like, and would it be possible to
accomplish at all? This section covers the eleven most common AI ethics principles listed by
Jobin et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], with each subsection discussing one of these eleven principles.
      </p>
      <p>
        In the interest of space, we have not included the full description for each principle provided
by Jobin et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and instead provide a summary of our own, based on the original description
of Jobin et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], before looking at them from the product point of view. Similarly, we briefly
list the various aspects of each principle discussed by [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], but only tackle the most relevant
aspects of each principle in this paper. While there is something to be said about each aspect,
some are arguably more relevant for some systems and less relevant for others – which is also a
point we raise in more detail in the discussion. To further emphasize: the principles discussed
here are discussed solely based on Jobin et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], as the discussion on the definitions of these
principles is still ongoing, and there are conflicts between papers.
      </p>
      <p>For the purposes of this review, we have selected a specific, hypothetical AI system in order to
provide us with a clear context in which to evaluate these principles. The system in question is a
smart office solution. The goal of the system is to improve employee well-being and productivity,
while also providing savings on building upkeep. ML is used in the following ways:
• Movement tracking. Cameras and sensors inside the office building are used to
understand where employees (do not) spend time inside the building, as well as to understand
where positive or negative emotions are often experienced while at work. Used to optimize
building layout etc.
• Facial expression recognition. Applied to analyze the emotional state of individual
employees and to avoid duplicate results in movement tracking.
• Biometric data. Gathered through wearable devices (smart rings) in order to evaluate
the stress levels and emotional states of the employees. Worn outside work and while
working remotely as well.
• Building upkeep. Optimizing heating, cooling, lights, etc., to provide savings based on
employee activity in the building, among other factors.</p>
      <p>The system is used by the management of the companies buying the system, while the
employees of these companies (also including the managers themselves) provide the data for the
system. The full data set can be accessed by the company whose system it is and who provides
the client companies with the system. Company-specific data from one’s own company and
some anonymous benchmark data can be utilized by the client companies.</p>
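The data-access tiers described above could be sketched as a simple access-control rule. The roles, company names, records, and field names are invented for illustration:

```python
# Sketch of the access tiers: the vendor sees the full data set, while client
# companies see only their own data plus anonymous benchmarks derived from
# other companies. All records and names are invented.

records = [
    {"company": "acme", "employee": "e1", "stress": 71},
    {"company": "acme", "employee": "e2", "stress": 90},
    {"company": "globex", "employee": "e3", "stress": 65},
]

def visible_records(role, company=None):
    if role == "vendor":                       # full data set
        return records
    if role == "client":                       # own company's data only
        return [r for r in records if r["company"] == company]
    raise ValueError("unknown role")

def benchmark(company):
    """Anonymous benchmark: an aggregate over *other* companies' data."""
    others = [r["stress"] for r in records if r["company"] != company]
    return sum(others) / len(others)
```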
      <sec id="sec-4-1">
        <title>4.1. Principles to be implemented</title>
        <p>
          Transparency is the most common AI ethics principle. It includes the concepts of explainability,
interpretability, and other acts of communication and disclosure. Discussion on transparency
focuses on data use, human-AI interaction, automated decisions, and the purpose of data use
or application of AI systems. Increasing the disclosure of information about the following can
increase transparency: use of AI, source code, data use, evidence base for AI use, limitations,
laws, responsibility for AI, investments in AI, and possible impact. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]
        </p>
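As one hypothetical reading of what such disclosure could mean at the product level, a system might ship with a machine-readable fact sheet covering the items listed above. All field names and contents below are our invented examples, not a standard schema:

```python
# Sketch: a machine-readable disclosure record covering items from the list
# above (use of AI, data use, limitations, responsibility). Field names and
# contents are invented; real-world analogues include "model card" fact sheets.

disclosure = {
    "uses_ai": True,
    "purpose": "optimize office layout and building upkeep",
    "data_used": ["camera movement traces", "wearable biometrics"],
    "automated_decisions": ["heating/cooling schedules"],
    "limitations": ["emotion recognition accuracy varies across individuals"],
    "responsible_party": "vendor data-protection officer",
}

def transparency_gaps(record, required_fields):
    """Report which required disclosure items are missing or empty."""
    return [f for f in required_fields if not record.get(f)]

missing = transparency_gaps(
    disclosure, ["uses_ai", "data_used", "source_code_access"]
)
```

Such a check makes the transparency principle auditable: undisclosed items (here, source code access) surface as concrete gaps rather than abstract shortfalls.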
        <p>
          Justice and fairness. Justice is mainly expressed in terms of fairness, i.e., through the
prevention, monitoring, or mitigation of unwanted bias and discrimination. Rather than being
justice in the purely legal sense, it thus also includes social aspects of justice. In more detail,
this refers to respect for diversity, inclusion, and equality, the possibility to challenge decisions
made by AI, the right to compensation, fair access to AI and data and the benefits of AI, and
the impact of AI on the labor market, as well as the need to address democratic or societal
issues. Key practical issues are the risk of bias and diversity in data sets. Ways of addressing justice
and fairness in practice include (1) “technical solutions such as standards explicit normative
encoding”, (2) transparency, particularly through public awareness, (3) testing, monitoring, and
auditing, (4) “developing or strengthening the rule of law and the right to appeal, recourse,
redress, or remedy”, and (5) more diverse development teams and the better inclusion of civil
society in an interactive manner. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]
        </p>
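As a sketch of point (3), testing and monitoring, one narrow facet of fairness can be monitored mechanically. The demographic-parity metric, the four-fifths threshold, and the outcome data below are illustrative assumptions, not a complete fairness test:

```python
# Sketch: monitoring one narrow facet of fairness -- demographic parity of a
# model's positive-outcome rate across groups. The data and the 0.8 threshold
# (the "four-fifths rule" of thumb) are illustrative assumptions.

def positive_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def parity_ratio(outcomes_by_group):
    """Ratio of lowest to highest group positive rate (1.0 = parity)."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return min(rates) / max(rates)

outcomes = {
    "group_a": [1, 1, 0, 1],   # 75% receive the favourable outcome
    "group_b": [1, 0, 0, 1],   # 50% receive the favourable outcome
}
ratio = parity_ratio(outcomes)
flagged = ratio < 0.8          # flag for human auditing if disparity is large
```

Note how little of the principle this captures: social aspects of justice, redress, and inclusion do not reduce to a ratio, which illustrates the gap between principle and product.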
        <p>
          Non-maleficence refers to avoiding risks or potential harms and is related to safety and
security. The system should never cause foreseeable or (un)intentional harm. Harm includes
discrimination and violation of privacy in addition to physical, emotional, and economic harm,
as well as harm to infrastructure, and harm related to long-term social well-being. More
technical ways of implementing non-maleficence include data quality evaluations, an emphasis on
security and privacy, testing, monitoring, and awareness of the ‘dual-use’ potential of the system.
Governance-level solutions for implementing non-maleficence include active cooperation across
disciplines and stakeholders, compliance with existing or new legislation, and the need to
establish oversight processes and practices. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]
        </p>
        <p>
          Responsibility and accountability. These two principles deal with responsibility in the
moral and legal sense. Personal moral convictions are discussed in addition to legal
accountability. In discussing accountability, very different actors are discussed: developers, designers,
institutions or organizations, and industry. Practical ways of approaching responsibility and
accountability include legal liability, the possibility of remedy, identifying reasons and processes
that may lead to potential harm, as well as whistle-blowing in case of potential harm, and
aiming at promoting diversity. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]
        </p>
        <p>
          Privacy in AI ethics is both a value and a right. Privacy issues are most commonly discussed
in relation to data (data protection and security), as ML systems rely on large sets of data.
Implementing privacy is associated with “differential privacy, privacy by design, data minimization
and access control, calls for more research and awareness and regulatory approaches, with
sources referring to legal compliance more broadly, or suggesting certificates or the creation or
adaptation of laws and regulations to accommodate the specificities of AI.” [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]
        </p>
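        <p>
          As one hedged illustration of the quoted techniques, the Laplace mechanism underlying
differential privacy can be sketched for a simple counting query. The data and the epsilon value
below are illustrative assumptions:
        </p>

```python
import math
import random

def dp_count(values, predicate, epsilon=1.0):
    """Counting query with Laplace noise: a count has sensitivity 1,
    so noise drawn with scale 1/epsilon yields epsilon-differential privacy."""
    true_count = sum(1 for v in values if predicate(v))
    u = random.random() - 0.5  # uniform on (-0.5, 0.5)
    # Inverse-CDF sample from Laplace(0, 1/epsilon).
    noise = -math.copysign(1.0 / epsilon, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

random.seed(0)  # fixed seed only to make this illustration reproducible
stress_scores = [3, 8, 7, 2, 9, 6, 8, 1]  # hypothetical survey answers
noisy = dp_count(stress_scores, lambda s: s >= 7, epsilon=1.0)
```

        <p>
          Smaller epsilon values add more noise and thus give stronger privacy at the cost of
utility, mirroring the broader tension between data utility and privacy discussed in this section.
        </p>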
        <p>
          Beneficence is often discussed as a principle focused on promoting ‘good’ through human
well-being, peace, and happiness, but is seldom accurately defined past this general goal. This
includes the creation of socioeconomic opportunities, as well as economic prosperity. Ways
of doing so include aligning AI with human values, advancing scientific understanding of the
world, minimizing power concentration or using power to advance human rights, working more
closely with all possible stakeholders, minimizing conflicts of interest, providing channels and
possibilities for feedback, and acting on it to prove beneficence, and by creating ways of quantifying
human well-being. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]
        </p>
        <p>
          Freedom and autonomy focus on both negative and positive freedom in relation to AI.
Positive freedom refers to the freedom to flourish, to self-determination through democratic
means, the right to establish and develop relationships with other human beings, the freedom to
withdraw consent, or the freedom to use a preferred platform or technology. Negative freedom
is about, e.g., freedom from technological experimentation, manipulation, or surveillance. Ways
to implement freedom and autonomy include focusing on transparent and predictable AI, by
not ”reducing options for and knowledge of citizens”, actively seeking to increase awareness
about AI, as well as actively seeking consent from those impacted by it (e.g., data collection). [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]
        </p>
        <p>
          Trust as a principle is focused on building AI systems that (end-)users and other stakeholders
can trust. Trust is often associated with other principles (e.g., transparency, accountability,
and reliability), which are considered ways of generating trust. Stakeholders should be able
to trust the recommendations or decisions of the AI, as this is imperative for the adoption of
the system. Trust is linked to the idea of trustworthy AI. Ways of producing trust include: (1)
producing understandable systems, (2) fulfilling public expectations, (3) proof of fairness, and
(4) dialogue/stakeholder participation. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]
        </p>
        <p>
          Sustainability is about environmental and societal factors. In terms of the environment, it
focuses on, e.g., improving ecosystems and biodiversity. As for societal factors, sustainability is
about job market factors, fair societies, and the promotion of peace, among other aspects. Ways
of achieving sustainable AI include: (1) increasing energy efficiency, (2) minimizing ecological
footprint, and (3) ensuring accountability in case of potential job losses. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]
        </p>
        <p>
          Dignity is described as a vague and undefined principle. The main point of dignity is to
respect the human in the loop. In more detail, this means respecting human rights and otherwise
avoiding harm, not forcing acceptance, not automatically classifying individuals, and having
no hidden or deceptive human-AI interaction. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] Practical recommendations for doing so are
scarce.
        </p>
        <p>
          Solidarity stems from the argument that AI systems benefit already well-off individuals
and increase inequality. The idea of solidarity includes redistributing the benefits of AI, not
threatening social cohesion, and respecting vulnerable persons and groups. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] It is more
focused on the long-term societal effects of AI than on the effects of any single company or its
system at present.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Process and result of applying principles</title>
        <p>
          First, we outlined a hypothetical system to apply the principles discussed above. We outlined
goals for the system and what it would look like in practice. After this, we began to go over
the principles one at a time, discussing the contents of the principles and the proposed ways
of addressing them in practice in the given context. We considered all of these propositions
in the system and, in general, outlined a system that would adhere to all “requirements” of
each principle outlined in the various guidelines reviewed by [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. In this fashion, we specified
all the aspects of the product that would change as a result of it being developed using AI
ethics principles strictly by the book. There were, additionally, various things that should be
considered when looking at the larger context that involves the development process and the
business model of the system, but these were out of the scope of this exercise.
        </p>
        <p>The resulting system is illustrated in Figure 2 (found at the very end of the paper). Figure 2
depicts the ways in which the system was affected by the 11 principles. It also depicts which
principles contributed to which feature or part of the system (or related service). In Section
5, we use observations from this thought exercise to illustrate challenges in applying these
principles in practice from a product point of view.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>In this section, we discuss our observations from attempting to build a hypothetical system
around 11 AI ethics principles. This process and its results are discussed in Section 4. For clarity,
we have bolded each main point we are making at the start of the paragraph(s) discussing them.</p>
      <p>
        Potential for information overload. One problem we see with implementing AI ethics
directly through principles is the large number of different principles and related concepts
associated with each principle. Even for individuals well-versed in the philosophical discussion on
the topic, these concepts require studying to internalize, especially when their interplay and the
entire body of knowledge on AI ethics is considered. This is arguably even more daunting for
developers who may have no prior knowledge of AI ethics or ethics-related training. This is an
issue of communication between fields of research and professions and could be, in part, why
there has been little interest in AI ethics in SE. Based on this paper, we further highlight the
importance of the idea of repackaging and prioritizing principles to make them more actionable
or understandable for developers [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In this regard, we also feel that implementing ethics
directly through guidelines may easily make it a process that is detached from SE activities.
      </p>
      <p>
        Principles overlap. Another point we wish to raise in relation to the large number of principles
and concepts associated with them is that there is indeed some
overlap in the principles. This has been pointed out by Morley et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] who, as a solution,
recommend focusing on the five principles they highlight in their paper (also found in this
paper in Table 1). To some extent, we agree. As we have also illustrated in Figure 2, there is
also an overlap between AI ethics principles from a product point of view. To use privacy as an
example: privacy violations are a part of non-maleficence, and being able to exert control over
one’s own data is a part of autonomy. Even on the level of conceptual discussion, having fewer
principles that are more accurately defined could be beneficial.
      </p>
      <p>
        However, for practical implementation, the concept of privacy may feel more tangible. This,
again, leads back to the idea of repackaging these principles in some form to make them more
approachable in practical use. The authors of methods and other tools for implementing AI
ethics need to familiarize themselves with the discussion on AI ethics and make conscious
decisions on which principles to include and in what form. This type of debate is at the core of
ethics [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>Principles may conflict in implementation. There are also conflicts between
different principles in the same manner. Consider, for example, the conflict between beneficence and
non-maleficence. Ultimately, it could even be said that the best way to entirely minimize harm
(non-maleficence) would be to never develop a system. Yet the principle of beneficence compels
one to develop systems that increase human well-being.</p>
      <p>
        For a more practical discussion, in the case of the example system, the system would be most
beneficial if it could be used to handle data about individual employees. This would make it
possible to see if, e.g., individual employees are particularly stressed or tired. However, such
information could easily be misused to fire poorly performing employees instead of helping
them. Thus, to minimize harm and to prevent the possibility of such unintended use
(non-maleficence), the system should not provide (end-)user profiles containing extensive individual
data, even in an anonymized form, consequently making it far less useful. While in practice,
such situations result in trade-offs, strictly adhering to AI ethics principles leaves less room for
them. Similar conflicts are also highlighted by Jobin et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
          ]: “For example, the need for ever
larger, more diverse datasets to “unbias” AI appears difficult to conciliate with the requirement
to give individuals increased control over their data and its use in order to respect their privacy
and autonomy.”
      </p>
      <p>Context-specificity of principles. In terms of the practical implementation of principles,
the heterogeneity of AI systems also warrants consideration. Which principles are the most
relevant, and how relevant they are, arguably depends on the system and its use context. E.g., in
our example system, solidarity had no impact. Principles should be looked at in a
context-specific manner to make them more relevant in practice. Aspects to consider include,
for example, (1) whether the system is safety-critical, (2) whether the system makes autonomous
decisions or simply recommendations, (3) whether the use of the system (or being an object of
its data collection) is voluntary in practice, and (4) whether the system handles personal data.
While generalizations always have to be made, and it is impossible to account for the uniqueness
of each system, creating methods or tooling aimed at certain types or categories of systems
according to aspects such as the four we mentioned could be one solution to making these
principles more actionable. Reading about the importance of promoting peace or accounting
for potential military dual-use may be demotivating and uninteresting for a developer working
on an in-house recommendation system for their company. This leads to the following point.</p>
      <p>The relevance of high-level global and societal issues. Issues such as promoting peace,
accounting for military dual-use, and distribution of wealth are important topics of discussion.
However, the best place to do so may not be in tools (e.g., guidelines) meant for implementing
AI ethics in some form. Doing so may shift focus away from practice and the everyday issues
of SE. The role of this discussion is ultimately to raise awareness, but it can be very difficult to
convert such issues into requirements or features for most systems.</p>
      <p>
        The relationship between ethics and quality. When approaching ethics from the point
of view of the product, it becomes more apparent that some ethical issues are close to issues
also associated with quality. The idea of predictability in particular, i.e., that the system works
as intended and consistently, is also a traditional issue of quality. This
relationship between ethics and quality has been explored in the past in SE (e.g., in Agile [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]).
However, we argue that quality continues to be a relevant issue in the context of AI. Creating an
error-free system is one aspect of creating a system that does not unintentionally cause harm.
      </p>
      <p>
        In this vein, conventional SE approaches can thus contribute towards realizing AI ethics to
some extent. For example, various AI ethics principles stress the importance of stakeholder
communication in different ways. While the idea of a stakeholder is understood in a wider
sense in AI ethics (e.g., the general public is often considered an important stakeholder) than in
SE traditionally, the importance of stakeholder communication and involving (end-)users in
development is a core principle in Agile SE. Proper code documentation can similarly contribute
towards transparency [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. This may be something that is occasionally forgotten in the context
of ML, however.
      </p>
      <p>Communication is the solution, according to principles and guidelines. As
established when discussing overlap between principles, many separate principles advocated for
heightened communication with different stakeholders and involving them in the development
process (during the operational life of the system as well). This includes both one-way
communication from the developers to the users, as well as two-way communication by providing
different stakeholders with the means of providing feedback (and receiving compensation).</p>
      <p>
        The importance of transparency is widely acknowledged in AI ethics [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] [
        <xref ref-type="bibr" rid="ref18">18</xref>
          ]. As it is
not possible to evaluate a system with no knowledge about it, transparency is considered to enable
ethics in the first place. In this regard, there are two issues to tackle: (1) what to communicate and
to whom, and (2) how to make sense of the systems themselves to make this communication
possible. Disclosure to end-users should be different from disclosure to authorities, for example,
as end-users should not be faced with a flood of information they might be inclined to ignore
out of convenience.
      </p>
      <p>
        The second question is how to make sense of the systems themselves. Making understandable
AI systems is a technical challenge and not just an ethical one, and requires notable technical
expertise. The idea of interpretable or explainable AI systems has been extensively discussed in
AI ethics and in ML development as well. This is also a question from the end-user’s point of
view. While AI ethics principles argue that end-users should be able to understand AI systems
(through, e.g., education) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], is this feasible?
      </p>
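      <p>
        One widely used model-agnostic way to approach this explainability question is
permutation importance: measuring how much a model's accuracy drops when one input feature is
shuffled. The toy model and data below are illustrative assumptions, not from the paper:
      </p>

```python
import random

def permutation_importance(model, X, y, feature_idx, metric, seed=0):
    """Drop in the metric after shuffling one feature column."""
    base = metric([model(row) for row in X], y)
    rng = random.Random(seed)
    column = [row[feature_idx] for row in X]
    rng.shuffle(column)
    X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, column)]
    return base - metric([model(row) for row in X_perm], y)

def accuracy(preds, y):
    return sum(p == t for p, t in zip(preds, y)) / len(y)

# Toy "model": predicts 1 whenever feature 0 exceeds a threshold.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.9, 3], [0.1, 7], [0.8, 2], [0.2, 9]]
y = [1, 0, 1, 0]
imp_used = permutation_importance(model, X, y, 0, accuracy)
imp_unused = permutation_importance(model, X, y, 1, accuracy)  # 0.0: feature 1 is ignored
```

      <p>
        Reporting such importances is one way to communicate system behaviour to stakeholders,
though whether end-users can meaningfully interpret them remains the open question raised above.
      </p>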
      <p>The importance of a service-oriented approach to products in AI/ML systems. While
software being a service is now the norm in general, this is further highlighted in AI ethics.
As AI ethics principles stress the importance of communication and transparency, many of
the practical solutions to these issues include interaction with stakeholders and providing
possibilities for doing so. Consequently, many resources need to be devoted towards (end-)user
interaction in terms of involvement, feedback channels, providing opportunities for redress,
customer support, etc. This unavoidably shifts focus from product to service.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>
        In this paper, we have discussed the current state of AI ethics research with a focus on guidelines
and principles. AI ethics research continues to be largely absent in SE, and empirical studies on
AI ethics are still scarce. Studies on the state of practice point towards AI ethics receiving little
attention out on the field [
        <xref ref-type="bibr" rid="ref4">4</xref>
          ], which has been argued to be, in part, due to how difficult it is to
implement AI ethics in practice through guidelines and principles.
      </p>
      <p>
        To encourage more discussion on practical SE in AI ethics, we looked at AI ethics from the
point of view of products/services. Utilizing the most prominent AI ethics principles discussed
by Jobin et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], we reviewed them in the context of a hypothetical system. We determined
ways in which the different principles would affect such a system (or its related services, such
as customer feedback possibilities) in practice as features. Based on this review and the current
state of AI ethics research, we have discussed potential issues in AI ethics principles and
guidelines and suggested steps going forward for AI ethics research.
      </p>
      <p>We summarize our discussion with the following key takeaways:
• Guidelines and principles place notable emphasis on large-scale global and societal issues.
AI ethics should focus more on practical development.
• SE methods and other ways of repackaging guidelines into a more actionable form are
still needed.
• From the point of view of the product/service, there seems to be a notable overlap in
commonly discussed AI ethics principles. When seeking to help implement principles,
attention should be paid to potential overlap and conflicts. Conceptual discussion should
also further consider such overlap.
• AI/ML systems further emphasize the idea of software products as services. Ethical AI
necessitates close cooperation and communication with stakeholders from early design
to operations.</p>
      <p>Acknowledgments</p>
      <p>This work was partly funded by local authorities (“Business Finland”) under grant agreement
ITEA-2020-20219-IML4E of the ITEA4 programme.</p>
      <p>Figure 2: Aspects and features to be included in the product</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Jobin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ienca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Vayena</surname>
          </string-name>
          ,
          <article-title>The global landscape of ai ethics guidelines</article-title>
          ,
          <source>Nature Machine Intelligence</source>
          <volume>1</volume>
          (
          <year>2019</year>
          )
          <fpage>389</fpage>
          -
          <lpage>399</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>McNamara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Murphy-Hill</surname>
          </string-name>
          ,
          <article-title>Does acm's code of ethics change ethical decision making in software development?</article-title>
          ,
          <source>in: Proceedings of the 2018 26th ACM ESEC/FSE</source>
          , ESEC/FSE 2018, ACM, New York, NY, USA,
          <year>2018</year>
          , pp.
          <fpage>729</fpage>
          -
          <lpage>733</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C.</given-names>
            <surname>Canca</surname>
          </string-name>
          ,
          <article-title>Operationalizing ai ethics principles</article-title>
          ,
          <source>Communications of the ACM</source>
          <volume>63</volume>
          (
          <year>2020</year>
          )
          <fpage>18</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vakkuri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kemell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kultanen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Abrahamsson</surname>
          </string-name>
          ,
          <article-title>The current state of industrial practice in artificial intelligence ethics</article-title>
          ,
          <source>IEEE Software</source>
          <volume>37</volume>
          (
          <year>2020</year>
          )
          <fpage>50</fpage>
          -
          <lpage>57</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B.</given-names>
            <surname>Mittelstadt</surname>
          </string-name>
          ,
          <article-title>Principles alone cannot guarantee ethical ai</article-title>
          ,
          <source>Nature Machine Intelligence</source>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Morley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Floridi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kinsey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Elhalal</surname>
          </string-name>
          ,
          <article-title>From what to how: An initial review of publicly available ai ethics tools, methods and research to translate principles into practices</article-title>
          ,
          <source>Science and Engineering Ethics</source>
          <volume>26</volume>
          (
          <year>2020</year>
          )
          <fpage>2141</fpage>
          -
          <lpage>2168</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sloane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zakrzewski</surname>
          </string-name>
          ,
          <article-title>German ai start-ups and “ai ethics”: Using a social practice lens for assessing and implementing socio-technical innovation</article-title>
          , in: 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT '22,
          Association for Computing Machinery, New York, NY, USA,
          <year>2022</year>
          , pp.
          <fpage>935</fpage>
          -
          <lpage>947</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A. P.</given-names>
            <surname>de Azevedo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. A.</given-names>
            <surname>Tives</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. D.</given-names>
            <surname>Canedo</surname>
          </string-name>
          ,
          <article-title>Guide for artificial intelligence ethical requirements elicitation-re4ai ethical guide</article-title>
          ,
          <source>Proceedings of the 55th Hawaii International Conference on System Sciences</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vakkuri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-K.</given-names>
            <surname>Kemell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jantunen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Halme</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Abrahamsson</surname>
          </string-name>
          ,
          <article-title>Eccola-a method for implementing ethically aligned ai systems</article-title>
          ,
          <source>Journal of Systems and Software</source>
          <volume>182</volume>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vakkuri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-K.</given-names>
            <surname>Kemell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jantunen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Abrahamsson</surname>
          </string-name>
          , “
          <article-title>this is just a prototype”: How ethics are ignored in software startup-like environments</article-title>
          , in:
          <string-name>
            <given-names>V.</given-names>
            <surname>Stray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hoda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Paasivaara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kruchten</surname>
          </string-name>
          (Eds.),
          <source>Agile Processes in Software Engineering and Extreme Programming</source>
          , Springer International Publishing, Cham,
          <year>2020</year>
          , pp.
          <fpage>195</fpage>
          -
          <lpage>210</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vakkuri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-K.</given-names>
            <surname>Kemell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tolvanen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jantunen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Halme</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Abrahamsson</surname>
          </string-name>
          ,
          <article-title>How do software companies deal with artificial intelligence ethics? a gap analysis</article-title>
          ,
          <source>in: The International Conference on Evaluation and Assessment in Software Engineering 2022, EASE 2022</source>
          , Association for Computing Machinery, New York, NY, USA,
          <year>2022</year>
          , pp.
          <fpage>100</fpage>
          -
          <lpage>109</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ryan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. C.</given-names>
            <surname>Stahl</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence ethics guidelines for developers and users: clarifying their content and normative implications</article-title>
          ,
          <source>Journal of Information, Communication and Ethics in Society</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mikkonen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. K.</given-names>
            <surname>Nurminen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Raatikainen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Fronza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mäkitalo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Männistö</surname>
          </string-name>
          ,
          <article-title>Is machine learning software just software: A maintainability view</article-title>
          , in:
          <string-name>
            <given-names>D.</given-names>
            <surname>Winkler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Biffl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Mendez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wimmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bergsmann</surname>
          </string-name>
          (Eds.),
          <source>Software Quality: Future Perspectives on Software Engineering Quality</source>
          , Springer International Publishing, Cham,
          <year>2021</year>
          , pp.
          <fpage>94</fpage>
          -
          <lpage>105</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>T.</given-names>
            <surname>Granlund</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kopponen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Stirbu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Myllyaho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mikkonen</surname>
          </string-name>
          ,
          <article-title>MLOps challenges in multi-organization setup: Experiences from two real-world cases</article-title>
          , in:
          <source>2021 IEEE/ACM 1st Workshop on AI Engineering - Software Engineering for AI (WAIN)</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>82</fpage>
          -
          <lpage>88</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M. M.</given-names>
            <surname>John</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Olsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bosch</surname>
          </string-name>
          ,
          <article-title>Towards MLOps: A framework and maturity model</article-title>
          , in:
          <source>2021 47th Euromicro Conference on Software Engineering and Advanced Applications (SEAA)</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>D.</given-names>
            <surname>Sculley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Holt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Golovin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Davydov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Phillips</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ebner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Chaudhary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Young</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-F.</given-names>
            <surname>Crespo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Dennison</surname>
          </string-name>
          ,
          <article-title>Hidden technical debt in machine learning systems</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>28</volume>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>L. E.</given-names>
            <surname>Lwakatare</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Raj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Crnkovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bosch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Olsson</surname>
          </string-name>
          ,
          <article-title>Large-scale machine learning systems in real-world industrial settings: A review of challenges and solutions</article-title>
          ,
          <source>Information and Software Technology</source>
          <volume>127</volume>
          (
          <year>2020</year>
          )
          <fpage>106368</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>T.</given-names>
            <surname>Hagendorff</surname>
          </string-name>
          ,
          <article-title>The ethics of AI ethics: An evaluation of guidelines</article-title>
          ,
          <source>Minds and Machines</source>
          <volume>30</volume>
          (
          <year>2020</year>
          )
          <fpage>99</fpage>
          -
          <lpage>120</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vakkuri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kemell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kultanen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Siponen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Abrahamsson</surname>
          </string-name>
          ,
          <article-title>Ethically aligned design of autonomous systems: Industry viewpoint and an empirical study</article-title>
          , arXiv preprint arXiv:1906.07946 (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>D. W.</given-names>
            <surname>Gotterbarn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Brinkman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Flick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Kirkpatrick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Vazansky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Wolf</surname>
          </string-name>
          ,
          <source>ACM Code of Ethics and Professional Conduct</source>
          ,
          <year>2018</year>
          . URL: https://www.acm.org/code-of-ethics.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vakkuri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-K.</given-names>
            <surname>Kemell</surname>
          </string-name>
          ,
          <article-title>Implementing AI ethics in practice: An empirical evaluation of the RESOLVEDD strategy</article-title>
          , in:
          <string-name>
            <given-names>S.</given-names>
            <surname>Hyrynsalmi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Suoranta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nguyen-Duc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Tyrväinen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Abrahamsson</surname>
          </string-name>
          (Eds.),
          <source>Software Business</source>
          , Springer International Publishing, Cham,
          <year>2019</year>
          , pp.
          <fpage>260</fpage>
          -
          <lpage>275</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>H.</given-names>
            <surname>Abdulhalim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lurie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mark</surname>
          </string-name>
          ,
          <article-title>Ethics as a quality driver in agile software projects</article-title>
          ,
          <source>Journal of Service Science and Management</source>
          <volume>11</volume>
          (
          <year>2018</year>
          )
          <fpage>13</fpage>
          -
          <lpage>25</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>