<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>IUI Workshops’19, March 20, 2019, Los Angeles, USA</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Requirements for Explainable Smart Systems in the Enterprises from Users and Society Based on FAT</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>
            <given-names>Yuri</given-names>
            <surname>Nakao</surname>
          </string-name>
          <email>nakao.yuri@jp.fujitsu.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>
            <given-names>Hikaru</given-names>
            <surname>Yokono</surname>
          </string-name>
          <email>yokono.hikaru@jp.fujitsu.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>
            <given-names>Junichi</given-names>
            <surname>Shigezumi</surname>
          </string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>
            <given-names>Takuya</given-names>
            <surname>Takagi</surname>
          </string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>FUJITSU LABORATORIES LTD.</institution>
          , Kawasaki,
          <country country="JP">Japan</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <permissions>
        <copyright-statement>Copyright © 2019 for the individual papers by the papers’ authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors.</copyright-statement>
        <copyright-year>2019</copyright-year>
      </permissions>
      <abstract>
        <p>As statistical methods for smart systems prevail, requirements related to explainability based on fairness, accountability, and transparency (FAT) become stronger. While end users need to confirm the fairness of the output of smart systems at a glance, society needs thorough explanations based on FAT. In this paper, we offer a conceptual framework for practically considering the explainability of smart systems in enterprises. A conceptual model with two layers, a core layer and an interface layer, is provided, and we discuss the ideal environment in which explainable smart systems can meet the demands of both users and society.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainable Smart Systems</kwd>
        <kwd>Explainability</kwd>
        <kwd>Enterprise</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 INTRODUCTION</title>
      <p>
        Explainability for algorithmic systems has been needed since
the initial stage of research on intelligent systems [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. This
is because those who use the analysis results obtained with
intelligent systems are human decision makers, and they
need to judge the validity of the results. For conventional
rule-based intelligent systems, it is not difficult to keep the
process of decision-making explainable. However, recent
statistical methods for smart systems, such as machine learning
and especially deep learning, have difficulty with direct
extraction of explanations that users or other human stakeholders
can accept [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Therefore, as machine learning becomes
popular, the social necessity for explainable smart systems
also becomes stronger.
      </p>
      <p>
        This necessity for explainability for smart systems is
categorized into two sides: the user side and social side (Figure 1).
From the social side, it can be pointed out that demand has
increased for explainability based on fairness, accountability,
and transparency (FAT). For example, in the current situation
of data regulation, it is mentioned that meaningful
information is needed related to decisions made with algorithmic
systems [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ]. Such a necessity for explainability exists
on the basis of discussions on FAT because the entire process
of decision-making involving the use of support systems
needs to be explained to confirm FAT. From the user side,
the explainability of smart systems is necessary basically
because users should understand the results of systems used
for the decisions made around them or should operate
the systems and adjust the results as they like [
        <xref ref-type="bibr" rid="ref23 ref29">23, 29</xref>
        ].
      </p>
      <p>
        Explainable smart systems in enterprises ideally need to
meet demands from both users and society at the same time
regarding the explainability described above. These demands
exist regardless of whether the enterprises are public
organizations or private companies because, while private
companies have trade secrets or intellectual property to protect,
enterprises that are responsible for the needs of society also
have their own citizens or customers who are recipients of
their services. To meet the demands of both users and society,
enterprises have to overcome difficulties that occur because
of a trade-off between the two. When they try to meet the
social demand for explainability, complete accountability
and transparency are ideally needed. This means that the
complex process of decision-making has to be traceable, and
the reason for each decision has to be able to be elaborated
on when requested. This evokes the problem of
information overload from the user side because the processes of
decision-making in enterprises are multi-tiered, various, and
difficult for users to understand at first glance. Some research
on machine learning considering fairness takes how to avoid
information overload into account [
        <xref ref-type="bibr" rid="ref12 ref24 ref31">12, 24, 31</xref>
        ]. However, the
necessity of avoiding information overload limits the
transparency or traceability of decision-making processes.
      </p>
      <p>In this paper, we offer a conceptual framework for
considering the explainability of smart systems in enterprises. We
suggest considering systems in organizations as integrated
smart systems and splitting the discussion on the systems
in regard to two layers: a core layer and interface layer. The
core layer is the part that has functions for guaranteeing that
the FAT required by society is met. The interface layer is the
part for discussing how to filter information from the core
layer to make it understandable to users. We focus mainly
on the core layer because the interface layer is discussed
well in existing papers. Discussion on the core layer is
described not only from a technological perspective but from
an organizational one. Our contribution to the community
is that we clarify the differences in the necessity for
explainability from the perspective of users and of society, and we
suggest a conceptual framework of the explainability needed
in enterprises.</p>
    </sec>
    <sec id="sec-2">
      <title>2 RELATED WORK</title>
      <p>
        There have been several discussions on the explainability
of smart systems in enterprises. As statistical approaches for
data analysis are popularized, several articles summarize
statistical methods for industry from the perspective of
explainability [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] or appropriate industrial domains to apply
statistical methods to [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Several papers discuss
explainability from the perspective of users [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ]. For example,
Chander et al. focus on how explanations extracted by
artificial intelligence are accepted by different types of users
and discuss the appropriateness of the results from
state-of-the-art explanation methods for each type of user [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. In
these papers, a system is considered to be a single agent
that extracts explanations that were decided on by a single
department in charge. However, in reality, smart systems
have to be considered as integrated systems because, inside
the systems, there are heterogeneous methods and
criteria for data processing.
      </p>
      <p>
        Besides the explainability from the user side that is
discussed in existing papers, there is a need for explainability
from the perspective of the social side because there are
several stakeholders, such as governments or public
organizations, that require FAT for both public and private
enterprises. While there is discussion on the transparency of
information in heterogeneous organizations in the field of
ethics [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ], there are no concrete requirements for
achieving FAT with explainable smart systems in enterprises. In
this paper, our focus is to make a conceptual framework
for breaking issues on explainable smart systems down into
concrete requirements for the environments in which the
systems exist.
      </p>
      <p>
        As practical measures, there have been several
technological approaches to FAT in the industry community. For
example, IBM launched new Trust and Transparency
capabilities on their cloud service1. To keep AI systems fair,
their system provides functions that automatically detect and
produce alerts for biases in decisions made by the system.
The functions visualize the confidence scores of data
recommended to be added to the model used for the system. Related
to this kind of explanation in the meaning of visualization,
several companies, such as simMachines2, and Fujitsu [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]
have developed machine learning technologies for extracting
results in ways that are understandable to users. However,
these approaches focus only on how explanations of fairness
are intelligible to their users. While such approaches can
confirm the understandability of explanations, they cannot
cover all FAT issues. In this paper, by focusing on the social
side, we discuss explainability on the basis of FAT. This
will help in developing new technologies that can be used to
thoroughly confirm explainability in our ideal environment
in which systems that are explainable from the perspective
of FAT exist, and we offer a list of ideal conditions.
1https://newsroom.ibm.com/2018-09-19-IBM-Takes-Major-Step-inBreaking-Open-the-Black-Box-of-AI
2https://simmachines.com/
      </p>
    </sec>
    <sec id="sec-3">
      <title>3 DETAILS OF EXPLAINABILITY</title>
      <p>To develop our conceptual framework for discussing the
explainability of smart systems in enterprises, we need to
elaborate on the explainability required from users and from
society in terms of the necessity of FAT in smart systems.</p>
    </sec>
    <sec id="sec-4">
      <title>FAT as a Reason for Explainability</title>
      <p>
        Nowadays, requirements for the explainability of algorithmic
systems from domestic governments, academic
communities3, or international organizations are based on discussion
about FAT [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. As web services or decision support systems
with complex algorithms prevail, many people are making
decisions under the influence of algorithms without
knowing it [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. This problematic situation is pointed out within
the concepts named “algorithmic awareness” [
        <xref ref-type="bibr" rid="ref19 ref2">2, 19</xref>
        ], “filter
bubble” [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ], or “social echo chamber” [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. Discussions on
these have led to requirements for explicitly displaying the existence
of algorithms, clarifying the effects of algorithms
used for daily decisions, and making algorithmic systems
transparent.
3IEEE ethically aligned design https://ethicsinaction.ieee.org/
      </p>
      <p>
        The social necessity for explainability based on FAT is
seen especially explicitly in the context of data regulations.
There are requirements in the GDPR, a data regulation from
the EU taking effect globally, for algorithmic systems to
extract meaningful information related to decisions made
by the systems. The necessity for meaningful information
is mainly due to the necessity for users to obtain enough
information on algorithmic decision-making to have an
actionable discrimination claim [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ]. Thus, the necessity for
explainability to show fairness in smart systems results in
the necessity for accountability and transparency to confirm
fairness in the entire process of algorithmic decision-making.
Therefore, it is important to consider explainability from the
perspective of FAT.
      </p>
      <p>To discuss the details of this kind of explainability, we
elaborate on fairness, accountability, and transparency as
the first step. We define fairness, accountability, and
transparency in our context in the following part as summarized
in Table 1.</p>
      <table-wrap id="tab1">
        <label>Table 1</label>
        <caption>
          <p>Definitions of fairness, accountability, and transparency in our context.</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>Concept</th>
              <th>Definition in our context</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Fairness</td>
              <td>There is no bias regarding data of individuals that stakeholders consider sensitive in the results of analyses or decisions supported with smart systems.</td>
            </tr>
            <tr>
              <td>Accountability</td>
              <td>The structure of an organization confirms responsibilities for each and entire data analysis results.</td>
            </tr>
            <tr>
              <td>Transparency</td>
              <td>Outsiders can trace the entire process of data analysis or algorithmic decision-making.</td>
            </tr>
          </tbody>
        </table>
      </table-wrap>
      <p>
        Fairness. Fairness is a multifaceted concept that varies
according to the domain focused on. For example, in the field of
machine learning, there are two definitions of fairness: group
fairness and individual fairness [
        <xref ref-type="bibr" rid="ref17 ref32">17, 32</xref>
        ]. According to Zemel
et al., group fairness is considered as the concept meaning
that the proportion of members in a protected group
receiving positive classification is identical to the proportion in
the population as a whole, and the definition of individual
fairness is that similar individuals should be treated
similarly [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ]. Moreover, there are diverse ways of considering
      </p>
      <sec id="sec-4-1">
        <title>3IEEE ethically aligned design https://ethicsinaction.ieee.org/</title>
        <p>
          the definition of fairness outside of the machine learning
community [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. The definition varies depending on what
kind of justice is considered [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], or, more simply, on who
should be protected, to what extent protected people should
be protected, and so on.
        </p>
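        <p>As a minimal illustration of the two machine-learning notions discussed above (our own sketch, not part of the original formulations), the following Python snippet checks group fairness as the gap in positive rates and individual fairness as a similar-pairs condition; the distance metric and thresholds are assumptions chosen for the example.</p>
        <preformat><![CDATA[
# Illustrative sketch of group and individual fairness; the data,
# distance metric, and thresholds are assumptions for this example.
import numpy as np

def group_fairness_gap(y_pred, protected):
    """Gap between the positive rate in the protected group and overall."""
    y_pred = np.asarray(y_pred, dtype=float)
    protected = np.asarray(protected, dtype=bool)
    return abs(y_pred[protected].mean() - y_pred.mean())

def individual_fairness_violations(X, scores, eps=0.5, delta=0.1):
    """Count pairs that are similar (distance below eps) yet receive
    dissimilar scores (difference of at least delta)."""
    X = np.asarray(X, dtype=float)
    scores = np.asarray(scores, dtype=float)
    violations = 0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            close = np.linalg.norm(X[i] - X[j]) < eps
            if close and abs(scores[i] - scores[j]) >= delta:
                violations += 1
    return violations

# Toy usage: positive rate is 0.5 in the protected group vs. 0.75 overall.
print(group_fairness_gap([1, 0, 1, 1], [True, True, False, False]))  # 0.25
]]></preformat>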
        <p>Therefore, the concept of fairness that we discuss here is
that there is no bias regarding the data of individuals that
are thought to be sensitive by stakeholders related to
discrimination in the results of analyses or decisions supported
with smart systems. This definition of fairness means that
the content of fairness itself changes flexibly depending on
stakeholders, including customers, government, and other
social groups, but fairness in one domain should be confirmed
among the stakeholders.</p>
        <p>
          Accountability. In conventional discussions, accountability
is a concept that is achieved not only with algorithms but
by all parts of an enterprise [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. The accountability of
algorithms should be considered together with that of groups
of people in enterprises. As a foundation for accountability
in the field of computer science, Nissenbaum stated that
accountability can be considered something related to moral
blameworthiness [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ]. In brief, blameworthiness is defined as
the conditions that someone’s actions caused a harm and her
or his actions were faulty. In her article, while it is mentioned
that blameworthiness is not identical to accountability,
accountability is grasped by “the nature of an action and the
relationship of the agent (or several agents) to the action’s
outcome.” Moreover, Diakopoulos describes the demand for
accountable algorithms as one that exists simultaneously with
the demand for accountability of the people behind the
algorithms [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
        <p>
          Following these discussions, we consider the concept of
accountability as a concept that indicates situations in which an
organization possesses a structure that confirms the validity
of the output of each phase of data processing in
decision-making supported by smart systems, and, at the same time, of
the output of the entire process. For example, to confirm the
validity of the output of a process, there has to be an identified
department responsible for that phase. The reason that the
responsibilities for each phase of data processing and the
entire process of it are described respectively is that confirming
the validity of each result from each phase of data processing
is different from confirming the validity of the results from
the entire process. For example, even if the outputs of each
phase are fair, the outputs of the entire process of analysis are
sometimes unfair because training data extracted from a
previous phase of data processing that seem to be fair can cause
unfair results in a later phase [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. Moreover, this definition of
accountability implies that there is one requirement for the
methods embedded in smart systems for which departments
have a responsibility, that is, that the methods should be
equipped with mechanisms that can provide reasons for the
outputs for those who have responsibility.
        </p>
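        <p>As a hedged sketch of this organizational requirement (our illustration; the class and field names are hypothetical), the following Python snippet records a responsible department for every phase of data processing and for the entire process, with each phase able to produce reasons for its output.</p>
        <preformat><![CDATA[
# Hypothetical sketch: every phase, and the end-to-end process, has an
# answerable department plus a hook for producing reasons for its output.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Phase:
    name: str                   # e.g., "data cleansing", "model training"
    department: str             # department answerable for this phase
    explain: Callable[[], str]  # returns reasons for the phase's output

@dataclass
class DecisionProcess:
    phases: List[Phase] = field(default_factory=list)
    overall_department: str = ""  # owner of the final, end-to-end output

    def audit(self) -> List[str]:
        """Per-phase reasons plus the overall owner: fair phase outputs
        do not by themselves guarantee a fair overall result."""
        lines = [f"{p.name} ({p.department}): {p.explain()}"
                 for p in self.phases]
        lines.append(f"entire process owned by: {self.overall_department}")
        return lines
]]></preformat>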
        <p>
          Transparency. The concept of transparency is considered
both in the field of computer science and in that of ethics. In
the former, transparency means to show the internal
functions of a system. For example, in the field of recommender
systems or of data analysis, when a system explains the
reason results were extracted, the system becomes more
transparent [
          <xref ref-type="bibr" rid="ref13 ref22 ref28">13, 22, 28</xref>
          ]. In the latter, transparency is
considered to be a macro concept that focuses on the entire process
of operation related to data in organizations. The concept
covers what occurs among people, data, or algorithmic
systems, including practices, norms, and other factors [
          <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
          ] or
all phases of data processing [
          <xref ref-type="bibr" rid="ref30">30</xref>
          ]. It is useful to introduce
transparency as a concept for clarifying the details of data
processing to develop smart systems in enterprises.
        </p>
        <p>According to the discussions above, we define the concept
of transparency in the context of smart systems in
enterprises to mean that the entirety of data processing can be
traced by outsiders if needed. Of course, there are
difficulties in achieving thorough transparency because there are
matters of privacy, trade secrets, or other sensitive factors.
However, whether full information is really disclosed or not,
it is necessary to confirm that it is possible for third parties
to trace the detailed process of algorithmic decision-making.
Additionally, the definition is consistent with transparency
considered in the field of computer science in that
traceability of data processing is achieved by expressing the results
of processing in interpretable ways.</p>
    </sec>
    <sec id="sec-5">
      <title>Two Types of Explainability</title>
      <p>Explainability based on FAT that is needed as per the
discussions above should ideally always be provided to users
both in and outside of an enterprise. However, realistically
speaking, this is difficult because of the complex structure of
the data analysis process. Therefore, the necessity for
explanation from the end users’ perspective should be separated
from that of society (Figure 2).</p>
      <p>
Needs from End Users. The explainability needed from the
user side has been considered in research on how users
recognize, understand, or learn about explanations from smart
systems [
        <xref ref-type="bibr" rid="ref23 ref29">23, 29</xref>
        ]. Additionally, several pieces of existing
research on machine learning focus on how easily end users
read the results of analyses [
        <xref ref-type="bibr" rid="ref24 ref31">24, 31</xref>
        ]. We name the need to
be able to read results as assumed in the field of computer
science “readability.”
      </p>
      <p>
        It is difficult to confirm explainability for accountability
and transparency when there has to be readability as well.
This is because, while fairness can be guaranteed in each
single process of analysis, accountability and transparency
need to be considered among multiple processes or
departments, which is hard for end users to understand at first
glance. Therefore, when explainability based on readability
for users is discussed, FAT cannot be guaranteed in its entirety.
      </p>
      <p>
        Needs from Society. The social need for FAT in enterprises
has become stronger as data-driven analysis technologies
become more popular. Current regulations [
        <xref ref-type="bibr" rid="ref18 ref27">18, 27</xref>
        ] and visions4
mention the importance or necessity of FAT in algorithmic
systems. Additionally, there are also important concepts such
as the filter bubble or algorithm awareness [
        <xref ref-type="bibr" rid="ref19 ref2 ref26">2, 19, 26</xref>
        ]. Among
these discussions, both public and private enterprises are
usually faced with a demand for thorough confirmation of FAT.
We use the word “thorough” in the sense that confirmation
is not limited because of readability. Of course, protecting
privacy or trade secrets has to be considered, but the social
requirement for FAT is that enterprises be fair, accountable,
and transparent as far as possible.
4Ibid.
      </p>
    </sec>
    <sec id="sec-6">
      <title>4 CONCEPTUAL FRAMEWORK</title>
      <p>According to the discussions above, the functions for meeting
demands from individual users and those from society are
different and should be discussed separately. However, this
is difficult because these two types of demands are easily
jumbled, for they have to be considered as requirements for
the same system in an enterprise. Existing research has not
suggested frameworks for discussing explainability based on
FAT separately for the purpose of meeting these two different
demands. Therefore, we need to visualize a framework for
considering these separate discussions first.</p>
    </sec>
    <sec id="sec-7">
      <title>Core Layer and Interface Layer</title>
      <p>
        We suggest that the structure of a smart system be
considered in two separate parts: a core layer and interface layer.
We show the structure of the two layers in Figure 3. Simply
speaking, the interface layer is set to meet demands from end
users, and the core layer is for meeting those from society.
The interface layer covers FAT-aware technologies that take
readability into consideration [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] and discussions on what
kind of information should be displayed to end users [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
The core layer covers discussions on confirming FAT for the
entire structure of an enterprise [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ] and technologies that
focus on making results fair without considering readability.
Technologies for showing the potential results of analyses or
decision-making, such as technologies related to data
structure, can be helpful for considering FAT for the core layer
[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Compared with the technologies discussed for the core
layer, technologies for the interface layer can be interpreted
as technologies that play the role of filtering information
from the core layer and showing readable results to users.
Discussions related to the core layer include the structural
issues of enterprises. However, there are few studies in the
field of computer science on the environments in which
explainable smart systems can exist with respect to FAT. Therefore,
the ideal conditions under which explainable smart systems
can exist have to be discussed.
      </p>
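      <p>To make the two-layer split concrete, here is a small sketch (our illustration; the record fields and function names are hypothetical): the core layer keeps the full, traceable FAT record for society-facing audits, and the interface layer filters it into a readable summary for end users.</p>
      <preformat><![CDATA[
# Illustrative two-layer split; the record contents are invented examples.
from typing import Dict, List

def core_layer_record() -> List[Dict]:
    """Full per-phase record kept for audits by outsiders (hypothetical)."""
    return [
        {"phase": "feature selection", "department": "data team",
         "detail": "excluded 'nationality' and its proxy 'present address'"},
        {"phase": "scoring", "department": "analytics team",
         "detail": "group fairness gap of 0.02 on the protected attribute"},
    ]

def interface_layer(record: List[Dict]) -> str:
    """Filter the core record into a short, readable summary for users."""
    return (f"Decision checked for fairness across "
            f"{len(record)} processing phases.")

full_record = core_layer_record()    # traceable by third parties on request
print(interface_layer(full_record))  # what the end user actually sees
]]></preformat>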
    </sec>
    <sec id="sec-8">
      <title>Explainability and FAT in Smart Systems</title>
      <p>As a first step toward thinking of the ideal environment in
which FAT-aware explainable smart systems exist in
enterprises, we offer a fivefold list (Table 2) based on our definition
of FAT described in the previous section. In the list, there
are technological factors and structural ones for composing
such an environment. We use the word “structural” to mean
what is related to the organizational structure of enterprises.
While the list in Table 2 is an example of the requirements
for an environment in which there are explainable smart
systems, it is helpful when taking different definitions of
FAT into account.</p>
      <table-wrap id="tab2">
        <label>Table 2</label>
        <caption>
          <p>Requirements for an environment in which FAT-aware explainable smart systems exist.</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>Requirement</th>
              <th>Type</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Detecting discriminative features</td>
              <td>Technological</td>
            </tr>
            <tr>
              <td>Adjustability of factors in the model</td>
              <td>Technological</td>
            </tr>
            <tr>
              <td>Interpretability in all models used in the system</td>
              <td>Technological</td>
            </tr>
            <tr>
              <td>Departments in charge of each process</td>
              <td>Structural</td>
            </tr>
            <tr>
              <td>Departments in charge of the entire process</td>
              <td>Structural</td>
            </tr>
          </tbody>
        </table>
      </table-wrap>
      <p>
        Detecting discriminative features. To guarantee the fairness
of results, smart systems should detect features related to
discriminative outputs. According to our definition of fairness,
FAT-aware explainable smart systems must not
extract outputs that have biases regarding sensitive features.
Therefore, technologies for detecting sensitive factors need
to be implemented in the systems. Moreover, not only
sensitive factors such as gender or nationality but also features that
are correlated with those factors (e.g., one’s present address)
should ideally be detected. Some conventional works in the
field of machine learning approach this problem [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ].
      </p>
      <p>
        Adjustability of factors in the model. After detecting the
features related to unfair outputs, users should be able to adjust,
that is, delete or add, features. This is because of our definition
of fairness, that is, that sensitive features related to
discrimination are decided by stakeholders and change flexibly in
accordance with the context. In technological terms, this
means that users, including decision makers, can change what
features should be included or excluded in terms of fairness.
This function of smart systems helps people in enterprises
account for their decisions in consideration of fairness. This
means the function helps to keep the environment
accountable and, moreover, transparent indirectly.
      </p>
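      <p>The following Python sketch illustrates the two requirements above together (our illustration; the correlation threshold and the toy data are assumptions): it flags features strongly correlated with a sensitive attribute, including proxies such as a present address, and lets stakeholders adjust which features the model may use.</p>
      <preformat><![CDATA[
# Illustrative proxy detection and feature adjustment; the correlation
# threshold and the toy data are assumptions for this example.
import numpy as np

def flag_proxy_features(X, names, sensitive, threshold=0.7):
    """Return names of features whose absolute correlation with the
    sensitive attribute meets the stakeholder-chosen threshold."""
    flagged = []
    for j, name in enumerate(names):
        corr = np.corrcoef(X[:, j], sensitive)[0, 1]
        if abs(corr) >= threshold:
            flagged.append(name)
    return flagged

def adjust_features(names, excluded):
    """Stakeholders flexibly delete (or re-add) features for the model."""
    return [n for n in names if n not in excluded]

# Toy usage: 'district_code' behaves as a proxy for the sensitive attribute.
rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, 100).astype(float)
X = np.column_stack([sensitive + rng.normal(0, 0.2, 100),  # district_code
                     rng.normal(0, 1, 100)])               # income
flagged = flag_proxy_features(X, ["district_code", "income"], sensitive)
print(adjust_features(["district_code", "income"], set(flagged)))
]]></preformat>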
      <p>Interpretability in all models used in the system. All models in
smart systems need to be interpretable to confirm the
accountability and transparency of the systems. According to
our definition of accountability, the methods in explainable
smart systems have to be composed in such a way
that each method can provide reasons for its output.
Therefore, the outputs of the technologies in systems,
including statistical methods, have to be able to be interpreted by
human decision makers. This condition would be helpful for
confirming transparency because, in order for outsiders to
trace the process of data analysis, it is necessary that the
output of each phase of analyses be shown in a form that
outsiders can understand.</p>
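      <p>As a sketch of this interpretability condition (our illustration; the interface and class names are hypothetical), each model used in a smart system can be required to expose, next to its prediction, human-readable reasons that decision makers and outside tracers can inspect.</p>
      <preformat><![CDATA[
# Hypothetical sketch: every model in the pipeline must expose reasons
# for its output in a form that humans can read.
from abc import ABC, abstractmethod

class InterpretableModel(ABC):
    @abstractmethod
    def predict(self, x):
        """Return the model's output for input x."""

    @abstractmethod
    def interpret(self, x) -> str:
        """Return human-readable reasons for the output on input x."""

class RuleScorer(InterpretableModel):
    """Trivially interpretable phase: a single threshold rule."""
    def __init__(self, feature: int, threshold: float):
        self.feature, self.threshold = feature, threshold

    def predict(self, x):
        return int(x[self.feature] >= self.threshold)

    def interpret(self, x) -> str:
        return (f"output {self.predict(x)} because feature[{self.feature}]"
                f" = {x[self.feature]} vs. threshold {self.threshold}")

# Each phase's interpretation can be logged for outsiders to trace.
phase = RuleScorer(feature=0, threshold=0.5)
print(phase.interpret([0.8, 0.1]))
]]></preformat>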
      <p>
        Departments in charge of each process. There have to be
departments that are responsible for each phase of data processing
in enterprises to confirm accountability and transparency.
This requirement is not related to technology but to the
structure of an enterprise. While this condition corresponds to
a part of our definition of accountability, what is important
is that this requirement is useful when considering it with
respect to transparency. When outsiders trace the phases of
data processing, they have to ask for the release of detailed
information in some way or other. For this request to be
effective, each public or private organization has to have an
individual department or group of people responsible for
each phase of data processing internally. This kind of
requirement for enterprises shows that departments in charge
of each process are needed in terms of transparency.
      </p>
      <p>
        Departments in charge of the entire process. To guarantee
accountability, the ideal environment has to have a
department, a group of people, or an individual that is responsible
for the final output of a system. This requirement is based
on our definition of accountability: the structure of an
enterprise in which an explainable smart system exists has
to have a function for confirming the validity or rationality
of the final results of decision-making. Precisely speaking,
making a process for decision-making transparent does not
directly mean confirming accountability [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Therefore,
departments that are responsible for the final results of data
processing must guarantee accountability.
      </p>
    </sec>
    <sec id="sec-9">
      <title>5 CONCLUSION AND FUTURE WORK</title>
      <p>We discussed requirements related to explainability based
on FAT from two sides, the user side and the social side.
Describing these two sides and giving detailed definitions
on fairness, accountability, and transparency, we clarified
the differences and trade-offs between the two sides. After
this, we suggested a conceptual framework that splits the
structure of smart systems in enterprises into two layers: a
core layer and interface layer. On the basis of this framework,
we focused on the core layer and set the ideal environment
in which explainable smart systems based on FAT exist with
a fivefold table. We suggested a first step toward considering
the environment including not only the systems themselves
but also the structure of the organization of enterprises. As
future work, technologies or the principles for meeting both
social and user demands will be built on the basis of our
framework.</p>
      <p>In this paper, we proposed an abstract conceptual
framework for recognizing the situation around the discussion of
FAT from two perspectives. To show the usefulness of our
framework, case studies would be preferable, such
as analyses of specific FAT-aware technologies or of the structure
of enterprises. With such case studies, there can be chances
to obtain guidance for technological implementation
and for methods of organizational analysis of various
enterprises with our framework.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Ashraf</given-names>
            <surname>Abdul</surname>
          </string-name>
          , Jo Vermeulen, Danding Wang,
          <string-name>
            <given-names>Brian Y.</given-names>
            <surname>Lim</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Mohan</given-names>
            <surname>Kankanhalli</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda</article-title>
          .
          <source>In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18)</source>
          . ACM, New York, NY, USA, Article
          <volume>582</volume>
          , 18 pages.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Oscar</given-names>
            <surname>Alvarado</surname>
          </string-name>
          and
          <string-name>
            <given-names>Annika</given-names>
            <surname>Waern</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Towards Algorithmic Experience: Initial Efforts for Social Media Contexts</article-title>
          .
          <source>In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18)</source>
          . ACM, New York, NY, USA, Article
          <volume>286</volume>
          , 12 pages.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Mike</given-names>
            <surname>Ananny</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness</article-title>
          .
          <source>Science, Technology, &amp; Human Values</source>
          <volume>41</volume>
          ,
          <issue>1</issue>
          (Sep.
          <year>2016</year>
          ),
          <fpage>93</fpage>
          -
          <lpage>117</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Mike</given-names>
            <surname>Ananny</surname>
          </string-name>
          and
          <string-name>
            <given-names>Kate</given-names>
            <surname>Crawford</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability</article-title>
          .
          <source>New Media &amp; Society</source>
          <volume>20</volume>
          ,
          <issue>3</issue>
          (Dec.
          <year>2018</year>
          ),
          <fpage>973</fpage>
          -
          <lpage>989</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Solon</given-names>
            <surname>Barocas</surname>
          </string-name>
          and
          <string-name>
            <given-names>Andrew D.</given-names>
            <surname>Selbst</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Big Data's Disparate Impact</article-title>
          .
          <source>California Law Review</source>
          <volume>104</volume>
          ,
          <issue>3</issue>
          (Jun.
          <year>2016</year>
          ),
          <fpage>671</fpage>
          -
          <lpage>732</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Reuben</given-names>
            <surname>Binns</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Fairness in Machine Learning: Lessons from Political Philosophy</article-title>
          .
          <source>In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT*'18)</source>
          . PMLR, New York, NY, USA,
          <fpage>149</fpage>
          -
          <lpage>159</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Indranil</given-names>
            <surname>Bose</surname>
          </string-name>
          and
          <string-name>
            <given-names>Radha K.</given-names>
            <surname>Mahapatra</surname>
          </string-name>
          .
          <year>2001</year>
          .
          <article-title>Business data mining - a machine learning perspective</article-title>
          .
          <source>Information &amp; Management</source>
          <volume>39</volume>
          ,
          <issue>3</issue>
          (Dec.
          <year>2001</year>
          ),
          <fpage>211</fpage>
          -
          <lpage>225</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Randal E.</given-names>
            <surname>Bryant</surname>
          </string-name>
          .
          <year>1992</year>
          .
          <article-title>Symbolic Boolean Manipulation with Ordered Binary-decision Diagrams</article-title>
          .
          <source>ACM Comput. Surv</source>
          .
          <volume>24</volume>
          ,
          <issue>3</issue>
          (Sep.
          <year>1992</year>
          ),
          <fpage>293</fpage>
          -
          <lpage>318</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Erik</given-names>
            <surname>Brynjolfsson</surname>
          </string-name>
          and Tom Mitchell.
          <year>2017</year>
          .
          <article-title>What can machine learning do? Workforce implications</article-title>
          .
          <source>Science</source>
          <volume>358</volume>
          ,
          <issue>6370</issue>
          (Dec.
          <year>2017</year>
          ),
          <fpage>1530</fpage>
          -
          <lpage>1534</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Ajay</given-names>
            <surname>Chander</surname>
          </string-name>
          and
          <string-name>
            <given-names>Ramya</given-names>
            <surname>Srinivasan</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Evaluating Explanations by Cognitive Value</article-title>
          .
          <source>In Machine Learning and Knowledge Extraction</source>
          . Springer International Publishing, Cham, Switzerland,
          <fpage>314</fpage>
          -
          <lpage>328</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Ajay</given-names>
            <surname>Chander</surname>
          </string-name>
          , Ramya Srinivasan, Suhas Chelian,
          <string-name>
            <given-names>Jun</given-names>
            <surname>Wang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Kanji</given-names>
            <surname>Uchino</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Working with Beliefs: AI Transparency in the Enterprise</article-title>
          .
          <source>In Joint Proceedings of the ACM IUI 2018 Workshops.</source>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Nicole</given-names>
            <surname>Cruz</surname>
          </string-name>
          , Jean Baratgin, Mike Oaksford, and
          <string-name>
            <given-names>David E.</given-names>
            <surname>Over</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Bayesian reasoning with ifs and ands and ors</article-title>
          .
          <source>Frontiers in Psychology 6, Article</source>
          <volume>192</volume>
          (
          <issue>Feb</issue>
          .
          <year>2015</year>
          ), 9 pages.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Anupam</given-names>
            <surname>Datta</surname>
          </string-name>
          , Shayak Sen, and
          <string-name>
            <given-names>Yair</given-names>
            <surname>Zick</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Algorithmic Transparency via Quantitative Input Influence</article-title>
          . Springer International Publishing, Cham, Switzerland,
          <fpage>71</fpage>
          -
          <lpage>94</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Nicholas</given-names>
            <surname>Diakopoulos</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>Algorithmic Accountability Reporting: On the Investigation of Black Boxes</article-title>
          .
          <source>Tow Center for Digital Journalism, A Tow/Knight Brief</source>
          (Dec
          .
          <year>2014</year>
          ),
          <fpage>1</fpage>
          -
          <lpage>32</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Nicholas</given-names>
            <surname>Diakopoulos</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Accountability in Algorithmic Decision Making</article-title>
          .
          <source>Commun. ACM 59</source>
          ,
          <issue>2</issue>
          (Jan.
          <year>2016</year>
          ),
          <fpage>56</fpage>
          -
          <lpage>62</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Graham</given-names>
            <surname>Dove</surname>
          </string-name>
          , Kim Halskov, Jodi Forlizzi, and
          <string-name>
            <given-names>John</given-names>
            <surname>Zimmerman</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>UX Design Innovation: Challenges for Working with Machine Learning As a Design Material</article-title>
          .
          <source>In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17)</source>
          . ACM, New York, NY, USA,
          <fpage>278</fpage>
          -
          <lpage>288</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Cynthia</given-names>
            <surname>Dwork</surname>
          </string-name>
          , Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel.
          <year>2012</year>
          .
          <article-title>Fairness Through Awareness</article-title>
          .
          <source>In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (ITCS '12)</source>
          . ACM, New York, NY, USA,
          <fpage>214</fpage>
          -
          <lpage>226</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Lilian</given-names>
            <surname>Edwards</surname>
          </string-name>
          and
          <string-name>
            <given-names>Michael</given-names>
            <surname>Veale</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Enslaving the Algorithm: From a 'Right to an Explanation' to a 'Right to Better Decisions'</article-title>
          ?
          <source>IEEE Security &amp; Privacy</source>
          <volume>16</volume>
          ,
          <issue>3</issue>
          (Jul.
          <year>2018</year>
          ),
          <fpage>46</fpage>
          -
          <lpage>54</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Motahhare</given-names>
            <surname>Eslami</surname>
          </string-name>
          , Aimee Rickman, Kristen Vaccaro, Amirhossein Aleyasen, Andy Vuong, Karrie Karahalios, Kevin Hamilton, and
          <string-name>
            <given-names>Christian</given-names>
            <surname>Sandvig</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>"I Always Assumed That I Wasn't Really That Close to [Her]": Reasoning About Invisible Algorithms in News Feeds</article-title>
          .
          <source>In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15)</source>
          . ACM, New York, NY, USA,
          <fpage>153</fpage>
          -
          <lpage>162</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>Seth</given-names>
            <surname>Flaxman</surname>
          </string-name>
          , Sharad Goel, and
          <string-name>
            <given-names>Justin M.</given-names>
            <surname>Rao</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Filter Bubbles, Echo Chambers, and Online News Consumption</article-title>
          .
          <source>Public Opinion Quarterly</source>
          <volume>80</volume>
          ,
          <issue>S1</issue>
          (Mar.
          <year>2016</year>
          ),
          <fpage>298</fpage>
          -
          <lpage>320</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>Randy</given-names>
            <surname>Goebel</surname>
          </string-name>
          , Ajay Chander, Katharina Holzinger, Freddy Lecue, Zeynep Akata, Simone Stumpf,
          <string-name>
            <given-names>Peter</given-names>
            <surname>Kieseberg</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Andreas</given-names>
            <surname>Holzinger</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Explainable AI: The New 42?</article-title>
          .
          <source>In Machine Learning and Knowledge Extraction</source>
          . Springer International Publishing, Cham, Switzerland,
          <fpage>295</fpage>
          -
          <lpage>303</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Jonathan L.</given-names>
            <surname>Herlocker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Joseph A.</given-names>
            <surname>Konstan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>John</given-names>
            <surname>Riedl</surname>
          </string-name>
          .
          <year>2000</year>
          .
          <article-title>Explaining Collaborative Filtering Recommendations</article-title>
          .
          <source>In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work (CSCW '00)</source>
          . ACM, New York, NY, USA,
          <fpage>241</fpage>
          -
          <lpage>250</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Todd</given-names>
            <surname>Kulesza</surname>
          </string-name>
          , Margaret Burnett, Weng-Keen Wong, and
          <string-name>
            <given-names>Simone</given-names>
            <surname>Stumpf</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Principles of Explanatory Debugging to Personalize Interactive Machine Learning</article-title>
          .
          <source>In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI '15)</source>
          . ACM, New York, NY, USA,
          <fpage>126</fpage>
          -
          <lpage>137</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>Himabindu</given-names>
            <surname>Lakkaraju</surname>
          </string-name>
          , Stephen H. Bach, and
          <string-name>
            <given-names>Jure</given-names>
            <surname>Leskovec</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Interpretable Decision Sets: A Joint Framework for Description and Prediction</article-title>
          .
          <source>In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16)</source>
          . ACM, New York, NY, USA,
          <fpage>1675</fpage>
          -
          <lpage>1684</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>Helen</given-names>
            <surname>Nissenbaum</surname>
          </string-name>
          .
          <year>1996</year>
          .
          <article-title>Accountability in a computerized society</article-title>
          .
          <source>Science and Engineering Ethics</source>
          <volume>2</volume>
          ,
          <issue>1</issue>
          (Mar.
          <year>1996</year>
          ),
          <fpage>25</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>Eli</given-names>
            <surname>Pariser</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>The Filter Bubble: What the Internet Is Hiding from You</article-title>
          . The Penguin Group, London, UK.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>Andrew D.</given-names>
            <surname>Selbst</surname>
          </string-name>
          and
          <string-name>
            <given-names>Julia</given-names>
            <surname>Powles</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Meaningful information and the right to explanation</article-title>
          .
          <source>International Data Privacy Law</source>
          <volume>7</volume>
          ,
          <issue>4</issue>
          (Dec.
          <year>2017</year>
          ),
          <fpage>233</fpage>
          -
          <lpage>242</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>Rashmi</given-names>
            <surname>Sinha</surname>
          </string-name>
          and
          <string-name>
            <given-names>Kirsten</given-names>
            <surname>Swearingen</surname>
          </string-name>
          .
          <year>2002</year>
          .
          <article-title>The Role of Transparency in Recommender Systems</article-title>
          .
          <source>In CHI '02 Extended Abstracts on Human Factors in Computing Systems (CHI EA '02)</source>
          . ACM, New York, NY, USA,
          <fpage>830</fpage>
          -
          <lpage>831</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>Simone</given-names>
            <surname>Stumpf</surname>
          </string-name>
          , Vidya Rajaram,
          <string-name>
            <given-names>Lida</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Weng-Keen</given-names>
            <surname>Wong</surname>
          </string-name>
          , Margaret Burnett, Thomas Dietterich,
          <string-name>
            <given-names>Erin</given-names>
            <surname>Sullivan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Jonathan</given-names>
            <surname>Herlocker</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Interacting meaningfully with machine learning systems: Three experiments</article-title>
          .
          <source>International Journal of Human-Computer Studies 67</source>
          ,
          <issue>8</issue>
          (Aug.
          <year>2009</year>
          ),
          <fpage>639</fpage>
          -
          <lpage>662</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>Matteo</given-names>
            <surname>Turilli</surname>
          </string-name>
          and
          <string-name>
            <given-names>Luciano</given-names>
            <surname>Floridi</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>The ethics of information transparency</article-title>
          .
          <source>Ethics and Information Technology</source>
          <volume>11</volume>
          ,
          <issue>2</issue>
          (Jun.
          <year>2009</year>
          ),
          <fpage>105</fpage>
          -
          <lpage>112</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>Fulton</given-names>
            <surname>Wang</surname>
          </string-name>
          and
          <string-name>
            <given-names>Cynthia</given-names>
            <surname>Rudin</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Falling Rule Lists</article-title>
          .
          <source>In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics</source>
          . PMLR, San Diego, California, USA,
          <fpage>1013</fpage>
          -
          <lpage>1022</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>Richard</given-names>
            <surname>Zemel</surname>
          </string-name>
          , Yu Wu, Kevin Swersky, Toniann Pitassi, and
          <string-name>
            <given-names>Cynthia</given-names>
            <surname>Dwork</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Learning Fair Representations</article-title>
          .
          <source>In Proceedings of the 30th International Conference on International Conference on Machine Learning - Volume 28 (ICML'13)</source>
          . JMLR.org, III-325-III-333.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>