<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Investigating Explanations that Target Training Data</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ariful Islam Anik</string-name>
          <email>aianik@cs.umanitoba.ca</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Bunt</string-name>
          <email>bunt@cs.umanitoba.ca</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Manitoba</institution>
          ,
          <addr-line>Winnipeg, Manitoba</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
      </contrib-group>
      <abstract>
<p>To promote transparency in black-box machine learning systems, different explanation approaches have been developed and discussed in the literature. However, training dataset information is rarely communicated in these explanations, despite the fundamental importance of training data to any system trained with machine learning techniques. Our work investigates explanations that focus on communicating training dataset information to end-users. In this position paper, we discuss our prototype explanations and highlight findings from our user studies. We also discuss open questions and interesting directions for future research.</p>
      </abstract>
      <kwd-group>
<kwd>Explanations</kwd>
        <kwd>Training Data</kwd>
        <kwd>Machine Learning Systems</kwd>
        <kwd>Transparency</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>While machine learning (ML) and artificial
intelligence (AI) are being increasingly used in
a range of automated systems, a lack of
transparency in these black-box systems can be
a barrier for end-users to interpret the systems’
outcomes [28,32]. This lack of transparency
can also negatively impact end-users’ trust and
acceptance of the systems [13,36].</p>
      <p>To increase system transparency, prior work
has investigated a range of explanation
approaches for machine learning systems
[2,7,9,14,36,37]. These explanations provide
the users with information about the systems
and their decisions by mostly focusing on
explaining the decision factors, the criteria, and
the properties of the outcomes [2,7,9,14,36,37].
While evaluations of these approaches
[4,7,9,16,23,35] have shown them to be
valuable, previously studied explanations rarely
communicate information about training data or
how the system was trained. Since machine
learning algorithms look at the underlying
patterns and characteristics of the training data
to decide on the outcomes, training data and
training procedures can have a fundamental
impact on the performance of machine learning
systems.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>With the goal of increasing transparency in
machine learning systems, prior work has
investigated a range of explanation approaches
that explain the outcomes and/or a system’s
rationale behind the outcomes. These
explanations can be categorized into different
groups based on the focus of the provided
information. For example, input-influence
explanations [4,14] describe the degree of
influence of the inputs to the system output. In
contrast, sensitivity-based explanations [4,36]
describe how much the value of an input has to
differ to change the output. Other popular
explanation approaches include
demographic-based explanations [2,4], which describe the
aggregate statistics on the outcome classes for
different demographic categories (e.g., gender,
race), while case-based explanations [4,7] use
example instances from the training data to
explain the outcome. Prior work also explored
white-box explanations [9] that explain the
internal workings of an algorithm, and visual
explanations [25,39] that explain the outcomes
or the model through a visual analytics
interface. Most of these approaches either focus
on the decision process or the factors in the
decision process.</p>
      <p>
        Prior work has also investigated the impact
of different explanation approaches on
end-users’ perception of machine learning systems
[4,7,9,16,23,35]. While increased transparency
through explanations tends to universally
increase users’ acceptance of the systems
[
        <xref ref-type="bibr" rid="ref50">13,21,24</xref>
        ], the impacts on trust have been
mixed [9,13,23,26,30,33,34]. Prior work has
also studied the impact of explanations on
end-users’ sense of fairness, finding that certain
explanation styles impact fairness judgments
more than others [4,16].
      </p>
      <p>Given that training data is fundamental to
the performance of machine learning systems,
Gebru et al. advocated the concept of
documenting important information (e.g.,
motivation, creation, compositions, intended
use, distribution) about datasets before
releasing them, proposing a standard dataset
documentation sheet for this purpose [17]. This
documentation approach is receiving attention
in the machine learning community [10,40] and
in some organizations [3,31]. Our research
focuses on investigating how such information
could be communicated to end-users and how
it might impact their perceptions of machine
learning systems.</p>
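<p>As a concrete illustration, the kind of dataset documentation described above can be modelled as a small record type. The following is a hypothetical Python sketch; the field names and example values are illustrative and are not the exact fields of Gebru et al.'s datasheet proposal:</p>
<preformat>
```python
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    """Minimal sketch of dataset documentation, loosely following the
    categories Gebru et al. describe (motivation, creation, composition,
    intended use, distribution). Field names here are illustrative."""
    motivation: str                 # why the dataset was created
    creation: str                   # how the data was collected and labeled
    composition: dict = field(default_factory=dict)  # e.g. demographic counts
    intended_use: str = ""          # tasks the dataset was designed for
    distribution: str = ""          # how and to whom the dataset is released

    def summary(self) -> str:
        # A one-line summary that an end-user-facing explanation could show.
        return f"Motivation: {self.motivation} | Intended use: {self.intended_use}"

# Hypothetical example record for a loan-approval training set.
sheet = Datasheet(
    motivation="Support loan-approval model training",
    creation="Historical applications, manually labeled",
    composition={"female": 4800, "male": 5200},
    intended_use="Binary credit-risk classification",
)
```
</preformat>
<p>An explanation interface could then render such a record's fields, category by category, for end-users.</p>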
    </sec>
    <sec id="sec-3">
      <title>3. Data-centric Explanations</title>
      <p>In this section, we present a high-level
description of our approach to explanations that
communicate the underlying training data. We
also summarize our key evaluation results to
date. A more detailed discussion of our work
can be found in [1].</p>
      <p>Our data-centric explanations focus on
providing end-users with information on the
training data used in machine learning systems.
We leveraged Gebru et al.’s datasheets for
datasets [17] as a starting point to design
data-centric explanations, using an iterative process
to transform this information into forms that
were meaningful and understandable to
end-users. Figure 1 provides an overview of one of
our prototype data-centric explanations. Our
iterative design and evaluation led us to include
five different categories of training data
information (Figure 1: Left). Within each
category, the prototype explains dataset
information using a question-and-answer
format (an example is given in Figure 1: A).</p>
      <p>We evaluated our prototype explanations in
a mixed-method user study with 27 participants
to assess their potential to impact end-users’
perceptions of machine learning systems. Our
evaluation used a scenario-based approach,
where we presented participants with a set of
scenarios describing potential real-world
systems along with the accompanying
explanations. The scenarios varied in the
perceived stakes of the systems (high stakes vs
low stakes) and the characteristics of the
training data revealed in the accompanying
explanations (balanced training data vs training
data with red flags). Our study also included a
semi-structured interview session with each
participant where we probed on issues
surrounding trust, fairness, and characteristics
of the system scenarios and training data.</p>
      <p>We found in our evaluation that the
data-centric explanations impacted participants’
perceived trust in and sense of fairness of the
machine learning systems. We
found that participants had more trust in the
system and thought the system was fair when
the explanations revealed a balanced training
dataset with no errors compared to when
explanations pointed out issues in the training
data. Our study also provided qualitative
insights into the value end-users see in having
training-data information available. For
example, participants liked having access to the
demographics information as they felt it helped
them identify biases. We also noticed initial
indications of participant expertise affecting
attitudes towards the explanations. Machine
learning experts expected other users to have
difficulty understanding explanations;
however, we did not see such concerns expressed
by participants with less prior knowledge of
machine learning. In fact, almost all
participants reported that the explanations were
easy to understand and expressed interest in
having them available.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Opportunities and Challenges with Data-centric Explanations</title>
      <p>Our initial evaluation of the data-centric
explanation prototypes suggested that
end-users are capable of and interested in
understanding information about training
datasets. Our results also point to interesting
future research directions that we discuss in this
section.</p>
      <p>While our study findings suggest that
participants positively receive data-centric
explanations, some participants also wanted
additional information about the systems and
the decision factors, particularly to judge
fairness. A significant body of research has
investigated explanations that focus on the
factors of a decision and the decision process
(i.e., process-centric information)
[9,14,25,36,39]. While each of the explanation
approaches has its own benefits, it would be
interesting to explore ways to combine
explanations of training data with
process-centric explanations. Doing so would also allow
us to investigate how end-users might prioritize
the different types of explanations, as well as
how the different approaches might
complement each other.</p>
      <p>
        We also see opportunities for the
community to study and discuss different
evaluation methods. For example, a common
method for evaluating explanations of machine
learning systems is to use fictional system
scenarios (which we also used in our study with
data-centric explanations) [
        <xref ref-type="bibr" rid="ref89">4,19,29,38,41</xref>
        ]. A
downside of this method is that it requires
participants to role-play rather than experience
the systems directly, which in turn impacts the
ecological validity of the study findings. There
are a number of challenges with moving
towards evaluations with real-life systems. For
example, before we can evaluate our
explanations in a real setting, we need more
documented datasets available for real-world
systems and we need more machine learning
specialists to buy into the idea of data-centric
explanations and be open to incorporating
them in real-life systems.
      </p>
      <p>
        One of the goals for explanations, in general,
is to ensure fairness in machine learning
systems by revealing more details about the
systems and their decision process. However,
measuring users’ perceptions of fairness is a
challenging task. While a common approach is
to adapt and use prior scales proposed for
organizational justice [4,12,16] (which we also
used in our study), these scales do not necessarily
capture the fact that fairness is
multidimensional and context-dependent [
        <xref ref-type="bibr" rid="ref109 ref16">18,19</xref>
        ]. A
first necessary step in developing more robust
study instruments is to develop a common
definition of “fairness”. There is existing work
in this direction that we can build upon [
        <xref ref-type="bibr" rid="ref104">11,20</xref>
        ].
A second key evaluation challenge is having
objective measures to complement the
commonly collected questionnaire data (e.g.,
self-reported Likert scale values
[
        <xref ref-type="bibr" rid="ref89">4,7,9,16,19,29</xref>
        ]). Developing such measures,
particularly ones that can be feasibly
collected, is an important area of future work.
      </p>
      <p>Finally, we are interested in how
explanations such as ours might influence the
perceptions of stakeholders other than potential
end-users, who are often the target pool in
evaluations [4,7,16,23,35]. For example, for
explanations of training data, one interesting
audience could be companies and organizations
that want to purchase machine learning systems
to see whether data-centric explanations might
impact their purchasing decisions. Another
potential audience for the data-centric
explanations are journalists, who play an
important role in reporting on black-box systems
and communicating them to the general public
[15]. We know from prior work that journalists
have criticized machine learning systems for
their black-box nature [27].</p>
    </sec>
    <sec id="sec-5">
      <title>5. Summary</title>
      <p>Explaining the training data of machine
learning systems has the potential to provide a
range of benefits to end-users and other
stakeholders in terms of increased transparency
of the systems. Our study with data-centric
explanations found some evidence that such
explanations can impact people’s trust in and
fairness judgment of machine learning systems.
We discussed some important directions for
future work, which we hope will encourage
discussion with researchers working on a
variety of explanation styles and approaches.</p>
    </sec>
    <sec id="sec-6">
      <title>6. References</title>
      <p>[1] Ariful Islam Anik and Andrea Bunt. 2021.</p>
      <p>Data-Centric Explanations: Explaining
Training Data of Machine Learning
Systems to Promote Transparency. In
Proceedings of the 2021 CHI Conference
on Human Factors in Computing Systems,
(To appear).
[2] Liliana Ardissono, Anna Goy, Giovanna
Petrone, Marino Segnan, and Pietro
Torasso. 2003. Intrigue : Personalized
Recommendation of Tourist Attractions.
Applied Artificial Intelligence: Special
Issue on Artificial Intelligence for Cultural
[40] Semih Yagcioglu, Aykut Erdem, Erkut
Erdem, and Nazli Ikizler-Cinbis. 2020.
RecipeQA: A challenge dataset for
multimodal comprehension of cooking
recipes. Proceedings of the 2018
Conference on Empirical Methods in
Natural Language Processing, EMNLP
2018: 1358–1368.
https://doi.org/10.18653/v1/d18-1166
[41] Yunfeng Zhang, Q. Vera Liao, and Rachel
K.E. Bellamy. 2020. Efect of confidence
and explanation on accuracy and trust
calibration in AI-assisted decision making.
Proceedings of the 2020 Conference on
Fairness, Accountability, and
Transparency: 295–305.
https://doi.org/10.1145/3351095.3372852</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <source>Heritage and Digital Libraries</source>
          <volume>17</volume>
          ,
          <fpage>8</fpage>
          -
          <lpage>9</lpage>
          :
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          687-
          <fpage>714</fpage>
          . [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Arnold</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Piorkowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Reimer</surname>
          </string-name>
          , J.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Ramamurthy</surname>
          </string-name>
          ,
          <article-title>and</article-title>
          <string-name>
            <given-names>A.</given-names>
            <surname>Olteanu</surname>
          </string-name>
          .
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <source>Development</source>
          <volume>63</volume>
          ,
          <fpage>4</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          https://doi.org/10.1147/JRD.
          <year>2019</year>
          .294228
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <volume>8</volume>
          [4]
          <string-name>
            <given-names>Reuben</given-names>
            <surname>Binns</surname>
          </string-name>
          , Max Van Kleek, Michael
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Shadbolt</surname>
          </string-name>
          .
          <year>2018</year>
          . “
          <article-title>It's reducing a human</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <source>Proceedings of the 2018 CHI Conference</source>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          2018-April:
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          https://doi.org/10.1145/3173574.3173951 [5]
          <string-name>
            <given-names>Tolga</given-names>
            <surname>Bolukbasi</surname>
          </string-name>
          ,
          <string-name>
            <surname>Kai Wei</surname>
            <given-names>Chang</given-names>
          </string-name>
          , James
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Kalai</surname>
          </string-name>
          .
          <year>2016</year>
          . Man is to computer
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          4356-
          <fpage>4364</fpage>
          . [6]
          <string-name>
            <given-names>Joy</given-names>
            <surname>Buolamwini</surname>
          </string-name>
          and
          <string-name>
            <given-names>Timnit</given-names>
            <surname>Gebru</surname>
          </string-name>
          .
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Classification</surname>
          </string-name>
          .
          <source>In Proceedings of the 1st</source>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <source>Machine Learning Research)</source>
          ,
          <fpage>77</fpage>
          -
          <lpage>91</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>http://proceedings.mlr.press/v81/buolamw</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          ini18a.html [7]
          <string-name>
            <surname>Carrie</surname>
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Cai</surname>
          </string-name>
          , Jonas Jongejan, and Jess
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Holbrook</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>The effects of example-</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          interface.
          <source>Proceedings of the 24th</source>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <given-names>User</given-names>
            <surname>Interfaces</surname>
          </string-name>
          :
          <fpage>258</fpage>
          -
          <lpage>262</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          https://doi.org/10.1145/3301275.3302289 [8]
          <string-name>
            <given-names>Toon</given-names>
            <surname>Calders</surname>
          </string-name>
          and
          <string-name>
            <given-names>Indrė</given-names>
            <surname>Žliobaitė</surname>
          </string-name>
          .
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <source>Epistemology and Rational Ethics</source>
          <volume>3</volume>
          :
          <fpage>43</fpage>
          -
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          57. https://doi.org/10.1007/978-3-
          <fpage>642</fpage>
          -
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          30487-
          <fpage>3</fpage>
          _
          <issue>3</issue>
          [9]
          <string-name>
            <given-names>Hao</given-names>
            <surname>Fei</surname>
          </string-name>
          <string-name>
            <surname>Cheng</surname>
          </string-name>
          , Ruotong Wang, Zheng
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <given-names>F.</given-names>
            <surname>Maxwell Harper</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Haiyi</given-names>
            <surname>Zhu</surname>
          </string-name>
          .
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          stakeholders.
          <source>Proceedings of the 2019 CHI</source>
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          <source>Computing Systems:</source>
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          https://doi.org/10.1145/3290605.3300789 [10]
          <string-name>
            <surname>Eunsol</surname>
            <given-names>Choi</given-names>
          </string-name>
          , He He, Mohit Iyyer, Mark
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          <string-name>
            <surname>Liang</surname>
            ,
            <given-names>and Luke</given-names>
          </string-name>
          <string-name>
            <surname>Zettlemoyer</surname>
          </string-name>
          .
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          <source>Proceedings of the 2018 Conference on</source>
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          <string-name>
            <surname>Processing</surname>
            ,
            <given-names>EMNLP</given-names>
          </string-name>
          <year>2018</year>
          :
          <fpage>2174</fpage>
          -
          <lpage>2184</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          https://doi.org/10.18653/v1/d18-
          <fpage>1241</fpage>
          [11]
          <string-name>
            <given-names>Alexandra</given-names>
            <surname>Chouldechova</surname>
          </string-name>
          and
          <string-name>
            <given-names>Aaron</given-names>
            <surname>Roth</surname>
          </string-name>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          <year>2018</year>
          .
          <article-title>The Frontiers of Fairness in</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          <source>Machine Learning</source>
          .
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          . Retrieved from
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          http://arxiv.org/abs/
          <year>1810</year>
          .
          <volume>08810</volume>
          [12]
          <string-name>
            <surname>Jason</surname>
            <given-names>A</given-names>
          </string-name>
          <string-name>
            <surname>Colquitt and Jessica B Rodell.</surname>
          </string-name>
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          2015.
          <article-title>Measuring justice and fairness</article-title>
          . In
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          <string-name>
            <surname>York</surname>
          </string-name>
          , NY, US,
          <fpage>187</fpage>
          -
          <lpage>202</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>https://doi.org/10.1093/oxfordhb/9780199</mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          981410.013.8 [13]
          <string-name>
            <surname>Henriette</surname>
            <given-names>Cramer</given-names>
          </string-name>
          , Vanessa Evers, Satyan
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          <string-name>
            <given-names>Bob</given-names>
            <surname>Wielinga</surname>
          </string-name>
          .
          <year>2008</year>
          .
          <article-title>The effects of</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          455-
          <fpage>496</fpage>
          . https://doi.org/10.1007/s11257-
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          008-
          <fpage>9051</fpage>
          -3 [14]
          <string-name>
            <surname>Anupam</surname>
            <given-names>Datta</given-names>
          </string-name>
          , Shayak Sen, and Yair
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          <string-name>
            <surname>Zick</surname>
          </string-name>
          .
          <year>2017</year>
          . Algorithmic Transparency via
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          <article-title>Data Mining for Big and</article-title>
          Small Data:
          <fpage>71</fpage>
          -
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          94. https://doi.org/10.1007/978-3-
          <fpage>319</fpage>
          -
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          54024-
          <fpage>5</fpage>
          _
          <fpage>4</fpage>
          [15]
          <string-name>
            <given-names>Nicholas</given-names>
            <surname>Diakopoulos</surname>
          </string-name>
          .
          <year>2015</year>
          . Algorithmic
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          <issue>Journalism 3</issue>
          ,
          <issue>3</issue>
          :
          <fpage>398</fpage>
          -
          <lpage>415</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          https://doi.org/10.1080/21670811.
          <year>2014</year>
          .97
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          <volume>6411</volume>
          [16]
          <string-name>
            <given-names>Jonathan</given-names>
            <surname>Dodge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. Vera</given-names>
            <surname>Liao</surname>
          </string-name>
          , Yunfeng
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          <string-name>
            <surname>Dugan</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Explaining models: An</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>the 24th International Conference on</mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          <string-name>
            <given-names>Intelligent</given-names>
            <surname>User Interfaces</surname>
          </string-name>
          :
          <fpage>275</fpage>
          -
          <lpage>285</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          https://doi.org/10.1145/3301275.3302310 [17]
          <string-name>
            <surname>Timnit</surname>
            <given-names>Gebru</given-names>
          </string-name>
          , Jamie Morgenstern, Briana
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>
          <string-name>
            <surname>Crawford</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Datasheets for datasets</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          <string-name>
            <surname>In</surname>
          </string-name>
          5th Workshop on Fairness,
        </mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          <source>Machine Learning</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>27</lpage>
          . Retrieved from http://arxiv.org/abs/1803.09010
        </mixed-citation>
      </ref>
      <ref id="ref56">
        <mixed-citation>[18] Ben Green and Lily Hu. 2018. The Myth in the Methodology: Towards a Recontextualization of Fairness in Machine Learning. Machine learning: the debates workshop.</mixed-citation>
      </ref>
      <ref id="ref57">
        <mixed-citation>[19] Nina Grgic-Hlaca, Elissa M. Redmiles, Krishna P. Gummadi, and Adrian Weller. 2018. Human perceptions of fairness in algorithmic decision making: A case study of criminal risk prediction. The Web Conference 2018 - Proceedings of the World Wide Web Conference, WWW 2018. https://doi.org/10.1145/3178876.3186138</mixed-citation>
      </ref>
      <ref id="ref58">
        <mixed-citation>[20] Moritz Hardt, Eric Price, and Nathan Srebro. 2016. Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems: 3323-3331.</mixed-citation>
      </ref>
      <ref id="ref59">
        <mixed-citation>[21] J. L. Herlocker, J. A. Konstan, and J. Riedl. 2000. Explaining collaborative filtering recommendations. Proceedings of the ACM Conference on Computer Supported Cooperative Work: 241-250. https://doi.org/10.1145/358916.358995</mixed-citation>
      </ref>
      <ref id="ref60">
        <mixed-citation>[22] Lauren Kirchner, Surya Mattu, Jeff Larson, and Julia Angwin. 2016. Machine Bias. ProPublica 23: 1-26. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing</mixed-citation>
      </ref>
      <ref id="ref61">
        <mixed-citation>[23] Rene F. Kizilcec. 2016. How much information?: Effects of transparency on trust in an algorithmic interface. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/2858036.2858402</mixed-citation>
      </ref>
      <ref id="ref62">
        <mixed-citation>[24] Rafal Kocielnik, Saleema Amershi, and Paul N. Bennett. 2019. Will you accept an imperfect AI? Exploring designs for adjusting end-user expectations of AI systems. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems: 1-14. https://doi.org/10.1145/3290605.3300641</mixed-citation>
      </ref>
      <ref id="ref63">
        <mixed-citation>[25] Josua Krause, Adam Perer, and Kenney Ng. 2016. Interacting with predictions: Visual inspection of black-box machine learning models. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems: 5686-5697. https://doi.org/10.1145/2858036.2858529</mixed-citation>
      </ref>
      <ref id="ref64">
        <mixed-citation>[26] Todd Kulesza, Simone Stumpf, Margaret Burnett, and Irwin Kwan. 2012. Tell me more? The effects of mental model soundness on personalizing an intelligent agent. Proceedings of the 2012 CHI Conference on Human Factors in Computing Systems: 1-10. https://doi.org/10.1145/2207676.2207678</mixed-citation>
      </ref>
      <ref id="ref65">
        <mixed-citation>[27] Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin. 2020. How we analyzed the COMPAS recidivism algorithm. ProPublica.</mixed-citation>
      </ref>
      <ref id="ref66">
        <mixed-citation>[28] Bruno Lepri, Nuria Oliver, Emmanuel Letouz&#233;, Alex Pentland, and Patrick Vinck. 2018. Fair, Transparent, and Accountable Algorithmic Decision-making Processes. Philosophy &amp; Technology 31, 4: 611-627. https://doi.org/10.1007/s13347-017-0279-x</mixed-citation>
      </ref>
      <ref id="ref67">
        <mixed-citation>[29] Brian Y. Lim and Anind K. Dey. 2009. Assessing demand for intelligibility in context-aware applications. UbiComp 2009: Ubiquitous Computing: 195. https://doi.org/10.1145/1620545.1620576</mixed-citation>
      </ref>
      <ref id="ref68">
        <mixed-citation>[30] Brian Y. Lim, Anind K. Dey, and Daniel Avrahami. 2009. Why and why not explanations improve the intelligibility of context-aware intelligent systems. Proceedings of the 27th international conference on Human factors in computing systems - CHI 09: 2119. https://doi.org/10.1145/1518701.1519023</mixed-citation>
      </ref>
      <ref id="ref69">
        <mixed-citation>[31] Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency: 220-229. https://doi.org/10.1145/3287560.3287596</mixed-citation>
      </ref>
      <ref id="ref70">
        <mixed-citation>[32] Frank Pasquale. 2015. The Black Box Society. https://doi.org/10.4159/harvard.9780674736061</mixed-citation>
      </ref>
      <ref id="ref71">
        <mixed-citation>[33] Forough Poursabzi-Sangdeh, Daniel G. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, and Hanna Wallach. 2018. Manipulating and Measuring Model Interpretability. Retrieved from http://arxiv.org/abs/1802.07810</mixed-citation>
      </ref>
      <ref id="ref72">
        <mixed-citation>[34] Pearl Pu and Li Chen. 2006. Trust building with explanation interfaces. Proceedings of the 11th International Conference on Intelligent User Interfaces 2006: 93-100. https://doi.org/10.1145/1111449.1111475</mixed-citation>
      </ref>
      <ref id="ref73">
        <mixed-citation>[35] Emilee Rader, Kelley Cotter, and Janghee Cho. 2018. Explanations as Mechanisms for Supporting Algorithmic Transparency. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18: 1-13. https://doi.org/10.1145/3173574.3173677</mixed-citation>
      </ref>
      <ref id="ref74">
        <mixed-citation>[36] Marco Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. &#8220;Why Should I Trust You?&#8221; Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining: 97-101. https://doi.org/10.18653/v1/n16-3020</mixed-citation>
      </ref>
      <ref id="ref75">
        <mixed-citation>[37] Wojciech Samek, Alexander Binder, Gr&#233;goire Montavon, Sebastian Lapuschkin, and Klaus-Robert M&#252;ller. 2017. Evaluating the visualization of what a deep neural network has learned. IEEE Transactions on Neural Networks and Learning Systems 28, 11: 2660-2673. https://doi.org/10.1109/TNNLS.2016.2599820</mixed-citation>
      </ref>
      <ref id="ref76">
        <mixed-citation>[38] Megha Srivastava, Hoda Heidari, and Andreas Krause. 2019. Mathematical notions vs. human perception of fairness: A descriptive approach to fairness for machine learning. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery &amp; Data Mining: 2459-2468. https://doi.org/10.1145/3292500.3330664</mixed-citation>
      </ref>
      <ref id="ref77">
        <mixed-citation>[39] Paolo Tamagnini, Josua Krause, Aritra Dasgupta, and Enrico Bertini. 2017. Interpreting black-box classifiers using instance-level visual explanations. Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics, HILDA 2017: 1-6. https://doi.org/10.1145/3077257.3077260</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>