Making Transparency Clear
The Dual Importance of Explainability and Auditability

Aaron Springer
Computer Science
University of California Santa Cruz
Santa Cruz, CA, USA
alspring@ucsc.edu

Steve Whittaker
Psychology
University of California Santa Cruz
Santa Cruz, CA, USA
swhittak@ucsc.edu
ABSTRACT
Algorithmic transparency is currently invoked for two separate purposes: to improve trust in systems and to provide insight into problems like algorithmic bias. Although transparency can help with both, recent results suggest these goals cannot be accomplished simultaneously by the same transparency implementation. Providing enough information to diagnose algorithmic bias will overwhelm users and lead to poor experiences. On the other hand, scaffolding user mental models with selective transparency will not provide enough information to audit these systems for fairness. This paper argues that if we want to address both problems we must separate two distinct aspects of transparency: explainability and auditability. Explainability improves user experience by facilitating mental model formation and building user trust; it provides users with sufficient information to form accurate mental models of system operation. Auditability is more exhaustive, providing third parties with the ability to test algorithmic outputs and diagnose biases and unfairness. This conceptual separation provides a path forward for designers to make systems both usable and free from bias.

CCS CONCEPTS
• Human-centered computing~Human computer interaction (HCI)

KEYWORDS
Transparency, trust, explanation, bias, auditability, algorithms, intelligent systems

ACM Reference format:
Aaron Springer and Steve Whittaker. 2019. Making Transparency Clear: The Dual Importance of Explainability and Auditability. In Joint Proceedings of the ACM IUI 2019 Workshops, Los Angeles, USA, March 20, 2019, 4 pages.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions@acm.org.
IUI Workshops '19, March 20, 2019, Los Angeles, USA.
Copyright © 2019 for the individual papers by the papers' authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors.

1 Introduction
We are at a pivotal time in the use of machine learning as intelligent systems increasingly impact our daily lives. Machine learning algorithms underlie the many intelligent systems we routinely use. These systems provide information ranging from routes to work to recommendations about criminal parole [2,4]. As humans with limited time and attention, we increasingly defer responsibility to these systems with little reflection or oversight. For example, as of February 2018, over 50% of adults in the United States report using a range of voice assistants on a daily basis to accomplish tasks such as navigating to work, answering queries, and automating actions [27]. This increasing use of voice assistants is largely driven by improvements in the underlying algorithms.

Compounding these advances in machine learning is the fact that many people have difficulty understanding current intelligent systems [38]. Here, we use 'intelligent systems' to mean systems that use machine-learned models and/or data derived from user context to make predictions. The machine learning models that often power these intelligent systems are complex and trained upon massive troves of data, making it difficult for even experts to form accurate mental models. For example, many Facebook users did not know that the service curated their newsfeed using machine learning; they simply thought that they saw a feed of all their connections' posts [15]. More recently, users of Facebook and other systems have been shown to generate simple "folk theories" that explain how such systems are working [14,38]. Although users cannot validate such folk theories, that does not stop them from acting upon them. Eslami et al. [14] demonstrated that users went so far as to modify how they interacted with Facebook to try to force the system to present an outcome consistent with their folk theory. There is potential for danger in other contexts when users are willing to act upon their folk hypotheses without being given the ability to understand the system. Furthermore, there are many challenges regarding the best ways to effectively communicate underlying algorithms to users [35,39].

Another concern is the user experience of opaque algorithmic systems. Without any form of transparency, users may trust and understand these systems less [11,24]. Even in low-stakes systems like the Netflix recommender, users still struggle to understand how to control and influence internal algorithms [6]. These problems surrounding user experience, especially trust, become more pronounced in high-stakes scenarios such as the medical field, where elements of user experience like trust are essential to a program's use.
Furthermore, academics and industry practitioners are discovering other significant issues in deploying these systems. Intelligent systems powered by machine learning can learn and embody societal biases. Systems may therefore treat users differently based on characteristics of users' speech and writing [31,37] or even based upon characteristics that are protected under law [2]. In a particularly egregious example, an intelligent system used to help inform parole decisions was found to discriminate against people of color [2].

Despite these challenges of bias and user experience, many critics have coalesced around a concept they believe could address them: transparency. The insight underlying transparency is that an algorithm should reveal itself to users. There are many important potential benefits of algorithmic transparency. Transparency enables important oversight by system designers. Without transparency it may be unclear whether an algorithm is optimizing the intended behavior [39], or whether an algorithm has negative, unintended consequences (e.g., filter bubbles in social media [26]). These arguments have led some researchers to argue that machine learning must be 'interpretable by design' [1], and that transparency is even essential for the adoption of intelligent systems, such as in cases of medical diagnoses [40]. Transparency has taken on the role of a cure-all for machine learning's woes.

However, problems remain. Transparency is currently ill-defined [12]. Transparency is purported to address machine learning problems such as bias [25], while simultaneously improving the user experience [18,21]. This paper argues that achieving both goals may be impossible with a single implementation. An implementation of transparency that allows someone to infer system bias will likely overwhelm users and lead to less usage, which in turn will lead to developers refusing to implement transparency. Transparency should instead be disaggregated into two separate classes: explainability and auditability. Explainability is concerned with building interfaces that promote accurate mental models of system operation, leading to a better user experience. Auditability is concerned with allowing users or third-party groups to audit a deployed algorithmic system for bias and other problems. Separating these aspects of transparency allows us to build systems with improved user experiences while maintaining high standards of fairness and unbiased outcomes.

2 Why Do We Need Transparency?

2.1 Poor User Experiences in Intelligent Systems
A wealth of prior work has explored issues surrounding algorithm transparency in commercial deployments of systems for social media and news curation. Social media feeds are often curated by algorithms that may be invisible to users (e.g., Facebook, Twitter, LinkedIn). Work on algorithmic folk theories shows that making the designs more transparent or seamful allowed users to better understand and work within the system [14].

Addressing the user experience in intelligent systems has now become a pressing concern for mainstream usability practitioners. The Nielsen Norman Group recently completed a diary study examining the user experience of ordinary people with systems such as Facebook, Instagram, Netflix, and Google News [6]. Mirroring the work on Facebook folk theories, users found it unclear which aspects of their own behavior the intelligent systems used as inputs. Users were also frustrated by the lack of control over the output. Overall, users struggled to form correct mental models of system operation, which led to poor user experiences.

Other work shows the importance of transparency for building trust in algorithmic systems, an important part of the user experience. Users who receive explanations better understand and trust complex algorithmic systems [24]. In the presence of disagreement between the system and the user, transparency can improve user perceptions of trust and system accuracy [11,23,34]. But in addition to improving user experience, advocates point to transparency as a counter to more pernicious problems such as algorithmic bias.

2.2 Revealing Bias
Intelligent systems and predictive analytics have been shown to learn and perpetuate societal biases. One clear example of this is COMPAS, an algorithm used widely within the United States to predict risk of recidivism. In 2016 ProPublica published an article noting that the COMPAS system was more likely to predict higher risk scores for people of color than for other populations, even when the ground truth was similar [2]. The COMPAS system had been in use for over 5 years in some locations before these biases were publicized [13].

Other work shows how interfaces can discriminate based on ways of speaking and writing. YouTube captions have been shown to be less accurate for speakers with a variety of accents [37]. Common voice interfaces can struggle with specific ways of speaking [31]. These problems likely arise from algorithms being trained on a non-diverse set of voices and then deployed broadly to all people (i.e., 'distributional drift'). Even textual methods are not immune to embodying societal biases. Word embeddings have been shown to harbor biases related to gender. For example, one of the roles most closely related to "she" within the learned word embeddings is "homemaker"; in contrast, an occupation closely related to "he" is "boss" [5].
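Such associations can be read directly off the embedding geometry. The following minimal sketch (Python) illustrates the kind of cosine-similarity probe that underlies findings like those in [5]; the vector values below are illustrative toy stand-ins, and a real audit would load published pretrained embeddings (e.g., word2vec) and compare many occupation words.

import numpy as np

# Toy stand-ins for pretrained word vectors; a real probe would load
# published embeddings rather than these made-up values.
vectors = {
    "she":       np.array([0.9, 0.1, 0.3]),
    "he":        np.array([0.1, 0.9, 0.3]),
    "homemaker": np.array([0.8, 0.2, 0.4]),
    "boss":      np.array([0.2, 0.8, 0.4]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_lean(word):
    # Positive values lean toward "she", negative toward "he".
    return cosine(vectors[word], vectors["she"]) - cosine(vectors[word], vectors["he"])

for word in ("homemaker", "boss"):
    print(f"{word}: {gender_lean(word):+.3f}")

Run over real embeddings and many occupation words, the same comparison surfaces the "homemaker"/"boss" asymmetry described above.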
The fear is that the embodiment of these societal biases within machine learning systems will perpetuate them. For example, biased recidivism algorithms will exacerbate existing inequalities, creating a cycle where those who are not currently privileged will have even less opportunity in the future. An example of this is shown in the posting of job ads online: men saw significantly more job ads for senior positions than women when searching online [10]. In other cases, African-American names in Google search are more likely to display ads for criminal records, which has been noted as a possible risk for job applicants [36].

It is not simple to fix these problems. Algorithmic bias problems are everywhere, but fixing them requires fitting complex research and auditing practices into iterative agile workflows [32]. This combination requires new tools and extensive organizational buy-in [9]. Even with these processes and tools, not all biases will be found and fixed before a system is deployed.

Transparency has been invoked as a solution to bias. Best-selling books such as Weapons of Math Destruction call for increased transparency as a counter to algorithmic bias [25]. Even the call for papers for this workshop notes that 'algorithmic processes are opaque' and that this can hide issues of algorithmic bias [20]. The idea is that transparency can expose the inner workings of an algorithm, allowing users to see whether or not the system is biased, and giving third parties the ability to audit the algorithmic systems they are using. However, complete algorithmic transparency may have negative impacts on the user experience.

3 Transparency Troubles
Although transparency is an active research area in both the machine learning and HCI communities, we believe that a major barrier to current conceptualizations of transparency is their potential negative effects on user experience. Even though a goal of much transparency research is to improve the user experience by building trust, studies continually show that transparency has mixed effects on the user experience with intelligent systems.

One system built by our research team clearly reveals problems with our current concept of transparency. The E-meter is an "intelligent" system with an algorithm that assesses the positivity and negativity of a user's emotional writing in real time [33]. Users were asked to write about personal emotional experiences and the system interpreted their writing to evaluate how each user felt about those experiences. The E-meter was transparent; it highlighted the words used by the machine learning model and conveyed their corresponding emotional weights through a color gradient. The results were unexpected. Users of the transparent system actually felt the system was less accurate overall [34]. Why was this? In some cases, seeing inevitable system errors undermined user confidence; in other cases, users overrode correct system models that conflicted with their own (inaccurate) beliefs.
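To make this form of transparency concrete, the sketch below shows one common way such word-level highlighting can be driven: each word's contribution comes from a per-word weight in the model. The lexicon here is an illustrative placeholder, not the E-meter's actual model.

# A minimal sketch of E-meter-style word highlighting. The weights are
# made-up placeholders; any model exposing per-word weights would work.
ILLUSTRATIVE_WEIGHTS = {"happy": 0.8, "great": 0.6, "tired": -0.4, "awful": -0.9}

def word_contributions(text):
    """Return (word, weight) pairs; in a UI the weight would drive a color gradient."""
    return [(w, ILLUSTRATIVE_WEIGHTS.get(w.lower(), 0.0)) for w in text.split()]

def render(text):
    # Text stand-in for the color gradient: '+' marks positive words, '-' negative.
    return " ".join(
        w + ("+" if wt > 0 else "-" if wt < 0 else "")
        for w, wt in word_contributions(text)
    )

print(render("great day but awful traffic"))  # great+ day but awful- traffic

Exactly this per-word visibility is what exposed individual model errors to users, as described next.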
Further tests on the E-meter system showed other problems with transparency. Users with a non-transparent version of the E-meter thought that the system performed more accurately [35]. Users with transparency, on the other hand, seemed to find it distracting. Users of the transparent system were also prone to focus on errors exposed by the transparency, even when the overall mood prediction was correct. Clearly, distracting users and leading them to believe the system is more errorful does not create a positive user experience.

Furthermore, users may not want complete transparency for other reasons. Providing such information may be distracting due to the overhead in processing that transparency requires [7]. Transparency negatively affects the user experience in less accurate systems [23]. Short explanations of what a system is doing can improve trust, but full transparency can result in less trust in intelligent systems [21].

Together these studies provide strong evidence that exhaustive transparency may undermine the user experience. It may distract users, provide them with too much information, and provoke unnecessary doubt in the system. Transparency is trying to do too much. We cannot exhaustively convey the inner workings of many algorithms, nor is that what users want. However, without making these complete inner workings transparent, how can we audit these systems for unfairness and bias?

As we have shown in previous work, diagnosing and fixing algorithmic bias is not a simple task, even for the creators of a system [9]. These creators have access to the complete code, data, and inner workings of the system; even with this access, fixing algorithmic bias is a challenge. How much harder will it then be for third parties and users to diagnose algorithmic bias through a transparent interface which does not display all of this information? We cannot reasonably expect that our current operationalization of transparency by explanation will allow third parties to diagnose bias in deployed systems.

In summary, these two goals of transparency conflict. We cannot simultaneously improve the user experience while providing a mechanism for diagnosing algorithmic bias. Providing enough information to diagnose algorithmic bias will overwhelm users and lead to poor experiences. On the other hand, scaffolding user mental models with selective transparency will not provide enough information to audit these systems for fairness. For transparency to be successful, we need to clarify our aims. We must separate transparency into two related concepts: explainability and auditability.

4 Two Facets of Transparency
The first facet, explainability, has a single goal: to improve the user experience. Many problems with intelligent systems occur because users lack proper mental models of how the system operates [14], and helping users form an accurate mental model improves satisfaction [22]. Therefore, the goal of explainability is to facilitate 'accurate enough' mental model formation to enable correct action within the system. Attempting to go beyond helping users form heuristics may lead to a worse user experience [35]. We need to give users heuristics and approximate understandings so that they can feel that they are in control of the interface.

The key to explainability is to reveal only the information needed by users [12]. This separates it from many current conceptualizations of transparency that aim for completeness. Explanations that aim for completeness may induce poor user experiences because they are too complex [19] or conflict with users' mental models [30,35]. In addition, explaining only the needed elements conforms better to the extensive bodies of social science research that study explanation. Explanations should follow Grice's maxims [17], i.e., explain only as much as is needed and no more. Explanation should be occasioned [16]: it should present itself when needed and disappear when not. Exhaustive transparency does not conform with HCI experimental results or these social science theories, which is why it is essential that we study explainability.

Explainability can happen through a variety of means, such as natural language explanations of results. For example, Facebook has a feature labeled 'Why am I seeing this?' on ads that provides a natural language explanation of the user profile factors that led to the targeted ad. These explanations can also involve data and visualization intended to fill in gaps in the user's mental models [12]. The range of explanation types is large, from simple natural language to explorable explanations. This range is necessary given the many domains in which explanations are needed. Explanations must be tailored to the domain; doctors have very different needs than users of a mobile fitness coach. For example, doctors are making high-stakes decisions and are likely to be very invested in each decision; therefore, the explanations for doctors should be more complete and contain more information. Such lengthy explanations may not be successful in more casual settings such as an intelligent mobile fitness coach, where users may be less motivated to process a lengthy explanation. Again, explanations are meant to improve the use of the system and the user experience, not to provide the user with the ability to ensure the system is fair and free from bias.
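As a concrete illustration of minimal, Gricean explanation, the sketch below verbalizes only the top few factors behind a prediction, in the spirit of Facebook's 'Why am I seeing this?' feature. The factor names and contribution scores are hypothetical, not Facebook's actual signals.

# A minimal sketch of a template-based, top-k explanation. Factor names and
# scores are hypothetical; any model exposing feature contributions would fit.
def explain(contributions, k=2):
    """Verbalize only the k most influential factors (say no more than needed)."""
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]
    factors = " and ".join(name for name, _ in top)
    return f"You are seeing this because of {factors}."

ad_factors = {"age range 25-34": 0.41, "interest: hiking": 0.72,
              "location: Santa Cruz": 0.13}
print(explain(ad_factors))
# -> You are seeing this because of interest: hiking and age range 25-34.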
But how can transparency satisfy its second goal of ensuring fair algorithms? Explainability is insufficient to meet this requirement. It is not possible to ensure that an intelligent system is fair on the basis of the natural language explanations it provides. How then can we determine whether algorithms are fair and free from bias?

In addition to explainability, the second facet of transparency is auditability of deployed systems. We define auditability as the ability for users or third parties to validate and test a deployed system by providing their own data for the system to predict on. While some systems are currently auditable, the process is mostly adversarial; auditors must use methods such as sock-puppet auditing to determine whether a system is biased [29]. For an example of auditability, consider that Facebook users are beholden to seeing advertisements targeted to their profile information. An auditable version of Facebook advertisements would allow anyone to supply arbitrary profile data and receive back the targeted advertisements that data would generate. A current example of easily auditable systems is the facial recognition APIs created by cloud providers; these are programmable, so supplying data and checking for bias can be done by independent researchers [28].

Other definitions of auditability rely on seeing the code itself [8], but this may not be necessary. Relying on seeing the code complicates the audit process considerably because source code is highly valued intellectual property. Rather, we should pursue audits that allow the user or a third party to generate their own conclusions about the fairness of the algorithm, rather than relying on the explanations it generates. We do not need to know how the underlying algorithm works to ensure that it is generating fair predictions for all possible subgroups. Under many criteria of fairness, such as independence and separation, all we need to know are the predicted outputs and the data [3]. Knowledge about the inner workings of the algorithm is not required to ensure fairness. The expectation is not that every user has the skill or desire to audit these algorithms, but rather that auditability is possible in case it should be needed.
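To show how far predictions and data alone can go, the sketch below computes both criteria from [3] on illustrative arrays: independence compares positive-decision rates across groups, and separation compares true-positive rates. No access to model internals is required; the data values are made up for illustration.

import numpy as np

# Illustrative audit data: model decisions, observed outcomes, and a
# protected-group label for each individual.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # model decisions
y_true = np.array([1, 0, 1, 0, 0, 1, 1, 0])  # ground-truth outcomes
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute

def independence_gap(y_pred, group):
    """Gap in positive-decision rates between groups (demographic parity)."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def separation_gap(y_pred, y_true, group):
    """Gap in true-positive rates between groups (one half of equalized odds)."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

print(f"independence gap: {independence_gap(y_pred, group):.2f}")          # 0.50
print(f"separation gap:   {separation_gap(y_pred, y_true, group):.2f}")    # 0.50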
Given space constraints, we do not attempt to prescribe here exactly how auditability should be implemented. According to our definition, it could be as simple as an exposed public API endpoint that takes parameters and returns a prediction. While an API endpoint is the simplest implementation for developers, there is no reason that a user interface to supply data and view predictions could not be created. For instance, the E-meter we discussed earlier exhaustively exposed its predictions and data to users, allowing them to edit and explore what text results in different predictions. Both fit the definition of auditability by allowing the user to provide known data as input and receive a prediction. While an API endpoint is a simple solution, further research should explore what form auditability should take in interactive programs.
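As a sketch of what auditing against such an endpoint could look like, the code below submits paired profiles that differ only in a sensitive attribute and compares the returned predictions, echoing sock-puppet auditing [29]. The endpoint URL, payload fields, and response schema are all hypothetical.

import json
from urllib import request

ENDPOINT = "https://example.com/model/predict"  # hypothetical audit endpoint

def predict(profile):
    """POST a profile to the (hypothetical) endpoint and return its score."""
    req = request.Request(
        ENDPOINT,
        data=json.dumps(profile).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["score"]  # hypothetical response field

# Paired probes: identical except for one sensitive attribute. Large,
# systematic score gaps across many such pairs would indicate bias.
base    = {"age": 30, "interests": ["hiking"], "gender": "woman"}
variant = dict(base, gender="man")
print(abs(predict(base) - predict(variant)))

A systematic audit would sweep many such pairs and then apply fairness criteria like those above to the collected predictions.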
5 Conclusion
Algorithmic transparency is purported to improve the user experience and simultaneously help diagnose algorithmic bias. We argue that these goals cannot be accomplished simultaneously with the same implementation. Exposing enough information to diagnose algorithmic bias overwhelms users and leads to a poor user experience. We therefore distinguish two aspects of transparency: explainability and auditability. Explainability aims to improve the user experience by making users aware of the inputs and reasons for the system's predictions; this is necessarily incomplete, providing just enough information to allow users to form simple mental models of the system. Auditability ensures that third parties and users can test a system's predictions for fairness and bias by providing their own data for prediction. Distinguishing these two aspects of transparency provides a way forward for industry implementations of usable and safe algorithmic systems.

ACKNOWLEDGMENTS
We would like to thank Victoria Hollis, Ryan Compton, and Lee Taber for their feedback on this project. We would also like to thank the anonymous reviewers for their insightful comments that helped refine this work.

REFERENCES
[1] Ashraf Abdul, Jo Vermeulen, Danding Wang, Brian Y. Lim, and Mohan Kankanhalli. 2018. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18, 1–18. https://doi.org/10.1145/3173574.3174156
[2] Julia Angwin and Jeff Larson. 2016. Machine Bias. ProPublica. Retrieved October 27, 2017 from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
[3] Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2018. Fairness and machine learning. Retrieved January 10, 2019 from https://fairmlbook.org/
[4] Reuben Binns, Max Van Kleek, Michael Veale, Ulrik Lyngs, Jun Zhao, and Nigel Shadbolt. 2018. "It's Reducing a Human Being to a Percentage": Perceptions of Justice in Algorithmic Decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18, 1–14. https://doi.org/10.1145/3173574.3173951
[5] Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam T. Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems, 4349–4357.
[6] Raluca Budiu. 2018. Can Users Control and Understand a UI Driven by Machine Learning? Nielsen Norman Group. Retrieved January 10, 2019 from https://www.nngroup.com/articles/machine-learning-ux/
[7] Andrea Bunt, Matthew Lount, and Catherine Lauzon. 2012. Are explanations always important?: A study of deployed, low-cost intelligent interactive systems. In Proceedings of the 2012 ACM International Conference on Intelligent User Interfaces, 169–178. Retrieved April 25, 2017 from http://dl.acm.org/citation.cfm?id=2166996
[8] Jenna Burrell. 2016. How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society 3, 1: 2053951715622512. https://doi.org/10.1177/2053951715622512
[9] Henriette Cramer, Jean Garcia-Gathright, Aaron Springer, and Sravana Reddy. 2018. Assessing and Addressing Algorithmic Bias in Practice. Interactions 25, 6: 58–63. https://doi.org/10.1145/3278156
[10] Amit Datta, Michael Carl Tschantz, and Anupam Datta. 2015. Automated Experiments on Ad Privacy Settings. Proceedings on Privacy Enhancing Technologies 2015, 1: 92–112. https://doi.org/10.1515/popets-2015-0007
[11] Mary T. Dzindolet, Scott A. Peterson, Regina A. Pomranky, Linda G. Pierce, and Hall P. Beck. 2003. The Role of Trust in Automation Reliance. Int. J. Hum.-Comput. Stud. 58, 6: 697–718. https://doi.org/10.1016/S1071-5819(03)00038-7
[12] Malin Eiband, Hanna Schneider, Mark Bilandzic, Julian Fazekas-Con, Mareike Haug, and Heinrich Hussmann. 2018. Bringing Transparency Design into Practice. In 23rd International Conference on Intelligent User Interfaces (IUI '18), 211–223. https://doi.org/10.1145/3172944.3172961
[13] Electronic Privacy Information Center. 2018. EPIC - Algorithms in the Criminal Justice System. Retrieved November 5, 2018 from https://epic.org/algorithmic-transparency/crim-justice/
[14] Motahhare Eslami, Karrie Karahalios, Christian Sandvig, Kristen Vaccaro, Aimee Rickman, Kevin Hamilton, and Alex Kirlik. 2016. First I like it, then I hide it: Folk Theories of Social Feeds. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2371–2382. Retrieved April 25, 2017 from http://dl.acm.org/citation.cfm?id=2858494
[15] Motahhare Eslami, Aimee Rickman, Kristen Vaccaro, Amirhossein Aleyasen, Andy Vuong, Karrie Karahalios, Kevin Hamilton, and Christian Sandvig. 2015. "I Always Assumed That I Wasn't Really That Close to [Her]": Reasoning About Invisible Algorithms in News Feeds. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15), 153–162. https://doi.org/10.1145/2702123.2702556
[16] Harold Garfinkel. 1991. Studies in Ethnomethodology. Wiley.
[17] H. P. Grice. 1975. Logic and conversation.
[18] Chloe Gui and Victoria Chan. 2017. Machine learning in medicine. University of Western Ontario Medical Journal 86, 2: 76–78. https://doi.org/10.5206/uwomj.v86i2.2060
[19] Jonathan L. Herlocker, Joseph A. Konstan, and John Riedl. 2000. Explaining collaborative filtering recommendations. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work - CSCW '00, 241–250. https://doi.org/10.1145/358916.358995
[20] IUI ATEC. 2018. IUI ATEC Call for Papers. Retrieved January 10, 2019 from https://iuiatec.files.wordpress.com/2018/09/iui-atec-2019-call-for-papers.pdf
[21] René F. Kizilcec. 2016. How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface. 2390–2395. https://doi.org/10.1145/2858036.2858402
[22] Todd Kulesza, Simone Stumpf, Margaret Burnett, and Irwin Kwan. 2012. Tell Me More?: The Effects of Mental Model Soundness on Personalizing an Intelligent Agent. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12), 1–10. https://doi.org/10.1145/2207676.2207678
[23] Brian Y. Lim and Anind K. Dey. 2011. Investigating intelligibility for uncertain context-aware applications. In Proceedings of the 13th International Conference on Ubiquitous Computing, 415–424. Retrieved April 25, 2017 from http://dl.acm.org/citation.cfm?id=2030168
[24] Brian Y. Lim, Anind K. Dey, and Daniel Avrahami. 2009. Why and why not explanations improve the intelligibility of context-aware intelligent systems. In Proceedings of the 27th International Conference on Human Factors in Computing Systems - CHI '09, 2119. https://doi.org/10.1145/1518701.1519023
[25] Cathy O'Neil. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.
[26] Eli Pariser. 2011. The Filter Bubble: What The Internet Is Hiding From You. Penguin Books Limited.
[27] PricewaterhouseCoopers. 2018. Consumer Intelligence Series: Prepare for the voice revolution. PwC. Retrieved October 30, 2018 from https://www.pwc.com/us/en/services/consulting/library/consumer-intelligence-series/voice-assistants.html
[28] Inioluwa Deborah Raji and Joy Buolamwini. Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products. 7.
[29] Christian Sandvig, Kevin Hamilton, Karrie Karahalios, and Cedric Langbort. Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms. 23.
[30] James Schaffer, Prasanna Giridhar, Debra Jones, Tobias Höllerer, Tarek Abdelzaher, and John O'Donovan. 2015. Getting the Message?: A Study of Explanation Interfaces for Microblog Data Analysis. In Proceedings of the 20th International Conference on Intelligent User Interfaces - IUI '15, 345–356. https://doi.org/10.1145/2678025.2701406
[31] Aaron Springer and Henriette Cramer. 2018. "Play PRBLMS": Identifying and Correcting Less Accessible Content in Voice Interfaces. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), 296:1–296:13. https://doi.org/10.1145/3173574.3173870
[32] Aaron Springer, Jean Garcia-Gathright, and Henriette Cramer. 2018. Assessing and Addressing Algorithmic Bias—But Before We Get There... In 2018 AAAI Spring Symposium Series.
[33] Aaron Springer, Victoria Hollis, and Steve Whittaker. 2017. Dice in the Black Box: User Experiences with an Inscrutable Algorithm. Retrieved April 24, 2017 from https://aaai.org/ocs/index.php/SSS/SSS17/paper/view/15372
[34] Aaron Springer and Steve Whittaker. 2018. What are You Hiding? Algorithmic Transparency and User Perceptions. In 2018 AAAI Spring Symposium Series.
[35] Aaron Springer and Steve Whittaker. 2019. Progressive Disclosure: Designing for Effective Transparency. In Proceedings of the 24th International Conference on Intelligent User Interfaces - IUI '19.
[36] Latanya Sweeney. 2013. Discrimination in Online Ad Delivery. Queue 11, 3: 10:10–10:29. https://doi.org/10.1145/2460276.2460278
[37] Rachael Tatman. 2017. Gender and Dialect Bias in YouTube's Automatic Captions. EACL 2017: 53.
[38] Jeffrey Warshaw, Nina Taft, and Allison Woodruff. 2016. Intuitions, Analytics, and Killing Ants: Inference Literacy of High School-educated Adults in the US. 16.
[39] Daniel S. Weld and Gagan Bansal. 2018. The Challenge of Crafting Intelligible Intelligence. arXiv:1803.04263 [cs]. Retrieved September 20, 2018 from http://arxiv.org/abs/1803.04263
[40] Jenna Wiens and Erica S. Shenoy. 2018. Machine Learning for Healthcare: On the Verge of a Major Shift in Healthcare Epidemiology. Clinical Infectious Diseases 66, 1: 149–153. https://doi.org/10.1093/cid/cix731