<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>IUI Workshops'19, March 20, 2019, Los Angeles, USA</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Dark Patterns of Explainability, Transparency, and User Control for Intelligent Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Michael Chromik</string-name>
          <email>michael.chromik@ifi.lmu.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Malin Eiband</string-name>
          <email>malin.eiband@ifi.lmu.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sarah Theres Völkel</string-name>
          <email>sarah.voelkel@ifi.lmu.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniel Buschek</string-name>
          <email>daniel.buschek@ifi.lmu.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>LMU Munich</institution>
          , Munich,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>20</volume>
      <issue>2019</issue>
      <abstract>
        <p>The rise of interactive intelligent systems has surfaced the need to make system reasoning and decision-making understandable to users through means such as explanation facilities. Apart from bringing significant technical challenges, the call to make such systems explainable, transparent and controllable may conflict with stakeholders' interests. For example, intelligent algorithms are often an inherent part of business models, so that companies might be reluctant to disclose details of their inner workings. In this paper, we argue that, as a consequence, this conflict might result in means for explanation, transparency and control that do not necessarily benefit users. Indeed, we even see a risk that the actual virtues of such means might be turned into dark patterns: user interfaces that purposefully deceive users for the benefit of other parties. We present and discuss such possible dark patterns of explainability, transparency and control, building on the dark UX design patterns by Gray et al. The resulting dark patterns serve as a thought-provoking addition to the greater discussion in this field.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainability</kwd>
        <kwd>Explanations</kwd>
        <kwd>Transparency</kwd>
        <kwd>Dark Patterns</kwd>
        <kwd>Interpretability</kwd>
        <kwd>Intelligibility</kwd>
        <kwd>User Control</kwd>
        <kwd>Intelligent Systems</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>CCS CONCEPTS</title>
      <p>• Human-centered computing → HCI theory, concepts and
models.</p>
    </sec>
    <sec id="sec-2">
      <title>INTRODUCTION</title>
      <p>Intelligent systems that are empowered by advanced machine
learning models have successfully been applied in closed contexts to
well-structured tasks (e.g., object recognition, translations, board
games) and often outperform humans in those. These advancements
fostered the introduction of intelligent systems into more sensitive
contexts of human life, like courts, personal finance or recruiting,
with the promise to augment human decision-making in those.</p>
      <p>
        However, the effectiveness of intelligent systems in sensitive
contexts cannot always be measured in objective terms. Often they
need to take soft factors, like safety, ethics and non-discrimination,
into account. Their acceptance will greatly depend on their ability
to make decisions and actions interpretable to their users and those
affected by them. Introducing interpretability through explanation
facilities [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] is widely discussed as an effective measure to
support users in understanding intelligent systems [
        <xref ref-type="bibr" rid="ref24 ref9">9, 24</xref>
        ]. Yet, these
measures are located at the intersection of potentially conflicting
interests between decision-subjects, users, developers and company
stakeholders [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ].
      </p>
      <p>
        First, companies may not see the benefit of investing in potentially
costly processes to include explanations and control options for
users unless doing so improves their expected revenues in some way.
Second, creating suitable explanations of algorithmic reasoning
presents a major technical challenge in itself, one that often requires
abstraction from the algorithmic complexity [
        <xref ref-type="bibr" rid="ref28 ref29">28, 29</xref>
        ]. Furthermore,
those systems are often integrated with critical business processes.
Companies might be reluctant to disclose explanations that honestly
describe their reasoning to the public as it might have an impact
on their reputation or competitive advantage. Forcing companies
to do so by law, like the right to explanation as part of the European
Union General Data Protection Regulation (GDPR) [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ], will most
likely not result in meaningful explanations for users.
      </p>
      <p>
        Therefore, we see a danger that means for algorithmic
explanation, transparency and control might not always be designed by
practitioners to benefit users. We even see a risk that users might
consciously be deceived for the benefit of other parties. Such
carefully crafted deceptive design solutions have gained notoriety in
the UI design community as dark patterns [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        In this paper, we extend the notion of prominent dark UX
patterns [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] to algorithmic explanation, transparency and control. We
discuss situations of opposing interests between the creators and
receivers of means for algorithmic explanation, transparency and control
that could arguably be considered questionable or unethical, and we
contribute to the discussion about the role of design practitioners
in this process.
      </p>
    </sec>
    <sec id="sec-3">
      <title>BACKGROUND</title>
    </sec>
    <sec id="sec-4">
      <title>Explanations in Intelligent Systems</title>
      <p>
        Haynes et al. define intelligent systems as “software programs
designed to act autonomously and adaptively to achieve goals defined
by their human developer or user” [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Intelligent systems typically
utilize a large knowledge base and decision-making algorithms.
Following Singh [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ], a system is intelligent if users need to
“attribute cognitive concepts such as intentions and beliefs to it in order
to characterize, understand, analyze, or predict its behavior”.
      </p>
      <p>
        Many of the intelligent systems developed today are based on
increasingly complex and non-transparent machine learning models,
which are difficult for humans to understand. However, sensitive
contexts with potentially significant consequences often require
some kind of human oversight and intervention. Yet, even
intelligent systems in everyday contexts often confuse users [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. For
example, many social network users are not aware that their news feed
is algorithmically curated [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. These insights result in ongoing
research activities to improve the interpretability of those systems.
Interpretability is the degree to which a human can understand
the cause of a decision [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. Interpretability can be achieved
either by transparency of the model’s inner workings and data, or
post-hoc explanations that convey information about a (potentially)
approximated cause – just like a human would explain [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ].
      </p>
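      <p>To make the distinction concrete, consider a minimal sketch of a post-hoc, local explanation (TypeScript; the model, feature names and probing scheme are our own illustrative assumptions, not a method from the cited work): instead of exposing the model's inner workings, it approximates the local cause of a decision by probing the model with small perturbations.</p>
      <preformat>
// Minimal sketch: approximate a post-hoc, local explanation for an
// opaque model by measuring each feature's local effect on the output.
type Model = (features: number[]) => number;

function localSensitivity(f: Model, x: number[], eps = 1e-3): number[] {
  const base = f(x);
  return x.map((_, i) => {
    const probe = [...x];
    probe[i] += eps;
    return (f(probe) - base) / eps; // approximate local effect of feature i
  });
}

// Example: an opaque score; the explanation conveys an approximated
// cause without revealing the model itself.
const score: Model = ([income, debt]) => 1 / (1 + Math.exp(-(2 * income - 3 * debt)));
console.log(localSensitivity(score, [0.6, 0.4]));
      </preformat>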
      <p>
        Different stakeholders (e.g., creator, owner, operator,
decision-subjects, examiner) of an intelligent system may require different
means of interpretability [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ]. Creators may demand transparency
about the system's algorithms, while operators might be more
interested in how well the system's conceptual model fits their mental
model (global explanation). Decision-subjects, on the other hand,
may be interested in the factors influencing their individual decision
(local explanation). This paper focuses on the interplay between
owners of intelligent systems and decision-subjects using it.
      </p>
      <p>
        Explanation facilities [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] are an important feature of usable
intelligent systems. They may produce explanations in the form of textual
representations, visualizations or references to similar cases [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ].
The explanations provided may enable users to better understand
why the system showed a certain behaviour and allow them to
refine their mental models of the system. Following Tomsett [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ],
we define explainability as the level to which a system can provide
clarification for the cause of its decision to its users.
      </p>
      <p>
        Previous research work suggests that explanation facilities
increase users’ trust towards a system [
        <xref ref-type="bibr" rid="ref23 ref28">23, 28</xref>
        ] and user
understanding [
        <xref ref-type="bibr" rid="ref10 ref18 ref20">10, 18, 20</xref>
        ]. However, how to present effective and usable
explanations in intelligent systems is still a challenge that lacks best
practices [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. Due to the complexity of intelligent systems,
explanations can easily overwhelm users or clutter the interface [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
Studies by Bunt et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] indicate that the costs of reading
explanations may outweigh their perceived benefits for users. Moreover, some
researchers warn that it may also be possible to gain users’ trust
with the provision of meaningless or misleading explanations [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ].
This might leave users prone to manipulation and give rise to the
emergence of dark patterns.
      </p>
    </sec>
    <sec id="sec-5">
      <title>Dark Patterns</title>
      <p>In general, a design pattern is defined as a proven and generalizing
solution to a recurring design problem. It captures design insights
in a formal and structured way and is intended to be reused by
other practitioners [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Design patterns originate from
architecture [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], but have been adopted in other fields such as software
engineering [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], proxemic interaction [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], interface design [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ],
game design [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ], and user experience design [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. In contrast, an
anti-pattern refers to a solution that is commonly used despite
being considered ineffective and despite the existence of another reusable and
proven solution [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
      <p>
        In 2010, Harry Brignull coined the term dark pattern [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] to
describe “a user interface that has been carefully crafted to trick users
into doing things [...] with a solid understanding of human psychology,
and they do not have the user’s interests in mind” [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. He contrasts
dark patterns with "honest" interfaces in terms of trading off
business revenue and user benefit [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]: while the latter put users first,
the former deliberately deceive users to increase profit within the
limits of law. Brignull [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] identified twelve different types of dark
patterns and collects examples in his "hall of shame". Gray et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]
further clustered these dark patterns into five categories: Nagging,
Obstruction, Sneaking, Interface Interference and Forced Action.
      </p>
    </sec>
    <sec id="sec-6">
      <title>DARK PATTERNS OF EXPLAINABILITY,</title>
    </sec>
    <sec id="sec-7">
      <title>TRANSPARENCY AND CONTROL</title>
      <p>
        What makes a pattern dark in the context of explainability,
transparency and control? We see two general ways: the phrasing (of
an explanation), and the way it is integrated and depicted in the
interface (of explanation facilities). We build on the five categories
of dark UX design patterns by Gray et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] and apply them to
the context of explainability, transparency, and user control, along
with concrete examples (Table 1).
      </p>
    </sec>
    <sec id="sec-8">
      <title>Nagging</title>
      <p>
        Nagging is defined as a “redirection of expected functionality that
may persist over one or more interactions” [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Transferred to the
context of this paper, Nagging interweaves explanation and control
with other, possibly hidden, functionality and thus forces users to
do things they did not intend to do or interrupts them during their
“actual” interaction.
      </p>
      <table-wrap id="tab1">
        <caption>
          <p>Table 1: Dark patterns of explainability, transparency and control, building on the dark UX design patterns by Gray et al. [<xref ref-type="bibr" rid="ref13">13</xref>].</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>Dark Pattern by Gray et al. [13]</th>
              <th>Transfer to Explainability and Control</th>
              <th>Examples (Sections 3.1–3.5)</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Nagging: “redirection of expected functionality that may persist over one or more interactions”</td>
              <td>Interrupt users’ desire for explanation and control</td>
              <td>Restricted Dialogue, Hidden Interaction</td>
            </tr>
            <tr>
              <td>Obstruction: “making a process more difficult than it needs to be, with the intent of dissuading certain action(s)”</td>
              <td>Make users shun the effort to find and understand an explanation while interacting with explanation or control facilities</td>
              <td>Information Overload, Nebulous Prioritization, Hidden Access, Nested Details, Hampered Selection</td>
            </tr>
            <tr>
              <td>Sneaking: “attempting to hide, disguise, or delay the divulging of information that is relevant to the user”</td>
              <td>Gain from users’ interaction with explanation/control facilities through hidden functions</td>
              <td>Explanation Marketing, Explanation Surveys</td>
            </tr>
            <tr>
              <td>Interface Interference: “manipulation of the user interface that privileges certain actions over others”</td>
              <td>Encourage explainability or control settings that are preferred by the system provider</td>
              <td>Unfavorable Default, Competing Elements, Limited View</td>
            </tr>
            <tr>
              <td>Forced Action: “requiring the user to perform a certain action to access [...] certain functionality”</td>
              <td>Force users to perform an action before providing them with useful explanations or control options</td>
              <td>Forced Data Exposure, Tit for Tat, Forced Dismissal</td>
            </tr>
          </tbody>
        </table>
      </table-wrap>
      <p>3.1.1 Example 1: Restricted Dialogue. One example that Gray et
al. present in their paper are pop-up dialogues that do not allow
permanent dismissal. This could be easily transferred to our context:
for example, an intelligent routing system could take control away
from users with the tempting offer “Would you like to let the system
choose the best route for you?”, where users can only select “Not
now” or “OK”, but have no “No” option (see Figure 1).</p>
      <fig id="fig1">
        <caption>
          <p>Figure 1: Pop-up dialogue “Would you like to let the system choose the best route for you?”, offering only the options “Not Now” and “OK”.</p>
        </caption>
      </fig>
      <p>3.1.2 Example 2: Hidden Interaction. Nagging might include
linking on-demand explanations with hidden advertisements: A click
on “Why was this recommended to me?” on an ad could indeed
open the explanation, but also the ad link (e.g., in two browser tabs).</p>
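      <p>To illustrate how little code the Hidden Interaction example requires, the following sketch (TypeScript; function names and URLs are hypothetical) answers the user’s request for an explanation while also opening the advertisement they never asked for:</p>
      <preformat>
// Hypothetical sketch of Hidden Interaction: the "Why was this
// recommended to me?" control does open the explanation, but it
// silently opens the sponsored link as well.
function onWhyRecommendedClick(explanationUrl: string, adUrl: string): void {
  window.open(explanationUrl, "_blank"); // what the user asked for
  window.open(adUrl, "_blank");          // piggybacked on the same click
}

// An honest design would take no adUrl at all and open only the explanation.
      </preformat>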
    </sec>
    <sec id="sec-9">
      <title>Obstruction</title>
      <p>
        Gray et al. define Obstruction in UX design as “making a process
more difficult than it needs to be, with the intent of dissuading certain
action(s)”. In the context of this paper, Obstruction makes it hard to
get (useful) explanations about the system’s decision-making and
to control the algorithmic settings. Users thus might shun the
additional effort this takes and rather accept the system as is.
3.2.1 Example 1: Information Overload. The use of very
technical language to explain system behaviour and decision-making,
or of very lengthy explanations, would most probably discourage users
from reading the given information at all (see Figure 3). This might be
comparable to what we currently see in end user licence agreements:
the use of very technical language and a very lengthy presentation
format results in users skipping the system prompt [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
3.2.2 Example 2: Nebulous Prioritization. When explaining a
decision or recommendation with a large number of influencing factors,
the system might limit those factors by some notion of “importance”
to not overwhelm the user. However, limiting factors requires a
(potentially arbitrary) prioritization, which might be used to
obfuscate sensitive factors, like family or relationship statuses. The
explanation could be framed vaguely (e.g., “This recommendation is
based on factors such as...”, i.e. not claiming to present all factors).
A code sketch at the end of this section illustrates such a rigged
prioritization.
3.2.3 Example 3: Hidden Access. One way to obstruct the path to
information could be to avoid “in-situ” links to explanations (e.g.,
offer no direct explanation button near a system recommendation).
Instead, the option for explanation and control could be deeply
hidden in the user profile and thus difficult to access.
3.2.4 Example 4: Nested Details. Similarly, the information detail
could be distributed, for example nested in many links: When users
want to have more than a superficial “This was shown in your feed,
because you seem to be interested in fashion”, they would have
to take many steps to reach the level of detail that satisfies their
information need.
3.2.5 Example 5: Hampered Selection. The system could also make
activating explanations tedious for users by forcing them to do this
for, say, every single category of product recommendation,
without giving a “select all” option. This could resemble the difficult
cookie management practices seen today on many ad-financed
websites. In another example setting, the information in an
intelligent routing system could be spread along different sections of the
recommended route and thus would have to be activated for each
section separately.
      </p>
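      <p>As announced above, a minimal sketch of Nebulous Prioritization (TypeScript; the data model, weights and cut-off are our own illustrative assumptions) shows how an opaque “importance” ranking can push sensitive factors below the display cut-off:</p>
      <preformat>
// Hypothetical sketch of Nebulous Prioritization: factors are ranked
// by an opaque "importance" score that quietly demotes sensitive ones.
interface Factor { name: string; weight: number; sensitive: boolean; }

function factorsShownToUser(all: Factor[], topK = 2): Factor[] {
  return all
    .map(f => ({ ...f, score: f.sensitive ? 0 : Math.abs(f.weight) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK); // "we only show the most important factors"
}

const shown = factorsShownToUser([
  { name: "payment history", weight: 0.3, sensitive: false },
  { name: "relationship status", weight: 0.7, sensitive: true },
  { name: "postcode", weight: 0.5, sensitive: true },
  { name: "income", weight: 0.2, sensitive: false },
]);
// Only "payment history" and "income" surface, although the sensitive
// factors actually dominate the decision.
console.log(shown.map(f => f.name));
      </preformat>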
    </sec>
    <sec id="sec-10">
      <title>Sneaking</title>
      <p>
        The dark pattern of Sneaking is defined as “attempting to hide,
disguise, or delay the divulging of information that is relevant to
the user” [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Following this dark pattern, systems could use UI
elements for explainability and control, to sneak in information
motivated by different intentions than interpretability.
      </p>
      <fig id="fig2">
        <caption>
          <p>Figure 2: Mock interface in which the link to “Dismiss explanations” is barely distinguishable from the surrounding text, while other elements (“An Ad!”) compete for the user’s attention.</p>
        </caption>
      </fig>
      <fig id="fig3">
        <caption>
          <p>Figure 3: Mock interface in which an explanation (“Why you might like this product, because: ...”) is buried in a lengthy block of text.</p>
        </caption>
      </fig>
      <p>
3.3.1 Example 1: Explanation Marketing. For example, a web
advertisement service could explain a particular ad by showing previously
seen ads in which the user had seemed interested. Thus, the
user’s interest in an explanation is utilized to present multiple
(potentially paid) advertisements. In a similar fashion, an online shop
could use the opportunity of explaining product recommendations
to promote further products. Also, ads might be directly integrated
into the phrasing of explanations. For instance, an intelligent maps
application might explain its routing decisions along the lines of
“This route is recommended because it passes by the following
stores you’ve visited in the past...”.
3.3.2 Example 2: Explanation Surveys. Another approach might
present an explanation and ask users for feedback in order to
improve future explanations. This way, a company might enrich its
user data and utilize it for purposes beyond explanation.</p>
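      <p>The Explanation Marketing example could be implemented as simply as the following sketch (TypeScript; the data shapes are hypothetical), in which the evidence slots of an explanation are filled from the paid-promotion pool before the user’s actual history:</p>
      <preformat>
// Hypothetical sketch of Explanation Marketing: the response to
// "Why was this recommended?" is assembled so that paid promotions
// fill the "items like these" slots before genuine history items.
interface Item { id: string; title: string; promoted: boolean; }

function buildExplanation(history: Item[], promotions: Item[]) {
  const evidence = [...promotions.slice(0, 2), ...history.slice(0, 1)];
  return {
    text: "Recommended because you viewed items like these:",
    supportingItems: evidence, // presented as explanation, sold as ad space
  };
}
      </preformat>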
    </sec>
    <sec id="sec-11">
      <title>3.4 Interface Interference</title>
      <p>
        Gray et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] define this dark pattern as “manipulation of the user
interface that privileges certain actions over others.” In our context,
this dark pattern privileges UI settings and user states that do not
contribute to – or actively suppress – explainability, transparency,
and user control.
3.4.1 Example 1: Unfavorable Default. For example, a dark pattern
in this category could preselect a “hide explanations” option during
the user onboarding in a financial robo-advisor system. This could
be motivated to the user as “uncluttering” the dashboard or UI
layout in general.
3.4.2 Example 2: Limited View. Explanations and control elements
could also be laid out in a way that significantly reduces the space
for the actual content or interferes with viewing it. This could
encourage users to dismiss explanations to increase usability. Even
simpler, links to an explanation might be presented in a barely
visible manner. Figure 2 shows an example.
3.4.3 Example 3: Competing Elements. Further integration of
explanations with the system’s business model might involve, for
instance, starting a countdown timer upon opening an explanation
for a booking recommendation to compete for the user’s attention.
This timer could indicate a guaranteed price or availability, thus
putting pressure on the user to abandon the explanation view in
order to continue with the booking process.
      </p>
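      <p>The Competing Elements example needs little more than a timer, as in this sketch (TypeScript; the duration and callbacks are our own assumptions):</p>
      <preformat>
// Hypothetical sketch of Competing Elements: opening the explanation
// view also starts a countdown on the offer, pressuring the user to
// abandon the explanation and return to the booking flow.
function openExplanation(showExplanation: () => void, expireOffer: () => void): void {
  showExplanation();
  const holdSeconds = 90; // "price guaranteed for 90 seconds"
  setTimeout(expireOffer, holdSeconds * 1000);
}
      </preformat>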
    </sec>
    <sec id="sec-12">
      <title>Forced Action</title>
      <p>
        This dark pattern is defined as “requiring the user to perform a certain
action to access (or continue to access) certain functionality” [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. In
our context, the user could be forced to perform an action that (also)
dismisses functionality or information related to explainability,
transparency and control.
3.5.1 Example 1: Forced Data Exposure. This dark pattern could be
used to collect valuable user data under the pretext of explanation.
The user might be forced to provide further personal information
(e.g., social connections) before receiving personalized explanations.
Otherwise, the user would be left with a generic high-level
explanation.
3.5.2 Example 2: Forced Dismissal. A user could be forced to
dismiss an explanation pop-up in order to see the results of a request
displayed underneath (e.g., during the investment process of a
robo-advisor system). This dismissal might be interpreted as a permanent
decision to no longer display any explanations.
3.5.3 Example 3: Tit for Tat. Regarding transparency, an e-commerce
recommender system might force the user to first confirm an action
(e.g., place an order) before it displays the factors that influenced
the recommendation. For instance, the system might proclaim that
so far not enough data is available to explain its recommendation.
      </p>
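      <p>A sketch of the Forced Data Exposure example (TypeScript; the request shape and wording are hypothetical) gates the quality of the explanation on how much additional personal data the user agrees to share:</p>
      <preformat>
// Hypothetical sketch of Forced Data Exposure: explanation detail is
// gated on additional personal data, under the pretext of personalization.
interface ExplanationRequest { userId: string; sharedSocialGraph: boolean; }

function getExplanation(req: ExplanationRequest): string {
  if (!req.sharedSocialGraph) {
    // Generic fallback: technically an explanation, practically useless.
    return "This result is based on your profile and activity.";
  }
  return "This result is based on your purchase history, your location, " +
    "and items popular among your contacts.";
}
      </preformat>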
    </sec>
    <sec id="sec-13">
      <title>SUMMARY AND DISCUSSION</title>
      <p>
        In this paper, we presented possible dark patterns of explanation,
transparency and control of intelligent systems based on the
categorization of dark UX design patterns by Gray et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. We see
the possibility that simple legal obligations for explanation might
result in dark patterns rather than user benefits (e.g., similar to
cookie settings on many ad-financed websites). Instead, with our
work we intend to promote the ongoing research on explainability
as well as the discussion on explanation standards and their effects
on users.
      </p>
    </sec>
    <sec id="sec-14">
      <title>What Are the Consequences of Dark</title>
    </sec>
    <sec id="sec-15">
      <title>Patterns?</title>
      <p>We see several possibly negative consequences of dark patterns in
this context: Users might be annoyed and irritated by explanations,
developing a negative attitude towards them. Examples include
explanations presented in the Nagging patterns, which
automatically open an advertisement along with the explanation; Forced
Action patterns, which hinder the user from accessing desired results; or
Sneaking patterns, which disguise advertisements as explanations.
Similarly, users might lose interest in explanations when Interface
Interference or Obstruction patterns are applied, which, e.g., show
long and tedious-to-read explanations. As a consequence, users
might dismiss or disable explanations entirely.</p>
      <p>On the other hand, users might not recognize explanations when
they are hidden in profile settings. When users know that
intelligent systems must provide explanations by law, the absence of
explanations might mistakenly make users believe that the system
does not use algorithmic decision-making. Hence, users might
develop an incorrect understanding of algorithmic decision-making
in general.</p>
      <p>
        Furthermore, Obstruction patterns might lead to explanations
which promote socially acceptable factors for algorithmic
decision-making and withhold more critical or unethical ones. As a result,
this might hinder the formation of correct mental models of the
system’s inner workings. Hence, users might not be able to
critically reflect on the system’s correctness and potential biases. As
previous work in psychology suggests, users might accept placebo
explanations without conscious attention as long as no additional
effort is required from them [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. When explanations use very
technical language and are difficult to understand, users might simply
skip them. This lack of knowledge and uncertainty about the
underlying factors influencing the algorithm might lead to algorithmic
anxiety [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
    </sec>
    <sec id="sec-16">
      <title>Which Further Dark Patterns May Appear in this Context?</title>
      <p>
        In this paper, we transferred the dark pattern categories by Gray et
al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] to explainability and control of intelligent systems. However,
there might be further patterns in this context. For example, we
propose a pattern based on Social Pressure that uses information
about other people – who are relevant to the user – in a way that is
likely to be unknown or not endorsed by those people. For example,
when Bob is shown an advertisement for diet products, explained
by “Ask Alice about this”, he might be annoyed with Alice without
her knowledge. Similarly, Alice’s boss might be recommended a
lingerie shop that “Alice might be interested in” as well.
      </p>
    </sec>
    <sec id="sec-17">
      <title>How Do Dark Patterns Afect Complex</title>
    </sec>
    <sec id="sec-18">
      <title>Ecosystems?</title>
      <p>
        In this paper, we examined dark patterns which deceive
decision-subjects who have means of directly interacting with the intelligent
system. However, the ecosystem model of an intelligent system
might be more complex and involve multiple stakeholders [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ].
For example, in a financial decision-support context the system
could ascertain the creditworthiness of a person (decision-subject),
but only present an incontestable subset of reasons to the bank
employee (operator) to not impact the reputation of the company
(owner).
      </p>
    </sec>
    <sec id="sec-19">
      <title>Can All Aspects of Dark Patterns Be</title>
    </sec>
    <sec id="sec-20">
      <title>Avoided?</title>
      <p>Intelligent systems often use machine learning algorithms, which
have hundreds of input variables. If all of these variables are
explained, the explanation becomes a long list of text, which we
identified as a dark pattern above. On the other hand, if the system only
shows a subset of input variables in an explanation, this might
bias the user’s mental model, which is another dark pattern. Some
explanations might be easier for users to understand than others.
Hence, future studies have to evaluate which explanations are most
helpful for users to understand the system.
    </sec>
    <sec id="sec-21">
      <title>How Can Dark Patterns Inform Research and Design?</title>
      <p>In general, reflecting on dark patterns can be useful for HCI
researchers and practitioners to learn how to do things properly by
considering how not to do them. As a concrete use case, dark
patterns can serve as a baseline for empirical studies to evaluate new
design approaches: For example, a new explanation design could
be compared against a placebo explanation – and not (only) against
a version of the system with no explanation at all. Finally, dark
patterns raise awareness that having just any explanation is not sufficient.
Instead, they motivate the HCI community to work on specific
guidelines and standards for explanations to make sure that these
actually support users in gaining awareness and understanding of
algorithmic decision-making.
    </sec>
    <sec id="sec-22">
      <title>CONCLUSION</title>
      <p>
        The prevalence of intelligent systems poses several challenges for
HCI researchers and practitioners in supporting users to successfully
interact with these systems. Explanations of how an intelligent
system works can offer benefits for user satisfaction and
control [
        <xref ref-type="bibr" rid="ref19 ref34">19, 34</xref>
        ], awareness of algorithmic decision making [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], as
well as trust in the system [
        <xref ref-type="bibr" rid="ref25 ref30 ref8">8, 25, 30</xref>
        ]. Since 2018, companies have been
legally obliged to offer users a right to explanation, enshrined in the
General Data Protection Regulation [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ].
      </p>
      <p>However, providers of intelligent systems might be reluctant to
integrate explanations that disclose system reasoning to the public
for fear of a negative impact on their reputation or competitive
advantage. Hence, legal obligations alone might not result in useful
facilities for explanation and control for the end user.</p>
      <p>
        In this paper, we have drawn on the notion of dark UX
patterns [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] to outline questionable designs for explanation and
control. These arise from explanation facilities that are not primarily
designed with the users’ benefits in mind, but purposely deceive
users for the benefit of other parties.
      </p>
      <p>In conclusion, we argue that while a legal right to explanation
might be an acknowledgement of the necessity to support users
in interacting with intelligent systems, it is sufficient neither for users
nor for our research community. By pointing to potential negative
design outcomes in this paper, we hope to encourage researchers
and practitioners in HCI and IUI communities to work towards
specific guidelines and standards for “good” facilities for explanation,
transparency and user control.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Christopher</given-names>
            <surname>Alexander</surname>
          </string-name>
          , Sara Ishikawa, Murray Silverstein, Max Jacobson, Ingrid Fiksdahl-King, and Shlomo Angel
          .
          <year>1977</year>
          .
          <article-title>A Pattern Language: towns, buildings, construction</article-title>
          . Oxford University Press, Oxford, UK.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Omri</given-names>
            <surname>Ben-Shahar</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>The Myth of the “Opportunity to Read” in Contract Law</article-title>
          .
          <source>European Review of Contract Law</source>
          <volume>5</volume>
          ,
          <issue>1</issue>
          (
          <year>2009</year>
          ),
          <fpage>1</fpage>
          -
          <lpage>28</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Harry</given-names>
            <surname>Brignull</surname>
          </string-name>
          .
          <year>2010</year>
          . Dark Patterns. darkpatterns.org.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Harry</given-names>
            <surname>Brignull</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>Dark Patterns: Deception vs. Honesty in UI Design</article-title>
          . https://alistapart.com/article/dark-patterns-deception-vs-honesty-in-ui-design, accessed November 28, 2018.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Harry</given-names>
            <surname>Brignull</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>Dark Patterns: User Interfaces Designed to Trick People</article-title>
          . http://talks.ui-patterns.com/videos/dark-patterns-user-interfaces-designed-to-trick-people, accessed November 28, 2018
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Taina</given-names>
            <surname>Bucher</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms</article-title>
          . Information,
          <source>Communication &amp; Society</source>
          <volume>20</volume>
          , 1 (jan
          <year>2017</year>
          ),
          <fpage>30</fpage>
          -
          <lpage>44</lpage>
          . https://doi.org/10.1080/1369118X.2016.1154086
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Andrea</given-names>
            <surname>Bunt</surname>
          </string-name>
          , Matthew Lount, and
          <string-name>
            <given-names>Catherine</given-names>
            <surname>Lauzon</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Are explanations always important?: A study of deployed, low-cost intelligent interactive systems</article-title>
          .
          <source>In Proceedings of the ACM Conference on Intelligent User Interfaces (IUI '12)</source>
          . ACM, New York, NY, USA,
          <fpage>169</fpage>
          -
          <lpage>178</lpage>
          . https://doi.org/10.1145/2166966.2166996
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Henriette</given-names>
            <surname>Cramer</surname>
          </string-name>
          , Bob Wielinga, Satyan Ramlal, Vanessa Evers, Lloyd Rutledge, and
          <string-name>
            <given-names>Natalia</given-names>
            <surname>Stash</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>The effects of transparency on perceived and actual competence of a content-based recommender</article-title>
          .
          <source>CEUR Workshop Proceedings</source>
          <volume>543</volume>
          (
          <year>2009</year>
          ),
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          . https://doi.org/10.1007/s11257-008-9051-3
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Finale</given-names>
            <surname>Doshi-Velez</surname>
          </string-name>
          and
          <string-name>
            <given-names>Been</given-names>
            <surname>Kim</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Towards A Rigorous Science of Interpretable Machine Learning</article-title>
          .
          <source>arXiv e-prints (Feb</source>
          .
          <year>2017</year>
          ). https://arxiv.org/abs/1702.08608
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>John</surname>
            <given-names>J. Dudley</given-names>
          </string-name>
          and Per Ola Kristensson.
          <year>2018</year>
          .
          <article-title>A Review of User Interface Design for Interactive Machine Learning</article-title>
          .
          <source>ACM Transactions on Interactive Intelligent Systems</source>
          <volume>8</volume>
          ,
          <issue>2</issue>
          (jun
          <year>2018</year>
          ),
          <fpage>1</fpage>
          -
          <lpage>37</lpage>
          . https://doi.org/10.1145/3185517
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Malin</surname>
            <given-names>Eiband</given-names>
          </string-name>
          , Sarah Theres Völkel, Daniel Buschek, Sophia Cook, and
          <string-name>
            <given-names>Heinrich</given-names>
            <surname>Hussmann</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>When People and Algorithms Meet: Assessing User-reported Problems to Inform Support in Intelligent Everyday Applications</article-title>
          .
          <source>In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI '19)</source>
          . ACM, New York, NY, USA.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Erich</surname>
            <given-names>Gamma</given-names>
          </string-name>
          , Richard Helm, Ralph Johnson, and John Vlissides.
          <year>1994</year>
          .
          <article-title>Design Patterns: Elements of Reusable Object-Oriented Software</article-title>
          .
          <source>Addison Wesley</source>
          , Boston, MA, USA.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Colin</surname>
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Gray</surname>
          </string-name>
          , Yubo Kou, Bryan Battles, Joseph Hoggatt, and Austin L. Toombs.
          <year>2018</year>
          .
          <article-title>The Dark (Patterns) Side of UX Design</article-title>
          .
          <source>In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18)</source>
          . ACM, New York, NY, USA, Article
          <volume>534</volume>
          , 14 pages. https://doi.org/10.1145/3173574.3174108
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Saul</surname>
            <given-names>Greenberg</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Sebastian</given-names>
            <surname>Boring</surname>
          </string-name>
          , Jo Vermeulen, and
          <string-name>
            <given-names>Jakub</given-names>
            <surname>Dostal</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>Dark Patterns in Proxemic Interactions: A Critical Perspective</article-title>
          .
          <source>In Proceedings of the 2014 Conference on Designing Interactive Systems (DIS '14)</source>
          . ACM, New York, NY, USA,
          <fpage>523</fpage>
          -
          <lpage>532</lpage>
          . https://doi.org/10.1145/2598510.2598541
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Steven</surname>
            <given-names>R Haynes</given-names>
          </string-name>
          , Mark A Cohen, and Frank E Ritter.
          <year>2009</year>
          .
          <article-title>Designs for explaining intelligent agents</article-title>
          .
          <source>International Journal of Human-Computer Studies 67</source>
          ,
          <issue>1</issue>
          (
          <year>2009</year>
          ),
          <fpage>90</fpage>
          -
          <lpage>110</lpage>
          . https://doi.org/10.1016/j.ijhcs.2008.09.008
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Shagun</surname>
            <given-names>Jhaver</given-names>
          </string-name>
          , Yoni Karpfen, and
          <string-name>
            <given-names>Judd</given-names>
            <surname>Antin</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Algorithmic Anxiety and Coping Strategies of Airbnb Hosts</article-title>
          .
          <source>In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18)</source>
          . ACM, New York, NY, USA, Article
          <volume>421</volume>
          , 12 pages. https://doi.org/10.1145/3173574.3173995
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Andrew</given-names>
            <surname>Koenig</surname>
          </string-name>
          .
          <year>1998</year>
          .
          <article-title>Patterns and Antipatterns</article-title>
          . In The Patterns Handbooks, Linda Rising (Ed.). Cambridge University Press, New York, NY, USA,
          <fpage>383</fpage>
          -
          <lpage>389</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Todd</surname>
            <given-names>Kulesza</given-names>
          </string-name>
          , Margaret Burnett,
          <string-name>
            <surname>Weng-Keen Wong</surname>
            , and
            <given-names>Simone</given-names>
          </string-name>
          <string-name>
            <surname>Stumpf</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Principles of Explanatory Debugging to Personalize Interactive Machine Learning</article-title>
          .
          <source>In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI '15)</source>
          . ACM, New York, NY, USA,
          <fpage>126</fpage>
          -
          <lpage>137</lpage>
          . https://doi.org/10.1145/2678025.2701399
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Todd</surname>
            <given-names>Kulesza</given-names>
          </string-name>
          , Simone Stumpf, Margaret Burnett, and
          <string-name>
            <given-names>Irwin</given-names>
            <surname>Kwan</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Tell Me More?: The Effects of Mental Model Soundness on Personalizing an Intelligent Agent</article-title>
          .
          <source>In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12)</source>
          . ACM, New York, NY, USA,
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          . https://doi.org/10.1145/2207676.2207678
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Todd</surname>
            <given-names>Kulesza</given-names>
          </string-name>
          , Simone Stumpf, Margaret Burnett, Sherry Yang,
          <string-name>
            <given-names>Irwin</given-names>
            <surname>Kwan</surname>
          </string-name>
          , and
          <string-name>
            <surname>Weng-Keen Wong</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Too much, too little, or just right? Ways explanations impact end users' mental models</article-title>
          .
          <source>In 2013 IEEE Symposium on Visual Languages and Human Centric Computing. IEEE</source>
          , New York, NY, USA,
          <fpage>3</fpage>
          -
          <lpage>10</lpage>
          . https://doi.org/10.1109/VLHCC.2013.6645235
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Ellen J Langer</surname>
            , Arthur Blank, and
            <given-names>Benzion</given-names>
          </string-name>
          <string-name>
            <surname>Chanowitz</surname>
          </string-name>
          .
          <year>1978</year>
          .
          <article-title>The mindlessness of ostensibly thoughtful action: The role of “placebic” information in interpersonal interaction</article-title>
          .
          <source>Journal of Personality and Social Psychology</source>
          <volume>36</volume>
          ,
          <issue>6</issue>
          (
          <year>1978</year>
          ),
          <fpage>635</fpage>
          -
          <lpage>642</lpage>
          . http://dx.doi.org/10.1037/0022-3514.36.6.635
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Brian</surname>
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>Lim and Anind</surname>
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Dey</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>Design of an intelligible mobile context-aware application</article-title>
          .
          <source>In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services (MobileHCI '11)</source>
          . ACM, New York, NY, USA,
          <fpage>157</fpage>
          -
          <lpage>166</lpage>
          . https://doi.org/10.1145/2037373.2037399
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <surname>Brian</surname>
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>Lim</surname>
          </string-name>
          ,
          <string-name>
            <surname>Anind K. Dey</surname>
            , and
            <given-names>Daniel</given-names>
          </string-name>
          <string-name>
            <surname>Avrahami</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Why and why not explanations improve the intelligibility of context-aware intelligent systems</article-title>
          .
          <source>In Proceedings of the 27th International Conference on Human Factors in Computing Systems (CHI '09)</source>
          . ACM, New York, NY, USA,
          <fpage>2119</fpage>
          -
          <lpage>2128</lpage>
          . https://doi.org/10.1145/1518701.1519023
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Zachary</surname>
            <given-names>Chase</given-names>
          </string-name>
          <string-name>
            <surname>Lipton</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>The Mythos of Model Interpretability</article-title>
          .
          <source>CoRR abs/1606.03490</source>
          (
          <year>2016</year>
          ). http://arxiv.org/abs/1606.03490
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <surname>Joseph</surname>
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>Lyons</surname>
            , Garrett G. Sadler, Kolina Koltai, Henri Battiste, Nhut T. Ho, Lauren C. Hofmann, David Smith,
            <given-names>Walter</given-names>
          </string-name>
          <string-name>
            <surname>Johnson</surname>
            , and
            <given-names>Robert</given-names>
          </string-name>
          <string-name>
            <surname>Shively</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Shaping Trust Through Transparent Design: Theoretical and Experimental Guidelines</article-title>
          .
          <source>In Advances in Human Factors in Robots and Unmanned Systems, Pamela Savage-Knepshield and Jessie Chen (Eds.)</source>
          . Springer International Publishing, Cham,
          <fpage>127</fpage>
          -
          <lpage>136</lpage>
          . https://doi.org/10.1007/978-3-319-41959-6_11
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>Tim</given-names>
            <surname>Miller</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Explanation in Artificial Intelligence: Insights from the Social Sciences</article-title>
          .
          <source>CoRR abs/1706.07269</source>
          (
          <year>2017</year>
          ). http://arxiv.org/abs/1706.07269
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <surname>Emilee</surname>
            <given-names>Rader</given-names>
          </string-name>
          , Kelley Cotter, and
          <string-name>
            <given-names>Janghee</given-names>
            <surname>Cho</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Explanations as Mechanisms for Supporting Algorithmic Transparency</article-title>
          .
          <source>In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18)</source>
          . ACM, New York, NY, USA,
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          . https://doi.org/10.1145/3173574.3173677
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>Marco</given-names>
            <surname>Tulio</surname>
          </string-name>
          <string-name>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Sameer</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>and Carlos</given-names>
            <surname>Guestrin</surname>
          </string-name>
          .
          <year>2016</year>
          . “
          <article-title>Why Should I Trust You?”: Explaining the Predictions of Any Classifier</article-title>
          .
          <source>In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16)</source>
          . ACM, New York, NY, USA,
          <fpage>1135</fpage>
          -
          <lpage>1144</lpage>
          . https://doi.org/10.1145/2939672.2939778
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>Marco</given-names>
            <surname>Tulio</surname>
          </string-name>
          <string-name>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Sameer</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>and Carlos</given-names>
            <surname>Guestrin</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Anchors: High-precision model-agnostic explanations</article-title>
          .
          <source>In AAAI Conference on Artificial Intelligence.</source>
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <surname>James</surname>
            <given-names>Schaffer</given-names>
          </string-name>
          , Prasanna Giridhar, Debra Jones, Tobias Höllerer, Tarek Abdelzaher, and
          <string-name>
            <surname>John O'Donovan</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Getting the Message?: A Study of Explanation Interfaces for Microblog Data Analysis</article-title>
          .
          <source>In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI '15)</source>
          . ACM, New York, NY, USA,
          <fpage>345</fpage>
          -
          <lpage>356</lpage>
          . https://doi.org/10.1145/2678025.2701406
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <surname>Munindar</surname>
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Singh</surname>
          </string-name>
          .
          <year>1994</year>
          .
          <article-title>Multiagent systems</article-title>
          . Springer, Berlin, Heidelberg, Germany,
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          . https://doi.org/10.1007/BFb0030532
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          <source>[32] The European Parliament and Council</source>
          .
          <year>2016</year>
          .
          <article-title>Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data</article-title>
          ,
          <source>and repealing Directive</source>
          <volume>95</volume>
          /46/EC (
          <article-title>General Data Protection Regulation)</article-title>
          .
          <source>Official Journal of the European Union</source>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>Jennifer</given-names>
            <surname>Tidwell</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Designing interfaces: Patterns for Effective Interaction Design</article-title>
          .
          <string-name>
            <surname>O'Reilly Media</surname>
          </string-name>
          , Inc., Sebastopol, CA, USA.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>N.</given-names>
            <surname>Tintarev</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Masthoff</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>A Survey of Explanations in Recommender Systems</article-title>
          .
          <source>In 2007 IEEE 23rd International Conference on Data Engineering Workshop</source>
          . IEEE, New York, NY, USA,
          <fpage>801</fpage>
          -
          <lpage>810</lpage>
          . https://doi.org/10.1109/ICDEW.2007.4401070
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <surname>Richard</surname>
            <given-names>Tomsett</given-names>
          </string-name>
          , Dave Braines, Dan Harborne, Alun Preece, and
          <string-name>
            <given-names>Supriyo</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems</article-title>
          . arXiv e-prints (
          <year>June 2018</year>
          ). http://arxiv.org/abs/1806.07552
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>Adrian</given-names>
            <surname>Weller</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Challenges for Transparency</article-title>
          .
          <source>CoRR</source>
          (
          <year>2017</year>
          ). http://arxiv.org/abs/1708.01870
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <surname>José</surname>
            <given-names>P Zagal</given-names>
          </string-name>
          , Staffan Björk, and
          <string-name>
            <given-names>Chris</given-names>
            <surname>Lewis</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Dark patterns in the design of games</article-title>
          .
          <source>In Foundations of Digital Games</source>
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>