<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>IUI Workshops'19, March 20, 2019, Los Angeles, USA</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Transparency in Advice-Giving Systems: A Framework and a Research Model for Transparency Provision</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ruijing Zhao</string-name>
          <email>ruijing.zhao@sauder.ubc.ca</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Izak Benbasat</string-name>
          <email>izak.benbasat@sauder.ubc.ca</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hasan Cavusoglu</string-name>
          <email>cavusoglu@sauder.ubc.ca</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Sauder School of Business, University of British Columbia</institution>
          ,
          <addr-line>Vancouver</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>20</volume>
      <issue>2019</issue>
      <abstract>
        <p>Advice-giving systems (AGSs) provide recommendations based on users' unique preferences or needs. Maximizing users' adoption of AGSs is an effective way for e-commerce websites to attract users and increase profits. AGS transparency, defined as the extent to which information about a system's reasoning is provided and made available to users, has been shown to be effective in increasing users' adoption of AGSs. While previous studies have identified providing explanations as an effective way of enhancing AGS transparency, most of them did not further explore the optimal transparency provision strategy of AGSs. We argue that instead of setting a uniform rule for providing AGS transparency, we should develop optimal transparency provision strategies for different types of AGSs and users based on their unique features. In this paper, we first develop a framework of AGS transparency provision and identify six components of AGS transparency provision strategies. We then develop a research model of AGS transparency provision strategy with a set of propositions. We hope that, based on this model, researchers can evaluate how best to provide transparency for AGSs and users with different characteristics. Our work contributes to the existing knowledge by exploring how AGS and user characteristics influence the optimal strategy for providing AGS transparency, and to practice by offering design suggestions for AGS explanation interfaces.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>CCS Concepts
• Information systems➝Decision support systems
• Human-centered computing➝HCI theory, concepts and models
• Computing methodologies➝Cognitive science
• Applied computing➝Online shopping</p>
      <p>Author Keywords
Advice-giving systems; transparency; explanations; justifications; adoption.</p>
      <p>IUI Workshops'19, March 20, 2019, Los Angeles, USA.</p>
      <p>Copyright © 2019 for the individual papers by the papers' authors.
Copying permitted for private and academic purposes. This volume is
published and copyrighted by its editors.</p>
      <p>
        INTRODUCTION
Advice-giving systems (AGSs) are software systems that
offer users personalized recommendations or decision
aids based on their unique preferences or needs
        <xref ref-type="bibr" rid="ref22 ref41 ref50 ref52 ref54 ref55">(Xiao and
Benbasat 2007; 2014)</xref>
        . Due to their effectiveness in
reducing users’ information overload
        <xref ref-type="bibr" rid="ref12 ref2 ref29 ref48 ref53">(Komiak and
Benbasat, 2008)</xref>
        and facilitating users’ decision-making
process
        <xref ref-type="bibr" rid="ref12 ref2 ref29 ref48 ref53">(Wang and Benbasat, 2008)</xref>
        , maximizing users’
adoption of AGSs is an effective way for e-commerce
websites to attract users and increase profits
        <xref ref-type="bibr" rid="ref23 ref28">(Komiak and
Benbasat 2006)</xref>
        . System transparency, defined as the extent
to which information about a system’s reasoning is provided
and made available to users
        <xref ref-type="bibr" rid="ref18 ref21 ref4 ref44 ref45 ref46 ref58 ref64">(Amalia, 2017; Cho et al., 2017;
Hosseini et al., 2018; Leape et al. 2009; Yamazaki and
Yoon, 2016; Zhu, 2002)</xref>
        , is considered a key factor
influencing users’ adoption of AGSs and their acceptance of
AGS outcomes
        <xref ref-type="bibr" rid="ref2 ref3 ref42 ref5">(Cramer et al., 2008; Pu et al., 2011)</xref>
        .
Previous studies have identified providing explanations as
an effective way of enhancing AGS transparency
        <xref ref-type="bibr" rid="ref11 ref19">(Bilgic
and Mooney 2005; Gedikli et al. 2014; Herlocker 2000)</xref>
        and
users’ adoption of AGSs
        <xref ref-type="bibr" rid="ref15 ref20 ref22 ref22 ref36 ref41 ref41 ref50 ref50 ref52 ref52 ref54 ref54 ref55 ref55 ref60">(Arnold et al. 2006; Gregor and
Benbasat 1999; Hernando et al. 2013; Mao and Benbasat
2000; Pu and Chen, 2007; Wang and Benbasat 2007; Ye
and Johnson 1995)</xref>
        . Despite the fruitful research findings in
the IS literature, there has been little attention to the
optimal transparency provision strategy of AGSs. In this
paper, we define the optimal AGS transparency provision
strategy as a way of providing transparency that
maximizes users’ adoption of AGSs and their outcomes.
We argue that there is no globally optimal transparency
provision strategy for all kinds of AGSs and users. Rather,
locally optimal transparency provision strategies should be
developed based on the different characteristics of both AGSs
and users.
      </p>
      <p>
        The provision of AGS transparency has a number of
features, the combination of which can form different
provision strategies. For example, when providing
transparency, AGSs can reveal to users what they do (e.g.
how AGSs generate advice) and why they do it (e.g. why
AGSs collect certain user data). The information revealed
by AGSs can be long or short, complex or simple,
more or less accessible, phrased in professional or plain
language, provided before or after a certain behavior is
performed by AGSs, etc. These features may be more or less
effective in different contexts. For instance, it has been
shown that feedforward explanations (i.e. explanations
provided before the advice-generating process) are preferred by
novice users, and feedback explanations (i.e. explanations
provided after the advice-generating process) are preferred by
expert users
        <xref ref-type="bibr" rid="ref8">(Arnold et al. 2006; Dhaliwal and Benbasat,
1996)</xref>
        . However, while some studies have considered users’
characteristics when designing explanations, very few
have focused on how AGS characteristics might
influence AGSs’ transparency provision strategy.
In recent years, due to the rapid development of
advice-generating technology, AGSs have evolved from
traditional ones, which explicitly ask users to indicate their
preferences or needs and generate advice accordingly, to
more advanced ones, which employ AI-based techniques
(e.g. collaborative filtering and content-based filtering) to
generate advice based on implicitly collected
personal data (e.g. users’ age, location, browsing behaviors,
etc.). Due to the implicit user data collection and the high
complexity of advice-generating techniques, users may
have higher privacy concerns and be more confused about
how advice is generated when they use AGSs with
advanced features. Consequently, existing theories and
rules of transparency provision that have been developed
and tested in the context of traditional AGSs might need to
be modified or redeveloped to suit more
advanced AGSs. In this paper, we argue for the necessity of
developing a comprehensive model of AGS transparency
provision. Such a model will allow us to explore ways of
providing transparency in different types of AGSs and for
different kinds of users that lead to the highest level
of users’ adoption of AGSs and acceptance of AGS
outcomes. Using this model, we would like to address the
following research questions:
      </p>
      <p>What are the components of AGS transparency
provision?
How should we explore the optimal way of providing
AGS transparency, considering both AGS
characteristics and user characteristics?
Our research contributes to the existing knowledge by
proposing how AGS and user characteristics influence
the optimal strategy for providing AGS transparency, and
to practice by offering design suggestions for
AGS explanation interfaces. The remainder of the paper is
organized as follows: Section 2 defines transparency in
AGSs and reviews the literature on AGS
transparency. Section 3 develops a framework of AGS
transparency provision. Section 4 develops a research
model of AGS transparency provision strategy. Section 5
discusses contributions, limitations, and future research
directions.</p>
      <p>
        TRANSPARENCY IN ADVICE-GIVING SYSTEMS
In the domain of AGSs, transparency is defined as users’
understanding of a system’s inner logic, i.e. why a particular
item is recommended
        <xref ref-type="bibr" rid="ref22 ref3 ref41 ref42 ref47 ref50 ref52 ref54 ref55">(Pu et al., 2011;
Swearingen and Sinha, 2002; Tintarev and Masthoff, 2007)</xref>
        .
In other domains such as e-government and health care, it is
also defined as systems’ voluntary release of information
        <xref ref-type="bibr" rid="ref21 ref4">(Amalia, 2017; Hosseini et al., 2018; Leape et al. 2009)</xref>
        or
the visibility and accessibility of such information
        <xref ref-type="bibr" rid="ref64">(Cho et
al., 2017; Zhu, 2002)</xref>
        . This suggests two alternative ways of
measuring system transparency, from the users’ perspective
and the systems’ perspective respectively. In this paper, we
define objective transparency as the extent to which AGSs
release information regarding what they do and why they
behave in a certain way, and subjective transparency as the
extent to which users perceive that such information
is provided by AGSs and is visible, available, and accessible
to them
        <xref ref-type="bibr" rid="ref64">(Cho et al., 2017; Zhu, 2002)</xref>
        .
      </p>
      <p>
        Providing transparency is generally considered to be
beneficial to users. Some studies argued that systems with
high transparency would inform users about who can
collect, access, and use their personal data, and give users
the right to control the utilization of their own personal data
(Hedbom, 2008). Some other studies proposed that highly
transparent systems would articulate the systems’ goals
        <xref ref-type="bibr" rid="ref37 ref57 ref66 ref9">(Zouave and Marquenie, 2017)</xref>
        , why data is collected from
users (Hedbom, 2008), and the rationale for system outputs
        <xref ref-type="bibr" rid="ref2 ref3 ref37 ref37 ref42 ref5 ref57 ref57 ref66 ref66 ref9 ref9">(Cramer et al., 2008; Diakopoulos and Koliska, 2017;
Hedbom, 2008; Pu et al., 2011; Zouave and Marquenie,
2017)</xref>
        . In this way, such systems could increase the
accountability of system algorithms
        <xref ref-type="bibr" rid="ref18 ref37 ref44 ref45 ref46 ref57 ref58 ref66 ref9">(Ananny and Crawford,
2018; Diakopoulos and Koliska, 2017; Spagnuelo and
Lenzini, 2016)</xref>
        , reduce users’ uncertainty
        <xref ref-type="bibr" rid="ref37 ref57 ref66 ref9">(Diakopoulos and
Koliska, 2017)</xref>
        and increase users’ confidence
        <xref ref-type="bibr" rid="ref47">(Sinha and
Swearingen, 2002)</xref>
        when interacting with systems, facilitate
the elicitation of users’ personal information
        <xref ref-type="bibr" rid="ref21">(Hosseini et
al., 2018)</xref>
        , and enhance users’ trust in systems
        <xref ref-type="bibr" rid="ref22 ref37 ref41 ref50 ref52 ref54 ref55 ref57 ref66 ref9">(Diakopoulos
and Koliska, 2017; Tintarev and Masthoff, 2007)</xref>
        and
adoption of system outcomes
        <xref ref-type="bibr" rid="ref2 ref3 ref42 ref5">(Cramer et al., 2008; Pu et
al., 2011)</xref>
        .
      </p>
      <p>
        Despite the great benefits of providing
transparency, some researchers have indicated that
transparency may backfire if it is not
provided in a proper way. Some studies argued that
excessively detailed information
might reduce the efficiency of systems in that such
information requires too much time for users to
process
        <xref ref-type="bibr" rid="ref22 ref41 ref50 ref52 ref54 ref55">(Tintarev and Masthoff, 2007)</xref>
        and might distract users from the central, more important
information (Ananny and Crawford, 2018). In addition,
Hosseini et al. (2018) and Xu et al. (2018) warned that
if the information provided by systems is not
understandable, interpretable, or actionable to users, users’
trust in systems might be reduced rather than
increased.
      </p>
      <p>A FRAMEWORK FOR AGS TRANSPARENCY
PROVISION
Before exploring how AGS transparency should be
provided to users, we first define what AGS
transparency provision is by developing a framework. Drawing
on the framework of knowledge-based system
explanations developed by Dhaliwal and Benbasat (1996),
we include the characteristics of both the provided
information and the information-providing interfaces in our
framework. Specifically, our AGS transparency provision
framework has six components, namely transparency
provision stage, transparency type, the content of the
provided information, the timing of transparency provision,
the type of transparency provision interface, and the format
of the provided information. Each component can take a number
of different values. The framework of AGS transparency
provision is shown in Figure 1.</p>
      <p>
        Transparency Provision Stage
Adapted from Xiao and Benbasat (2007; 2014), we divide
the utilization of AGSs into input, process, and output
stages. Input stage is the stage during which users’
preferences or needs are elicited; process stage is the stage
during which advice is generated by systems based on the
data collected in input stage, and output stage is the stage
during which systems present the generated advice to users
        <xref ref-type="bibr" rid="ref22 ref41 ref50 ref52 ref54 ref55">(Xiao and Benbasat, 2007; 2014)</xref>
        . AGS transparency can be
provided to users in each of the three stages. In input stage,
AGSs can reveal the process and rationale of their data
collection from users. In process stage, AGSs can disclose
and justify their advice-generating process. In output stage,
AGSs can explain to users why they think the advice is a
good one for users.
      </p>
      <p>
        Transparency Types
In the field of AGSs, revealing explanations about how the
system works is commonly considered to be one feasible
way of providing system transparency to users
        <xref ref-type="bibr" rid="ref22 ref41 ref50 ref52 ref54 ref55">(Tintarev
and Masthoff, 2007)</xref>
        . In addition to this, some studies have
also proposed another way of conveying the idea of how
advice is generated without elucidating precisely the
mechanism of AGSs
        <xref ref-type="bibr" rid="ref35">(Lipton, 2016)</xref>
        , e.g. “we recommend
movie A to you because it is more suitable for you
than 92% of the movies on our website.” In this
paper, we argue that AGSs can provide transparency to
users through presenting two types of messages, namely
explanations, i.e. what systems will do/are doing/have done,
and justifications, i.e. information about why systems
behave in a certain manner. Different from explanations,
which offer users objective information regarding systems’
behaviors, justifications enable AGSs to show users the
advantages of their advice-generating techniques and
outcomes, and can thus make users feel less uncertain and
more confident in accepting the generated advice. In our
framework, we suggest that AGSs can provide two types of
transparency, i.e. explanations and justifications, in input,
process, and output stage respectively. The content and
examples are shown in Table 1.¹
¹ All the examples in this table are copied from Facebook’s
instructions for their customized ads.
      </p>
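      <p>To make this concrete, a justification of the kind quoted above can be generated without exposing the advice-generating mechanism at all. The following sketch is purely our own illustration (the item names, suitability scores, and message wording are hypothetical, not taken from any actual AGS):</p>
      <preformat>
```python
def percentile_justification(item, scores):
    """Build a justification stating what fraction of candidate items
    the recommended item outranks, while keeping the origin of the
    suitability scores opaque to the user."""
    others = [s for name, s in scores.items() if name != item]
    beaten = sum(1 for s in others if scores[item] > s)
    pct = round(100 * beaten / len(others))
    return (f"We recommend {item} because it is more suitable for you "
            f"than {pct}% of the movies on our website.")

# Hypothetical internal suitability scores produced by some
# advice-generating technique (the technique itself stays hidden).
scores = {"Movie A": 0.70, "Movie B": 0.41, "Movie C": 0.77, "Movie D": 0.58}
print(percentile_justification("Movie A", scores))
# Movie A outranks 2 of the 3 other candidates, so the message cites 67%.
```
      </preformat>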
    </sec>
    <sec id="sec-2">
      <title>-</title>
      <p>Content of Transparency Provision
In addition to stage and transparency type, the content of
the information provided by AGSs, which aims to enhance
systems’ transparency, can also influence the effects of
transparency provision. For example, the information
provided by AGSs can be either long (e.g. one paragraph)
or short (e.g. one sentence). The content of the information
can be either complex (e.g. using professional language) or
easy to understand (e.g. using plain language). The
information regarding how systems work can be either very
detailed (e.g. providing the formulas of calculating the
similarity between users) or less detailed (e.g. briefly saying
“users who are similar to you also bought…”).</p>
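      <p>The contrast between more and less detailed process information can be illustrated as follows. The sketch below is purely illustrative (the cosine similarity measure, the rating data, and the message wording are our own assumptions; an AGS may disclose a different formula): the detailed variant exposes the similarity computation, while the brief variant hides it entirely.</p>
      <preformat>
```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two users' rating vectors,
    computed over the items both users have rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm_a = math.sqrt(sum(a[i] ** 2 for i in common))
    norm_b = math.sqrt(sum(b[i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

# Hypothetical item ratings for the current user and one other user.
you = {"item1": 5, "item2": 3, "item3": 4}
other = {"item1": 4, "item2": 2, "item3": 5}
sim = cosine_similarity(you, other)

# Detailed explanation: expose the measure and the computed value.
detailed = (f"We matched you with users whose ratings have a cosine "
            f"similarity of {sim:.2f} with yours.")

# Brief explanation: hide the mechanics entirely.
brief = "Users who are similar to you also bought this item."
```
      </preformat>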
      <p>
        Timing, Interface, and Information Format of
Transparency Provision
In addition to the above-mentioned factors, there are
some other factors that can serve as components of AGS
transparency provision strategies, including timing of
provision, interface design, and information format. Timing
of provision refers to whether AGSs
provide information to explain or justify their behaviors
before or after a specific behavior is performed by
AGSs. The transparency provided before performing a
specific behavior is called ex-ante transparency, while the
transparency provided after a specific behavior is done is
called ex-post transparency
        <xref ref-type="bibr" rid="ref18 ref44 ref45 ref46 ref58">(Ananny and Crawford, 2018;
Spagnuelo and Lenzini, 2016)</xref>
        . Interface design is also an
influential factor in transparency provision. Dhaliwal and
Benbasat (1996) indicated that there are two types of
AGS transparency-providing interfaces: one employs an
active strategy, where the information is automatically
presented to users, and the other employs a passive
strategy, where users have to make explicit requests to
access the provided information. Finally, the format of the
provided information may also make a difference in AGS
transparency provision. AGSs can reveal information about
their inner logic in a variety of formats, e.g. text, histogram,
graph, matching score, or even formula. Users may
have different perceptions when the same information is
provided by AGSs in different formats
        <xref ref-type="bibr" rid="ref19">(Herlocker et al.,
2000)</xref>
        .
      </p>
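      <p>As an illustration of how one piece of underlying information can be rendered in several of these formats, the sketch below (our own hypothetical example; the ratings and renderings are assumptions, not drawn from the cited literature) presents the same neighbor ratings as plain text, as a histogram, and as a matching score:</p>
      <preformat>
```python
# Hypothetical ratings (out of 5) given to the recommended item by
# users judged similar to the current user.
ratings = [5, 4, 5, 3, 5, 4, 4, 5]

# Format 1: plain text summary.
text = (f"{len(ratings)} users similar to you rated this item "
        f"{sum(ratings) / len(ratings):.1f}/5 on average.")

# Format 2: histogram (rendered here in ASCII for illustration).
histogram = "\n".join(
    f"{stars} stars | {'#' * sum(1 for r in ratings if r == stars)}"
    for stars in range(5, 0, -1)
)

# Format 3: a single matching score.
score = f"Match: {round(100 * sum(ratings) / (5 * len(ratings)))}%"
```
      </preformat>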
      <p>
        A RESEARCH MODEL OF AGS TRANSPARENCY
PROVISION STRATEGY
In order to explore the optimal way of providing AGS
transparency, which can help maximize users’ adoption of
systems, we developed a research model for AGS
transparency provision strategies (see Figure 2) for future
studies to test. In this model, we take into consideration the
characteristics of both AGSs and users. The model
proposed in our research can be tested through conducting
lab experiments and field studies. Through testing the
propositions generated based on this model, we hope future
research could find out the best ways of providing
transparency in different types of AGSs and for different
types of users.
General Propositions
In this paper, we define users’ perceived understanding of
AGSs as the extent to which users perceive that they
understand the meaning of the information provided by
AGSs, which explains what AGSs do and justifies why they
do it. We study users’ subjective perception of how much
they understand the inner logic of AGSs, rather than their
true level of understanding of how AGSs work, because it has
been shown that, compared to users’ actual knowledge,
users’ subjective perceptions of their understanding of a
website are more influential on their intentions to reuse the
website
        <xref ref-type="bibr" rid="ref22 ref41 ref50 ref52 ref54 ref55">(Jiang and Benbasat, 2007)</xref>
        . Providing explanations
regarding how AGSs work has been shown to be effective
in improving users’ perceived understanding and
acceptance of AGSs
        <xref ref-type="bibr" rid="ref11 ref18 ref22 ref31 ref33 ref34 ref40 ref41 ref50 ref52 ref54 ref55 ref55 ref56 ref65">(Hengstler et al. 2016; Lakkaraju et al.
2016; Lehikoinen and Koistinen 2014; Lim et al. 2009;
Pieters, 2011; Wang and Benbasat 2007; 2008; Zliobaite et
al. 2012)</xref>
        . In addition, providing information regarding
AGSs’ data collection in input stage can help users better
understand what personal data of theirs is collected
and why it is collected, and can thus affect their
privacy concerns.
      </p>
      <p>P1: The provision of AGS transparency will improve users’
perceived understanding of how AGSs work and why AGSs
perform certain behaviors.</p>
      <p>P2: The provision of AGS transparency will increase users’
privacy concerns.</p>
      <p>In our model, we assume that the effect of transparency
provision on users’ perceptions will be moderated by both
AGS characteristics and user characteristics. When users
have a higher level of domain knowledge of AGSs, they will
have the ability to process and comprehend more complex
information provided by AGSs. In this case, providing
information that is harder to process and understand will
have a more positive influence on users with higher levels
of domain knowledge than on those with lower
levels of domain knowledge.</p>
      <p>P3: User characteristics will moderate the influence of
transparency provision on users’ perceived understanding
of how AGSs work and why AGSs perform certain
behaviors.</p>
      <p>In addition, the effect of transparency provision may be
different in different types of AGSs. For example,
compared to AGSs with explicit user data collection, input
stage transparency may have a stronger effect on users’
perceptions of systems in AGSs with implicit user data
collection because users are less likely to know the details
of how and why their data is collected when interacting
with such AGSs, and may thus benefit more from the
explanations and justifications provided by AGSs.
P4: AGS characteristics will moderate the influence of
transparency provision on users’ perceived understanding
of how AGSs work and why AGSs perform certain
behaviors.</p>
      <p>
        P5: AGS characteristics will moderate the influence of
transparency provision on users’ privacy concerns.
Previous studies have shown that users’ perceived
understanding of AGSs will positively influence their
attitudes towards AGSs, intentions of adopting AGSs, and
acceptance of AGS outcomes
        <xref ref-type="bibr" rid="ref2 ref22 ref3 ref41 ref42 ref5 ref50 ref52 ref54 ref55">(Cramer et al., 2008; Pu et
al., 2011; Wang and Benbasat, 2007; 2008)</xref>
        , while users’
privacy concerns will negatively influence them
        <xref ref-type="bibr" rid="ref18 ref59">(Hengstler
et al., 2016; Wang and Benbasat, 2008; Yan et al., 2016)</xref>
        .
We also assume that whether or not users’ expectations of
AGSs can be met will moderate the effect of users’
perceived understanding of AGSs on their attitudes towards
AGSs, intentions of adopting AGSs, and acceptances of
AGS outcomes. Users’ expectations of AGSs include both
process and behavior expectations, e.g. “Yelp should ask
me where I live” or “Yelp should make
recommendations based on the restaurants that I have been
to”, and outcome expectations, e.g. “Yelp should
recommend me some Chinese restaurants”
        <xref ref-type="bibr" rid="ref12 ref2 ref29 ref48 ref53">(Wang and
Benbasat, 2008)</xref>
        . A higher-level perceived understanding of
AGSs can help users know better how AGSs actually work,
and can thus confirm users’ perceptions of
consistency/inconsistency between their expectations and the way AGSs
work. Therefore, we assume that when there is a
consistency, users’ perceived understanding of AGSs will
positively influence their attitudes towards AGSs,
intentions of adopting AGSs, and acceptances of AGS
outcomes because it is clearer to them that their expectations
are met by AGSs. However, when there is an inconsistency,
a higher-level perceived understanding of AGSs will have
negative influences in that users become more aware that
the way AGSs work is different from their original
expectations.
      </p>
      <p>P6: The effect of users’ perceived understanding of AGSs
on users’ attitudes towards AGSs, intentions of adopting
AGSs, and acceptances of AGS outcomes will be
moderated by the consistency between users’ expectations
of AGSs and the way AGSs actually work. Specifically, if
there is a consistency, users’ perceived understanding of
AGSs will have positive influences. On the contrary, if
there is an inconsistency, users’ perceived understanding of
AGSs will have negative influences.</p>
      <p>P7: Users’ privacy concerns will negatively influence their
attitudes towards AGSs, intentions of adopting AGSs, and
acceptances of AGS outcomes.</p>
      <p>
        Transparency Provision Strategy Considering AGS
Characteristics
Input Stage Characteristics
In the input stage, different AGSs have different ways of
collecting user data. Some of them collect data through
explicitly asking users to indicate their preferences or needs
(e.g. a filter), while others collect data through implicitly
tracking and recording users’ interactive behaviors (e.g.
browsing behaviors, transaction records, locations, etc.). It
is traditionally thought that providing explanations and
justifications in input stage will positively influence users’
attitude towards AGSs
        <xref ref-type="bibr" rid="ref22 ref41 ref50 ref52 ref54 ref55">(Wang and Benbasat, 2007)</xref>
        because
this enables users to know more about the data-collecting
process. However, we propose in our model that we should
also consider another “side effect” of doing so in AGSs
with implicit user data collection – it makes users become
more aware of the fact that their personal data is being
collected by AGSs. In this case, providing explanations
about what kind of data is collected and how it is collected
may increase users’ privacy concerns when interacting with
AGSs.
      </p>
      <p>P8: For AGSs with explicit input data collection, both input
stage explanations (P8a) and justifications (P8b) will have
a positive influence on users’ perceived understanding of
AGSs.</p>
      <p>P9: For AGSs with implicit input data collection, both input
stage explanations (P9a) and justifications (P9b) will have
a positive influence on users’ perceived understanding of
AGSs.</p>
      <p>P10: For AGSs with implicit input data collection, input
stage explanations will increase users’ privacy concerns.
Process Stage Characteristic
In process stage, different AGSs employ different
techniques to generate advice. Some techniques have
relatively easy inner logics (e.g. information retrieval),
while some other techniques’ logics are more like black
boxes and may be beyond non-professional users’
comprehension (e.g. machine learning techniques). We
assume that both process explanations and justifications of
easily-understood advice-generating techniques will be
comprehended by users, and can thus help users understand
AGSs better. However, for complex advice-giving systems,
while process justifications can still be understood by
users, process explanations may not be. We argue that
process explanations can help users understand AGSs only
when users have enough time to process the explanations
and enough ability to figure out their meaning.
Once the explanations are beyond users’
comprehension, users may feel more confused because they
become aware that their original ideas of how AGSs
work are not accurate, and realize that they actually know
very little about how AGSs really work.
P11: For AGSs whose advice-generating techniques are
easily understood by users, both process stage explanations
(P11a) and justifications (P11b) will have positive effects
on users’ perceived understanding of AGSs.</p>
      <p>P12: For AGSs whose advice-generating techniques are not
easily understood by users, the process stage explanations
can positively influence users’ perceived understanding of
AGSs only when users have the ability to process and
comprehend the explanations. Once the provided
explanations are beyond users’ comprehensions, it will start
to negatively influence users’ perceived understanding of
AGSs.</p>
      <p>
        P13: For AGSs whose advice-generating techniques are not
easily understood by users, process stage justifications will
positively influence users’ perceived understanding of
AGSs.
Transparency Provision Strategy Considering User
Characteristics
In addition to AGS characteristics, we also assume that
users’ domain knowledge of AGSs will moderate the effect
of the provision of AGS transparency on users’ perceived
understanding of AGSs. According to the Elaboration
Likelihood Model
        <xref ref-type="bibr" rid="ref39">(Petty and Cacioppo, 1986)</xref>
        , users who
have higher levels of domain knowledge will use a central
route to process the provided information and thus focus
more on explanations, which describe AGSs’ behaviors. In
contrast, users who have lower levels of domain knowledge
will be more likely to use a peripheral route to process
information and focus more on justifications, which justify
AGSs’ behaviors through emphasizing the
importance/advantages of doing so.
      </p>
      <p>P14: For users with higher levels of domain knowledge,
explanations will have stronger positive effect on their
perceived understanding of AGSs (P14a), while
justifications will have less positive effect on their
perceived understanding of AGSs (P14b).</p>
      <p>P15: For users with lower levels of domain knowledge,
justifications will have stronger positive effect on their
perceived understanding of AGSs (P15a), while
explanations will have less positive effect on their
perceived understanding of AGSs (P15b).</p>
      <p>DISCUSSION
Conclusions
Providing information about AGSs’ reasoning has been shown
to be effective in enhancing users’ adoption of AGSs and
acceptance of AGS outcomes. Despite fruitful research
findings, little attention has been paid to exploring
optimal transparency provision strategies for different types
of AGSs and users. In this paper, we first defined
transparency in the context of AGSs and summarized
existing research findings on AGS transparency. We then
developed a framework of AGS transparency provision,
identifying six components of AGS transparency provision
strategies, i.e. transparency provision stage, transparency
type, the content of the provided information, the timing of
transparency provision, the type of transparency provision
interface, and the format of the provided information.
Finally, we developed a research model of AGS
transparency provision strategy and proposed a set of
propositions. Based on this model, we expect future research
to find the optimal way of providing transparency for AGSs
and users with different characteristics.</p>
      <p>Contributions
Our research has both academic and practical significance.
It could contribute to the existing literature by showing
that transparency provision strategies should not be
uniform across all types of AGSs and users, and by
exploring the optimal ways of providing transparency for
different types of AGSs and users. Our research could also
contribute to practice by offering design suggestions for
AGS explanation interfaces.</p>
      <p>Limitations and Future Research
Due to time and cost constraints, we have not yet conducted
empirical studies to test the propositions in our research
model. In addition, as an early-stage exploration of AGS
transparency provision strategies, the model developed in
this paper considers only the influence of a single factor
(e.g. the complexity of advice-generation techniques or
users’ domain knowledge) on the selection of an AGS
transparency provision strategy. Despite these
limitations, our work outlines possible rules for
providing transparency to different types of AGSs and
users. Future research could build on our work by
refining and expanding the model to include more factors,
by considering the combined effects of multiple factors,
and by testing the model empirically.</p>
      <p>ACKNOWLEDGMENTS
The authors would like to thank the anonymous referees for
their valuable advice and suggestions.</p>
      <p>implementation: barriers to enhance information
transparency and accountability." In SHS Web of
Conferences, vol. 34, 02003.</p>
    </sec>
    <sec id="sec-3">
      <title>Mike Ananny and Kate Crawford. 2018. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media &amp; Society 20, 3: 973-989.</title>
      <p>Vicky Arnold, Nicole Clark, Philip A. Collier, Stewart A. Leech, and Steve G. Sutton. 2006. The differential use and effect of knowledge-based system explanations in novice and expert judgment decisions. MIS Quarterly: 79-97.</p>
      <p>Izak Benbasat and Weiquan Wang. 2005. Trust in and adoption of online recommendation agents. Journal of the Association for Information Systems 6, 3: 4.</p>
    </sec>
    <sec id="sec-4">
      <title>Mustafa Bilgic and Raymond J. Mooney. 2005.</title>
      <p>Explaining recommendations: Satisfaction vs.
promotion. In Beyond Personalization Workshop (IUI
'05), Vol. 5, 153.</p>
    </sec>
    <sec id="sec-5">
      <title>Bangho Cho, Sung Yul Ryoo, and Kyung Kyu Kim. 2017. Interorganizational dependence, information transparency in interorganizational information systems, and supply chain performance. European Journal of Information Systems 26, 2: 185-205.</title>
      <p>Perceptions of a Lack‐of‐Group Bias and Transparency
in the Performance Evaluation System Relate to Job
Satisfaction. Human Resource Management 55, 6:
1059-1077.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>Ibrahim M.</given-names>
            <surname>Al-Jabri</surname>
          </string-name>
          and
          <string-name>
            <given-names>Narcyz</given-names>
            <surname>Roztocki</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Adoption of ERP systems: Does information transparency matter?</article-title>
          .
          <source>Telematics and Informatics</source>
          <volume>32</volume>
          ,
          <issue>2</issue>
          :
          <fpage>300</fpage>
          -
          <lpage>310</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>Sameh</given-names>
            <surname>Al‐Natour</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Ronald T.</given-names>
            <surname>Cenfetelli</surname>
          </string-name>
          .
          <year>2008</year>
          .
          <article-title>The effects of process and outcome similarity on users' evaluations of decision aids</article-title>
          .
          <source>Decision Sciences</source>
          <volume>39</volume>
          ,
          <issue>2</issue>
          :
          <fpage>175</fpage>
          -
          <lpage>211</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>Sameh</given-names>
            <surname>Al-Natour</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Ron</given-names>
            <surname>Cenfetelli</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>The adoption of online shopping assistants: perceived similarity as an antecedent to evaluative beliefs</article-title>
          .
          <source>Journal of the Association for Information Systems</source>
          <volume>12</volume>
          , 5:
          <fpage>347</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>Fitri</given-names>
            <surname>Amalia</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Socio-technical analysis of Indonesian government e-procurement system implementation: barriers to enhance information transparency and accountability</article-title>
          .
          <source>In SHS Web of Conferences</source>
          , vol. 34, 02003.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          10.
          <string-name>
            <given-names>Henriette</given-names>
            <surname>Cramer</surname>
          </string-name>
          , Vanessa Evers, Satyan Ramlal, Maarten Van Someren, Lloyd Rutledge, Natalia Stash, Lora Aroyo, and
          <string-name>
            <given-names>Bob</given-names>
            <surname>Wielinga</surname>
          </string-name>
          .
          <year>2008</year>
          .
          <article-title>The effects of transparency on trust in and acceptance of a content-based art recommender</article-title>
          .
          <source>User Modeling and User-Adapted Interaction</source>
          <volume>18</volume>
          ,
          <issue>5</issue>
          :
          <fpage>455</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          11.
          <string-name>
            <given-names>James</given-names>
            <surname>Davidson</surname>
          </string-name>
          , Benjamin Liebald, Junning Liu, Palash Nandy, Taylor Van Vleet,
          <string-name>
            <given-names>Ullas</given-names>
            <surname>Gargi</surname>
          </string-name>
          , Sujoy Gupta et al.
          <year>2010</year>
          .
          <article-title>The YouTube video recommendation system</article-title>
          .
          <source>In Proceedings of the fourth ACM conference on Recommender systems</source>
          ,
          <fpage>293</fpage>
          -
          <lpage>296</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          12.
          <string-name>
            <given-names>Fred D.</given-names>
            <surname>Davis</surname>
          </string-name>
          .
          <year>1989</year>
          .
          <article-title>Perceived usefulness, perceived ease of use, and user acceptance of information technology</article-title>
          .
          <source>MIS quarterly:</source>
          <fpage>319</fpage>
          -
          <lpage>340</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          13.
          <string-name>
            <given-names>Jasbir S.</given-names>
            <surname>Dhaliwal</surname>
          </string-name>
          and
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          .
          <year>1996</year>
          .
          <article-title>The use and effects of knowledge-based system explanations: theoretical foundations and a framework for empirical evaluation</article-title>
          .
          <source>Information systems research 7</source>
          , 3:
          <fpage>342</fpage>
          -
          <lpage>362</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          14. Nicholas Diakopoulos and
          <string-name>
            <given-names>Michael</given-names>
            <surname>Koliska</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Algorithmic transparency in the news media</article-title>
          .
          <source>Digital Journalism</source>
          <volume>5</volume>
          ,
          <issue>7</issue>
          :
          <fpage>809</fpage>
          -
          <lpage>828</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          15.
          <string-name>
            <given-names>Gerhard</given-names>
            <surname>Friedrich</surname>
          </string-name>
          and
          <string-name>
            <given-names>Markus</given-names>
            <surname>Zanker</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>A taxonomy for generating explanations in recommender systems</article-title>
          .
          <source>AI Magazine</source>
          <volume>32</volume>
          ,
          <issue>3</issue>
          :
          <fpage>90</fpage>
          -
          <lpage>98</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          16.
          <string-name>
            <given-names>Fatih</given-names>
            <surname>Gedikli</surname>
          </string-name>
          , Dietmar Jannach, and
          <string-name>
            <given-names>Mouzhi</given-names>
            <surname>Ge</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>How should I explain? A comparison of different explanation types for recommender systems</article-title>
          .
          <source>International Journal of Human-Computer Studies 72</source>
          ,
          <issue>4</issue>
          :
          <fpage>367</fpage>
          -
          <lpage>382</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          17. David Gefen,
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Paul A.</given-names>
            <surname>Pavlou</surname>
          </string-name>
          .
          <year>2008</year>
          .
          <article-title>A research agenda for trust in online environments</article-title>
          .
          <source>Journal of Management Information Systems</source>
          <volume>24</volume>
          , 4:
          <fpage>275</fpage>
          -
          <lpage>286</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          18. David Gefen,
          <string-name>
            <given-names>Elena</given-names>
            <surname>Karahanna</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Detmar W.</given-names>
            <surname>Straub</surname>
          </string-name>
          .
          <year>2003</year>
          .
          <article-title>Trust and TAM in online shopping: an integrated model</article-title>
          .
          <source>MIS quarterly 27</source>
          , 1:
          <fpage>51</fpage>
          -
          <lpage>90</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          19.
          <string-name>
            <given-names>Nelson</given-names>
            <surname>Granados</surname>
          </string-name>
          , Alok Gupta, and
          <string-name>
            <given-names>Robert J.</given-names>
            <surname>Kauffman</surname>
          </string-name>
          .
          <year>2010</year>
          .
          <article-title>Research commentary-information transparency in business-to-consumer markets: concepts, framework, and research agenda</article-title>
          .
          <source>Information Systems Research</source>
          <volume>21</volume>
          ,
          <issue>2</issue>
          :
          <fpage>207</fpage>
          -
          <lpage>226</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          20.
          <string-name>
            <given-names>Shirley</given-names>
            <surname>Gregor</surname>
          </string-name>
          and
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          .
          <year>1999</year>
          .
          <article-title>Explanations from intelligent systems: Theoretical foundations and implications for practice</article-title>
          .
          <source>MIS quarterly:</source>
          <fpage>497</fpage>
          -
          <lpage>530</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          21.
          <string-name>
            <given-names>Ido</given-names>
            <surname>Guy</surname>
          </string-name>
          , Naama Zwerdling, David Carmel,
          <string-name>
            <given-names>Inbal</given-names>
            <surname>Ronen</surname>
          </string-name>
          , Erel Uziel, Sivan Yogev, and Shila Ofek-Koifman.
          <year>2009</year>
          .
          <article-title>Personalized recommendation of social software items based on social relations</article-title>
          .
          <source>In Proceedings of the third ACM conference on Recommender systems</source>
          ,
          <fpage>53</fpage>
          -
          <lpage>60</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          22.
          <string-name>
            <given-names>Hans</given-names>
            <surname>Hedbom</surname>
          </string-name>
          , Tobias Pulls, and
          <string-name>
            <given-names>Marit</given-names>
            <surname>Hansen</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>Transparency tools</article-title>
          .
          <source>In Privacy and Identity Management for Life</source>
          ,
          <fpage>135</fpage>
          -
          <lpage>143</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          23.
          <string-name>
            <given-names>Monika</given-names>
            <surname>Hengstler</surname>
          </string-name>
          , Ellen Enkel, and
          <string-name>
            <given-names>Selina</given-names>
            <surname>Duelli</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Applied artificial intelligence and trust-The case of autonomous vehicles and medical assistance devices</article-title>
          .
          <source>Technological Forecasting and Social Change</source>
          <volume>105</volume>
          :
          <fpage>105</fpage>
          -
          <lpage>120</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          24.
          <string-name>
            <given-names>Jonathan L.</given-names>
            <surname>Herlocker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Joseph A.</given-names>
            <surname>Konstan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>John</given-names>
            <surname>Riedl</surname>
          </string-name>
          .
          <year>2000</year>
          .
          <article-title>Explaining collaborative filtering recommendations</article-title>
          .
          <source>In Proceedings of the 2000 ACM conference on Computer supported cooperative work</source>
          ,
          <fpage>241</fpage>
          -
          <lpage>250</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          25. Antonio Hernando, Jesús Bobadilla, Fernando Ortega, and Abraham Gutiérrez.
          <year>2013</year>
          .
          <article-title>Trees for explaining recommendations made through collaborative filtering</article-title>
          .
          <source>Information Sciences</source>
          <volume>239</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          26.
          <string-name>
            <given-names>Mahmood</given-names>
            <surname>Hosseini</surname>
          </string-name>
          , Alimohammad Shahri, Keith Phalp, and
          <string-name>
            <given-names>Raian</given-names>
            <surname>Ali</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Four reference models for transparency requirements in information systems</article-title>
          .
          <source>Requirements Engineering</source>
          <volume>23</volume>
          ,
          <issue>2</issue>
          :
          <fpage>251</fpage>
          -
          <lpage>275</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          27.
          <string-name>
            <given-names>Zhenhui</given-names>
            <surname>Jiang</surname>
          </string-name>
          and
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>The effects of presentation formats and task complexity on online consumers' product understanding</article-title>
          .
          <source>MIS Quarterly:</source>
          <fpage>475</fpage>
          -
          <lpage>500</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          28.
          <string-name>
            <given-names>Dongmin</given-names>
            <surname>Kim</surname>
          </string-name>
          and
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          .
          <year>2006</year>
          .
          <article-title>The effects of trust-assuring arguments on consumer trust in Internet stores: Application of Toulmin's model of argumentation</article-title>
          .
          <source>Information Systems Research</source>
          <volume>17</volume>
          , 3:
          <fpage>286</fpage>
          -
          <lpage>300</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          29.
          <string-name>
            <given-names>Dongmin</given-names>
            <surname>Kim</surname>
          </string-name>
          and
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Trust-assuring arguments in B2C e-commerce: impact of content, source, and price on trust</article-title>
          .
          <source>Journal of Management Information Systems</source>
          <volume>26</volume>
          , 3:
          <fpage>175</fpage>
          -
          <lpage>206</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          30.
          <string-name>
            <given-names>Dongmin</given-names>
            <surname>Kim</surname>
          </string-name>
          and
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          .
          <year>2010</year>
          .
          <article-title>Designs for effective implementation of trust assurances in internet stores</article-title>
          .
          <source>Communications of the ACM 53</source>
          ,
          <issue>2</issue>
          :
          <fpage>121</fpage>
          -
          <lpage>126</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          31.
          <string-name>
            <given-names>René F.</given-names>
            <surname>Kizilcec</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>How much information?: Effects of transparency on trust in an algorithmic interface</article-title>
          .
          <source>In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <fpage>2390</fpage>
          -
          <lpage>2395</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          32. Sherrie Xiao Komiak and
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          .
          <year>2004</year>
          .
          <article-title>Understanding customer trust in agent-mediated electronic commerce, web-mediated electronic commerce, and traditional commerce</article-title>
          .
          <source>Information technology and management 5</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>2</lpage>
          :
          <fpage>181</fpage>
          -
          <lpage>207</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          33.
          <string-name>
            <given-names>Sherrie YX</given-names>
            <surname>Komiak</surname>
          </string-name>
          and
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          .
          <year>2006</year>
          .
          <article-title>The effects of personalization and familiarity on trust and adoption of recommendation agents</article-title>
          .
          <source>MIS quarterly:</source>
          <fpage>941</fpage>
          -
          <lpage>960</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          34.
          <string-name>
            <given-names>Sherrie YX</given-names>
            <surname>Komiak</surname>
          </string-name>
          and
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          .
          <year>2008</year>
          .
          <article-title>A two-process view of trust and distrust building in recommendation agents: A process-tracing study</article-title>
          .
          <source>Journal of the Association for Information Systems</source>
          <volume>9</volume>
          ,
          <issue>12</issue>
          :
          <fpage>2</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          35.
          <string-name>
            <given-names>Joseph A.</given-names>
            <surname>Konstan</surname>
          </string-name>
          and John Riedl.
          <year>2012</year>
          .
          <article-title>Recommender systems: from algorithms to user experience</article-title>
          .
          <source>User modeling and user-adapted interaction 22</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>2</lpage>
          :
          <fpage>101</fpage>
          -
          <lpage>123</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          36.
          <string-name>
            <given-names>Himabindu</given-names>
            <surname>Lakkaraju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Stephen H.</given-names>
            <surname>Bach</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Jure</given-names>
            <surname>Leskovec</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Interpretable decision sets: A joint framework for description and prediction</article-title>
          .
          <source>In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining</source>
          ,
          <fpage>1675</fpage>
          -
          <lpage>1684</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          37.
          <string-name>
            <given-names>Lucian L.</given-names>
            <surname>Leape</surname>
          </string-name>
          .
          <year>1994</year>
          .
          <article-title>Error in medicine</article-title>
          .
          <source>Jama</source>
          <volume>272</volume>
          ,
          <issue>23</issue>
          :
          <fpage>1851</fpage>
          -
          <lpage>1857</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          38.
          <string-name>
            <given-names>Juha</given-names>
            <surname>Lehikoinen</surname>
          </string-name>
          and
          <string-name>
            <given-names>Ville</given-names>
            <surname>Koistinen</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>In big data we trust?</article-title>
          .
          <source>Interactions 21</source>
          ,
          <issue>5</issue>
          :
          <fpage>38</fpage>
          -
          <lpage>41</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          39.
          <string-name>
            <given-names>Brian Y.</given-names>
            <surname>Lim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Anind K.</given-names>
            <surname>Dey</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Daniel</given-names>
            <surname>Avrahami</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Why and why not explanations improve the intelligibility of context-aware intelligent systems</article-title>
          .
          <source>In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems</source>
          ,
          <fpage>2119</fpage>
          -
          <lpage>2128</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          40.
          <string-name>
            <given-names>Zachary C.</given-names>
            <surname>Lipton</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>The mythos of model interpretability</article-title>
          .
          <source>arXiv preprint arXiv:1606.03490</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          41.
          <string-name>
            <given-names>Ji-Ye</given-names>
            <surname>Mao</surname>
          </string-name>
          and
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          .
          <year>2000</year>
          .
          <article-title>The use of explanations in knowledge-based systems: Cognitive perspectives and a process-tracing analysis</article-title>
          .
          <source>Journal of Management Information Systems</source>
          <volume>17</volume>
          , 2:
          <fpage>153</fpage>
          -
          <lpage>179</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          42.
          <string-name>
            <given-names>Olivera</given-names>
            <surname>Marjanovic</surname>
          </string-name>
          and
          <string-name>
            <given-names>Dubravka</given-names>
            <surname>Cecez-Kecmanovic</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Exploring the tension between transparency and datification effects of open government IS through the lens of Complex Adaptive Systems</article-title>
          .
          <source>The Journal of Strategic Information Systems</source>
          <volume>26</volume>
          , 3:
          <fpage>210</fpage>
          -
          <lpage>232</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          43.
          <string-name>
            <given-names>David</given-names>
            <surname>McSherry</surname>
          </string-name>
          .
          <year>2005</year>
          .
          <article-title>Explanation in recommender systems</article-title>
          .
          <source>Artificial Intelligence Review</source>
          <volume>24</volume>
          ,
          <issue>2</issue>
          :
          <fpage>179</fpage>
          -
          <lpage>197</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          44. Richard E. Petty and John T. Cacioppo.
          <year>1986</year>
          .
          <article-title>The elaboration likelihood model of persuasion</article-title>
          .
          <source>In Communication and persuasion</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>24</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          45.
          <string-name>
            <given-names>Wolter</given-names>
            <surname>Pieters</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>Explanation and trust: what to tell the user in security and AI?</article-title>
          .
          <source>Ethics and Information Technology</source>
          <volume>13</volume>
          , <issue>1</issue>:
          <fpage>53</fpage>
          -
          <lpage>64</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          46.
          <string-name>
            <given-names>Pearl</given-names>
            <surname>Pu</surname>
          </string-name>
          and
          <string-name>
            <given-names>Li</given-names>
            <surname>Chen</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>Trust-inspiring explanation interfaces for recommender systems</article-title>
          .
          <source>Knowledge-Based Systems</source>
          <volume>20</volume>
          , <issue>6</issue>:
          <fpage>542</fpage>
          -
          <lpage>556</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          47.
          <string-name>
            <given-names>Pearl</given-names>
            <surname>Pu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Li</given-names>
            <surname>Chen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Rong</given-names>
            <surname>Hu</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>A user-centric evaluation framework for recommender systems</article-title>
          .
          <source>In Proceedings of the fifth ACM conference on Recommender systems</source>
          ,
          <fpage>157</fpage>
          -
          <lpage>164</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          48.
          <string-name>
            <given-names>Pearl</given-names>
            <surname>Pu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Li</given-names>
            <surname>Chen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Rong</given-names>
            <surname>Hu</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Evaluating recommender systems from the user's perspective: survey of the state of the art</article-title>
          .
          <source>User Modeling and User-Adapted Interaction</source>
          <volume>22</volume>
          ,
          <issue>4-5</issue>
          :
          <fpage>317</fpage>
          -
          <lpage>355</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          49.
          <string-name>
            <given-names>Marco Tulio</given-names>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Sameer</given-names>
            <surname>Singh</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Carlos</given-names>
            <surname>Guestrin</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Why should I trust you?: Explaining the predictions of any classifier</article-title>
          .
          <source>In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining</source>
          ,
          <fpage>1135</fpage>
          -
          <lpage>1144</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          50.
          <string-name>
            <given-names>Andrew K.</given-names>
            <surname>Schnackenberg</surname>
          </string-name>
          and
          <string-name>
            <given-names>Edward C.</given-names>
            <surname>Tomlinson</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Organizational transparency: A new perspective on managing trust in organization-stakeholder relationships</article-title>
          .
          <source>Journal of Management</source>
          <volume>42</volume>
          ,
          <issue>7</issue>
          :
          <fpage>1784</fpage>
          -
          <lpage>1810</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          51.
          <string-name>
            <given-names>Dayana</given-names>
            <surname>Spagnuelo</surname>
          </string-name>
          and
          <string-name>
            <given-names>Gabriele</given-names>
            <surname>Lenzini</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Patient-centred transparency requirements for medical data sharing systems</article-title>
          .
          <source>In New Advances in Information Systems and Technologies</source>
          ,
          <fpage>1073</fpage>
          -
          <lpage>1083</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          52.
          <string-name>
            <given-names>Kirsten</given-names>
            <surname>Swearingen</surname>
          </string-name>
          and
          <string-name>
            <given-names>Rashmi</given-names>
            <surname>Sinha</surname>
          </string-name>
          .
          <year>2002</year>
          .
          <article-title>Interaction design for recommender systems</article-title>
          .
          <source>In Designing Interactive Systems</source>
          , vol.
          <volume>6</volume>
          , no.
          <issue>12</issue>
          ,
          <fpage>312</fpage>
          -
          <lpage>334</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          53.
          <string-name>
            <given-names>Panagiotis</given-names>
            <surname>Symeonidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Alexandros</given-names>
            <surname>Nanopoulos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Yannis</given-names>
            <surname>Manolopoulos</surname>
          </string-name>
          .
          <year>2008</year>
          .
          <article-title>Providing justifications in recommender systems</article-title>
          .
          <source>IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans</source>
          <volume>38</volume>
          ,
          <issue>6</issue>
          :
          <fpage>1262</fpage>
          -
          <lpage>1272</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          54.
          <string-name>
            <given-names>Panagiotis</given-names>
            <surname>Symeonidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Alexandros</given-names>
            <surname>Nanopoulos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Yannis</given-names>
            <surname>Manolopoulos</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>MoviExplain: a recommender system with explanations</article-title>
          .
          <source>RecSys</source>
          <volume>9</volume>
          :
          <fpage>317</fpage>
          -
          <lpage>320</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          55.
          <string-name>
            <given-names>Nava</given-names>
            <surname>Tintarev</surname>
          </string-name>
          and
          <string-name>
            <given-names>Judith</given-names>
            <surname>Masthoff</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>A survey of explanations in recommender systems</article-title>
          .
          <source>In 2007 IEEE 23rd international conference on data engineering workshop</source>
          ,
          <fpage>801</fpage>
          -
          <lpage>810</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          56.
          <string-name>
            <given-names>Nava</given-names>
            <surname>Tintarev</surname>
          </string-name>
          and
          <string-name>
            <given-names>Judith</given-names>
            <surname>Masthoff</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Evaluating the effectiveness of explanations for recommender systems</article-title>
          .
          <source>User Modeling and User-Adapted Interaction</source>
          <volume>22</volume>
          ,
          <issue>4-5</issue>
          :
          <fpage>399</fpage>
          -
          <lpage>439</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          57.
          <string-name>
            <given-names>Weiquan</given-names>
            <surname>Wang</surname>
          </string-name>
          and
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>Recommendation agents for electronic commerce: Effects of explanation facilities on trusting beliefs</article-title>
          .
          <source>Journal of Management Information Systems</source>
          <volume>23</volume>
          , <issue>4</issue>:
          <fpage>217</fpage>
          -
          <lpage>246</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>
          58.
          <string-name>
            <given-names>Weiquan</given-names>
            <surname>Wang</surname>
          </string-name>
          and
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          .
          <year>2008</year>
          .
          <article-title>Attributions of trust in decision support technologies: A study of recommendation agents for e-commerce</article-title>
          .
          <source>Journal of Management Information Systems</source>
          <volume>24</volume>
          , <issue>4</issue>:
          <fpage>249</fpage>
          -
          <lpage>273</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          59.
          <string-name>
            <given-names>Bo</given-names>
            <surname>Xiao</surname>
          </string-name>
          and
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>E-commerce product recommendation agents: use, characteristics, and impact</article-title>
          .
          <source>MIS Quarterly</source>
          <volume>31</volume>
          , <issue>1</issue>:
          <fpage>137</fpage>
          -
          <lpage>209</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          60.
          <string-name>
            <given-names>Bo</given-names>
            <surname>Xiao</surname>
          </string-name>
          and
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>Research on the use, characteristics, and impact of e-commerce product recommendation agents: A review and update for 2007-2012</article-title>
          .
          <source>In Handbook of Strategic e-Business Management</source>
          ,
          <fpage>403</fpage>
          -
          <lpage>431</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref56">
        <mixed-citation>
          61.
          <string-name>
            <given-names>Jingjun David</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Ronald T.</given-names>
            <surname>Cenfetelli</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>The Nature and Consequences of Trade-off Transparency in the Context of Recommendation Agents</article-title>
          .
          <source>MIS Quarterly</source>
          <volume>38</volume>
          ,
          <issue>2</issue>
          .
        </mixed-citation>
      </ref>
      <ref id="ref57">
        <mixed-citation>
          62.
          <string-name>
            <given-names>David Jingjun</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Izak</given-names>
            <surname>Benbasat</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Ronald T.</given-names>
            <surname>Cenfetelli</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>A Two-Stage Model of Generating Product Advice: Proposing and Testing the Complementarity Principle</article-title>
          .
          <source>Journal of Management Information Systems</source>
          <volume>34</volume>
          , <issue>3</issue>:
          <fpage>826</fpage>
          -
          <lpage>862</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref58">
        <mixed-citation>
          63.
          <string-name>
            <given-names>Yoshitaka</given-names>
            <surname>Yamazaki</surname>
          </string-name>
          and
          <string-name>
            <given-names>Jeewhan</given-names>
            <surname>Yoon</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>A Cross‐National Study of Fairness in Asia: How</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref59">
        <mixed-citation>
          64.
          <string-name>
            <given-names>Zheng</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Jun</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Robert H.</given-names>
            <surname>Deng</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Francisco</given-names>
            <surname>Herrera</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Trust management for multimedia big data</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref60">
        <mixed-citation>
          65.
          <string-name>
            <given-names>L. Richard</given-names>
            <surname>Ye</surname>
          </string-name>
          and
          <string-name>
            <given-names>Paul E.</given-names>
            <surname>Johnson</surname>
          </string-name>
          .
          <year>1995</year>
          .
          <article-title>The impact of explanation facilities on user acceptance of expert systems advice</article-title>
          .
          <source>MIS Quarterly</source>
          :
          <fpage>157</fpage>
          -
          <lpage>172</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref61">
        <mixed-citation>
          66.
          <string-name>
            <given-names>Fahri</given-names>
            <surname>Yetim</surname>
          </string-name>
          .
          <year>2008</year>
          .
          <article-title>A Framework for Organizing Justifications for Strategic Use in Adaptive Interaction Contexts</article-title>
          .
          <source>In ECIS</source>
          ,
          <fpage>815</fpage>
          -
          <lpage>825</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref62">
        <mixed-citation>
          67.
          <string-name>
            <given-names>Liying</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Weiquan</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Jingjun David</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Tao</given-names>
            <surname>Liu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Jibao</given-names>
            <surname>Gu</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Perceived information transparency in B2C e-commerce: An empirical investigation</article-title>
          .
          <source>Information &amp; Management</source>
          <volume>55</volume>
          ,
          <issue>7</issue>
          :
          <fpage>912</fpage>
          -
          <lpage>927</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref63">
        <mixed-citation>
          68.
          <string-name>
            <given-names>Xujuan</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Yue</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Yuefeng</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Audun</given-names>
            <surname>Josang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Clive</given-names>
            <surname>Cox</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>The state-of-the-art in personalized recommender systems for social networking</article-title>
          .
          <source>Artificial Intelligence Review</source>
          <volume>37</volume>
          ,
          <issue>2</issue>
          :
          <fpage>119</fpage>
          -
          <lpage>132</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref64">
        <mixed-citation>
          69.
          <string-name>
            <given-names>Kevin</given-names>
            <surname>Zhu</surname>
          </string-name>
          .
          <year>2002</year>
          .
          <article-title>Information transparency in electronic marketplaces: Why data transparency may hinder the adoption of B2B exchanges</article-title>
          .
          <source>Electronic Markets</source>
          <volume>12</volume>
          , <issue>2</issue>:
          <fpage>92</fpage>
          -
          <lpage>99</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref65">
        <mixed-citation>
          70.
          <string-name>
            <given-names>Indre</given-names>
            <surname>Zliobaite</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Albert</given-names>
            <surname>Bifet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Mohamed</given-names>
            <surname>Gaber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Bogdan</given-names>
            <surname>Gabrys</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Joao</given-names>
            <surname>Gama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Leandro</given-names>
            <surname>Minku</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Katarzyna</given-names>
            <surname>Musial</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Next challenges for adaptive learning systems</article-title>
          .
          <source>ACM SIGKDD Explorations Newsletter</source>
          <volume>14</volume>
          ,
          <issue>1</issue>
          :
          <fpage>48</fpage>
          -
          <lpage>55</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref66">
        <mixed-citation>
          71.
          <string-name>
            <given-names>Erik T.</given-names>
            <surname>Zouave</surname>
          </string-name>
          and
          <string-name>
            <given-names>Thomas</given-names>
            <surname>Marquenie</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>An Inconvenient Truth: Algorithmic Transparency &amp; Accountability in Criminal Intelligence Profiling</article-title>
          .
          <source>In 2017 European Intelligence and Security Informatics Conference</source>
          ,
          <fpage>17</fpage>
          -
          <lpage>23</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>