<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Open, Scrutable and Explainable Interest Models for Transparent Recommendation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mouadh Guesmi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohamed Amine Chatti</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yiqi Sun</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shadi Zumor</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fangzheng Ji</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Arham Muslim</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Laura Vorgerd</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shoeb Ahmed Joarder</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National University of Sciences and Technology</institution>
          ,
          <country country="PK">Pakistan</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Duisburg-Essen</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Enhancing explainability in recommender systems has drawn increasing attention in recent years. In this paper, we address two aspects that are under-investigated in explainable recommendation research, namely providing explanations that focus on the input (i.e. the user model) and presenting personalized explanations with varying levels of detail. To address this gap, we propose the transparent Recommendation and Interest Modeling Application (RIMA), which aims at opening, scrutinizing, and explaining the user's interest model at three levels of detail. The results of a preliminary interview-based user study demonstrated potential benefits in terms of transparency, scrutability, and user satisfaction with the explainable recommender system.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainable artificial intelligence</kwd>
        <kwd>explainable recommender systems</kwd>
        <kwd>explainability</kwd>
        <kwd>transparency</kwd>
        <kwd>scrutability</kwd>
        <kwd>user model</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Explanations in recommender systems have gained increasing importance in the last few years. An explanation can be considered as a piece of information presented to the user to expose the reason behind a recommendation [<xref ref-type="bibr" rid="ref1">1</xref>]. Explanations can have a large effect on how users respond to recommendations [<xref ref-type="bibr" rid="ref2">2</xref>]. Recent research focused on different dimensions of explainable recommendation and proposed several classifications [<xref ref-type="bibr" rid="ref3 ref4 ref5 ref6">3, 4, 5, 6</xref>]. For instance, Guesmi et al. [<xref ref-type="bibr" rid="ref3">3</xref>] classified explainable recommender systems based on four dimensions, namely the explanation aim (transparency, effectiveness, efficiency, scrutability, persuasiveness, trust, satisfaction), explanation focus (input: user model, process: algorithm, output: recommended items), explanation type (collaborative-based, content-based, social, hybrid) and explanation display (textual, visual). Besides these four dimensions, other essential design choices must be considered, such as the scope and level of detail of the explanation [<xref ref-type="bibr" rid="ref7">7</xref>].</p>
      <p>The focus of an explanation refers to the part that a recommender system is trying to explain, i.e., the recommendation input, process, or output. Explainable recommendation focusing on the recommendation process aims to understand how the algorithm works. The explainability of the recommendation output focuses on the recommended items. This approach treats the recommendation process as a black box and tries to justify why the recommendation was presented. The explainability of the recommendation input focuses on the user model. This approach provides a description that summarizes the system's understanding of the user's preferences and allows the user to scrutinize this summary and thereby directly modify his or her user model [<xref ref-type="bibr" rid="ref2">2</xref>]. Compared to explainability of the recommendation output or the recommendation process, focusing on the recommendation input (i.e., user model) is under-explored in explainable recommendation research [<xref ref-type="bibr" rid="ref2 ref8">2, 8</xref>].</p>
      <p>Another crucial design choice in explainable recommendation relates to the level of explanation detail that should be provided to the end-user. Results of previous research on explainable AI (XAI) showed that, for specific users or user groups, a detailed explanation does not automatically result in higher trust and user satisfaction, because the provision of additional explanations increases cognitive effort and different users have different needs for explanation [9, 10, 11]. Recent studies on explainable recommendation showed that personal characteristics have an effect on the perception of explanations and that it is important to take personal characteristics into account when designing explanations [12, 13]. Consequently, Millecamp et al. [13] suggest that (1) users should be able to choose whether or not they wish to see explanations and (2) explanation components should be flexible enough to present varying levels of detail depending on users' preferences. Concrete solutions following this second design guideline are, however, still lacking in explainable recommendation research.</p>
      <p>In this paper, we present a transparent Recommendation and Interest Modeling Application (RIMA) that aims at achieving transparency by opening, scrutinizing, and explaining the user's interest model based on three different levels of detail. Our contributions are: (1) a human-centered explainable recommendation approach driven by open, scrutable, and explainable interest models, and (2) a shift from a one-size-fits-all to a personalized approach to explainable recommendation with varying levels of detail to meet the needs and preferences of different users.</p>
      <p>The rest of the paper is organized as follows: Section 2 summarizes related work. Section 3 discusses the RIMA application. Section 4 presents a preliminary evaluation of the application. Finally, Section 5 summarizes the work and outlines future research plans.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background and Related Work</title>
      <p>User models can be used as explanations in recommender systems [14] and, depending on the user type, different explanation levels of detail may be appropriate [<xref ref-type="bibr" rid="ref7">7</xref>]. In the following, we discuss related work on explainable recommendation that focuses on the user model and provides explanations with varying levels of detail.</p>
      <sec>
        <title>2.1. Input-based Explainable Recommendation</title>
        <p>The rise of distrust and skepticism related to the collection and use of personal data, and privacy concerns in general, has led to an increased interest in the transparency of the black-box user models used to provide recommendations [14]. Graus et al. [<xref ref-type="bibr" rid="ref8">8</xref>] stress the importance of enabling transparency by opening and explaining the typically black-box user profiles that serve as the recommender system's input. The authors further point out that user profile explanations can contribute to scrutability, allowing users to provide explicit feedback on the internally constructed user profiles, and to self-actualization, supporting users in understanding and exploring their personal preferences.</p>
        <p>While the task of opening the black box of recommender systems by explaining the recommendation output (i.e. why an item was recommended) or the recommendation process (i.e. how a recommendation was generated) is well researched in the explainable recommendation community, researchers have only recently begun exploring methods that support the exploration and understanding of the recommendation input (i.e. the user model) to provide transparency in recommender systems [<xref ref-type="bibr" rid="ref2">2</xref>]. In general, research on input-based explainable recommendation can be classified into three groups with increasing complexity. The first group focuses on opening and exposing the black-box user model. The second group adds means to explore and scrutinize the exposed user model. The third group provides methods that support the understanding of the user model through explanations.</p>
        <sec>
          <title>2.1.1. Opening the User Model</title>
          <p>Several tools have represented and exposed the user model behind the recommendation mechanism. For instance, 'System U' [15] focuses on the recommendation input by visually exposing the user model, which consists of Big Five personality characteristics, fundamental needs, and human values. In order to make students understand why a certain learning activity is recommended to them, 'Mastery Grids' [16] highlights the concepts related to the recommended activity based on fine-grained open learner models. The exposed user model in 'PeerFinder' [17] consists of different student features (e.g., gender, age, program) used to recommend similar peers. However, scrutability is lacking in these tools.</p>
        </sec>
        <sec>
          <title>2.1.2. Scrutinizing the User Model</title>
          <p>Explaining recommendations can enable or improve the scrutability of a recommender system, that is, allow users to tell the system if it is wrong [<xref ref-type="bibr" rid="ref5">5</xref>]. Scrutability is thus related to user control, which can be applied to different parts of the recommendation pipeline (i.e. input, process, and output) [18, 19]. Compared to enabling scrutability of the system's output or process, only few works have presented systems that provide user control on the input layer of the recommender system by allowing users to correct their models when they disagree with (parts of) them, or to modify their models in order to adjust the recommendation results according to their needs and preferences.</p>
          <p>The first attempt to provide scrutable explanations was presented in [20]. In this work, a holiday recommender provides a text-based explanation, and the user can ask why certain assumptions (like a low budget) were made. Selecting this option takes them to a page with a further explanation and an opportunity to modify this assumption in their user model. Similarly, the recommender system in [21] provides explanations in the form of overlapping and difference tag clouds between a seed item and a recommended item. Users can then steer the recommendations by manipulating the tag clouds. Bakalov et al. [22] proposed an approach to control user models and personalization effects in recommender systems. It uses visualization to explain users' adaptive behavior by allowing them to see their profiles and adjust their preferences. Jin et al. [23] aimed at providing controllability over the received advertisements. The authors used a flow chart to provide a visual explanation of the process by opening the user profile used to select the ads and allowing users to scrutinize their profile to get more relevant ads. Du et al. [24] presented a personalizable and interactive sequence recommender system that uses visualizations to explain the decision process and justify its results. It also provides controls and guidance to help users personalize the recommended action plans. Zürn et al. [25] discussed possible UI extensions to explicitly support What if? interactions with recommender systems, which allow users to explore, investigate and question algorithmic decision-making.</p>
        </sec>
        <sec>
          <title>2.1.3. Explaining the User Model</title>
          <p>In this work, explaining user models goes beyond just exposing and manipulating the user model, to providing concrete explanations of how the user model was inferred. Explaining user models in recommender systems has been demonstrated to be effective [26] and has many benefits. It facilitates users' self-actualization, i.e. supporting users in developing, exploring, and understanding their unique personal tastes [27]. Moreover, it helps users build a more accurate mental model of the recommender system, thus leading to increased transparency and trust in the system. Furthermore, it can help detect biases, which is crucial to produce fair recommendations. Yet, the task of explaining the user model remains under-investigated in explainable recommendation research.</p>
          <p>Sullivan et al. [14] focus on explaining user profiles constructed from aggregated reading behavior data, used to provide content-based recommendations. The authors expose the user model by summarizing and visualizing the recommender's high-dimensional internal representations of users. Visualizations explaining how the user model was inferred are, however, not provided. Balog et al. [<xref ref-type="bibr" rid="ref2">2</xref>] present a set-based recommendation technique that allows the user model to be explicitly presented in natural language, in order to help users understand the recommendations made and improve them. We also aim at explaining the user model, but unlike Balog et al.'s approach, we leverage visualizations instead of natural language explanations.</p>
        </sec>
      </sec>
      <sec>
        <title>2.2. Explanation with Varying Level of Details</title>
        <p>In this work, the level of detail refers to the amount of information exposed in an explanation. A critical question in explainable recommendation research is whether the relationship between the level of detail and transparency is a linear one. To answer this question, we first need to discriminate between objective transparency and user-perceived transparency. Objective transparency means that the recommender system reveals the underlying algorithm of the recommendations. However, the algorithm might be too complex to be described in a human-interpretable manner. Therefore, it might be more appropriate to provide "justifications" instead of "explanations", which are often more superficial and more user-oriented. User-perceived transparency, in contrast, is based on the users' subjective opinion about how well the system is capable of explaining its recommendations [28].</p>
        <p>In the field of explainable AI in general, Mohseni et al. [<xref ref-type="bibr" rid="ref7">7</xref>] argue that different user groups have different goals in mind while using such systems. For example, while machine learning experts might prefer highly detailed visual explanations of deep models to help them optimize and diagnose algorithms, lay-users do not expect fully detailed explanations for every query from a personalized agent. Instead, systems with lay-users as target groups aim to enhance the user experience with the system through improving their understanding and trust. In the same direction, Miller [29] argues that providing the exact algorithm which generated a specific recommendation is not necessarily the best explanation. People tend not to judge the quality of explanations by their generation process, but rather by their usefulness. Besides the goals of the users, another vital aspect that influences their understanding of explanations is their cognitive capabilities [11]. Only when users have enough time to process the information and enough ability to figure out its meaning will a higher level of detail in the explanation lead to a better understanding. But as soon as the amount of information is beyond the users' comprehension, the explanation can lead to information overload and confusion. Without an understanding of how the system works, users may perceive the system as not transparent enough, which could, in turn, reduce their trust in the system [28, 11].</p>
        <p>In summary, it could be assumed that a higher level of explanation detail increases the system's objective transparency, but is also associated with a risk of reducing the user-perceived transparency, and that this risk depends on the user's characteristics. Therefore, recommender systems are expected to provide the right type of explanations for the right group of users [<xref ref-type="bibr" rid="ref7">7</xref>]. One approach is to offer on-demand explanations that are flexible enough to present varying levels of detail depending on the users' needs or expertise [<xref ref-type="bibr" rid="ref7">7</xref>, 13]. For example, Millecamp et al. [13] developed a music recommender system that not only allows users to choose whether or not to see the explanations by using a "Why?" button, but also to select the level of detail by clicking on a "More/Hide" button. However, providing on-demand explanations with varying levels of detail remains rare in the literature on explainable recommendation.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. RIMA</title>
      <p>The goal of the transparent Recommendation and Interest Modeling Application (RIMA) is not just to explain why an item was recommended, but to support users in exploring, developing, and understanding their own interests, in order to provide more transparent and personalized recommendations. The application is an implementation of a human-centered explainable recommendation approach driven by open, scrutable, and explainable interest models with varying levels of detail, to meet the needs and preferences of different users. In this work, we focus on recommending tweets and Twitter users (see Figure 1) and leverage explanatory visualizations to provide insights into the recommendation process by opening, scrutinizing, and explaining the user's interest model based on three different levels of detail.</p>
      <sec>
        <title>3.1. Opening the Interest Model</title>
        <p>The aim of opening and exposing the interest model in RIMA is to let users become aware of the underlying interest model used for recommendation. These interest models are generated from users' publications and tweets. The application uses the Semantic Scholar and Twitter IDs provided by users to gather their publications and tweets. It applies unsupervised keyphrase extraction algorithms on the collected publications and tweets to generate keyphrase-based interests. In order to address semantic issues, Wikipedia is leveraged as a knowledge base to map the keyphrases to Wikipedia pages and generate Wikipedia-based interests. Further, Wikipedia is used to find the categories of the Wikipedia-based interests and generate Wikipedia category-based interests.</p>
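        <p>To make this pipeline concrete, the following sketch outlines the three interest layers in Python. It is an illustration rather than the RIMA source code: the keyphrase extractor and the two Wikipedia lookups are passed in as assumed callables (for example, a TextRank- or YAKE-style extractor and a Wikipedia API client), and interest weights are simply accumulated keyphrase scores.</p>
        <preformat>
from collections import Counter

def build_interest_models(texts, extract_keyphrases, to_wikipedia_page, page_categories):
    """Sketch of the interest modeling pipeline; the three callables are assumed to be supplied.
    extract_keyphrases(text) is assumed to return (phrase, score) pairs."""
    # 1) Keyphrase-based interests: unsupervised extraction from publications and tweets.
    keyphrase_interests = Counter()
    for text in texts:
        for phrase, score in extract_keyphrases(text):
            keyphrase_interests[phrase] += score

    # 2) Wikipedia-based interests: map keyphrases to Wikipedia pages to address semantic
    #    issues (synonyms, ambiguity); phrases without a matching page are dropped.
    wikipedia_interests = Counter()
    for phrase, weight in keyphrase_interests.items():
        page = to_wikipedia_page(phrase)
        if page is not None:
            wikipedia_interests[page] += weight

    # 3) Wikipedia category-based interests: categories of the mapped pages, later used
    #    for the potential interest model.
    category_interests = Counter()
    for page, weight in wikipedia_interests.items():
        for category in page_categories(page):
            category_interests[category] += weight

    return keyphrase_interests, wikipedia_interests, category_interests
</preformat>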
        <p>Different charts are provided to summarize and visualize the interest model used to provide content-based recommendation of tweets and Twitter users, as shown in Figure 2. The short-term interest model (based on the Wikipedia-based interest model) displays the user's top 5 interests, based on the tweets published in the last month and the publications published in the last year. We selected a pie chart to visualize this model since each slice's size gives users a quick indication of the weight of the specific short-term interest (Figure 2a). The long-term interest model (also based on the Wikipedia-based interest model) displays the top 15 interests in the last five years, using a word cloud (Figure 2b). The potential interest model (based on the Wikipedia category-based interest model) allows users to identify interests that are semantically similar to their interests. We selected a node-link diagram to connect the user's long-term interests (on the left) with their associated Wikipedia categories (on the right) (Figure 2c). Finally, the evolution of interest model (based on the Wikipedia-based interest model) allows users to track how their top 10 interests have shifted over time, using a stream graph (Figure 2d).</p>
        <p>Figure 2: Visualizations of the interest model. (a) Short-term interest model; (b) long-term interest model; (c) potential interest model; (d) evolution of interest model.</p>
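        <p>Assuming each detected interest keeps the timestamp and source (tweet or publication) of the document it was extracted from, which is our own assumption since the paper does not spell out the aggregation, these time-scoped views can all be derived from one list of observations:</p>
        <preformat>
from collections import Counter
from datetime import datetime, timedelta

def top_interests(observations, keep, k):
    """observations: (interest, weight, timestamp, source) tuples; keep(ts, source) decides
    which observations count; returns the k interests with the highest accumulated weight."""
    counts = Counter()
    for interest, weight, ts, source in observations:
        if keep(ts, source):
            counts[interest] += weight
    return counts.most_common(k)

obs = []  # observations produced by the interest modeling pipeline sketched above
now = datetime.now()

# Short-term model (Figure 2a): tweets of the last month and publications of the last year, top 5.
short_term = top_interests(
    obs,
    lambda ts, src: (src == "tweet" and now - ts &lt;= timedelta(days=30))
    or (src == "publication" and now - ts &lt;= timedelta(days=365)),
    5,
)

# Long-term model (Figure 2b): top 15 interests over the last five years.
long_term = top_interests(obs, lambda ts, src: now - ts &lt;= timedelta(days=5 * 365), 15)

# The evolution view (Figure 2d) repeats the same aggregation per time slice (e.g. per month).
</preformat>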
      </sec>
      <sec>
        <title>3.2. Scrutinizing the Interest Model</title>
        <p>The main aim behind enabling users to provide explicit feedback and modify their interest models in RIMA is to make those models more accurate. As shown in Figure 3, the application provides an interface where users can manage their global interest model by adding or removing interests. They can also modify the weight given to an interest, reflecting its importance in their interest model.</p>
        <p>Empowering users to control the system and have an active say in the process would also make the recommendation more transparent, thus leading to better trust and user satisfaction. To achieve this, the application supports What-if? interactions that give users full control over the input of the recommender system (their interest model) as well as its output (the recommendations that result from the defined input). Through interactive visualizations, users can explore and adjust the input to adapt the system output based on their needs and preferences. Moreover, users can modify their interests and see the influence of these changes on the system recommendations. For instance, users can add new interests in the search box or remove existing ones. The search box is initially populated with the user's interests, ordered by the weights generated by the system. The users can change the order of the interests through a drag-and-drop feature to alter their importance. By clicking on the info button next to the search box, the user can use interactive sliders to adjust each keyword's weight (see Figure 4a). Another option to display the interest model is provided through a radar chart, where the user can change the interests' positions through drag and drop to modify their relevance. The distance to the center represents the relevance of an interest, with closer to the center meaning more important (see Figure 4b).</p>
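        <p>Both controls boil down to the same operation of turning a UI value into a normalized interest weight. The following minimal sketch assumes that the radar chart reports each interest's distance from the center and that weights are renormalized to sum to one; these are illustrative assumptions, not documented details of RIMA.</p>
        <preformat>
def weights_from_radar(distances, max_radius=1.0):
    """distances: mapping from interest to its distance from the chart center
    (closer to the center means more important)."""
    raw = {interest: max(max_radius - d, 0.0) for interest, d in distances.items()}
    total = sum(raw.values()) or 1.0
    return {interest: value / total for interest, value in raw.items()}

# Example: dragging "machine learning" closer to the center increases its weight.
# weights_from_radar({"machine learning": 0.2, "e-learning": 0.7, "privacy": 0.9})
</preformat>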
        <p>Adding, removing, and weighting the interests will influence the order of the recommended tweets. This exploratory approach supports users in answering different What if? questions, such as "What if I had an interest in X rather than Y?" or "What if I changed the importance of interest Z?".</p>
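        <p>The effect of such edits on the recommendations can be illustrated with a simple content-based scoring scheme: each candidate tweet is scored against the current, possibly user-edited, interest weights, so changing a weight immediately re-ranks the list. This is a sketch under the assumption of a plain weighted-overlap score, not the exact ranking function used in RIMA.</p>
        <preformat>
def score_tweet(tweet_interests, user_interests):
    """tweet_interests: interests detected in a tweet (e.g. via the same Wikipedia mapping);
    user_interests: mapping from interest to weight, as edited by the user."""
    return sum(user_interests.get(interest, 0.0) for interest in tweet_interests)

def recommend(tweets, user_interests, k=10):
    """Rank candidate tweets by their score against the current interest model."""
    ranked = sorted(tweets, key=lambda t: score_tweet(t["interests"], user_interests), reverse=True)
    return ranked[:k]

# "What if I changed the importance of interest Z?": lower the weight and re-rank.
# user_interests["deep learning"] = 0.1
# recommend(candidate_tweets, user_interests)
</preformat>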
      </sec>
      <sec>
        <title>3.3. Explaining the Interest Model</title>
        <p>Our approach to explaining tweet recommendations is based on explaining the underlying user interest models that are used to provide the recommendations. The aim of explaining the interest model in RIMA is to foster the user's awareness of the raw data (publications and tweets) and the derived data (interest model) that the recommender system uses as input to generate recommendations, in order to increase transparency and promote understandability of the recommendations. Moreover, this may let users become aware of system errors and consequently help them give feedback and corrections in order to improve future recommendations.</p>
        <p>The application provides on-demand explanations, that is, the users can decide whether or not to see the explanation, and they can also choose which level of explanation detail they want to see. In the basic explanation (Figure 5a), the user can hover over an interest in the word cloud to see its source (i.e. publications or tweets). When the user clicks on an interest in the word cloud, the intermediate explanation provides more information through a pop-up window highlighting the occurrences of the selected interest in the tweets or in the title/abstract of publications (Figure 5b). The next level of detail is provided in the advanced explanation, which follows an explanation-by-example approach to show in detail the logic of the algorithm used to infer the interest model (see Figure 5c).</p>
        <p>Figure 5: Explaining the interest model with three levels of detail. (a) Basic explanation; (b) intermediate explanation; (c) advanced explanation.</p>
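        <p>The intermediate explanation essentially amounts to locating the selected interest in the raw documents. The sketch below assumes a plain case-insensitive substring match over tweets and publication titles/abstracts; the matching in RIMA may well be more sophisticated, for example via the Wikipedia mapping.</p>
        <preformat>
import re

def interest_occurrences(interest, documents, context=40):
    """Return snippets around each occurrence of the selected interest, e.g. to highlight
    them in the pop-up of the intermediate explanation.
    documents: list of dicts with a "source" ("tweet" or "publication") and a "text"."""
    pattern = re.compile(re.escape(interest), re.IGNORECASE)
    snippets = []
    for doc in documents:
        for match in pattern.finditer(doc["text"]):
            start = max(match.start() - context, 0)
            end = match.end() + context
            snippets.append((doc["source"], doc["text"][start:end]))
    return snippets
</preformat>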
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Evaluation</title>
      <p>We conducted a preliminary interview-based study with ten researchers from different disciplines to gauge the potential of our proposed approach to improve transparency, scrutability, and user satisfaction with the explainable recommender system. At the beginning of the interview, each participant was briefly introduced to the fields of recommender systems and user modeling. Next, the participants were asked to use the RIMA application to create their interest models based on their Semantic Scholar and Twitter IDs and to see the visualizations corresponding to their interest models. Then, the participants were presented with the three visualizations representing the basic, intermediate, and advanced explanations of their generated interest models. Thereafter, they were asked to explore the recommended tweets and use the provided features to manipulate their interest models to influence the recommendation results. Finally, the participants were asked about their opinions towards the provided explanations, guided by the statements summarized in Table 1 and by other open-ended questions such as "why they want to see the explanations of their interest models" and "which explanation level (i.e. basic, intermediate, advanced) they prefer to see".</p>
      <p>In general, the participants showed an overall positive opinion towards the usefulness of having explanations of their inferred interest models as well as the possibility of manipulating them. However, they gave different reasons why they want to see the explanations. Two participants expressed that they had wrong or unexpected interests in their interest model and wanted to check them. Other participants mentioned that they were just curious to see how their interest model was generated. This is in line with the findings of the study by Putnam and Conati [30] in an intelligent tutoring systems (ITS) context.</p>
      <p>Moreover, the participants had different opinions regarding what level of detail they prefer to see. This implies that individual user differences influence their preferences towards the explanation level; an important design choice in explainable recommendation that needs in-depth exploration.</p>
    </sec>
    <sec id="sec-2">
      <title>5. Conclusion and Future Work</title>
      <p>In recent years, various attempts have been made to
address the black-box issue of recommender systems by
providing explanations that enable users to understand
the recommendations. In this paper, we addressed two
aspects under-explored in explainable recommendation
research, namely providing explanations that focus on
the input (i.e., user model) and presenting personalized
explanations with varying levels of detail. To this end,
we proposed the transparent Recommendation and
Interest Modeling Application (RIMA) that aims at the
opening, scrutinizing, and explaining the user’s interest
model based on three levels of details. The preliminary
evaluation results demonstrate the usefulness of the
RIMA approach in creating input-based on-demand
explanations.</p>
      <p>In future work, we plan to apply the proposed approach to explain recommendations of publications, researchers, and conferences. We will also explore other possible visualizations to provide explanations at the three levels of detail. Furthermore, a more extensive quantitative and qualitative user study will be conducted to investigate the relationship between the users' characteristics and the level of detail of the explanations, and the effects of these two variables on the perception of and interaction with the explainable recommender system.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Herlocker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Konstan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Riedl</surname>
          </string-name>
          ,
          <article-title>Explaining collaborative filtering recommendations</article-title>
          ,
          <source>in: Proceedings of the 2000 ACM conference on Computer supported cooperative work</source>
          ,
          <year>2000</year>
          , pp.
          <fpage>241</fpage>
          -
          <lpage>250</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>K.</given-names>
            <surname>Balog</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Radlinski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Arakelyan</surname>
          </string-name>
          ,
          <article-title>Transparent, scrutable and explainable user models for personalized recommendation</article-title>
          ,
          <source>in: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>265</fpage>
          -
          <lpage>274</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Guesmi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Chatti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Muslim</surname>
          </string-name>
          ,
          <article-title>A review of explanatory visualizations in recommender systems</article-title>
          ,
          <source>in: Companion Proceedings 10th International Conference on Learning Analytics and Knowledge (LAK20)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>480</fpage>
          -
          <lpage>491</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>I.</given-names>
            <surname>Nunes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Jannach</surname>
          </string-name>
          ,
          <article-title>A systematic review and taxonomy of explanations in decision support and recommender systems</article-title>
          ,
          <source>User Modeling and User-Adapted Interaction</source>
          <volume>27</volume>
          (
          <year>2017</year>
          )
          <fpage>393</fpage>
          -
          <lpage>444</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>N.</given-names>
            <surname>Tintarev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Masthoff</surname>
          </string-name>
          ,
          <article-title>Explaining recommendations: Design and evaluation</article-title>
          , in: Recommender systems handbook, Springer,
          <year>2015</year>
          , pp.
          <fpage>353</fpage>
          -
          <lpage>382</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>Explainable recommendation: A survey and new perspectives</article-title>
          ,
          <source>arXiv preprint arXiv:1804.11192</source>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mohseni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Zarei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. D.</given-names>
            <surname>Ragan</surname>
          </string-name>
          ,
          <article-title>A multidisciplinary survey and framework for design and evaluation of explainable ai systems</article-title>
          , arXiv (
          <year>2018</year>
          ) arXiv-
          <fpage>1811</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Graus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sappelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. Manh</given-names>
            <surname>Chu</surname>
          </string-name>
          ,
          <article-title>"let me tell you who you are" - explaining recommender systems by opening black box user profiles</article-title>
          ,
          <source>in: Proceedings of the FATREC Workshop on Responsible Recommendation</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] R. F. Kizilcec, How much information? Effects of transparency on trust in an algorithmic interface, in: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2016, pp. 2390–2395.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] T. Kulesza, S. Stumpf, M. Burnett, S. Yang, I. Kwan, W.-K. Wong, Too much, too little, or just right? Ways explanations impact end users' mental models, in: 2013 IEEE Symposium on Visual Languages and Human Centric Computing, IEEE, 2013, pp. 3–10.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] R. Zhao, I. Benbasat, H. Cavusoglu, Do users always want to know more? Investigating the relationship between system transparency and users' trust in advice-giving systems (2019).</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] P. Kouki, J. Schaffer, J. Pujara, J. O'Donovan, L. Getoor, Personalized explanations for hybrid recommender systems, in: Proceedings of the 24th International Conference on Intelligent User Interfaces, 2019, pp. 379–390.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] M. Millecamp, N. N. Htun, C. Conati, K. Verbert, To explain or not to explain: the effects of personal characteristics when explaining music recommendations, in: Proceedings of the 24th International Conference on Intelligent User Interfaces, 2019, pp. 397–407.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] E. Sullivan, D. Bountouridis, J. Harambam, S. Najafian, F. Loecherbach, M. Makhortykh, D. Kelen, D. Wilkinson, D. Graus, N. Tintarev, Reading news with a purpose: Explaining user profiles for self-actualization, in: Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization, 2019, pp. 241–245.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] H. Badenes, M. N. Bengualid, J. Chen, L. Gou, E. Haber, J. Mahmud, J. W. Nichols, A. Pal, J. Schoudt, B. A. Smith, et al., System U: automatically deriving personality traits from social media for people recommendation, in: Proceedings of the 8th ACM Conference on Recommender systems, 2014, pp. 373–374.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] J. Barria-Pineda, P. Brusilovsky, Making educational recommendations transparent through a fine-grained open learner model, in: IUI Workshops, 2019.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] F. Du, C. Plaisant, N. Spring, B. Shneiderman, Visual interfaces for recommendation systems: Finding similar and dissimilar peers, ACM Transactions on Intelligent Systems and Technology (TIST) 10 (2018) 1–23.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] C. He, D. Parra, K. Verbert, Interactive recommender systems: A survey of the state of the art and future research challenges and opportunities, Expert Systems with Applications 56 (2016) 9–27.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] M. Jugovac, D. Jannach, Interacting with recommenders—overview and research directions, ACM Transactions on Interactive Intelligent Systems (TiiS) 7 (2017) 1–46.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] M. Czarkowski, A scrutable adaptive hypertext, Ph.D. thesis, 2006. URL: http://hdl.handle.net/2123/10206.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] S. J. Green, P. Lamere, J. Alexander, F. Maillet, S. Kirk, J. Holt, J. Bourque, X.-W. Mak, Generating transparent, steerable recommendations from textual descriptions of items, in: Proceedings of the third ACM conference on Recommender systems, 2009, pp. 281–284.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] F. Bakalov, M.-J. Meurs, B. König-Ries, B. Sateli, R. Witte, G. Butler, A. Tsang, An approach to controlling user models and personalization effects in recommender systems, in: Proceedings of the 2013 international conference on Intelligent user interfaces, 2013, pp. 49–56.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] Y. Jin, K. Seipp, E. Duval, K. Verbert, Go with the flow: effects of transparency and user control on targeted advertising using flow charts, in: Proceedings of the International Working Conference on Advanced Visual Interfaces, 2016, pp. 68–75.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] F. Du, S. Malik, G. Theocharous, E. Koh, Personalizable and interactive sequence recommender system, in: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. 1–6.</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[25] M. Zürn, M. Eiband, D. Buschek, What if? Interaction with recommendations, in: ExSS-ATEC@IUI, 2020.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[26] P. Bonhard, M. A. Sasse, 'Knowing me, knowing you'—using profiles and social networking to improve recommender systems, BT Technology Journal 24 (2006) 84–98.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>[27] B. P. Knijnenburg, S. Sivakumar, D. Wilkinson, Recommender systems for self-actualization, in: Proceedings of the 10th ACM Conference on Recommender Systems, 2016, pp. 11–14.</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>[28] F. Gedikli, D. Jannach, M. Ge, How should I explain? A comparison of different explanation types for recommender systems, International Journal of Human-Computer Studies 72 (2014) 367–382.</mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>[29] T. Miller, Explanation in artificial intelligence: Insights from the social sciences, Artificial Intelligence 267 (2019) 1–38.</mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>[30] V. Putnam, C. Conati, Exploring the need for explainable artificial intelligence (XAI) in intelligent tutoring systems (ITS), in: Joint Proceedings of the ACM IUI 2019 Workshops, 2019.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>