<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>IUI Workshops’19, March 20, 2019, Los Angeles, USA</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Designing Explanation Interfaces for Transparency and Beyond</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Chun-Hua Tsai</string-name>
          <email>cht77@pitt.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Peter Brusilovsky</string-name>
          <email>peterb@pitt.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Pittsburgh</institution>
          ,
          <addr-line>Pittsburgh</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>20</volume>
      <issue>2019</issue>
      <abstract>
        <p>In this work-in-progress paper, we present a participatory process of designing explanation interfaces for a social recommender system with multiple explanatory goals. We went through four stages to identify the key components of the recommendation model, the expert mental model, the user mental model, and the target mental model. We report the results of an online survey of current system users (N=14) and a controlled user study with a group of target users (N=15). Based on the findings, we propose five sets of explanation interfaces for five recommendation models (25 interfaces in total) and discuss user preferences among the interface prototypes.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>CCS CONCEPTS</title>
      <p>• Information systems → Recommender systems; •
Human-centered computing → HCI design and evaluation methods.</p>
    </sec>
    <sec id="sec-2">
      <title>INTRODUCTION</title>
      <p>
        Enhancing explainability in recommender systems has drawn increasing
attention in the field of Human-Computer Interaction (HCI). Further,
the newly enacted European Union General Data Protection Regulation
(GDPR) requires the owner of any data-driven application to maintain
a “right to explanation” of algorithmic decisions [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], which urges transparency in all existing intelligent systems.
Self-explainable recommender systems have been shown to improve user
perception of system transparency [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], trust [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], and acceptance of system suggestions [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Beyond offline performance improvements, a growing body of research
focuses on evaluating the system from the user experience perspective,
i.e., what is the user’s perception of the explanation interfaces?
      </p>
      <p>
        Explaining recommendations (i.e., enhancing system explainability)
can achieve different explanatory goals, which help users make better
decisions or persuade them to accept the suggestions from a system [
        <xref ref-type="bibr" rid="ref14 ref16">14, 16</xref>
        ]. We followed the seven explanatory goals proposed by Tintarev and
Masthoff [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]: Transparency, Scrutability, Trust, Persuasiveness,
Effectiveness, Efficiency, and Satisfaction. Since it is hard for a
single explanation interface to achieve all these goals equally well,
the designer needs to make a trade-off while choosing or designing the
form of the interface [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
For instance, an interactive interface can be adapted to increase
user trust and satisfaction but may prolong the decision and
exploration process while using the system (i.e., decrease
efficiency) [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
IUI Workshops’19, March 20, 2019, Los Angeles, USA.
Copyright © 2019 for the individual papers by the papers’ authors. Copying permitted
for private and academic purposes. This volume is published and copyrighted by its
editors.
      </p>
      <p>
        Over the past few years, several approaches have been discussed
to enhance explainability in recommender systems. The approaches can
be categorized by different styles, reasoning models, paradigms, and
information [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. 1) Styles: Kouki et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] conducted an online user survey to explore user preferences among
nine explanation styles. They found Venn diagrams outperformed all other
visual and text-based interfaces. 2) Reasoning Models: Vig et al.
[
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] used tags to explain the recommended item and the user’s
profile. The approach emphasized why a specific recommendation is
plausible, instead of revealing the recommendation process or data.
3) Paradigms: Herlocker et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] presented a model for explanations based on the user’s
conceptual model of the collaborative recommendation process. The
result of the evaluation indicates that two interfaces - “Histogram
with grouping” and “Presenting past performance” - improved the
acceptance of recommendations. 4) Information: Pu and Chen [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] proposed explanations tailored to the user and the
recommendation, i.e., although a recommendation is not the most
popular one, the explanation justifies it by providing the reasons.
      </p>
      <p>
        Although many approaches have been proposed to enhance recommender
explainability, bringing explanation interfaces to an existing
recommender system is still a challenging task. More recently,
Eiband et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] suggested a different approach: improving the user mental model
(UMM) while bringing transparency (explanations) to a recommender
system. The mental model describes the process by which a user builds
an internal conceptualization of the system or interface through
user-system interactions, i.e., builds the knowledge of how to
interact with the system. If the model is misguided or opaque, users
will face difficulties in predicting or interpreting the system [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Hence, the researchers suggested improving the mental model so
that users can gain awareness while using the system as well as the
explanation interfaces.
      </p>
      <p>
        In this work-in-progress paper, we present a stage-based
participatory process [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] for integrating seven explanatory goals into a real-world hybrid
social recommender system. First, we introduce the Expert Mental Model
to summarize the key components of each recommendation feature.
Second, we conducted an online survey to identify the User Mental
Model of seven explanatory goals from the current system users. Third,
we ran a user study with card-sorting and semi-structured interviews
to determine the users’ Target Mental Model. Fourth, we proposed a
total of 25 explanation interfaces for five recommendation features
and compared user perceptions across designs.
We adopted the stage-based participatory framework from Eiband
et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], which intends to answer two key questions while designing an
explainable user interface (UI): a) What to explain? and b) How to
explain? The process can be summarized in four stages.
1) Expert Mental Model: What can be explained? We defined an expert
as the recommender system developer. 2) User Mental Model: What is
the user mental model of the system based on its current UI? The
model should be built from the current recommender system users.
3) Target Mental Model: Which key components of the algorithm do
users want to be made explainable in the UI? The target users are
users who are new to the system. 4) Iterative Prototyping: How can
the target mental model be reached through UI design? The key is to
measure whether the proposed explanation interfaces achieve the
explanatory goals.
      </p>
      <p>
        In this work, we aimed to enhance explainability in a conference
support system - Conference Navigator 3 (CN3). At the time of writing,
the system has been used to support more than 45 conferences and has
data on approximately 7,045 articles presented at these conferences;
13,055 authors; 7,407 attendees; 32,461 bookmarks; and 1,565 social
connections. Our work was informed by the results of a controlled user
study in which we explored an earlier version of the social
recommender interface Relevance Tuner [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] (shown in Figure 1), a controllable interface that lets the user
fuse the weights of multiple recommendation models and inspect the
explanations.
      </p>
      <p>A total of five recommendation models were introduced in this
study: 1) Publication Similarity: the cosine similarity of the users’
publication text. 2) Topic Similarity: the overlap of research
interests (using topic modeling). 3) Co-Authorship Similarity: the
degree of connection, based on a shared network of co-authors.
4) Interest Similarity: the number of papers co-bookmarked, as
well as the authors co-followed. 5) Geographic Distance: a
measurement of the geographic distance between affiliations. Based
on the stage-based participatory framework, we went through the
same four stages for each recommendation model to identify the
user-preferred user interface design. We aimed to design
explanation interfaces for each recommendation model with multiple
explanatory goals.</p>
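As a sketch of how a controllable hybrid might combine the five model scores, the following is a minimal illustration; the function name, the weight normalization, and the assumption that each model's score (including geographic distance) has already been mapped to a 0-1 similarity are ours, not taken from the Relevance Tuner implementation.

```python
def fused_score(scores, weights):
    """Weighted fusion of per-model similarity scores.

    Both arguments map a model name (e.g., "publication", "topic",
    "coauthor", "interest", "geo") to a float. Weights are normalized so
    the fused score stays within the range of the input scores.
    """
    total = sum(weights.values())
    if total == 0:
        return 0.0
    return sum(w * scores.get(model, 0.0) for model, w in weights.items()) / total
```

Raising one weight (e.g., for topic similarity) then re-ranks the recommended scholars, which is the kind of user-controlled fusion such an interface exposes.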
    </sec>
    <sec id="sec-3">
      <title>FIRST STAGE: EXPERT MENTAL MODEL</title>
      <p>
        Unlike interactive recommenders [
        <xref ref-type="bibr" rid="ref23 ref7">7, 23</xref>
        ], we attached an explanation icon next to each social
recommendation. Users can choose to request explanations while
exploring or browsing the recommendations. We adopted a hybrid
explanation approach [
        <xref ref-type="bibr" rid="ref12 ref8">8, 12</xref>
        ], which mixes multiple visualizations to explain the details of
the recommendation model. We would like users to understand both
a) the mutual relationship (similarity) between themselves and the
recommended scholar and b) the key components of each recommendation
model. We then derived the Expert Mental Model from the development
process of the five recommendation models.
      </p>
      <p>1) Publication Similarity: This similarity was determined by
the degree of text similarity between two scholars’ publications
using cosine similarity. We applied tf-idf to create the vectors, with
a word frequency upper bound of 0.5 and a lower bound of 0.01 to
eliminate both common and rarely used words. In this model, the
key components were the terms of the paper titles and abstracts as
well as their term frequencies.</p>
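A minimal sketch of this pipeline, interpreting the 0.5 and 0.01 bounds as document-frequency cutoffs (as in common tf-idf implementations); the exact tokenization and idf weighting used in CN3 are not specified in the paper:

```python
import math
from collections import Counter

def tfidf_vectors(docs, min_df=0.01, max_df=0.5):
    """tf-idf vectors over a corpus, dropping terms whose document
    frequency falls outside [min_df, max_df] (common vs. rare words)."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    vocab = {t for t, c in df.items() if min_df <= c / n <= max_df}
    vectors = []
    for toks in tokenized:
        tf = Counter(t for t in toks if t in vocab)
        vectors.append({t: f * math.log(n / df[t]) for t, f in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term->weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

With this setup, two scholars who share publication terms score above zero, and scholars with disjoint vocabularies score exactly zero.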
      <p>
        2) Topic Similarity: This similarity was determined by matching
research interests using topic modeling. We used latent Dirichlet
allocation (LDA) to attribute terms collected from publications to
topics. We chose 30 topics to build the topic model for all scholars.
Based on the model, we then calculated the topic similarity between
any two scholars. The key components were the research topics and the
topical words of each research topic [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ].
      </p>
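The paper does not state which measure compares two scholars' 30-dimensional topic distributions; a hedged sketch using cosine similarity, one common choice for this step, might look as follows:

```python
import math

def topic_similarity(theta_a, theta_b):
    """Cosine similarity between two topic-proportion vectors, e.g. the
    30-dimensional per-scholar distributions produced by an LDA model."""
    dot = sum(a * b for a, b in zip(theta_a, theta_b))
    norm_a = math.sqrt(sum(a * a for a in theta_a))
    norm_b = math.sqrt(sum(b * b for b in theta_b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Scholars with identical topic proportions score 1.0; scholars whose publications fall into entirely different topics score 0.0.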
      <p>3) Co-Authorship Similarity: This similarity approximated
the network distance between the source and recommended users.
For each pair of scholars, we tried to find six possible paths
connecting them, based on their co-authorship relationships. The
network distance is determined by the average length of the six
paths. The key components were the co-authors (as nodes),
co-authorship (as edges), and the distance of the connection between
the two scholars.</p>
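A minimal sketch of this computation over a co-author adjacency list; the path enumeration strategy (depth-limited search over simple paths) is our assumption, since the paper does not detail how the six paths are selected:

```python
def simple_paths(graph, src, dst, max_edges=6):
    """Depth-first enumeration of simple paths from src to dst with at
    most max_edges edges. `graph` maps a scholar to their co-authors."""
    found, stack = [], [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst:
            found.append(path)
            continue
        if len(path) - 1 >= max_edges:
            continue
        for nbr in graph.get(node, ()):
            if nbr not in path:
                stack.append((nbr, path + [nbr]))
    return found

def coauthor_distance(graph, a, b, k=6):
    """Average length (in edges) of up to k shortest co-authorship
    paths between two scholars; infinity if they are not connected."""
    paths = sorted(simple_paths(graph, a, b), key=len)[:k]
    if not paths:
        return float("inf")
    return sum(len(p) - 1 for p in paths) / len(paths)
```

A smaller average path length then corresponds to a stronger co-authorship connection.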
      <p>4) CN3 Interest Similarity: This similarity was determined by
the number of co-bookmarked conference papers and co-connected
authors in the experimental social system (CN3). We simply used
the number of shared items as the CN3 interest similarity. The key
components are the shared conference papers and authors.
5) Geographic Distance: This similarity was a measurement of
the geographic distance between attendees. We retrieved longitude
and latitude data based on attendees’ affiliation information. We
used the Haversine formula to compute the geographic distance
between scholars. The key components are the geographic distance
and affiliation information of the scholars.</p>
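The Haversine step can be sketched directly from its standard formula (the Earth-radius constant below is the usual mean value, not a number given in the paper):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points given as
    (latitude, longitude) pairs in decimal degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))
```

Converting the resulting distance into a similarity score (e.g., an inverse or exponential decay) is a separate design choice the paper does not specify.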
    </sec>
    <sec id="sec-4">
      <title>SECOND STAGE: USER MENTAL MODEL</title>
      <p>
        As a first step towards understanding the design factors of
explanatory interfaces, we deployed a survey through a social
recommender system, Conference Navigator [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], and analyzed data from the respondents. We targeted users who
had created an account and interacted with the system during previous
conference attendance (using the system for at least one conference).
The survey was initiated by sending an invitation to the qualified
users in December 2017. We sent out 89 letters to the conference
attendees of UMAP/HT 2016, and a total of 14 participants (7 female)
replied, forming the pool of participants for the user study. The
participants were from 13 different countries; their ages ranged from
20 to 40 (M=31.36, SE=5.04). We ran an online survey to collect the
necessary demographic information and self-reflections on how to
design an explanation function for the seven explanatory goals [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
      <p>
        The proposed questions were: How can an explanation function
help you perceive the system’s 1) Transparency - explain how the
system works? 2) Scrutability - allow you to tell the system it
is wrong? 3) Trust - increase your confidence in the system? 4)
Persuasiveness - convince you to explore or to follow new friends?
5) Effectiveness - help you make good decisions? 6) Efficiency -
help you make decisions faster? 7) Satisfaction - make using the
system fun and useful? We asked the participants to answer each
question in 50-100 words, in particular reflecting on the explanatory
goals of the social recommendation. The data was published in [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
      </p>
      <p>1) Transparency: 71% of respondents pointed out that the reasons
behind a generated social recommendation help them to perceive higher
system transparency, i.e., the personalized explanation, the linkage
and data sources, the reasoning method, and understandability. We
then summarized the feedback into five factors: 1) The visualization
presents the similarity between my interests and the recommended
person. 2) The visualization presents the relationship between the
recommended person and me. 3) The visualization presents where the
data were retrieved from. 4) The visualization presents more in-depth
information on how the score adds up. 5) The visualization allows
me to see the connections between people and understand how they
are connected.</p>
      <p>2) Scrutability: Half of the respondents mentioned they needed
“inspectable details” to figure out a wrong recommendation. 35%
of respondents suggested a mechanism for accepting user feedback
to improve wrong recommendations, such as a space to submit
user ratings or yes/no options. 14% of respondents preferred a
dynamic exploration process to determine the recommendation
quality. We then summarized the feedback into four factors: 6) The
visualization allows me to understand whether the recommendation
is good or not. 7) The visualization presents the data for making the
recommendations. 8) The visualization allows me to compare and
decide whether the system is correct or wrong. 9) The visualization
allows me to explore and then determine the recommendation quality.
3) Trust: 28% of respondents mentioned that they trusted the
system more when they perceived the benefits of using the
system. 35% of respondents preferred to trust a system with reliable
and informative explanations, i.e., more detailed or more
understandable information. 35% of respondents mentioned they trust a
system that is transparent or has passed their verification. We then
summarized the feedback into three factors: 10) The visualization
presents a convincing explanation to justify the recommendation. 11) The
visualization presents the components (e.g., algorithm) that influenced the
recommendation. 5) The visualization allows me to see the connections
between people and understand how they are connected.</p>
      <p>4) Persuasiveness: Half of the respondents mentioned that an
explanation of social familiarity would persuade them to explore novel
social connections, namely, when shown social context details or
shared interests. 21% of respondents indicated that an informative
interface could boost the exploration of new friendships. 28% of
respondents preferred a design that inspires curiosity, e.g., implicit
relationships. We then summarized the feedback into three factors: 12)
The visualization shows me the shared interests, i.e., why my interests
are aligned with the recommended person. 13) The visualization has
a friendly, easy-to-use interface. 14) The visualization inspires my
curiosity (to discover more information).</p>
      <p>5) Effectiveness: 64% of respondents mentioned that aspects of
social recommendation relevance helped them to make a good decision.
These aspects included explaining the recommendation process and
being understandable or more informative. 28% of respondents
suggested that a reminder of a historical or successful decision could
help them to make a good decision, i.e., a previously made user
decision and success stories. We then summarized the feedback
into three factors: 15) The visualization presents the
recommendation process. 5) The visualization allows me to see the connections
between people and understand how they are connected. 11) The
visualization presents the components (e.g., algorithm) that influenced
the recommendation.</p>
      <p>6) Efficiency: 28% of respondents mentioned that proper
highlighting of the recommendation helped them make decisions faster,
for example, emphasizing relatedness, identifying the top
recommendations, or providing success stories. 28% of respondents
preferred a tunable or visualized interface to accelerate the
decision process, such as tuning the recommendation features or
visualizing the recommendations. However, the explanations may not
always be useful: 21% of respondents argued that the explanation
would prolong the decision process instead of speeding it up, as the
user may need to take extra time to examine the explanations. We
then summarized the feedback into two factors: 16) The
visualization presents highlighted items/information that is strongly related to
me. 17) The visualization presents aggregated, non-obvious relations
to me.</p>
      <p>7) Satisfaction: The feedback on how an explanation can make
the user satisfied with the system was varied. Three aspects each
received 7% of respondents’ preferences: users preferred to view
feedback from the community, to be shown their historical interaction
record, and to be provided a personalized explanation. Two aspects
each received 14% of respondents’ preferences, i.e., a focus on a
friendly user interface and saved decision time. 21% of respondents
reported higher satisfaction when using the explanation as a “small
talk topic”, i.e., as an initial conversation at a conference. 28% of
respondents preferred an interactive interface for perceiving the
system as fun, e.g., a controllable interface. We then summarized
the feedback into four factors: 18) The visualization presents the
feedback from other users, i.e., I can see how others rated the
recommended person. 19) The visualization allows me to tell why this
system recommends the person to me. 1) The visualization presents the
similarity between my interests and the recommended person. 13) The
visualization has a friendly, easy-to-use interface.</p>
      <p>Based on the result of the online survey, we concluded a total of
19 factors in the second stage of building the user mental model:
(1) The visualization presents the similarity between my interests
and the recommended person.
(2) The visualization presents the relationship between the
recommended person and me.
(3) The visualization presents where the data were retrieved from.
(4) The visualization presents more in-depth information on how the
score adds up.
(5) The visualization allows me to see the connections between
people and understand how they are connected.
(6) The visualization allows me to understand whether the
recommendation is good or not.
(7) The visualization presents the data for making the
recommendations.
(8) The visualization allows me to compare and decide whether the
system is correct or wrong.
(9) The visualization allows me to explore and then determine the
recommendation quality.
(10) The visualization presents a convincing explanation to justify
the recommendation.
(11) The visualization presents the components (e.g., algorithm)
that influenced the recommendation.
(12) The visualization shows me the shared interests, i.e., why
my interests are aligned with the recommended person.
(13) The visualization has a friendly, easy-to-use interface.
(14) The visualization inspires my curiosity (to discover more
information).
(15) The visualization presents the recommendation process clearly.
(16) The visualization presents highlighted items/information
that is strongly related to me.
(17) The visualization presents aggregated, non-obvious relations
to me.
(18) The visualization presents feedback from other users, i.e., I
can see how others rated a recommended person.
(19) The visualization allows me to tell why this system
recommends the person to me.</p>
      <p>We also found some factors shared across different explanatory
goals. For example, Factor 1 was shared by the explanatory goals of
Transparency and Satisfaction. Factor 5 was shared by Transparency,
Trust, and Effectiveness. Factor 11 was shared by Trust and
Effectiveness. Factor 13 was shared by Persuasiveness and
Satisfaction.</p>
    </sec>
    <sec id="sec-5">
      <title>THIRD STAGE: TARGET MENTAL MODEL</title>
      <p>In this stage, we conducted a controlled lab study to create
the Target Mental Model. The model is used to identify the key
components of the recommendation model that users might want to be
explainable in the UI. Since the goal is to identify the information
needs of new users, we specifically selected subjects who had never
used the CN3 system. A total of 15 participants (6 female) were
recruited for this study. They were first- or second-year graduate
students (majoring in information sciences) at the University of
Pittsburgh, with ages ranging from 20 to 30 (M=25.73, SE=2.89). No
participant had previous experience using the CN3 system. Each
participant received USD$20 compensation and signed an informed
consent form.</p>
      <p>We asked the subjects to complete a card-sorting task about their
preferences for the 19 factors we identified in the second stage. We
started by presenting the CN3 system (shown in Figure 1) to the
subjects and introducing the five recommendation models through
the Expert Mental Model. After the tutorial, the subjects were asked
to do a closed card sort, assigning cards to four predefined groups:
1) very important; 2) less important; 3) not important; and 4) not
relevant.</p>
      <p>The survey result is reported in Table 1. We found that for the
target users, factors 1, 13, and 16 outperformed the other factors:
more than ten subjects assigned these three factors to the “very
important” group. Factors 2, 6, 10, 12, 14, 15, and 19 formed the
secondary preference group, with at least 10 subjects assigning them
to the “very important” or “less important” groups. The least
preferred factors were 3, 7, 11, and 18, with at least nine subjects
assigning them to the “not important” or “not relevant” groups.</p>
      <p>Based on the card-sorting result, we found that users prefer an
explainable UI that presents the similarity between their interests
and the recommended person (F1). The UI should be friendly and easy
to use (F13) and should highlight the items or information strongly
related to the user (F16). In addition, some factors were also liked
by the subjects, for instance, presenting the mutual relationship
(F2), shared interests (F12), and recommendation process (F15). The
UI should also allow the user to understand (F6) and justify (F10)
the quality of a recommendation, as well as inspire curiosity about
exploration (F14) and the recommendation process (F19).
Interestingly, we also found the users were less interested in a UI
presenting the data source (F3) and raw data (F7), the details of the
algorithm (F11), or the recommendation feedback from other users in
the same community (F18).</p>
      <p>Hence, we decided to filter out the factors that were less
preferred by the subjects. We chose to keep the factors with more
than ten votes in the “Very Important” and “Less Important” groups,
which are F1, F2, F6, F10, F12, F13, F14, F15, F16, and F19; the
chosen factors are highlighted in red in Table 1. We can project the
factors back onto the original explanatory goals. The coverage of
each explanatory goal is listed below: Transparency (40%, 2 out of
5), Scrutability (0%, 0 out of 4), Trust (33%, 1 out of 3),
Persuasiveness (67%, 2 out of 3), Effectiveness (33%, 1 out of 3),
Efficiency (50%, 1 out of 2), and Satisfaction (75%, 3 out of 4).
That is, the Target Mental Model was built through the explanatory
goals of (ranked from high to low importance) Satisfaction,
Persuasiveness, Efficiency, Transparency, Trust, and
Effectiveness.</p>
    </sec>
    <sec id="sec-7">
      <title>FOURTH STAGE: ITERATIVE PROTOTYPING</title>
      <p>The fourth stage, iterative prototyping, was performed within
the same user study as the third stage. After the card-sorting task,
we asked the subjects to identify the ten chosen factors across the
UI prototypes. A total of 25 interfaces (five interfaces for each
recommendation model) were shown in this stage. We used a
within-subject design, i.e., all participants were required to do a
card-sorting task. In each session, the participants were asked to
sort the given five interfaces into groups 1 to 5 (1: Strongly Agree,
5: Strongly Disagree) for each explanatory factor. If an interface
did not contribute to a factor, the participant could mark it as
irrelevant (not applicable). We continued with a semi-structured
interview after each session to collect qualitative feedback.</p>
      <p>There were a total of five card-sorting sessions, one for each of
the five recommendation models. At the beginning of each session, we
introduced the recommendation model through the Expert Mental Model,
i.e., told the participant how the similarity is calculated and what
data were used in the process, to make sure the subject understood
the details of each recommendation model. After that, we provided
five interface printouts, a paper sheet with a table containing the
19 explanatory factors, and a pen; the subjects were expected to
write their rankings on the paper sheet. All subjects took around
80-100 minutes to complete the study.</p>
    </sec>
    <sec id="sec-8">
      <title>Explaining Publication Similarity</title>
      <p>
        The key components of publication similarity are the terms and
term frequencies of the publications as well as the mutual
relationship (i.e., the common terms) between two scholars. We
presented four visual interface prototypes (shown in Figure 2) for
explaining publication similarity and one text-based interface
(E1-1), which simply says “You and [the scholar] have common words
in [W1], [W2], [W3].”
6.1.1 E1-2: Two-way Bar Chart. The bar chart is a common approach to
analyzing text mining outcomes [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] using a histogram of terms and term frequencies. We extended the
design to a two-way bar chart to show the mutual relationship of two
scholars’ publication terms and term frequencies, i.e., one scholar
on a positive scale and the other on a negative scale. The design is
shown in Figure 2a.
6.1.2 E1-3: Word Clouds. A word cloud is a common design for
explaining text similarity [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. We adopted the word cloud design from
[
        <xref ref-type="bibr" rid="ref26">26</xref>
        ], which presents the terms in the cloud and the term frequency
by the font size. This interface provided two word clouds (one for
each scholar) so the user can perceive the mutual relationship. The
design is shown in Figure 2b.
6.1.3 E1-4: Venn Word Cloud. The Venn diagram was recognized as
an effective hybrid explanation interface by Kouki et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. This interface can be considered a combination of a word cloud
and a Venn diagram [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], which presents term frequency using the font size. The unique
terms of each scholar are shown in a different color (green and blue)
while the common terms are presented in the middle, in red, for
determining the mutual relationship. The design is shown in Figure 2c.
6.1.4 E1-5: Interactive Word Cloud. A word cloud can be interactive.
We extended the idea from [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] and used Zoomdata Wordcloud [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], which follows the common approach of visualizing term frequency
with the font size. The font color was selected to distinguish the
scholars’ terms, i.e., a different term color for each scholar. A
slider attached to the bottom of the interface provides real-time
interactive functionality to increase or decrease the number of terms
in the word cloud. The design is shown in Figure 2d.
[Figure 2: (a) E1-2: Two-way Bar Chart; (b) E1-3: Word Clouds;
(c) E1-4: Venn Word Cloud; (d) E1-5: Interactive Word Cloud]
6.1.5 Results. The card-sorting result is presented in Table 2.
We found that the E1-4 Venn Word Cloud was preferred by the
participants, receiving 76 votes in Rank 1 and outperforming the
other four interfaces. According to the post-session interview, 13
subjects agreed E1-4 is the best interface versus the other four
interfaces. The supporting reasons can be summarized as: 1) the Venn
diagram provides common terms in the middle, which highlights the
common terms and shared relationship; 2) it is useful to show
non-overlapping terms on the sides (N=5); and 3) the design is
simple, easy to understand, and requires less time to process (N=3).
Two subjects mentioned they preferred E1-2 the most because
histograms give them the “concrete numbers” for “calculating” the
similarity, which was harder when using word clouds.
      </p>
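The layout of E1-4 rests on a simple set partition of the two scholars' vocabularies; a minimal sketch of that partition (function name ours):

```python
def venn_partition(terms_a, terms_b):
    """Split two scholars' term sets into the three regions of a Venn
    word cloud: unique to A, common (shown in the middle), unique to B."""
    a, b = set(terms_a), set(terms_b)
    return a - b, a & b, b - a
```

Font size in each region would then be driven by term frequency, as in the other word-cloud prototypes.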
    </sec>
    <sec id="sec-9">
      <title>Explaining Topic Similarity</title>
      <p>
        The key component of topic similarity is research topics and
topical words of the scholar as well as its mutual relationship (i.e., the
common research topics) between two scholars. We presented four
visual interfaces prototypes (shown in Figure 3) and one text-based
prototype for explaining the topic similarity. The text-based
interface (E2-1) simply says “You and [the scholar] have common
research topics on [T1], [T2], [T3].”
6.2.1 E2-2: Topical Words. This interface followed the approach
of [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], which attempted to help users interpret the meaning
of each topic by presenting topical words in a table. We adopted
this idea as E2-2 Topical Words, which presents the topical words in
two multi-column tables (each column contains the top 10 words
of one topic). The design is shown in Figure 3a.
6.2.2 E2-3: FLAME. This interface followed Wu and Ester [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ],
who adopted a bar chart and two word clouds to display
opinion-mining results. In their design, each bar is treated
as a “sentiment”; the user can then interpret the model through the figure
(the beta value of each topic) and the table (the topical words). We
extended this idea as E2-3: FLAME, which shows two sets of research
topics (top 5) and the relevant topical words in two word clouds (one
for each scholar). The design is shown in Figure 3b.
6.2.3 E2-4: Topical Radar. The E2-4 Topical Radar was used in Tsai
and Brusilovsky [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. The radar chart is presented on the left. We picked
the top 5 topics (ranked by beta value from a total of 30 topics) of
the user and compared them with those of the examined attendee through
the overlay. A table with topical words is presented on the right
so that the user can inspect the context of each research topic. The
design is shown in Figure 3c.
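The topic selection step behind the radar overlay can be sketched as follows. This is an illustrative sketch only; the beta dictionaries below are hypothetical stand-ins for the 30-topic model’s weights:

```python
def radar_topics(beta_a, beta_b, k=5):
    """Pick each scholar's top-k topics by beta value and report the
    topics that appear on both radars (the overlapping overlay axes).

    beta_a / beta_b: dict mapping topic id -> beta weight (hypothetical shape).
    """
    def top(beta):
        return sorted(beta, key=beta.get, reverse=True)[:k]

    top_a, top_b = top(beta_a), top(beta_b)
    shared = [t for t in top_a if t in top_b]  # overlapping radar axes
    return top_a, top_b, shared

# Illustrative beta weights for two scholars over eight topics T0..T7
beta_a = {f"T{i}": w for i, w in enumerate([.30, .25, .15, .10, .08, .05, .04, .03])}
beta_b = {f"T{i}": w for i, w in enumerate([.05, .28, .22, .02, .18, .12, .07, .06])}
top_a, top_b, shared = radar_topics(beta_a, beta_b)
```

The shared axes are where the two overlaid polygons overlap, which is the region participants cited as making the relevance easy to see.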
6.2.4 E2-5: Topical Bars. We adopted several bar charts in this
interface, E2-5: Topical Bars. The interface shows the top three topics
of the two scholars (top row and second row) and the topical
information (top eight topical words on the y-axis and topic beta value
on the x-axis) using bar charts. The design is shown
in Figure 3d.
Figure 3: (a) E2-2: Topical Words; (b) E2-3: FLAME; (c) E2-4: Topical Radar; (d) E2-5: Topical Bars.
6.2.5 Results. The card-sorting results are presented in Table 2.
We found that E2-4 Topical Radar received 86 votes at Rank 1,
outperforming all other interfaces. E2-3 ended up second, with the most
votes in the Rank 2 group. According to the post-session interviews, 13
subjects agreed that E2-4 was the best of the examined
interfaces. One subject preferred E2-3, and one subject suggested a mix
of E2-3 and E2-4 as the best design. The supporting reasons for E2-4
can be summarized as: 1) it is easy to see the relevance through the
overlapping area of the radar chart and the percentage numbers
in the table (N=12); 2) it is informative for comparing the shared
research topics and topical words (N=9).
      </p>
    </sec>
    <sec id="sec-10">
      <title>6.3 Explaining Co-Authorship Similarity</title>
      <p>
        The key components of co-authorship similarity are the co-authors,
the co-authorships, and the connection distance of the scholars, as well as the
mutual relationship (i.e., the connecting path) between two
scholars. We present five prototype interfaces (shown in Figure 4;
E3-1 presented in text below) for explaining co-authorship similarity.
In addition to four visual interfaces, we also include one
text-based interface (E3-1). That is, “You and [the scholar] have common
co-authors; they are [A1], [A2], [A3].”
6.3.1 E3-2: Correlation Matrix. E3-2 Correlation Matrix was
inspired by Heckel et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], who used it to present overlapping
user-item co-clusters in a scalable and interpretable product
recommendation model. We extended the interface to a user-to-user
correlation matrix in which the user can inspect the scholars’ co-authorship
network. The design is shown in Figure 4(a).
6.3.2 E3-3: ForceAtlas2. E3-3: ForceAtlas2 was inspired by Garnett
et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], who presented a co-authorship graph of NiMCS and
related research with both high- and low-level network structure
and information. Nodes and edges represent authors and
co-authorships, respectively. The graph layout uses the ForceAtlas2
algorithm [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Clusters are calculated via Louvain modularity and
delineated by color. The frequency of co-authorship is calculated
via eigenvector centrality and represented by node size. The design is
shown in Figure 4(b).
6.3.3 E3-4: Strength Graph. E3-4 Strength Graph was inspired by
Tsai and Brusilovsky [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], who presented the co-authorship
network using the D3plus network style [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Nodes and edges
represent authors and co-authorships, respectively. The edge thickness
encodes the weight of the co-authorship (the number of co-authored papers).
Nodes are assigned different colors by group, i.e., the
original scholar, the target scholar, and the via scholars. The design is shown
in Figure 4(c).
6.3.4 E3-5: Social Viz. The E3-5 Social Viz was used in [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. There
were six possible paths (one shortest and five alternatives). The
user is presented on the left as a yellow circle. The target
user is presented on the right in red. The circle size
represents the weight of the scholar, which is determined by
the frequency of appearance across the six paths. For example, the scholar
Peter is the only node through which scholar Chu can reach scholar Nav, so
his circle is the largest (size = 6). The design is shown
in Figure 4(d).
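A minimal sketch of how the shortest connecting path visualized by E3-5 could be computed with breadth-first search; the adjacency data below (mirroring the paper’s Chu, Peter, and Nav example) is a hypothetical stand-in for the CN3 co-authorship data:

```python
from collections import deque

def connecting_path(graph, source, target):
    """Breadth-first search for the shortest co-authorship path between
    two scholars. graph: adjacency dict, author -> set of co-authors.
    Returns the path as a list of authors, or None if not connected.
    """
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for coauthor in graph.get(path[-1], ()):
            if coauthor not in seen:
                seen.add(coauthor)
                queue.append(path + [coauthor])
    return None

# Illustrative network: Peter is the only bridge between Chu and Nav
graph = {
    "Chu": {"Peter"},
    "Peter": {"Chu", "Nav", "Amy"},
    "Nav": {"Peter"},
    "Amy": {"Peter"},
}
path = connecting_path(graph, "Chu", "Nav")
```

Running the search over the graph with several targets (or with edges removed) yields the alternative paths; counting how often each scholar appears across those paths gives the circle-size weighting described above.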
6.3.5 Results. The card-sorting results are presented in Table 2.
We found that E3-4 Strength Graph was preferred by the participants,
receiving 45 votes at Rank 1. However, the votes were close to those for
E3-2 Correlation Matrix (37 votes) and E3-3 ForceAtlas2 (32 votes).
According to the post-session interviews, four subjects agreed that E3-4
was the best of the five interfaces. The supporting
reasons were that the interface highlighted the mutual relations and
let the user understand the path between the two scholars; the
arrows and edge thickness were also useful. Two subjects supported
E3-2; they liked that the correlation matrix provided clear numbers
and correlation information that was easier for them to process. Three
subjects supported E3-3; they preferred that the interface provided
high-level information, a “big picture”. E3-3
would also be good for exploring the co-authorship network beyond the
connecting path, although the interface was reported to be too
complicated as an explanation. Four subjects supported E3-5; they
enjoyed the simple, clear, and “straightforward” connecting path as
the explanation of the co-authorship network.
      </p>
    </sec>
    <sec id="sec-11">
      <title>6.4 Explaining CN3 Interest Similarity</title>
      <p>
        The key components of CN3 interest similarity are the papers and authors
bookmarked in the system, as well as the mutual relationship (i.e.,
the common bookmarks) between two scholars. We present five
prototype interfaces (shown in Figure 5; E4-1 presented in the
text below) for explaining CN3 interest similarity. In addition to four
visual interfaces, we also include one text-based interface (E4-1).
That is, “You and [the scholar] have common bookmarks; they
are [P1], [P2], [P3].”
6.4.1 E4-2: Similar Keywords. E4-2 Similar Keywords was proposed
and deployed in Conference Navigator [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. We extended the
interface to explain shared bookmarks between two scholars. The
interface presents the scholars on the two sides and the common
co-bookmarked items (e.g., the five common co-bookmarked papers or
authors) in the middle. A strong (solid line) or weak (dashed line) tie
connects each item depending on whether it was bookmarked by one side
or by both sides. The design is shown in Figure 5(a).
6.4.2 E4-3: Tagsplanations. E4-3 Tagsplanations was proposed by
Vig et al. [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]. The idea is to show the tags, the user preference, and the
relevance used for recommending movies. We extended the
interface to explain the co-bookmarking information. In our design,
the co-bookmarked items are listed and ranked by their social
popularity, i.e., how many users have followed/bookmarked the
item. The design is shown in Figure 5(b).
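The popularity ranking in our E4-3 extension can be sketched as a simple count over the system-wide bookmark log. This is an illustration under assumed inputs; the item ids and the `(user, item)` log below are hypothetical stand-ins for the CN3 bookmarking data:

```python
from collections import Counter

def rank_by_popularity(bookmarks_a, bookmarks_b, all_bookmarks):
    """Rank the items co-bookmarked by two scholars by social popularity,
    i.e., how many users in the whole system bookmarked each item.

    bookmarks_a / bookmarks_b: sets of item ids for the two scholars.
    all_bookmarks: iterable of (user, item) pairs across all users.
    """
    popularity = Counter(item for _, item in all_bookmarks)
    shared = bookmarks_a & bookmarks_b
    return sorted(shared, key=lambda item: popularity[item], reverse=True)

# Illustrative data: two scholars' bookmarks plus a system-wide log
a = {"P1", "P2", "P3"}
b = {"P2", "P3", "P4"}
log = [("u1", "P2"), ("u2", "P2"), ("u3", "P3"), ("u1", "P4")]
ranking = rank_by_popularity(a, b, log)
```

Only the intersection of the two bookmark sets is ranked, so the list shows exactly the co-bookmarked items, ordered by how widely the community follows them.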
6.4.3 E4-4: Venn Tags. The study of [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] pointed out that users
preferred the Venn diagram as an explanation in a recommender
system. In the E4-4: Venn Tags interface, we implemented the
same idea with the bookmarked items. The idea is to present each
bookmarked item, as an icon, in a Venn diagram. The two sides
contain the items bookmarked by only one party; the co-bookmarked
or co-followed items are placed in the middle. Users can
hover over an icon for detailed information, i.e., the paper title or author
name. The design is shown in Figure 5(c).
6.4.4 E4-5: Itemized List. An itemized list has been adopted to
explain bookmarks in [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. We proposed E4-5: Itemized List, which
presents the bookmarked or followed items in two lists. The design
is shown in Figure 5(d).
6.4.5 Results. The card-sorting results are presented in Table 2.
We found that E4-4 Venn Tags was preferred by the participants,
receiving 64 votes at Rank 1 and outperforming the other
four interfaces. E4-4 Venn Tags was also the most favored at Rank 2,
receiving 49 votes. According to the post-session interviews,
eight subjects agreed that E4-4 was the best of the five
interfaces. The supporting reasons can be summarized as: 1)
the Venn diagram is more familiar or clearer than the other interfaces
(N=4); 2) the Venn diagram is simple and easy to understand (N=4).
Three subjects preferred E4-3 the most because the
interface provides extra attributes, does not require hovering for details,
and is easy to use.
      </p>
    </sec>
    <sec id="sec-12">
      <title>6.5 Explaining Geographic Similarity</title>
      <p>
        The key components of geographic similarity are the locations and the distance
of the two scholars, as well as their mutual relationship (i.e., the
geographic distance). We present five prototype interfaces
(shown in Figure 6; E5-1 presented in the text below) for explaining
geographic similarity. In addition to four visual interfaces,
we also include one text-based interface (E5-1). That is, “From
[Institution A] to [sample]’s affiliation ([Institution B]) = N miles.”
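The N miles in the E5-1 template could be computed as a great-circle (haversine) distance between the two affiliations. This is one possible sketch, not necessarily how the system computes it; the coordinates below are approximate and purely illustrative:

```python
from math import radians, sin, cos, asin, sqrt

def distance_miles(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in miles."""
    earth_radius_miles = 3958.8
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    # Haversine formula: a is the squared half-chord length between the points
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * earth_radius_miles * asin(sqrt(a))

# Illustrative example: Pittsburgh to Los Angeles (approximate coordinates)
d = distance_miles(40.44, -79.99, 34.05, -118.24)
```

Note that this straight-line distance differs from the driving or flight routes rendered by the map-based styles below, which come from the mapping service rather than from the recommendation model.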
6.5.1 E5-2: Earth Style. Using Google Maps [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] to explain
geographic distance in a social recommender system has been discussed
in Tsai and Brusilovsky [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. We extended the interface to a different
style. In E5-2 Earth Style, we “zoom out” the map to the earth’s surface
and place the two connected icons (with the geographic distance) on
the map. The design is shown in Figure 6(a).
Figure 6: (a) E5-2: Earth Style; (b) E5-3: Navigation Style; (c) E5-4: Icon Style; (d) E5-5: Label Style.
6.5.2 E5-3: Navigation Style. E5-3 Navigation Style uses the
same Google Maps API (as in E5-2) but presents navigation
between the two locations, either by car or by flight. Note that the
transportation time, i.e., the flight or driving time shown in E5-2 or E5-3,
was not considered in the recommendation model. The design is
shown in Figure 6(b).
6.5.3 E5-4: Icon Style. E5-4 Icon Style uses the same Google
Maps API (as in E5-2) but presents two icons on the map
without any navigation information. Users can hover to see
the detailed affiliation, but the geographic distance information is
not presented. The design is shown in Figure 6(c).
6.5.4 E5-5: Label Style. E5-5 Label Style uses the same Google
Maps API (as in E5-2) but presents two labels on the map
without any navigation information. Users can see the detailed
affiliation profile through the floating label, without extra clicking
or hovering interactions. The design is shown in Figure 6(d).
6.5.5 Results. The card-sorting results are presented in Table 2. We
found that E5-3 Navigation Style was preferred by the participants,
receiving 42 votes at Rank 1. However, the votes were close to those for
E5-5 Label Style (40 votes). According to the post-session interviews,
six subjects agreed that E5-3 was the best of the five
interfaces, but three subjects specifically mentioned that
the navigation function was irrelevant to explaining or exploring
the social recommendations. The supporting reasons for E5-3 can
be summarized as: 1) the map is informative (N=2); 2) it is useful
to see the navigation (N=5). Three subjects preferred
E5-5 the most because the label contains the affiliation information, so
they can understand the affiliation without extra actions. Although
there is no geographic distance information, one subject pointed
out that he would realize the distance after knowing the affiliation title.
      </p>
    </sec>
    <sec id="sec-13">
      <title>7 DISCUSSION AND CONCLUSIONS</title>
      <p>In this work-in-progress paper, we presented a participatory process
of bringing explanation interfaces to a social recommender system.
We proposed four stages to respond to the challenge of
identifying the key components of the explanation models and mental
models. In the first stage, we established the Expert Mental Model by
discussing the key components (based on the similarity algorithm)
of each recommendation model. In the second stage, we reported
an online survey of current system users (N=14) and identified
19 explanatory goals as the User Mental Model. In the third stage,
we reported the card-sorting results of a controlled user study
(N=15) that established the Target Mental Model through the target
users’ preferences among the explanatory factors.</p>
      <p>In the fourth stage, we proposed a total of 25 explanation
interfaces for the five recommendation models and reported the card-sorting
and semi-structured interview results. We found that, in general, the
participants preferred the visualization interfaces over the text-based
interfaces. Based on the study, we found that E1-4: Venn Word Cloud, E2-4:
Topical Radar, E3-4: Strength Graph, E4-4: Venn Tags, and E5-3:
Navigation Style were preferred by the study participants. We further
discussed the top-rated and second-rated explanation interfaces
and the user feedback in each session. Based on the experimental results,
we outlined design guidelines for bringing explanation
interfaces to a real-world social recommender system.</p>
      <p>A further controlled study will be required to test whether the proposed
explanation interfaces can achieve the target mental model we identified
in this paper. In future work, we plan to implement the
top-rated explanation interfaces and deploy them to the CN3
system. Moreover, we expect to pair the explanation interfaces
with an information-seeking task so that we can analyze how and why
a user adopts the explanation interfaces when exploring the social
recommendations.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Malin</given-names>
            <surname>Eiband</surname>
          </string-name>
          , Hanna Schneider,
          <string-name>
            <given-names>Mark</given-names>
            <surname>Bilandzic</surname>
          </string-name>
          , Julian Fazekas-Con,
          <string-name>
            <given-names>Mareike</given-names>
            <surname>Haug</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Heinrich</given-names>
            <surname>Hussmann</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Bringing Transparency Design into Practice</article-title>
          .
          <source>In 23rd International Conference on Intelligent User Interfaces. ACM</source>
          ,
          <volume>211</volume>
          -
          <fpage>223</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Gerhard</given-names>
            <surname>Friedrich</surname>
          </string-name>
          and
          <string-name>
            <given-names>Markus</given-names>
            <surname>Zanker</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>A taxonomy for generating explanations in recommender systems</article-title>
          .
          <source>AI</source>
          Magazine
          <volume>32</volume>
          ,
          <issue>3</issue>
          (
          <year>2011</year>
          ),
          <fpage>90</fpage>
          -
          <lpage>98</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Alex</given-names>
            <surname>Garnett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Grace</given-names>
            <surname>Lee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Judy</given-names>
            <surname>Illes</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Publication trends in neuroimaging of minimally conscious states</article-title>
          .
          <source>PeerJ</source>
          <volume>1</volume>
          (
          <year>2013</year>
          ),
          <year>e155</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Reinhard</given-names>
            <surname>Heckel</surname>
          </string-name>
          , Michail Vlachos, Thomas Parnell, and
          <string-name>
            <given-names>Celestine</given-names>
            <surname>Dünner</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Scalable and interpretable product recommendations via overlapping co-clustering</article-title>
          .
          <source>In Data Engineering (ICDE)</source>
          ,
          <source>2017 IEEE 33rd International Conference on. IEEE</source>
          ,
          <fpage>1033</fpage>
          -
          <lpage>1044</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Jonathan L.</given-names>
            <surname>Herlocker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Joseph A.</given-names>
            <surname>Konstan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>John</given-names>
            <surname>Riedl</surname>
          </string-name>
          .
          <year>2000</year>
          .
          <article-title>Explaining collaborative filtering recommendations</article-title>
          .
          <source>In Proceedings of the 2000 ACM conference on Computer supported cooperative work. ACM</source>
          ,
          <volume>241</volume>
          -
          <fpage>250</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Google</given-names>
            <surname>Inc</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Google Maps Directions API</article-title>
          . https://developers.google.com/maps/documentation/directions/intro
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Bart P.</given-names>
            <surname>Knijnenburg</surname>
          </string-name>
          , Svetlin Bostandjiev,
          <string-name>
            <given-names>John</given-names>
            <surname>O'Donovan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Alfred</given-names>
            <surname>Kobsa</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Inspectability and Control in Social Recommenders</article-title>
          .
          <source>In 6th ACM Conference on Recommender System</source>
          .
          <fpage>43</fpage>
          -
          <lpage>50</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Pigi</given-names>
            <surname>Kouki</surname>
          </string-name>
          , James Schafer, Jay Pujara,
          <string-name>
            <given-names>John</given-names>
            <surname>O'Donovan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Lise</given-names>
            <surname>Getoor</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>User preferences for hybrid explanations</article-title>
          .
          <source>In Proceedings of the Eleventh ACM Conference on Recommender Systems. ACM</source>
          ,
          <volume>84</volume>
          -
          <fpage>88</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Lawrence</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Customize D3plus network style</article-title>
          . https://codepen.io/choznerol/pen/evaYyv
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Julian</given-names>
            <surname>McAuley</surname>
          </string-name>
          and
          <string-name>
            <given-names>Jure</given-names>
            <surname>Leskovec</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Hidden factors and hidden topics: understanding rating dimensions with review text</article-title>
          .
          <source>In Proceedings of the 7th ACM conference on Recommender systems. ACM</source>
          ,
          <volume>165</volume>
          -
          <fpage>172</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Conference</given-names>
            <surname>Navigator</surname>
          </string-name>
          .
          <year>2018</year>
          . Paper Tuner. http://halley.exp.sis.pitt.edu/cn3/portalindex.php
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Alexis</given-names>
            <surname>Papadimitriou</surname>
          </string-name>
          , Panagiotis Symeonidis, and
          <string-name>
            <given-names>Yannis</given-names>
            <surname>Manolopoulos</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>A generalized taxonomy of explanations styles for traditional and social recommender systems</article-title>
          .
          <source>Data Mining and Knowledge Discovery</source>
          <volume>24</volume>
          ,
          <issue>3</issue>
          (
          <year>2012</year>
          ),
          <fpage>555</fpage>
          -
          <lpage>583</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Pearl</given-names>
            <surname>Pu</surname>
          </string-name>
          and
          <string-name>
            <given-names>Li</given-names>
            <surname>Chen</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>Trust-inspiring explanation interfaces for recommender systems</article-title>
          .
          <source>Knowledge-Based Systems 20</source>
          ,
          <issue>6</issue>
          (
          <year>2007</year>
          ),
          <fpage>542</fpage>
          -
          <lpage>556</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Amit</given-names>
            <surname>Sharma</surname>
          </string-name>
          and
          <string-name>
            <given-names>Dan</given-names>
            <surname>Cosley</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Do social explanations work?: studying and modeling the effects of social explanations in recommender systems</article-title>
          .
          <source>In Proceedings of the 22nd international conference on World Wide Web. ACM</source>
          ,
          <volume>1133</volume>
          -
          <fpage>1144</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Julia</given-names>
            <surname>Silge</surname>
          </string-name>
          and
          <string-name>
            <given-names>David</given-names>
            <surname>Robinson</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>tidytext: Text mining and analysis using tidy data principles in r</article-title>
          .
          <source>The Journal of Open Source Software</source>
          <volume>1</volume>
          ,
          <issue>3</issue>
          (
          <year>2016</year>
          ),
          <fpage>37</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Nava</given-names>
            <surname>Tintarev</surname>
          </string-name>
          and
          <string-name>
            <given-names>Judith</given-names>
            <surname>Masthoff</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Evaluating the effectiveness of explanations for recommender systems</article-title>
          .
          <source>User Modeling and User-Adapted Interaction 22</source>
          ,
          <fpage>4</fpage>
          -
          <lpage>5</lpage>
          (
          <issue>1</issue>
          Oct.
          <year>2012</year>
          ),
          <fpage>399</fpage>
          -
          <lpage>439</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Nava</given-names>
            <surname>Tintarev</surname>
          </string-name>
          and
          <string-name>
            <given-names>Judith</given-names>
            <surname>Masthoff</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Explaining recommendations: Design and evaluation</article-title>
          .
          <source>In Recommender systems handbook</source>
          . Springer,
          <fpage>353</fpage>
          -
          <lpage>382</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Chun-Hua</given-names>
            <surname>Tsai</surname>
          </string-name>
          and
          <string-name>
            <given-names>Peter</given-names>
            <surname>Brusilovsky</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Providing Control and Transparency in a Social Recommender System for Academic Conferences</article-title>
          .
          <source>In Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization. ACM</source>
          ,
          <volume>313</volume>
          -
          <fpage>317</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Chun-Hua</given-names>
            <surname>Tsai</surname>
          </string-name>
          and
          <string-name>
            <given-names>Peter</given-names>
            <surname>Brusilovsky</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Beyond the Ranked List: User-Driven Exploration and Diversification of Social Recommendation</article-title>
          .
          <source>In 23rd International Conference on Intelligent User Interfaces. ACM</source>
          ,
          <volume>239</volume>
          -
          <fpage>250</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>Chun-Hua</given-names>
            <surname>Tsai</surname>
          </string-name>
          and
          <string-name>
            <given-names>Peter</given-names>
            <surname>Brusilovsky</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Explaining Social Recommendations to Casual Users: Design Principles and Opportunities</article-title>
          .
          <source>In Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion. ACM</source>
          ,
          <volume>59</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>Chun-Hua</given-names>
            <surname>Tsai</surname>
          </string-name>
          and
          <string-name>
            <given-names>Peter</given-names>
            <surname>Brusilovsky</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Exploring Social Recommendations with Visual Diversity-Promoting Interfaces</article-title>
          .
          <source>TiiS 1</source>
          ,
          <issue>1</issue>
          (
          <year>2019</year>
          ),
          <fpage>1</fpage>
          -
          <lpage>1</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Chun-Hua</given-names>
            <surname>Tsai</surname>
          </string-name>
          and
          <string-name>
            <given-names>Peter</given-names>
            <surname>Brusilovsky</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Explaining Recommendations in an Interactive Hybrid Social Recommender</article-title>
          .
          <source>In Proceedings of the 2019 Conference on Intelligent User Interface. ACM</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Katrien</given-names>
            <surname>Verbert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Denis</given-names>
            <surname>Parra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Peter</given-names>
            <surname>Brusilovsky</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Erik</given-names>
            <surname>Duval</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Visualizing recommendations to support exploration, transparency and controllability</article-title>
          .
          <source>In Proceedings of the 2013 international conference on Intelligent user interfaces. ACM</source>
          ,
          <fpage>351</fpage>
          -
          <lpage>362</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>Jesse</given-names>
            <surname>Vig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Shilad</given-names>
            <surname>Sen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>John</given-names>
            <surname>Riedl</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Tagsplanations: explaining recommendations using tags</article-title>
          .
          <source>In Proceedings of the 14th international conference on Intelligent user interfaces. ACM</source>
          ,
          <fpage>47</fpage>
          -
          <lpage>56</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>Yao</given-names>
            <surname>Wu</surname>
          </string-name>
          and
          <string-name>
            <given-names>Martin</given-names>
            <surname>Ester</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>FLAME: A Probabilistic Model Combining Aspect Based Opinion Mining and Collaborative Filtering</article-title>
          .
          <source>In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining (WSDM '15)</source>
          . ACM, New York, NY, USA,
          <fpage>199</fpage>
          -
          <lpage>208</lpage>
          . https://doi.org/10.1145/2684822.2685291
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>Yao</given-names>
            <surname>Wu</surname>
          </string-name>
          and
          <string-name>
            <given-names>Martin</given-names>
            <surname>Ester</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>FLAME: A Probabilistic Model Combining Aspect Based Opinion Mining and Collaborative Filtering</article-title>
          .
          <source>In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining. ACM</source>
          ,
          <fpage>199</fpage>
          -
          <lpage>208</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <collab>Zoomdata</collab>
          .
          <year>2018</year>
          .
          <article-title>Real-time Interactive Zoomdata Wordcloud</article-title>
          . https://visual.ly/community/interactive-graphic/social-media/real-time-interactive-zoomdata-wordcloud
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>