             Enhancing Recommendation Diversity Through a Dual
                         Recommendation Interface
                                Chun-Hua Tsai                                                               Peter Brusilovsky
                            University of Pittsburgh                                                      University of Pittsburgh
                             135 N Bellefield Ave                                                          135 N Bellefield Ave
                             Pittsburgh, PA 15260                                                          Pittsburgh, PA 15260
                                cht77@pitt.edu                                                               peterb@pitt.edu

Joint Workshop on Interfaces and Human Decision Making for Recommender Systems, Como, Italy, 2017. Copyright for the individual papers remains with the authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors. DOI:

ABSTRACT
The beyond-relevance objectives of recommender systems are drawing more and more attention. For example, a diversity-enhanced interface has been shown to be positively associated with overall levels of user satisfaction. However, little is known about how a diversity-enhanced interface can help users to accomplish various real-world tasks. In this paper, we present a visual diversity-enhanced interface that presents recommendations in a two-dimensional scatter plot. Our goal was to design a recommender system interface that lets users explore the different relevance prospects of recommended items in parallel and that stresses their diversity. A within-subject user study with real-life tasks was conducted to compare our visual interface to a standard ranked list interface. Our user study results show that the visual interface significantly reduced the exploration effort required for the assigned tasks. Also, the users' subjective evaluations show significant improvement on many user-centric metrics. We show that users explored a diverse set of recommended items while experiencing an improvement in overall user satisfaction.

KEYWORDS
Recommender System; Diversity; Beyond Relevance; User Control

1    INTRODUCTION
Recommending people in a social system is a challenging task. The user may look for other people for a range of reasons; for example, they may wish to re-connect with an acquaintance or to find a new friend with similar interests [5]. This diversity among user needs makes it difficult to generate a ranked list that fits all cases.
   A specific case in which a single ranked list might not work well is a parallel hybrid recommendation system that fuses several recommendation sources. In this case, different sources might be preferred for different needs (i.e., social similarity could work best for finding known friends, while content-based similarity could be used to find people with similar interests). Several authors have argued that the best approach in this situation is to offer users the ability to control the fusion by choosing various algorithms [5, 6] or data sources [2]. However, it is not clear whether a casual user with no computer science background can fine-tune the provided interface to adjust the results to their exploration interests. Providing a visual interface that makes the process of fusion more transparent - for example, by showing recommender sources and their overlaps as set diagrams [16, 26] - could further address this problem. However, the set-based approach has limited applicability, since it ignores the strength of relevance (which is a continuous variable). In this paper, we attempt to overcome the limitations of set-based visual fusion by exploring a visual fusion approach that represents the continuous nature of the relevance aspects while keeping the fusion process transparent.
   When selecting a visual metaphor for the transparent fusion of recommendation sources, we focused on better informing users about the diversity of the recommender results. It has been demonstrated that a proper user interface can promote diversity in information exploration. A diversity-enhancing interface evaluated in [8] led to higher user satisfaction than a ranked list interface. Several attempts to design a diversity-focused interface using a dimension reduction technique to present opinion similarity by latent distance have been presented in [7, 20, 27]. However, the clustering distance was not easily interpreted, and as a result, users were unable to make a personalized judgment.
   In this paper, we use a two-dimensional scatter plot visualization to present recommendations with several dimensions of relevance. A scatter plot is an intuitive way to present multidimensional data [10]. In our context, the scatter plot interface was used to help users combine different aspects of relevance for each recommended item. The user can further filter the recommendation results within each dimension. We conducted a user study during an international conference to compare the ranked list and scatter plot interfaces. Our user study results show that the new visual interface did reduce exploration effort on the proposed tasks. Also, the users' subjective evaluations show significant improvement on many user-centric metrics. We provide empirical evidence that users explored a diverse set of recommended items while improving their overall levels of satisfaction.

2    RELATED WORKS
A social recommender system should provide more diverse content so that the user can extend their social connections beyond the personal bubble. However, not every user values diversity to the same degree [1]; the level of diversity-seeking is an existing individual difference. For instance, [14] classified people into two groups: "Diversity Seeking" and "Challenge Averse". The authors described the difference in satisfaction and level of diversity exposure between the two groups, which explains individual differences in the information-seeking process. A social recommender system with enhanced diversity therefore needs an interface that fits users' prior convictions.
Furthermore, merely presenting different information may not lead users to interact with diverse content. A reinforcement effect may occur if the user feels threatened by the unfamiliar information [11].
   Providing a visual interface is one approach to improving recommendation diversity, and several previous works have explored it. For example, adopting a visual discovery interface can increase the click-through rate (CTR) across different item categories in an e-commerce website [20]. The user can explore new or relevant products without the need for search queries. The key factor of such an interface is giving the user control over filtering the recommended content. The authors of [17, 27] proposed user-controllable interfaces that let the user interactively change the ranking or the feature weighting to obtain a better-personalized ranking. The works [7, 8] proposed interfaces that show varied recommendation results and help the user perceive the diversity of the recommendations. The study in [23] showed a more diverse exploration pattern when users adopted a two-dimensional interface rather than the standard ranked list.
   Some design principles emerge from this literature. The study in [27] adopted dimension reduction techniques to project multidimensional data into two or three dimensions for visualization. However, users could not distinguish the meaning of each axis, which pushed them to explore only the items closest to them [7]. The work in [3] argues for considering "diverse conceptions of democracy" when designing a diversity-enhancing tool or application. The literature shows that merely presenting a comparison of differences is not enough to help users explore more diverse results. In addition, the design should create a perception of the differences among recommended items. That is, a useful diversity-enhancing interface should give users control and make the filtered results interpretable.

3    BEYOND THE RANKING LIST
We propose a recommender system with a dual interface, combining a ranking list and a visual scatter plot, to help conference attendees find other relevant attendees to meet. The ranking list is a classic way of presenting recommended results in a single dimension, listed from high to low relevance. The scatter plot was added as a diversity-promoting interface to show the recommended results in two dimensions, with the second dimension used to reveal the overall diversity. Figure 1 illustrates the design of the dual interface.
   Section A is the proposed scatter plot. The interface presents each item (a conference attendee) on the canvas as a circle. The user can move the mouse over a circle to highlight the selection. Section B shows the control panel with which the user can interact. The user can select the number of recommendations to display, as well as the major feature and the extra feature used to visualize the recommendations on the scatter plot. The major feature is used to rank the results along the X axis and in the ranked list (Section C), while the extra feature shows the diversity of the results in the selected aspect along the Y axis. To further investigate the diversity of the displayed recommendations, the user can also use a single aspect of the data as a category to color-code the results. The default category is Smart Balance, which color-codes the four quadrants formed by splitting each axis at the 0.5 ratio. Section C is the standard ranking list. More exactly, it is a combination of four ranked lists produced by the four recommender engines explained below. To make the four dimensions clearer, the normalized relevance of each user to the target user, as generated by each recommender engine, is shown on the right side of the ranked list. Section D presents more detailed information about the person selected in either the visualization or the ranked list. Among other aspects, four of the six tabs visually explain how each recommender engine calculates the relevance of the selected user to the target user. The design details of the explanation functions can be found in [24].
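   The following is a minimal, self-contained sketch (in Python with matplotlib, not the actual CN3 web implementation) of how such a scatter-plot view could be rendered; the attendee data, feature names, and the 0.5 quadrant split are placeholder assumptions used only for illustration.

# Illustrative sketch only (not the CN3 implementation): render the scatter-plot
# view with a major feature on X, an extra feature on Y, and the "Smart Balance"
# quadrant coloring split at the 0.5 percentile.
import matplotlib.pyplot as plt

# Hypothetical normalized relevance scores (0-1 percentiles) per attendee.
attendees = [
    {"name": "A", "academic": 0.91, "social": 0.20},
    {"name": "B", "academic": 0.75, "social": 0.82},
    {"name": "C", "academic": 0.30, "social": 0.65},
    {"name": "D", "academic": 0.15, "social": 0.10},
]

def quadrant_color(x, y, threshold=0.5):
    # Color-code the four quadrants formed by the 0.5/0.5 split.
    if x >= threshold and y >= threshold:
        return "tab:green"    # high major, high extra
    if x >= threshold:
        return "tab:blue"     # high major, low extra
    if y >= threshold:
        return "tab:orange"   # low major, high extra
    return "tab:gray"         # low major, low extra

major, extra = "academic", "social"   # chosen in the control panel (Section B)
xs = [a[major] for a in attendees]
ys = [a[extra] for a in attendees]
colors = [quadrant_color(x, y) for x, y in zip(xs, ys)]

fig, ax = plt.subplots()
ax.scatter(xs, ys, c=colors, s=120)
for a, x, y in zip(attendees, xs, ys):
    ax.annotate(a["name"], (x, y), textcoords="offset points", xytext=(5, 5))
ax.axvline(0.5, ls="--", lw=0.8)   # quadrant boundaries of "Smart Balance"
ax.axhline(0.5, ls="--", lw=0.8)
ax.set_xlabel(f"major feature: {major} (ranking dimension)")
ax.set_ylabel(f"extra feature: {extra} (diversity dimension)")
plt.show()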
3.1    Personalized Relevance Model
To rank other attendees by their relevance to the target user, the system uses four separate recommender engines that rank the attendees along four dimensions that we call features: the text similarity of their academic publications, their social similarity through the co-authorship network, their current interests reflected by CN3 activities, and the distance of their affiliation to the target user. Each of the features is defined as follows:

(1) The Academic feature is determined by the degree of publication similarity between two attendees using cosine similarity [12, 25]. The function is defined as:

    Sim_{Academic}(x, y) = (t_x \cdot t_y) / (\|t_x\| \|t_y\|)    (1)

where t_x and t_y are the publication word vectors of users x and y.

(2) The Social feature approximates the social similarity between the target and recommended users by combining co-authorship network distance and common neighbor similarity from publication data. We adopted the depth-first search (DFS) method to calculate the shortest path length p [19] and common neighborhood (CN) [15] to obtain the number n of overlapping co-authors within two degrees.

    Sim_{Social}(x, y) = p + n    (2)

for users x and y.

(3) The Interest feature is determined by the number of co-bookmarked papers and co-connected authors within the experimental social system. The function is defined as:

    Sim_{Interest}(x, y) = |b_x \cap b_y| + |c_x \cap c_y|    (3)

where b_x and b_y represent the paper bookmarks of users x and y, and c_x and c_y represent their friend connections.

(4) The Distance feature is simply a measure of the geographic distance between attendees. We retrieve longitude and latitude data based on attendees' affiliation information and use the Haversine formula to compute the geographic distance between any pair of attendees [25].

    Sim_{Distance}(x, y) = Haversine(Geo_x, Geo_y)    (4)

where Geo_x and Geo_y are the latitude and longitude coordinates of users x and y.
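   For concreteness, the sketch below illustrates Equations (1)-(4) with simplified data structures (word-count dictionaries, an adjacency-list co-authorship graph, bookmark and contact sets, and (latitude, longitude) pairs). It is an illustrative approximation of the four engines, not the production code; in particular, the shortest-path routine below uses a plain breadth-first traversal for brevity, whereas the paper reports a DFS-based routine [19].

# Simplified illustration of Equations (1)-(4); not the production CN3 engines.
import math
from collections import deque

def sim_academic(t_x, t_y):
    # Eq. (1): cosine similarity of two publication word vectors (dicts).
    dot = sum(t_x[w] * t_y.get(w, 0.0) for w in t_x)
    norm_x = math.sqrt(sum(v * v for v in t_x.values()))
    norm_y = math.sqrt(sum(v * v for v in t_y.values()))
    return dot / (norm_x * norm_y) if norm_x and norm_y else 0.0

def shortest_path_length(graph, x, y):
    # Shortest co-authorship path between x and y (breadth-first traversal
    # here for simplicity; the paper uses a DFS-based routine [19]).
    seen, queue = {x}, deque([(x, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == y:
            return dist
        for nb in graph.get(node, ()):
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, dist + 1))
    return math.inf

def sim_social(graph, x, y):
    # Eq. (2): path length p plus number n of common co-authors
    # (unreachable pairs contribute 0 to the path term in this sketch).
    p = shortest_path_length(graph, x, y)
    n = len(set(graph.get(x, ())) & set(graph.get(y, ())))
    return (0 if p == math.inf else p) + n

def sim_interest(bookmarks_x, bookmarks_y, contacts_x, contacts_y):
    # Eq. (3): co-bookmarked papers plus co-connected authors.
    return len(bookmarks_x & bookmarks_y) + len(contacts_x & contacts_y)

def sim_distance(geo_x, geo_y, radius_km=6371.0):
    # Eq. (4): Haversine distance (km) between two (lat, lon) pairs in degrees.
    lat1, lon1, lat2, lon2 = map(math.radians, (*geo_x, *geo_y))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

   In the live system, each engine's raw score is further normalized to a percentile before being shown on a scatter-plot axis, as described in Section 3.2.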
                         Figure 1: (A) Scatter Plot; (B) Control Panel; (C) Ranking List; (D) User Profile Page.


3.2    Diversity Navigation Model
The system determines a personalized relevance score for all conference attendees. Instead of ranking the recommended people by a single ensemble value, the user can filter the items based on multiple aspects of relevance through our system. There are two kinds of diversification.
   1) Feature diversification: the user can select any two of the proposed features and spot recommended items at the intersection of their relevance. All of the proposed features are calculated on different scales. For example, the distance feature is a physical distance in miles, while the academic feature is calculated as a percentage. To enable comparison across diverse features, we adopted the standard Z-score to normalize all the features to the same scale from 0 to 1. The function is defined as:

    ZScore = (x_i - \mu_j) / \sigma_j    (5)

where x_i is the score of the i-th recommended item and \mu_j and \sigma_j are the mean and standard deviation of the corresponding feature j, for j from 1 to 4. We then use the standard Z-table to convert the Z-score to the corresponding percentile p_ij. Hence, we can place all the features on the same scale for presentation in the ranking list or the scatter plot diagram.
   2) Coverage diversification: a diversification model that helps the user select recommended items from different categories [9]. In the SCATTER interface, we color-code the items by category, such as title, position, and country. In the RANK interface, we list the category as a column for the user to inspect.
   We can then measure the diversity of the user's selections through the two diversification models. We observe the user's interactions with items from different "quadrants" (feature intersections) [21], such as high academic and high social features, or high academic and low social features. The range of diversity is measured by the entropy:

    d_u = - \sum_{i=1}^{4} p_i \log_4 p_i    (6)

where p_i is the proportion of all of the user's selections that fall into a particular quadrant (feature or category) [13].
4    EXPERIMENT
4.1    Data and Participants
The recommendations produced by all four engines are mostly based on data collected by the Conference Navigator 3 (CN3) system [4]. At the time of writing, the system has been used to support 38 conferences and has data on approximately 6,398 articles presented at these conferences, 11,939 authors, 6,500 users (attendees of these conferences), 28,590 bookmarks, and 1,336 social connections. To mitigate the cold-start issue for the academic and social engines, which occurs when users have no publications or co-authorship within CN3 [22], we used the Aminer dataset [18]. This dataset includes 2,092,356 papers, 1,712,433 authors, and 4,258,615 co-authorship relations.
   A total of 25 participants (13 female) were recruited for the user study. All of the participants were attendees of the 2017 Intelligent User Interfaces Conference (IUI 2017). Since the main goal of our system was to help junior scholars connect with other people in the field, we specifically recruited junior scholars, such as graduate students or research assistants. The participants came from 15 different countries, and their ages ranged from 20 to 50. All of them could be considered knowledgeable in the area of intelligent user interfaces, having at least one academic publication at IUI 2017.
To control for any prior experience with the recommender system, we included a question about it in the background questionnaire. The average answer was 3.28 on a five-point scale.

4.2    Experiment Design and Procedure
To assess the value of the diversity visualization, we compared the dual interface combining the scatter plot and the ranked list (SCATTER) with a baseline interface using only a ranked list (RANK), i.e., with part A removed. The study used a within-subjects design. All participants were asked to use each interface consecutively for three tasks and to fill out a post-stage questionnaire at the end of their work with each interface. At the end of the study, participants were asked to explicitly compare the interfaces and state their preference. The order in which the interfaces were used was randomized to control for ordering effects; in other words, half of the participants started the study with the SCATTER interface. To minimize the learning effect (getting familiar with the data), we used data from two years of the same conference: the SCATTER interface used papers and attendees from IUI 2017, while the RANK interface used the same data from IUI 2016.
   Participants were given the same three tasks for each interface.
   Task 1: Your Ph.D. adviser asked you to find four committee member candidates for your dissertation defense. You need to find candidates with expertise close to your research field while trying to lower the travel cost to the defense.
   Task 2: Your adviser asked you to meet four attending scholars, preferably from different regions across the world, with a close connection to your research group.
   Task 3: You want to find four junior scholars (not yet faculty members) with reasonably similar interests among the conference attendees to build your network.
   The participants were asked to pick suitable candidates among the conference attendees based on their best judgment in each task. When designing the tasks, we attempted to make them realistic, yet focused on multiple aspects of relevance, as many real tasks are. We consider task 1 to be relevance-oriented and tasks 2 & 3 to be diversity-oriented. For the relevance-oriented task, we expect to see whether the proposed interface helps the user filter the desired targets efficiently. For the diversity-oriented tasks, in contrast, we expect to see the users interact with the recommendation results more diversely than with the baseline interface.

5    ANALYSIS OF RESULTS
5.1    User's Objective Evaluation
The users' click patterns are shown in Figure 2. The arc diagram shows different click patterns for the two interfaces: users clicked on a more diverse set of recommendations through the scatter plot interface. This finding supports the idea that the dual interface design can help users explore recommendation results beyond the top rankings. Table 1 shows the system usage for the two interfaces. The data indicate that participants extensively used both the control panel and the explanation tabs to complete the tasks. There is no significant difference between the interfaces, although in the SCATTER interface the users tended to use the explanation functions less.

              Control Panel Usage        Explanation Tab Usage
              RANK       SCATTER         RANK       SCATTER
   Task 1     3.88       4.12            8.56       8.56
   Task 2     2.88       2.88            6.56       4.8
   Task 3     2.56       2.84            8.12       6.76
   Overall    9.32       9.84            23.23      20.12
Table 1: Usage Analysis: control panel usage (the frequency with which users changed and submitted the control panel settings) and explanation tab usage (the frequency with which users switched tabs on the User Profile Page). Each pair of columns compares user clicks between the RANK and SCATTER interfaces.

   Table 2 shows the work efficiency comparison between the two interfaces. We counted how many mouseovers (hovering) and clicks the users made to complete each task and expressed the number of actions taken in the SCATTER interface as a percentage increase or decrease relative to the RANK interface. The data show that with the SCATTER interface, users completed the same tasks with 40-60% fewer mouseovers and about 66% fewer clicks. At the same time, we found no significant difference in the time spent on the tasks. The data hint that each action taken in the SCATTER interface delivered more interesting information to explore. Indeed, we found that with the SCATTER interface, the users spent significantly more time engaged in analyzing results.

              Hover           Click           Time        Engage
   Task 1     -37.16%         -69.71%(*)      +9.21%      +161.7%(*)
   Task 2     -59.53%(*)      -63.67%(*)      -11.91%     +115.2%(*)
   Task 3     -55.51%(*)      -66.45%(*)      +50.14%     +179.6%(*)
   Overall    -48.35%(*)      -67.07%(*)      +9.47%      +134.8%(*)
Table 2: Efficiency Analysis: the frequency of hovers and clicks, task time (seconds to finish each task), and engage time (seconds between consecutive clicks). All columns show the relative change from the RANK to the SCATTER interface. (*) indicates statistical significance at the 0.05 level.
two interfaces. The users click to a more diverse recommendation         time engaged in analyzing results.
through the scatter plot interface. This finding supports the design        Table 3 shows the diversity analysis for each task and interface.
of dual interfaces can facilitate users to explore the recommendation    We found that the diversity and coverage measurement shows the
result beyond the top rankings. Table 1 shows the system usage for       task difference. All three tasks are with a significant feature diver-
two interfaces. The data indicate that participants extensively used     sity difference between two interfaces but in the different aspect
both the control panel and explanation tabs to complete the tasks.       of features. Task 1 (relevance-oriented) shows less diversity on
There is no significant difference between the interfaces, although,     academic/distance features and less coverage on the country and
in the SCATTER interface, the users tend to use the explanation          position variables. The SCATTER interface helped users to more
functions less.                                                          accurately explore the attendees with multiple types of relevance.
Figure 2: Arc Diagram of the Top 50 Recommendations: this figure shows the users' click patterns for the two interfaces. The blue links (left-hand side) indicate clicks from the Ranked List (RANK); the orange links (right-hand side) indicate clicks from the Scatter Plot (SCATTER). Each node in the middle represents the ranking position of a recommended item (from 1 to 50; smaller numbers are at the top of the ranking). The width of an edge represents the click frequency from each interface.




Figure 3: Usability and user satisfaction assessment results, with a cut-off value of 3.5 on the 5-point scale. (*) indicates significant differences at the 5% level (p-value < 0.05).


Tasks 2 & 3 (diversity-oriented) show more diversity in the interest/distance and social/distance features, respectively, as well as higher coverage in the country category. The results show that users responded to each task with a different pattern of exploration in terms of diversity and coverage.

5.2    Subjective Evaluation
To compare subjective feedback, responses to the post-stage questions were analyzed using paired sample t-tests. The results of this analysis are shown in Figure 3. We compared eight aspects of subjective feedback from the participants. Among them, the SCATTER interface received a significantly higher rating on six aspects: Trust (Q4), Supportiveness (Q5), Interest (Q6), Satisfaction (Q8), Intention to Reuse (Q9), and Enjoyability (Q11). On two questions, Facilitation (Q7) and the control-reversed Benefit question (Q12), the SCATTER interface scored higher, but not significantly. It is interesting to see that the RANK interface scored a bit higher (though not significantly) on explanation usefulness, which hints that the lack of visualization made explanations more important in the RANK interface. In the final preference test, the SCATTER interface received much stronger support than the RANK interface in the user preference feedback (Figure 4). Most importantly, a majority of users (84%) considered the SCATTER interface to be a better system for recommending attendees and a better help in diversity-oriented tasks.

Figure 4: Preference Results: the final preference test after the users had experienced both interfaces.
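   As an illustration of the analysis described above, a paired-sample t-test over per-participant ratings might be run as follows; the ratings here are hypothetical and do not reproduce the study's questionnaire data.

# Illustrative paired-sample t-test on hypothetical questionnaire ratings.
from scipy import stats

# One rating per participant (1-5 scale) for the same question, per interface.
rank_ratings    = [3, 4, 3, 2, 4, 3, 3, 4, 2, 3]
scatter_ratings = [4, 5, 4, 3, 4, 4, 5, 4, 3, 4]

t_stat, p_value = stats.ttest_rel(scatter_ratings, rank_ratings)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # significant if p < 0.05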
6    CONCLUSION
In this paper, we presented a dual visual interface for recommending attendees at a research conference. A research conference context introduces several dimensions of attendee relevance, such as social, academic, interest, and distance similarities. Due to these factors, a traditional ranked list makes it difficult to express the diversity of recommended items (attendees).
By spreading the ranking over two dimensions, the suggested interface helps users explore recommendations and recognize their diversity in several aspects. Our approach can be applied to any recommender system with multiple relevance features and item categories. To assess the visual approach, we conducted a user study in a real conference environment, comparing our interface (SCATTER) with a traditional ranked list (RANK) on three practical tasks.
   Our experimental results show a tangible impact on the metrics of system usage, efficiency, and diversity. We found that the SCATTER interface was rated higher on the perceived aspects of the tasks and provided more help in diversity-oriented tasks. Results from the final preference survey show a strong preference for the SCATTER interface. Interestingly, we also found that users of the SCATTER interface benefited more in the feature diversity tasks. The user feedback suggests that it was easier to find and categorize variables, and to select and inspect items by category, through the RANK interface. Nevertheless, users of the SCATTER interface still showed significantly higher coverage measurements across tasks.
   The main contribution of this paper is to show that a diversity-enhanced interface not only helps the user to perceive diversity [8], but also improves usability in real-world tasks that go beyond simple relevance. We provide empirical evidence on how to design a recommender system interface that lets users explore a diverse set of recommended items while simultaneously improving user satisfaction.
REFERENCES
 [1] Jisun An, Daniele Quercia, and Jon Crowcroft. 2013. Why individuals seek diverse opinions (or why they don't). In Proceedings of the 5th Annual ACM Web Science Conference. ACM, 15–18.
 [2] Svetlin Bostandjiev, John O'Donovan, and Tobias Höllerer. 2012. TasteWeights: a visual interactive hybrid recommender system. In Proceedings of the Sixth ACM Conference on Recommender Systems. ACM, 35–42.
 [3] Engin Bozdag and Jeroen van den Hoven. 2015. Breaking the filter bubble: democracy and design. Ethics and Information Technology 17, 4 (2015), 249–265.
 [4] Peter Brusilovsky, Jung Sun Oh, Claudia López, Denis Parra, and Wei Jeng. 2016. Linking information and people in a social system for academic conferences. New Review of Hypermedia and Multimedia (2016), 1–31.
 [5] Jilin Chen, Werner Geyer, Casey Dugan, Michael Muller, and Ido Guy. 2009. Make new friends, but keep the old: recommending people on social networking sites. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 201–210.
 [6] Michael D Ekstrand, Daniel Kluver, F Maxwell Harper, and Joseph A Konstan. 2015. Letting users choose recommender algorithms: An experimental study. In Proceedings of the 9th ACM Conference on Recommender Systems. ACM, 11–18.
 [7] Siamak Faridani, Ephrat Bitton, Kimiko Ryokai, and Ken Goldberg. 2010. Opinion space: a scalable tool for browsing online comments. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 1175–1184.
 [8] Rong Hu and Pearl Pu. 2011. Helping Users Perceive Recommendation Diversity. In DiveRS@RecSys. 43–50.
 [9] Marius Kaminskas and Derek Bridge. 2016. Diversity, Serendipity, Novelty, and Coverage: A Survey and Empirical Analysis of Beyond-Accuracy Objectives in Recommender Systems. ACM Transactions on Interactive Intelligent Systems (TiiS) 7, 1 (2016), 2.
[10] Hannah Kim, Jaegul Choo, Haesun Park, and Alex Endert. 2016. InterAxis: Steering scatterplot axes via observation-level interaction. IEEE Transactions on Visualization and Computer Graphics 22, 1 (2016), 131–140.
[11] Q Vera Liao and Wai-Tat Fu. 2013. Beyond the filter bubble: interactive effects of perceived threat and topic involvement on selective exposure to information. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2359–2368.
[12] Christopher D Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval. Vol. 1. Cambridge University Press, Cambridge.
[13] Jennifer Moody and David H Glass. 2016. A Novel Classification Framework for Evaluating Individual and Aggregate Diversity in Top-N Recommendations. ACM Transactions on Intelligent Systems and Technology (TIST) 7, 3 (2016), 42.
[14] Sean A Munson and Paul Resnick. 2010. Presenting diverse political opinions: how and how much. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 1457–1466.
[15] Mark EJ Newman. 2001. Clustering and preferential attachment in growing networks. Physical Review E 64, 2 (2001), 025102.
[16] Denis Parra and Peter Brusilovsky. 2015. User-controllable personalization: A case study with SetFusion. International Journal of Human-Computer Studies 78 (2015), 43–67.
[17] J Ben Schafer, Joseph A Konstan, and John Riedl. 2002. Meta-recommendation systems: user-controlled integration of diverse recommendations. In Proceedings of the Eleventh International Conference on Information and Knowledge Management. ACM, 43–51.
[18] Jie Tang, Jing Zhang, Limin Yao, Juanzi Li, Li Zhang, and Zhong Su. 2008. ArnetMiner: extraction and mining of academic social networks. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 990–998.
[19] Robert Tarjan. 1972. Depth-first search and linear graph algorithms. SIAM Journal on Computing 1, 2 (1972), 146–160.
[20] Choon Hui Teo, Houssam Nassif, Daniel Hill, Sriram Srinivasan, Mitchell Goodman, Vijai Mohan, and SVN Vishwanathan. 2016. Adaptive, Personalized Diversity for Visual Discovery. In Proceedings of the 10th ACM Conference on Recommender Systems. ACM, 35–38.
[21] Chun-Hua Tsai. 2017. An Interactive and Interpretable Interface for Diversity in Recommender Systems. In Proceedings of the 22nd International Conference on Intelligent User Interfaces Companion (IUI '17 Companion). ACM, New York, NY, USA, 225–228. DOI: http://dx.doi.org/10.1145/3030024.3038292
[22] Chun-Hua Tsai and Peter Brusilovsky. 2016. A personalized people recommender system using global search approach. iConference 2016 Proceedings (2016).
[23] Chun-Hua Tsai and Peter Brusilovsky. 2017. Leveraging Interfaces to Improve Recommendation Diversity. In Adjunct Publication of the 25th Conference on User Modeling, Adaptation and Personalization. ACM, 65–70.
[24] Chun-Hua Tsai and Peter Brusilovsky. 2017. Providing Control and Transparency in a Social Recommender System for Academic Conferences. In Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization. ACM, 313–317.
[25] Chun-Hua Tsai and Yu-Ru Lin. 2016. Tracing and Predicting Collaboration for Junior Scholars. In Proceedings of the 25th International Conference Companion on World Wide Web. International World Wide Web Conferences Steering Committee, 375–380.
[26] Katrien Verbert, Denis Parra, Peter Brusilovsky, and Erik Duval. 2013. Visualizing recommendations to support exploration, transparency and controllability. In Proceedings of the 2013 International Conference on Intelligent User Interfaces. ACM, 351–362.
[27] David Wong, Siamak Faridani, Ephrat Bitton, Björn Hartmann, and Ken Goldberg. 2011. The diversity donut: enabling participant control over the diversity of recommended responses. In CHI'11 Extended Abstracts on Human Factors in Computing Systems. ACM, 1471–1476.