=Paper=
{{Paper
|id=Vol-1884/paper3
|storemode=property
|title=IntersectionExplorer: the Flexibility of Multiple Perspectives
|pdfUrl=https://ceur-ws.org/Vol-1884/paper3.pdf
|volume=Vol-1884
|authors=Bruno Cardoso,Peter Brusilovsky,Katrien Verbert
|dblpUrl=https://dblp.org/rec/conf/recsys/CardosoBV17
}}
==IntersectionExplorer: the Flexibility of Multiple Perspectives==
Bruno Cardoso, Dept. of Computer Science, KU Leuven, Celestijnenlaan 200A, 3001 Heverlee, Belgium (bruno.cardoso@cs.kuleuven.be)
Peter Brusilovsky, School of Information Sciences, University of Pittsburgh, Pittsburgh, PA, USA (peterb@pitt.edu)
Katrien Verbert, Dept. of Computer Science, KU Leuven, Celestijnenlaan 200A, 3001 Heverlee, Belgium (katrien.verbert@cs.kuleuven.be)
ABSTRACT

Recommender systems are currently a ubiquitous presence on the web, helping us find relevant items in the ever-growing plethora of information available. However, there is no one-size-fits-all recommender system, and flexibility and control are crucial for enabling the possibility of adapting the recommender system to different user preferences. In this paper, we present the results of a study designed to assess user interaction with IntersectionExplorer (IEx), a multi-perspective tool for exploring conference paper recommendations. The study was conducted at the Digital Humanities 2016 Conference, an event with a rather large, heterogeneous, and not technology-oriented audience. The results obtained indicate that the IEx multi-perspective approach lends enough flexibility to accommodate different user preferences. When contrasting these results with a previous study conducted at a conference with a highly technological audience, it becomes apparent that the flexibility of IEx is key to empowering users with different profiles to customize their approach to finding relevant recommendations.

CCS CONCEPTS

• Information systems → Information systems applications;

KEYWORDS

Recommender Systems, User Interfaces, User Study

ACM Reference format:
Bruno Cardoso, Peter Brusilovsky, and Katrien Verbert. 2017. IntersectionExplorer: the Flexibility of Multiple Perspectives. In Proceedings of Joint Workshop on Interfaces and Human Decision Making for Recommender Systems, Como, Italy, August 27, 2017, 4 pages.

Joint Workshop on Interfaces and Human Decision Making for Recommender Systems, Como, Italy, 2017. Copyright for the individual papers remains with the authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors.

1 INTRODUCTION

Recommender systems are nowadays a common fixture in many environments like the web, where they play a pivotal role in helping us find our way through the ever more dense information jungle [7]. However, there is evidence that user trust tends to be lost when recommendations fail, particularly when users cannot understand the rationale for those recommendations, the “black box” issue. There are, of course, many ways of addressing this problem, ranging from textual explanations to more elaborate, visual approaches like TasteWeights [2].

In addition to the “black box” problem, other factors have an impact on how recommender systems perform with users (e.g., the “cold start” issue), and research indicates that the nature of the system itself and that of its users may also condition recommendation acceptance. Indeed, as Guy et al. [6] have noticed, “for some users, recommendations based on people work better, while for others, recommendations based on tags are more effective”. Addressing this need for flexibility in accommodating users’ preferences and expectations (among other requirements), we developed and presented IntersectionExplorer (IEx) in previous work [13].

IEx is a tool for exploring conference papers that proposes a different way of interacting with recommendations: through the exploration of multiple, intertwining perspectives of relevance. In this work we define “perspective of relevance” as an umbrella term encompassing the source and nature of recommendations. We identify three types of perspective, each one occupying its own place in IEx’s user interface (UI): (1) the perspective of personalized relevance; (2) the perspective of social relevance; and (3) the perspective of content relevance. The first of these perspectives is composed of sets of papers that have been suggested by different recommendation engines: since recommender systems leverage previous knowledge about the user to provide suggestions that would likely fit his/her interests and goals, their suggestions are relevant mainly because they are personalized. The perspective of social relevance is composed of sets of papers that have been marked as relevant by other users of the system: if another user is perceived as like-minded, a collection of his/her items of interest may likely be considered as a set worth exploring. Finally, the perspective of content relevance is composed of sets of papers tagged by the community with the same keywords applied by the user. Since these keywords are usually drawn or derived from the contents or the experience of people with an item, they provide insightful glimpses into the contents of the tagged items. A key feature of IEx is the seamless way it allows users to combine sets from these three perspectives, making no distinction between them in terms of interaction or UI representation.
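To make the notion of combinable perspectives concrete, the sketch below represents each perspective as a collection of named paper sets; combining sets across perspectives then amounts to an ordinary set intersection. The set names and paper identifiers are hypothetical, and the code illustrates the concept only, not IEx's implementation.

```python
# Illustrative sketch only: perspectives modelled as named sets of
# hypothetical paper IDs. Not IEx's actual implementation.
personalized = {"tag-based agent": {"p1", "p4", "p7"},
                "bookmark-based agent": {"p2", "p4", "p9"}}
social = {"User 1 bookmarks": {"p4", "p9", "p12"}}
content = {"tag: visualization": {"p4", "p7", "p9", "p15"}}

def overlap(*paper_sets):
    """Papers shared by every selected set, regardless of whether a set
    comes from an agent, another user, or a community tag."""
    return sorted(set.intersection(*paper_sets))

# Combining sets across perspectives works exactly like combining sets
# within a single perspective:
print(overlap(personalized["bookmark-based agent"], social["User 1 bookmarks"]))
# -> ['p4', 'p9']
print(overlap(personalized["tag-based agent"], content["tag: visualization"]))
# -> ['p4', 'p7']
```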
This approach lends IEx enough flexibility to allow its users to explore and combine, in a seamless manner, recommendations based on human-generated data and recommendations produced by automatic agents, all carrying the same potential weight and relevance. In order to understand whether users do indeed leverage IEx’s adaptability potential, we conducted a user study at the 2016 edition of the Digital Humanities conference (DH2016), a conference with a heterogeneous and not technology-oriented audience. We discuss the results of this study in this work and contrast our findings with those of a previous study [13] conducted with participants sampled from the audience of a technology-oriented event, the European Conference on Technology Enhanced Learning (EC-TEL2015).

2 RELATED WORK

Social recommendation based on people and tags has been researched extensively (e.g., [10]). For instance, SFViz (Social Friends Visualization) [5] visualizes social connections between users and their interests in order to increase awareness of others and thereby help people find potential friends with similar interests.

We can also find research focused on hybrid recommenders, i.e., systems involving different recommendation techniques in synergy. An interesting reflection on this approach was made by Guy et al. [6], who found that a hybrid people-tag-based recommender has a slightly higher accuracy than a tag-only or people-only approach. Other advantages are also mentioned in their work, such as a “low proportion of expected items, high diversity of item types, richer explanations” and, as previously stated, “the simple fact that for some users, recommendations based on people work better, while for others, recommendations based on tags are more effective” [6]. Although we also combine different user-generated data sources in IEx, we do not merge them automatically into a hybrid recommender system. Instead, we empower users to select which users and tags they are interested in and also, akin to the idea of enabling users to switch between recommenders presented by Ekstrand et al. [4], to choose which automatic recommendation agents’ suggestions they want to explore.

Regarding visualization-based approaches, TasteWeights [2] is a system designed to allow its users to control the influence of friends’ and peers’ profiles and behaviors on the recommendation processes and, like IEx, it features a UI for presenting and interacting with recommendations. The recommendation process is adapted at run-time by user-entered preference and relevance feedback. This idea can be traced back to the work of Schafer et al. [12] concerning meta-recommendation systems, where users are provided with personalized control over the generation of recommendations by altering the importance of specific factors on a scale from 1 to 5. In the same line, SetFusion [11] is another example that allows users to fine-tune the weights of a hybrid recommender system, representing relationships between recommendations through Venn diagrams. IEx extends these concepts by focusing on the visualization of relationships between perspectives of relevance, including human-generated data such as user bookmarks and community tags in addition to recommender outputs, in a scalable, set-based visualization, UpSet [8]. UpSet is a visualization technique dedicated to the analysis of sets, their intersections, and aggregates of intersections. Set intersections are visualized in a matrix layout that enables the effective representation of associated data, such as the number of elements in set aggregates and intersections (see Figure 1, Set Exploration View callout).

3 INTERSECTIONEXPLORER (IEX)

As previously stated, IEx is a platform that allows for multi-perspective exploration of recommendations. An overview of its user interface is shown in Figure 1. IEx uses a simplified version of UpSet [8], a matrix-based visualization technique, to represent sets and overlaps between sets. It is organized into three connected views (Figure 1, top green callouts).

The Set Selection View allows the user to select sets of recommendations from three different perspectives: the Perspective of Personalized Relevance, the Perspective of Social Relevance and the Perspective of Content Relevance (Figure 1, labels a, b and c, respectively). The Perspective of Personalized Relevance lists the papers suggested by different recommendation engines, the Perspective of Social Relevance is composed of papers that have been bookmarked by other users of the system and, finally, the Perspective of Content Relevance shows sets of papers labelled by the community with a specific tag. While the first perspective is clearly associated with automatic processes, the last two are based on human-generated data, meaning that, in a sense, IEx’s users play the role of “human recommenders”.

In the Set Exploration View the user can explore all possible combinations between the sets selected in the Set Selection View. Sets of papers are represented as columns (the current user is highlighted in blue) and set combinations are depicted as rows (e.g., Figure 1, label d), where intersecting sets are represented as filled circles. The horizontal bar next to each row of circles represents the relative (the bar itself) and the absolute (the number by the row) amount of papers in the selected intersection. For example, the row selected in Figure 1 (the fourth row) indicates that there are 5 papers in common between the suggestions of the bookmark-based agent and the papers bookmarked by the user named “User 1”.

The Intersection Exploration View allows the user to explore the details of, and bookmark, the papers contained in the selected intersection (Figure 1, label e). In the example of Figure 1, the user is exploring the 5 papers contained in the intersection represented by the fourth row of the Set Exploration View.

4 USER STUDY

4.1 Setup and Demographics

To provide IEx with data, we deployed it on top of Conference Navigator 3 (CN3) [3]. CN3 is a social, personalized web-based system that supports academic conference attendees and suggests talks using different recommendation engines. In IEx’s UI these engines’ recommendations are represented as “agents” and compose the Perspective of Personalized Relevance (Figure 1, label a). The engines are: (1) the top-10 agent, which suggests the 10 papers that have been bookmarked the most; (2) the tag-based agent, which matches the tags assigned to papers by the current user to those of other users (using the Okapi BM25 algorithm [9]); (3) the bookmark-based agent, which models the user interest profile as a vector of terms with weights based on the TF-IDF statistic [1], using the contents of the papers bookmarked by the user; (4) the external bookmark recommender engine, which combines the contents of the papers bookmarked by the user in CN3 and in other social bookmarking systems like Mendeley, CiteULike, or BibSonomy [14]; and finally, (5) the bibliography recommender engine, which uses the content of papers previously published by the user [14].
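As an illustration of how a content-based agent of this kind can operate, the sketch below builds a TF-IDF profile from a user's bookmarked papers and ranks candidate papers by cosine similarity. It is a minimal sketch using scikit-learn, with hypothetical paper texts; the actual CN3 engines [14] are more elaborate.

```python
# Minimal sketch of a TF-IDF, profile-based scoring step in the spirit of
# the bookmark-based agent; not the actual CN3/IEx implementation, and all
# paper texts below are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

candidates = {
    "p1": "visualizing set intersections in recommender interfaces",
    "p2": "topic modelling of medieval manuscripts",
    "p3": "user controlled hybrid recommenders with visual interfaces",
}
bookmarked = ["interactive visualization for recommender systems",
              "explaining recommendations with set based interfaces"]

vectorizer = TfidfVectorizer()
candidate_matrix = vectorizer.fit_transform(candidates.values())

# User interest profile: TF-IDF vectors of the bookmarked papers, averaged.
profile = np.asarray(vectorizer.transform(bookmarked).mean(axis=0))

scores = cosine_similarity(profile, candidate_matrix)[0]
ranking = sorted(zip(candidates, scores), key=lambda pair: -pair[1])
print(ranking)  # expected order: p1 (most similar), then p3, then p2
```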
Figure 1: IEx’s user interface, composed of three views (identified by the top green callouts): the Set Selection View lists the (a)
recommendations of automatic agents (Perspective of Personalized Relevance), (b) the bookmarks of other users (Perspective of
Social Relevance) and (c) papers tagged by the community (Perspective of Content Relevance); the Set Exploration View allows
users to explore the (d) intersections between the selected sets of papers as rows; and, finally, the Intersection Exploration View
displays the items (e) of the intersections clicked in the Set Exploration View, thereby allowing users to explore and bookmark
the suggested papers.
CN3 also supplies IEx with data regarding other users’ bookmarks and community-tagged papers, which respectively compose IEx’s perspectives of social and content relevance. To address the well-known cold start problem, we requested participants to bookmark and tag a minimum of five papers from the conference proceedings via its CN3 proceedings page.

In order to understand how flexible IEx’s multi-perspective approach is, we conducted a user study at the DH2016 Conference, an event with a rather large, heterogeneous, and not technology-oriented audience, mainly composed of researchers from the areas of social sciences and humanities. We recruited 37 participants from the DH2016 attendees through direct invitation, 11 of them female, averaging 38 years of age (SD: 10). For comparison, our previous EC-TEL2015 study had 20 participants, 3 female, averaging 32.9 years of age (SD: 6.32).

Before starting the tests, all participants received the same presentation that introduced IEx, explained its functionality and covered its essential concepts. All participants were asked to perform the same task: to freely explore the DH2016 papers through IEx and bookmark five relevant papers.

We collected data about participants’ actions, like paper bookmarking actions and visualizations. To provide some definitions, we consider that a set of papers is “explored” when the user clicks on its respective row (Figure 1, d); that papers are “visualized” when they are listed in the Intersection Exploration View (Figure 1, e); and that a paper is “bookmarked” when the user clicks on the “Bookmark this paper” link that is adjacent to each visualized paper. In order to simplify our analysis, we define the metric precision as the fraction of visualized papers that were also bookmarked, across all users (e.g., if a user were to bookmark one paper out of five he/she visualizes, that would yield a precision of 1/5, or 0.2).

4.2 Results

In Table 1, we can see the results of participant interactions with agents, namely single agents (exploring the suggestions of a single agent), multiple agents (exploring the overlapping suggestions of more than one agent) and augmented agents (exploring the overlaps between the suggestions of agents and sets of papers from other perspectives). It is noticeable that in our DH2016 study single and augmented agents were explored the most, with comparable precision scores, while participants of our first study mainly explored the suggestions of multiple agents.

Table 1: Results for participant interaction with automatic recommendation agents (results of our first study between parentheses).

Agents      Bookmarks   Papers Viewed   Precision     Explorations
Single      41 (5)      196 (93)        0.21 (0.05)   31 (26)
Multiple    1 (15)      7 (166)         0.14 (0.09)   4 (40)
Augmented   15 (8)      63 (50)         0.24 (0.16)   37 (27)

Table 2 displays the results of single-perspective explorations, i.e., explorations of the overlaps between one or more sets of papers from the same perspective. It is noteworthy how, in both studies, the perspective of content relevance yielded a noticeably higher precision than the other two perspectives.

Finally, Table 3 presents the results of perspective involvement in explorations. We consider that a perspective is involved when the user is exploring a combination containing at least one set of papers from that perspective. Once again, participants of our two studies were most likely to make a bookmark when the perspective of content relevance was involved, i.e., when one or more sets of tagged papers were combined with other sets.
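To make the precision figures in Tables 1-3 easy to interpret, the sketch below restates the metric defined in Section 4.1 (the fraction of visualized papers that were bookmarked) as it is aggregated across users. The interaction records are hypothetical, not data from either study.

```python
# Minimal sketch of the precision metric defined in Section 4.1: the share
# of visualized papers that end up bookmarked. The records below are
# hypothetical and are not data from the studies.
interaction_log = [
    # (participant, papers visualized, papers bookmarked)
    ("participant 1", 5, 1),   # the worked example from the text: 1/5 = 0.2
    ("participant 2", 12, 3),
    ("participant 3", 8, 0),
]

def precision(records):
    visualized = sum(v for _, v, _ in records)
    bookmarked = sum(b for _, _, b in records)
    return bookmarked / visualized if visualized else 0.0

print(round(precision(interaction_log[:1]), 2))  # 0.2, as in the example
print(round(precision(interaction_log), 2))      # 0.16 across all three
```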
Table 2: Interaction results for single-perspective explorations (results of our first study between parentheses).

         Bookmarks   Papers Viewed   Precision     Explorations
Agents   42 (20)     203 (259)       0.21 (0.08)   35 (66)
Users    49 (14)     335 (107)       0.15 (0.13)   41 (30)
Tags     44 (11)     94 (28)         0.47 (0.39)   80 (19)

Table 3: Precision scores for perspective involvement in explorations, across participants. The black square (■) represents perspective involvement (results of our first study between parentheses).

              Bookmarks   Papers Viewed   Precision     Explorations
Agents   ■    57 (59)     267 (398)       0.21 (0.15)   73 (156)
              96 (25)     1383 (145)      0.07 (0.17)   133 (59)
Users    ■    66 (45)     408 (239)       0.16 (0.19)   86 (119)
              87 (39)     1242 (304)      0.07 (0.13)   120 (96)
Tags     ■    48 (25)     110 (71)        0.44 (0.35)   94 (56)
              105 (59)    1540 (472)      0.07 (0.13)   112 (159)

5 DISCUSSION

The results of our studies allow us to draw positive conclusions about the flexibility of IEx’s approach in accommodating different user preferences. Indeed, under the definition of “precision” that we adopt in this work (see Section 4.1), our results indicate that the perspective of content relevance (composed of sets of tagged papers) is the one accounting for the highest precision (see Table 2). This may be explained in light of the nature of this perspective, since well-applied tags provide accurate insights into the contents of the labeled items, and conference papers are interesting to readers mainly because of their content. Also, we found that precision tends to be higher when sets of tagged papers are involved in explorations (see Table 3). Since this involvement implies that all explored papers are also community-tagged papers, this finding provides support to our previous observation.

Another interesting result relates to participant interaction with automatic recommendation agents (see Table 1). It is noticeable that while participants of our first study were mainly interested in the suggestions of multiple agents, those of our second study were not (respectively 40 vs. 4 explorations). In turn, while in our first study precision was highest for augmented agents, in our DH2016 study precision was highest for single and augmented agent explorations. These findings suggest that IEx usage data reflects the nature of its users, i.e., technology-oriented users prefer to explore the overlaps of automatic processes, while less technology-oriented people were more interested in complementing the recommendations of automatic agents with sets of suggestions based on human-generated data, or, in other words, in having a human perspective over machine-produced recommendations.

These results also speak to the control that IEx lends to its users. Indeed, our platform seems to be flexible enough to allow them to select and explore the perspectives they judge the most productive and, what is perhaps more interesting, to mix them freely and discover new and customized approaches that fit best with their personal objectives.

6 CONCLUSIONS AND FUTURE WORK

Our results indicate that IEx’s multi-perspective approach is a promising way of presenting recommendations to its users, flexible enough to adapt and allow them to follow their own path to trustworthy recommendations. For the future, it would be interesting to further challenge IEx in domains of application other than the recommendation of conference papers, and also with different audiences. While UpSet is an effective way of presenting intersections between sets, its focus on information entails domain agnosticism. Therefore, different, multi-perspective visualizations may also be considered to bring IEx closer to its users.

ACKNOWLEDGEMENTS

The research has been partially financed by the KU Leuven Research Council (grant agreement no. C24/16/017).

REFERENCES

[1] Ricardo Baeza-Yates, Berthier Ribeiro-Neto, et al. 1999. Modern Information Retrieval. Vol. 463. ACM Press, New York.
[2] Svetlin Bostandjiev, John O’Donovan, and Tobias Höllerer. 2012. TasteWeights: a visual interactive hybrid recommender system. In Proc. RecSys ’12. ACM, 35–42.
[3] Peter Brusilovsky, Jung Sun Oh, Claudia López, Denis Parra, and Wei Jeng. 2016. Linking information and people in a social system for academic conferences. New Review of Hypermedia and Multimedia (2016), 1–31.
[4] Michael D. Ekstrand, Daniel Kluver, F. Maxwell Harper, and Joseph A. Konstan. 2015. Letting Users Choose Recommender Algorithms: An Experimental Study. In Proceedings of the 9th ACM Conference on Recommender Systems (RecSys ’15). ACM, New York, NY, USA, 11–18. https://doi.org/10.1145/2792838.2800195
[5] Liang Gou, Fang You, Jun Guo, Luqi Wu, and Xiaolong Luke Zhang. 2011. SFViz: interest-based friends exploration and recommendation in social networks. In Proceedings of the 2011 Visual Information Communication International Symposium. ACM, 15.
[6] Ido Guy, Naama Zwerdling, Inbal Ronen, David Carmel, and Erel Uziel. 2010. Social media recommendation based on people and tags. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 194–201.
[7] Matevž Kunaver and Tomaž Požrl. 2017. Diversity in recommender systems: a survey. Knowledge-Based Systems (2017).
[8] Alexander Lex, Nils Gehlenborg, Hendrik Strobelt, Romain Vuillemot, and Hanspeter Pfister. 2014. UpSet: visualization of intersecting sets. IEEE Transactions on Visualization and Computer Graphics 20, 12 (2014), 1983–1992.
[9] Christopher D. Manning, Prabhakar Raghavan, Hinrich Schütze, et al. 2008. Introduction to Information Retrieval. Vol. 1. Cambridge University Press, Cambridge.
[10] Michael G. Noll and Christoph Meinel. 2007. Web search personalization via social bookmarking and tagging. In The Semantic Web. Springer, 367–380.
[11] Denis Parra and Peter Brusilovsky. 2015. User-controllable personalization: A case study with SetFusion. International Journal of Human-Computer Studies 78 (2015), 43–67.
[12] J. Ben Schafer, Joseph A. Konstan, and John Riedl. 2002. Meta-recommendation systems: user-controlled integration of diverse recommendations. In Proceedings of the Eleventh International Conference on Information and Knowledge Management. ACM, 43–51.
[13] Katrien Verbert, Karsten Seipp, Chen He, Denis Parra, Chirayu Wongchokprasitti, and Peter Brusilovsky. 2016. Scalable exploration of relevance prospects to support decision making. In Proceedings of the Joint Workshop on Interfaces and Human Decision Making for Recommender Systems, co-located with ACM RecSys 2016. CEUR-WS, 28–35.
[14] Chirayu Wongchokprasitti. 2015. Using external sources to improve research talk recommendation in small communities. Ph.D. Dissertation. University of Pittsburgh.