=Paper=
{{Paper
|id=Vol-1438/paper4
|storemode=property
|title=Inspection Mechanisms for Community-based Content Discovery in Microblogs
|pdfUrl=https://ceur-ws.org/Vol-1438/paper4.pdf
|volume=Vol-1438
|dblpUrl=https://dblp.org/rec/conf/recsys/TintarevKHO15
}}
==Inspection Mechanisms for Community-based Content Discovery in Microblogs==
Nava Tintarev, University of Aberdeen, Aberdeen, UK (nava.tintarev@gmail.com); Byungkyu Kang, Dept. of Computer Science, University of California Santa Barbara, USA (bkang@cs.ucsb.edu); Tobias Höllerer, Dept. of Computer Science, University of California Santa Barbara, USA (holl@cs.ucsb.edu); John O'Donovan, Dept. of Computer Science, University of California Santa Barbara, USA (jod@cs.ucsb.edu)

ACM Recommender Systems 2015, Workshop on Interfaces and Human Decision Making for Recommender Systems (IntRS'15), Vienna, Austria. Copyright held by the authors.

ABSTRACT

This paper presents a formative evaluation of an interface for inspecting microblog content. This novel interface introduces filters by communities and network structure, as well as ranking of tweets. It aims to improve content discovery while maintaining content relevance and a sense of user control. Participants in the US and the UK interacted with the interface in semi-structured interviews. In two iterations of the same study (n=4, n=8), we found that the interface gave users a sense of control. Users asked for an active selection of communities, and for more fine-grained functionality for saving individual 'favorite' users. Users also highlighted unanticipated uses of the interface, such as iteratively discovering new communities to follow, and organizing events. Informed by these studies, we propose improvements and a mock-up for an interface to be used in future larger-scale experiments for exploring microblog content.

Author Keywords
Microblogs, visualization, communities, explanations, interfaces, content discovery

ACM Classification Keywords
H.5.m. Information Interfaces and Presentation (e.g. HCI): Miscellaneous

INTRODUCTION

Filtering of streaming data such as microblog content is inevitable, even if it is done by showing the most recent content as restricted by screen-size. However, our live timelines do often get tailored to us, without transparency or a sense of control. Getting the selection of the content right is a delicate matter.

Recommender systems address the challenges of finding 'hidden gems' which are tailored to individuals from a very wide selection. Implemented well, they hold the key to helping users discover items that are both unexpected and relevant, while helping catalog holders sell a wider range of items [3]. In trying to help users make such discoveries, recommender systems walk a thin line between a) making unexpected but risky recommendations (increasing the chances of irrelevant recommendations) and b) over-tailoring (resulting in unsurprising recommendations). Over-tailoring can also result in filter bubbles [15], whereby users do not get exposed to items outside their existing interests. For current events, such as content in microblogs, personalization algorithms may narrow what we know, and surround us with information that supports what we already believe. This can result in polarization of views, especially as we have a tendency to self-filter [2].

This paper addresses these issues by supporting controlled filtering of microblog content. It introduces a novel visualization which supports filtering by allowing a user to control: a) which communities influence their feed, b) the network structure relating to these communities, and c) different ways of ranking tweets. This visualization is evaluated in two iterations of a qualitative study that assesses the value of such controls, as well as the concrete implementation choices applied. We also discuss the ways these filters and controls are perceived by users, and how they envision that they would use them. We conclude by describing our next steps.
BACKGROUND

Inspectability and Control in Recommender Systems
In the domain of recommender systems there is a growing acceptance of, and interest in, user-centered evaluations [12]. For example, [9] argues for a framework that takes a user-centric approach to recommender system evaluation, beyond the scope of recommendation accuracy. Along the same vein, it has also been recognized that many recommender systems function as black boxes, providing no transparency into the working of the recommendation process, nor offering any additional information to accompany the recommendations beyond the recommendations themselves [6].

To address this issue, explanations can be given to improve the transparency and control of recommender systems. Research on textual explanations in recommender systems has to date been evaluated in a wide range of domains (varying from movies [18] to financial advice [4]). Increasingly, there has also been a blurring between recommendation and search, making use of information visualization. For example, [19] has looked at how interactive visualization can be used to improve the effectiveness and probability of item selection when users are able to explore and interrelate multiple entities – i.e. items bookmarked by users, recommendations and tags. Similarly, [16] found that in addition to receiving transparent and accurate item recommendations, users gained information about their peers, and about the underlying algorithm, through interaction with a network visualization.

Inspectability and Control in Microblogs
In order to better deal with the vast amounts of user-generated content in microblogs, a number of recommender systems researchers have studied user experiences through systems that provide transparency of, and control over, recommendation algorithms. Due to the brevity of microblog messages, many systems provide summaries of events or trending topics with detailed explanations [11]. This unique aspect of microblogs makes both inspectability and control of recommender algorithms particularly important, since they help users to more efficiently and effectively deal with fine-grained data. For example, experimental evidence that inspectability and control improve recommender systems is presented for microblogs in [16], via a commuter traffic analysis experiment, and more generally in [8] using music preference data in their TasteWeights system.

Community-based Content Discovery
Serendipity is defined as the act of unexpectedly encountering something fortunate. In the domain of recommender systems, one definition has been the extent to which recommended items are both useful and surprising to a user [7]. This paper investigates how exploration can be supported in a way that improves serendipity.

The intuitions guiding the studies in this paper are based on findings in the area of social recommendation, which is based on people's relationships in online social networks (e.g., [13]) in addition to more classical recommendation algorithms.

The first intuition is that weak rather than strong ties are important for content discovery. This intuition is informed by findings on the cohesive power of weak ties in social networks, and on the observation that some information producers are more influential than others in terms of bridging communities and content [5]. Results in the area of social-based explanations also suggest that mentioning which friend(s) influenced a recommendation can be beneficial (e.g., [17, 20]). In this case, we support exploring immediate connections or friends, as well as friends-of-friends.

The second intuition is that the intersection of groups may be particularly fortuitous for the discovery of new content. This is informed by the exploitation of cross-domain model inspiration as a means for serendipitous recommendations, e.g., [1].
VISUALIZATION

In this study, we designed a web-based visualization that allows users to experience the recommender system we propose (see Figure 1). The first two columns represent "groups" (communities) and "people" (users), allowing us to filter 'tweets' in the third column by both of these 'facets'. The system therefore supports faceted navigation, with the third column representing the resulting information. In addition, the system supports pivoting (or set-oriented browsing), in that it allows users to navigate the search space by starting from a set of instances (by selecting which groups they would like to follow).

The rationale for the visualization follows several intuitions with regard to exploring novel and relevant content in social networks, as outlined in the background section.

The first is that people can find relevant content in the intersection between multiple communities. In the visualization this is represented by the selection of up to three communities to which a user belongs, and by color blending to indicate people and content that represent this type of overlap.

Another intuition is that weak ties, or friends of friends, are also good candidates for content discovery. In this visualization they are represented as two hops in a network structure. Consequently, we included a slider with settings for 0 hops (do not consider this community), 1 hop (include people who follow a given community), and 2 hops (include people who follow people in a given community).

Finally, ranking tweets according to a) relevance to a user, compared to b) popularity and c) time, is also likely to help users find relevant and unexpected content compared to tweets ordered only by time.
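To make the interplay of these controls concrete, the sketch below shows one minimal way the hop-based sourcing of users and the faceted ranking of tweets could be wired together. This is an illustrative Python sketch under assumed data structures (a mapping from an entity to its followers, per-community hop settings, and tweets with author, timestamp and retweet fields), not the implementation used in the system; the per-tweet relevance scores are assumed to come from an external model.

```python
def candidate_users(followers, community, hops):
    """Source users for one community: 0 hops contributes nobody, 1 hop adds
    people who follow the community, and 2 hops also adds people who follow
    those followers (friends of friends)."""
    if hops == 0:
        return set()
    one_hop = set(followers.get(community, ()))
    if hops == 1:
        return one_hop
    two_hop = set()
    for user in one_hop:
        two_hop |= set(followers.get(user, ()))
    return one_hop | two_hop


def build_feed(followers, hop_settings, tweets, order="time", relevance=None):
    """Faceted filtering and ranking: keep tweets whose authors are sourced by
    at least one active community, then rank by recency, popularity, or an
    (assumed) per-tweet relevance score."""
    sources = set()
    for community, hops in hop_settings.items():
        sources |= candidate_users(followers, community, hops)
    feed = [t for t in tweets if t["author"] in sources]
    key = {
        "time": lambda t: t["timestamp"],
        "popularity": lambda t: t["retweets"],
        "relevance": lambda t: (relevance or {}).get(t["id"], 0.0),
    }[order]
    return sorted(feed, key=key, reverse=True)
```

Under this reading, setting a community's hop value to 0 simply removes its contribution, while switching `order` re-ranks the same filtered set, which is the behavior participants were asked to observe in the study tasks.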
Structure and Interaction
Figure 3 shows a snapshot of the interactive visualization used in the study. Information is presented in three columns. From left to right, these are: the group/community, people and tweet columns. Users can interact with entities in any of these three columns to highlight associations to entities in the other columns. In the people and tweet columns, entities are clustered and colored based on community associations. In the first column, we visualize a set of communities (also referred to as groups), which, by design, may have some overlapping membership and content. Within this column, each entity has a widget to control network distance from that entity. This enables the user to specify how that entity contributes users and content to the other columns. In particular, sliders were used for this control in Study 1 and radio buttons in Study 2.

In the second column, a ranked list of users related to each community is visualized. These users serve as sources for the information recommended in the third column, but the visualization also supports analysis of the connectivity of these users across communities, in addition to the content they distribute.

Figure 1. Visualization of the recommendation system used in Study 1.

The third column shows the recommended tweets, which are by default filtered and ordered according to recency. A user can change the ranking algorithm for this column to either popularity or relevance.

Color Scheme
Selecting an appropriate color scheme is one of the important aspects to consider in user interface design. We examined different sets of colors and carefully selected three major colors to represent each group in the first column. They were selected from among the most popular color palettes on the Adobe Color website (https://color.adobe.com/explore/most-popular/). These colors were also tested under grayscale conditions.

Materials
The materials for the experiment were abstracted: people were given random names of both genders, and tweets were short lines from a short Latin text ("Lorem Ipsum..."), resulting in a total of 229 tweets. When participants interacted with the system, a random subset of 12 tweets was presented. The top 4 of these tweets included a retweet, to visually increase the similarity to a twitter feed; this was applied consistently across adaptations.
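As an illustration of how such abstracted materials could be produced, the following Python sketch builds a pool of Lorem-Ipsum 'tweets' with randomly named authors and draws the 12-tweet subsets shown to participants, with one retweet placed among the top 4. The name list, word source and sampling details are placeholders for illustration, not the scripts actually used in the study.

```python
import random

LOREM_WORDS = ("lorem ipsum dolor sit amet consectetur adipiscing elit sed do "
               "eiusmod tempor incididunt ut labore et dolore magna aliqua").split()

FIRST_NAMES = ["Alex", "Sam", "Maria", "Priya", "Tom", "Yuki"]   # placeholder names
LAST_NAMES = ["Smith", "Lee", "Garcia", "Khan", "Novak"]

def make_tweet_pool(n_tweets=229, words_per_tweet=8, seed=1):
    """Create an abstracted tweet pool: short Latin snippets with random authors."""
    rng = random.Random(seed)
    pool = []
    for i in range(n_tweets):
        author = f"{rng.choice(FIRST_NAMES)} {rng.choice(LAST_NAMES)}"
        text = " ".join(rng.choice(LOREM_WORDS) for _ in range(words_per_tweet))
        pool.append({"id": i, "author": author, "text": text, "retweet": False})
    return pool

def sample_feed(pool, k=12, seed=None):
    """Draw a random 12-tweet subset and mark one of the top 4 entries as a
    retweet, mimicking the look of a Twitter timeline."""
    rng = random.Random(seed)
    feed = [dict(t) for t in rng.sample(pool, k)]
    feed[rng.randrange(4)]["retweet"] = True
    return feed
```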
STUDY 1

This section describes a formative study conducted to evaluate the proposed visualization. We used a layered evaluation approach [14], focusing on the decision of an adaptation and how it was applied (in contrast to which data was collected or how it was analyzed). Participants took part in semi-structured interviews, in order to evaluate the user experience (following the guiding scenarios of [10]). More concretely, this study aimed to answer the following questions: a) are the three introduced controls (selection of communities, network structure, and ranking of tweets) considered useful by participants? b) is the way they are implemented useful? c) do these controls give users a sense of control? d) do participants use the controls in the way that we envisaged? The version of the system used for this study can be found online (http://goo.gl/krOvuJ?time=all).

Participants
4 participants were recruited from research staff at the computer science department of a UK university. Their ages ranged from 23 to 51. They all had twitter accounts, but their experience with twitter ranged from inactive to highly experienced (including the use of twitter management and analytics applications). 1 was female, and 3 were male. They all had a native or fluent level of English language skills. Participants ranged from PhD students and post-doctoral fellows to teaching staff.

One of the participants had done research with visualizations and twitter; the other three had no experience with either. None knew Latin (one had taken a Latin course, but professed a very rudimentary level of knowledge).

Procedure
Participants took part in individual semi-structured interviews, following a user test plan (https://goo.gl/3KpH9z). Following the collection of basic demographic data, participants were given a brief introduction to the system. The various interface components were verbally introduced without interacting with the system. Participants were then given several simple tasks, such as including people who are connected to other people for a given community, or ranking tweets by relevance (rather than time). Following each interaction participants were asked how the tweets had changed, if new ones had been added, or if tweets had disappeared. The tasks given were:

• Go to the system online. What are your first impressions?
• Select one of three communities that you are a member of and that reflect your interests (if the user cannot think of any, tell them to think of conferences that they attend). Have a look at the tweets that are recommended to you.
• Add tweets (1 hop) for a second community of your choice from the above.
• Is there any relevant tweet from this second community you did not see before? Are there any that have disappeared?
• The tweets are currently ranked by time; change this to rank the tweets by popularity.
• Are there now any tweets you did not see before? Are there any that have disappeared?
• Now, change who you get your tweets from to include people who are linked to (2 hops) people that attend your first community. You may want to remove the second community for this too.
• How about now, are there now any tweets you did not see before? Are there any that have disappeared?

Following the interaction with the system, participants took part in an exit interview where they were asked about their perceived control of the system, the usefulness of the various functionalities, and how they would use them for exploration. More concretely, the questions asked included:

• How did it feel? What was your impression? (Positive impressions? Negative impressions?)
• Would you have liked more training on how to interact with the visualization before you got started?
• How helpful did you find the following functionalities (1-7, unhelpful to helpful), and how could they be improved?
  – Tweets organized by community;
  – Changing how the tweets are ordered/ranked;
  – Changing who I get tweets from (0, 1, 2 hops);
  – Being able to interact with the system to specify different preferences;
  – The links between different parts of the interface (people, groups, tweets).
• Do you think these functionalities would help you find new and relevant information you would not find otherwise? How would you use them to do this?
• Does the filtering give you a sense of what you might be missing, or does it hide information that you need?
• Did you feel like you had control over which information was presented to you?
• Would you have liked any controls that are not present in this interface?
Results

Are the introduced controls (communities, network structure, and ranking of tweets) considered useful by participants?
The scores given to the various controls were generally high (5 or above). There were three exceptions. Participant3 did not find tweet ranking by relevance and popularity useful at all. Participant4 gave low scores to the hop control for network structure, and to the links, but this was due to the way they were implemented, and is discussed below.

Is the way they are implemented useful?
All the participants noted that the interface was simple and clean, and had a good first impression. Participant4 noted that it would be well suited for a mobile interface.

• Hop control: All of the participants found it difficult to understand the control for the network structure. When thinking aloud, several said that pulling the slider further to the right would increase the number of tweets on a certain topic, rather than widen the network (which potentially would dilute the focus of the tweets).
• Community selection: Participant1 wanted to 'activate' a community by selecting its box. This seems more intuitive than selecting 0 hops for the communities they did not want to follow.
• People: In addition to filtering on community structure and inclusion, several participants wanted finer-grained control over which users were included in the selection of tweets. Some users wanted to activate users somehow, by either adding them to favorites at the top of the person list, or activating them through selection. These participants felt that this should influence the ranking of tweets.
• Tweets: Participants felt that tweets belonging to the same community should not only have the same color, but should be grouped together. Participant3 (an experienced twitter user) felt that ranking of tweets by any measure other than recency (time) was not useful.
• Links: Participant3 found the links and colors between the columns inconsistent. The relationship between the first two columns used links, whereas the relationship between the second two columns used colors.
• Color-interleaving: Participant1 mistook the color-interleaving to imply significance, as the colors varied in hue. The other participants interpreted this correctly, although they did ask if their interpretation was correct.

Do these controls give users a sense of control?
All of the participants felt that the interface improved their control over their tweets. They also consistently agreed that they would be missing some content, and that they were not in complete control, but that they were happy with the balance in the trade-off.

However, Participant3 felt that they wanted to be able to scroll through all of their tweets, especially because they did not have finer-grained control over which individuals appeared in their feed.

Do participants use the controls in the way that we envisaged?
All of the participants completed the simple tasks given to them. They all stated that they would find new and relevant content using the interface, although the highly experienced twitter user felt they already find novel content using tools such as TweetDeck. When asked how they would use the functionalities to find new and relevant information, participants suggested two uses we had not initially considered:

Organizing events: Participant3 felt that the groups could be defined by characteristics other than membership of a community, such as geographic location. This participant suggested that they would use this functionality to identify and coordinate groups of people when organizing events on the topics they were interested in.

Discover new groups: Participant2 was confident that they would find new relevant communities when looking at the intersection of existing communities that they follow. This participant listed three music bands that they listen to and would follow on twitter. They would use the system to discover new bands, and would then add them as a new group as a "seed" for further discovery.

Other suggestions
Participants suggested several features they would expect in an interface that was integrated with twitter. For example, they would want to be able to view the profiles (or at least the first 50 characters) of the people they are receiving tweets from. Others wanted to be able to reply to tweets directly from the feed. Another suggestion was to introduce separate columns for different communities. This may be related to the request by other users to be able to group tweets by community.
STUDY 2

The first study identified several limitations of the system, which were addressed in a second iteration of the evaluation. Improvements included: a) using buttons rather than a slider to control the number of hops; b) sorting people by group affinity, e.g. greenGroup people were listed at the top, rather than mixed throughout the list; and c) indicating how many people were filtered (i.e. "Showing 12 of 1307"). The improved interface can be seen in Figure 3, with annotations to highlight each improvement. The version of the system used for this study can be found online (http://penguinkang.com/intRS/).

Figure 3. Improved visualization design used in the main user study. Annotation (A) shows changes to the number-of-hops selection, (B) shows the number of filtered users interactively in the form "m of n", and (C) shows connectivity-based clustering and associated coloring of nodes in the "People" column.
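Improvements (b) and (c) are simple to express in code. The sketch below is a hypothetical Python rendering of the group-affinity ordering of the people column and the "Showing m of n" indicator; the field names (`groups`, `name`) are assumptions made for illustration, not identifiers from the actual system.

```python
def order_people_by_affinity(people, active_groups):
    """Sort the people column so that users affiliated with the currently
    active groups appear first (e.g. greenGroup members at the top), breaking
    ties alphabetically instead of mixing them throughout the list."""
    active = set(active_groups)
    return sorted(people,
                  key=lambda p: (-len(active & set(p["groups"])), p["name"]))

def filter_summary(visible_count, total_count):
    """Indicator for how many people survived the current filter settings,
    e.g. 'Showing 12 of 1307'."""
    return f"Showing {visible_count} of {total_count}"
```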
Participants
8 participants were recruited from research staff at the computer science department of a US university. Their ages ranged from 20 to 45. 5 participants were female and 3 were male. Participants ranged from PhD students and post-doctoral fellows to teaching staff in computer science, engineering, media arts and physics. They all had a native or fluent level of English language skills. 6 of the participants had Twitter accounts, and one person had done research with Twitter data in the past. 5 had done research with visualization. As with Study1, no participants knew Latin.

Procedure
As in Study1, participants took part in individual semi-structured interviews. Studies were conducted in a computer science lab on campus using two notebook computers. The participant interacted with the UI on one, and the experimenter/interviewer took notes on the other. On average, studies lasted 35 minutes (min 28 minutes, max 43 minutes).

Results
In this section, we revisit the questions from Study1 and add additional comments and discussion based on the new participants interacting with the improved UI in Study2. Figure 4 shows a comparison of participants' opinions on the different features of the system between Study1 (N=4) and Study2 (N=8), along with the combined score (N=12). We note that the combined score is based on two slightly different UI designs, and it is only used as a rough estimate of the overall group evaluation.

Figure 4. Analysis of subjective results in exit interviews for the two studies. Error bars show standard error.

Are the introduced controls (communities, network structure, and ranking of tweets) considered useful by participants?
The scores shown in Figure 4 range between 5.58 and 6.87 for Study2 (shown in the middle column of each group), an average improvement of approximately one point on the 7-point scale. Compared to Study1, the interface modifications appear to have had a positive impact on user experience with the system. While this is a promising side result, the purpose of the study was to provide a formative evaluation of the interface.

Participants reported the best score for the feature to organize Tweets by community, which is a core contribution of the system. This is encouraging feedback, as the authors are designing a larger-scale quantitative evaluation with this as a central feature. The features that elicited the lowest scores were the hop-distance selector and the edge visualizations between the columns.

Participants also reported that they liked the ability to change how Tweets were ordered and ranked through the interface. One participant commented that "I can't do this in Facebook or Twitter – this is great!". Support for expressing real-time preferences through interactive interface components met with strong positive feedback, with all users reporting a sense of increased control over the information feed.

Is the way they are implemented useful?
Similarly to Study1, all participants commented that the interface was clean and well organized. One participant complained that it was too complex and could benefit from having less data. 50% of the participants pointed out an issue with the node-coloring in column 2, shown in Figure 3. Note that this figure needs to be viewed in color to see the true effect (see the link to the system above).

• Hop control: Some participants did not realize that the 0 position essentially turned the group node off. There were also multiple comments that when the hop control was set to 0, showing the nodes opaquely was not a good design choice. One participant explicitly mentioned that it would be better to remove these nodes completely, noting that the visual effect of setting the hop-control to 0 would be much shorter. Unlike Study1, no participants confused the hop slider with a weighting mechanism, and all understood that it sourced users from n hops farther away in the Twitter network.
• Community selection: Most participants commented that community selection and analysis was a strong point of the system. Suggested communities included musical artists, pet fan clubs, and conferences or meetings.
• People: A few participants reported having trouble understanding the coloring and community-based grouping/clustering in this column. All participants understood the data flow correctly by the end of the sessions, but this feature took longer than others for them to master. The main cited reason for this was that the colors – added to distinguish the groups – were too similar, as mentioned above. Two participants mentioned that it would be useful to select or weight people of interest.
• Tweets: Two participants suggested that a ranking score would be useful to distinguish between tweets in the right column. Participants also requested that when a change is made in the system, the source of that change's effect on the list should be visualized. Our proposed solution to this is shown in Figure 5 as a ranking source indicator for each tweet.
• Links: Participants were slightly dissatisfied with how links were shown in the system. Three people commented that links should be shown across all columns when a particular group is selected in the left column, or when any other node is selected, to visually communicate the associations of that node. Other participants commented that the on-demand design was a good idea to avoid cluttering the view.
• Color-interleaving: Half of the users complained that this was too subtle and needed to be made more explicit. This has been addressed through the use of colored icons next to people to signal group memberships. The color palette has also been changed to make a clearer distinction between groups.

Demographics Analysis
A brief analysis of demographics and responses showed an interesting correlation between participant age and the perceived importance of specifying preferences on-the-fly in the user interface. Figure 2 shows a plot with the Likert-scale responses for the dynamic preferences shown on the Y-axis and participant age shown on the X-axis. The data follows a negative linear trend, with younger participants specifying a higher perceived importance of specifying preferences.

Figure 2. Plot showing the correlation between participant age and reported importance of "Being able to interact with the system to specify different preferences".
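The trend reported in Figure 2 is a simple bivariate analysis, and the kind of computation behind it can be sketched in a few lines. The numbers below are placeholder data, not the study's raw responses; they are included only to show the form of the analysis.

```python
import numpy as np
from scipy import stats

# Placeholder (age, Likert 1-7) pairs for "being able to interact with the
# system to specify different preferences" -- illustrative values only.
ages = np.array([22, 25, 28, 31, 34, 38, 42, 45])
ratings = np.array([7, 7, 6, 6, 5, 5, 4, 4])

fit = stats.linregress(ages, ratings)
print(f"r = {fit.rvalue:.2f}, slope = {fit.slope:.2f} points/year, p = {fit.pvalue:.3f}")
# A negative slope (and r) corresponds to the reported trend: younger
# participants attach more importance to specifying preferences on the fly.
```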
Do these features give users a sense of control?
In keeping with Study1, all of the participants felt that the interface improved their control over their tweets. They also consistently agreed that they would be missing some content, and that they were not in complete control, but that they were happy with the balance in the trade-off. Similar to Study1, two participants suggested the use of scrolling or a similar mechanism to view filtered-out tweets in case they wanted to.

Do participants use the features in the way that we envisaged?
Generally, participants reported that they would find the system useful for discovering new content and exploring community structure in the domains that they chose (music, conferences, pet fan clubs etc.). In particular, they felt that the real-time preference feedback, community selection and algorithm selection (time, relevance or popularity) gave them a good sense of control. Many commented that such features would be useful on everyday social media streams such as Twitter and Facebook.

Participants suggested similar uses of the controls as in Study1. Many suggested using the system for organizing events and advertising across relevant communities, and for discovering new groups. Echoing the comments of Study1, one participant mentioned that they would like to use the system for exploring a broader network of musical artists. They described selecting three fan club communities as in our experimental setup, but went on to describe iteratively replacing them with new nodes that were discovered in the right column, thereby applying the interface (theoretically) as a network traversal and discovery tool. This is an example of a reported use that was not in our design. Another participant proposed to use the system to analyze which community produced the most popular content on Twitter, by using the popularity ranking algorithm and traversing the edge connections back to the groups.

Other suggestions
Participants suggested a variety of ways to improve the interface. These included the addition of multimedia content to the tweet column, and visually distinguishing retweets (compared to original tweets) by color. Participants also suggested creating visually distinct colorings for blended color groups, and displaying links to all group memberships upon clicking a user node (rather than upon hover). Another request was for an indication of how much data has been filtered in all the columns (currently only available for the people column). Participants also suggested measuring the usefulness of the system for getting an overview of a new community or topic. Several comments, including from reviewers, focused on the group selection widget. In the current version, a group is activated by clicking on the box that represents the group, and the radio buttons within it are then used to control the number of hops that feed the people column from that group. Other possibilities being considered for the activation of group nodes are a) a simple check box and b) extending the radio button selection to include an option for 0 hops, thereby disabling the node.

Figure 5. Mock-up of improved UI and interaction design based on study results and analysis: (A) improved representation of the hop-distance controls, (B) iconization to show group memberships, (C) activation (on/off) control of nodes, (D) visualization of dynamic edges, (E) addition of a ranking score for recommended content, and (F) addition of a provenance arrow to show what the previous interaction did to the ranking of each recommendation.

CONCLUSION AND FUTURE WORK

In this paper we evaluated a visualization which allowed users to explore and filter microblog content for communities to which they belong. The ability to organize Tweets by community, the core contribution of the visualization, was rated most highly. Users also stated that the interface gave them enough control over their content, even if they felt some information would inevitably be hidden – the trade-off was considered acceptable. We also found several unexpected uses of the system. For example, two separate participants, in different experimental settings (one in the UK and one in the US), applied the interface (theoretically) as a network traversal and discovery tool for music. Figure 5 introduces an improved mock-up with a number of changes. In addition to these improvements, we are planning larger-scale quantitative evaluations. One of these will explore the use of community-based filters, and the other controls introduced in this paper, on existing twitter feeds.
ACKNOWLEDGMENTS
This research has been carried out within the project Scrutable Autonomous Systems (SAsSY), funded by the UK Engineering and Physical Sciences Research Council, grant ref. EP/J012084/1. This work was also partially supported by the U.S. Army Research Laboratory under Cooperative Agreement No. W911NF-09-2-0053. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of ARL, NSF, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.

REFERENCES
1. André, P., m.c. schraefel, Teevan, J., and Dumais, S. T. Discovery is never by chance: Designing for (un)serendipity. In CC (2009).
2. Bakshy, E., Messing, S., and Adamic, L. A. Exposure to ideologically diverse news and opinion on facebook. Science 348 (2015), 1130–1132.
3. Castells, P., Hurley, N., and Vargas, S. Recommender Systems Handbook (second ed.), (in press), ch. Novelty and Diversity in Recommender Systems.
4. Felfernig, A., Teppan, E., and Gula, B. Knowledge-based recommender technologies for marketing and sales. Int. J. Patt. Recogn. Artif. Intell. 21 (2007), 333–355.
5. Granovetter, M. S. The strength of weak ties. The American Journal of Sociology 78, 6 (1973), 1360–1380.
6. Herlocker, J. L., Konstan, J. A., and Riedl, J. Explaining collaborative filtering recommendations. In ACM Conference on Computer Supported Cooperative Work (2000), 241–250.
7. Herlocker, J. L., Konstan, J. A., Terveen, L., and Riedl, J. T. Evaluating collaborative filtering recommender systems. ACM Trans. Inf. Syst. 22, 1 (2004), 5–53.
8. Knijnenburg, B. P., Bostandjiev, S., O'Donovan, J., and Kobsa, A. Inspectability and control in social recommenders. In Proceedings of the Sixth ACM Conference on Recommender Systems, RecSys '12, ACM (New York, NY, USA, 2012), 43–50.
9. Knijnenburg, B. P., Willemsen, M. C., Gantner, Z., Soncu, H., and Newell, C. Explaining the user experience of recommender systems. User Modeling and User-Adapted Interaction 22, 4-5 (2012), 441–504.
10. Lam, H., Bertini, E., Isenberg, P., Plaisant, C., and Carpendale, S. Empirical studies in information visualization: Seven scenarios. IEEE Transactions on Visualization and Computer Graphics 18, 9 (2012), 1520–1536.
11. Marcus, A., Bernstein, M. S., Badar, O., Karger, D. R., Madden, S., and Miller, R. C. Twitinfo: Aggregating and visualizing microblogs for event exploration. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11, ACM (New York, NY, USA, 2011), 227–236.
12. McNee, S. M., Riedl, J., and Konstan, J. A. Being accurate is not enough: How accuracy metrics have hurt recommender systems. In Extended Abstracts of the 2006 ACM Conference on Human Factors in Computing Systems (CHI 2006) (2006).
13. Nagulendra, S., and Vassileva, J. Providing awareness, understanding and control of personalized stream filtering in a P2P social network. In Conference on Collaboration and Technology (CRIWG) (2013).
14. Paramythis, A., Weibelzahl, S., and Masthoff, J. Layered evaluation of interactive adaptive systems: Framework and formative methods. User Modeling and User-Adapted Interaction 20 (2010).
15. Pariser, E. The Filter Bubble: What the Internet Is Hiding from You. Penguin Books, 2011.
16. Schaffer, J., Giridhar, P., Jones, D., Höllerer, T., Abdelzaher, T., and O'Donovan, J. Getting the message?: A study of explanation interfaces for microblog data analysis. In Proceedings of the 20th International Conference on Intelligent User Interfaces, IUI '15, ACM (New York, NY, USA, 2015), 345–356.
17. Sharma, A., and Cosley, D. Do social explanations work? Studying and modeling the effects of social explanations in recommender systems. In World Wide Web (WWW) (2013).
18. Tintarev, N., and Masthoff, J. Personalizing movie explanations using commercial meta-data. In Adaptive Hypermedia (2008).
19. Verbert, K., Parra, D., Brusilovsky, P., and Duval, E. Visualizing recommendations to support exploration, transparency and controllability. In Proceedings of the 2013 International Conference on Intelligent User Interfaces, IUI '13, ACM (New York, NY, USA, 2013), 351–362.
20. Wang, B., Ester, M., Bu, J., and Cai, D. Who also likes it? Generating the most persuasive social explanations in recommender systems. In Twenty-Eighth AAAI Conference on Artificial Intelligence (2014).