Serendipity in Recommender Systems Beyond the Algorithm: A Feature Repository and Experimental Design

Annelien Smets1,*, Lien Michiels2,3, Toine Bogers4 and Lennart Björneborn5

1 imec-SMIT, Vrije Universiteit Brussel, Brussels, Belgium
2 Froomle, Antwerp, Belgium
3 University of Antwerp, Antwerp, Belgium
4 Science, Policy and Information Studies, Department of Communication & Psychology, Aalborg University Copenhagen, Copenhagen, Denmark
5 Department of Communication, University of Copenhagen, Copenhagen, Denmark

Abstract
Serendipity in recommender systems is expected to improve the quality and usefulness of recommendations. However, despite the increasing amount of attention in both research and practice, designing for serendipity in recommenders continues to be challenging. We argue that this is due to the narrow interpretation of serendipity as an evaluation metric for algorithmic performance. Instead, we venture that serendipity in recommenders should be understood as a user experience that can be influenced by a broad range of system features that go beyond mere algorithmic improvements. In this paper, we propose a first feature repository for serendipity in recommender systems that identifies which elements could theoretically contribute to serendipitous encounters. These include design aspects related to the content, user interface and information access. Furthermore, we outline an experimental design for evaluating the influence of these features on the serendipitous encounters by users. The experiment design is described in such a way that it can be easily reproduced in different recommendation scenarios to contribute empirical insights in various settings. This work aspires to represent a first step towards fostering a more integrated and user-centric view on serendipity in recommender systems and thereby improving our ability to design for it.

Keywords
serendipity, recommender systems, affordances, design, interaction, evaluation

IntRS'22: Joint Workshop on Interfaces and Human Decision Making for Recommender Systems
* Corresponding author.
Email: annelien.smets@vub.be (A. Smets); lien.michiels@uantwerpen.be (L. Michiels); toine@ikp.aau.dk (T. Bogers); lb@hum.ku.dk (L. Björneborn)
ORCID: 0000-0003-4771-7159 (A. Smets); 0000-0003-0152-2460 (L. Michiels); 0000-0003-0716-676X (T. Bogers); 0000-0001-9994-0390 (L. Björneborn)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org).

1. Introduction

In most scenarios, a good recommendation is an item that users find interesting and would not have found themselves. The endless scrolling through items that users are already familiar with or that are too similar to their existing preferences has spurred the 'beyond-accuracy' paradigm in recommender systems research. This line of work explores evaluation metrics such as diversity, coverage and serendipity to increase the usefulness and quality of recommendations [1, 2, 3]. In this research domain, the concept of serendipity has attracted particular research interest as it has the potential to help overcome the risk of over-specialization and popularity bias [4]. Moreover, serendipity in recommender systems could increase a system's value by helping consumers explore the catalog and by enabling better item discoverability [5].
Despite receiving a considerable amount of attention in academic research, serendipity is known to be a complex concept that is highly challenging to design for [6, 7]. We argue that this is due to the research paradigm in recommender systems being too narrow, which impacts both our understanding of what serendipity is and how it can be facilitated. Very often serendipity has been equated with diversity or novelty [8, 9], which is an overly narrow interpretation of the concept [10]. Moreover, the dominant focus on algorithmic improvements in recommender systems has overlooked the importance of user interface design choices [11]. As outlined below, different affordances can have an effect on the likelihood of serendipitous encounters [12], for example, how the recommended items are presented and interconnected. This is greatly inspired by findings on serendipity in other domains, such as libraries, where, for instance, front-cover facing books are more likely to result in serendipitous encounters compared to books placed on shelves with their spines facing out [13].

In this paper, we take a first step towards a more comprehensive research approach to study serendipity in recommender systems, one that goes beyond the mere recommendation algorithm. Hereto, our paper has the following two objectives:

• We discuss different affordances that may enable serendipity in the context of recommendation and identify which interface elements and other system features could theoretically contribute to serendipitous encounters.
• We propose an experimental design for evaluating the influence of these features on the serendipitous encounters by users.

This paper is organized as follows. In Section 2, we provide an overview of relevant prior work, followed by an overview of which system features could facilitate serendipity in Section 3. We sketch our experimental design for testing the different features that could promote serendipity in Section 4 and conclude in Section 5.

2. Related Work

2.1. Affordances for Serendipity

In the design of information environments, serendipity is generally understood as what happens when users experience an encounter that is unplanned yet interesting to them [14]. In that regard, there exists an apparent paradox of 'designing the unplanned' and serendipity researchers agree that serendipity can never be guaranteed [14, 12]. However, environments, including digital ones, can be designed to facilitate serendipity. Underlying this approach is the notion of affordances: a relationship between an environment, an actor, and a potential outcome [14]. Affordances thus not only deal with the properties of objects but also include what an individual can do with that object. In that sense, they represent strong clues to the operations of things and are common in human-computer interaction, e.g., clickable buttons or draggable sliders [15].

Table 1
Affordances and sub-affordances for serendipity from Björneborn [14], examples added.

Diversifiability
- Diversity: Multiple potentials (e.g., diverse, heterogeneous content).
- Cross-contacts: Colliding potentials (e.g., different book genres in the same collection).
- Incompleteness: Unfinalizable potentials (e.g., inconsistent data or ambiguous categories).

Traversability
- Accessibility: Access to a specific spot, convergently (e.g., floor-level accessibility in buildings).
- Multi-reachability: Multiple routes between spots (e.g., small-world structures on the web).
- Explorability: Inviting somewhere else, divergently (e.g., libraries with an organic non-grid layout).
- Slowability: Affording slower pace (e.g., "slow design" [21], obstacles, or queues).

Sensoriability
- Exposure: Highlighting broader, over a longer time (e.g., exposure of book covers).
- Contrast: Highlighting sharper, more suddenly (e.g., contrasting backgrounds).
- Pointers: Highlighting narrower, more specifically (e.g., signage in stores).

Consequently, an affordance approach to serendipity implies that serendipity can be considered a potential outcome of an environment-actor correspondence [14]. The environment can be designed in such a way that its features (or characteristics) could facilitate serendipitous encounters. Several works have studied these characteristics of (digital) information environments [16, 17, 18, 12] and some even investigate how these relate to personal characteristics [17, 19, 20].

In his overview work, Björneborn [14] describes three "key affordances for serendipity"—Diversifiability, Traversability, and Sensoriability—that represent capacities of a given environment to facilitate serendipity, and will form the basis of our repository discussed in Section 3. These affordances cover key aspects of human interactions with information environments [14]. Diversifiability deals with characteristics related to the diversity of an environment, such as how rich and varied the collection of items and its metadata are, and how flexibly they are organized. Traversability relates to opportunities for navigation, for example, the accessibility of items or the number of pathways through which they can be reached. Finally, Sensoriability refers to the capacity of an environment (and its items) to be perceived by our senses, for example through contrasts or pointers such as cues or signs. Each of these three affordances consists of several sub-affordances, as summarized in Table 1.

2.2. Serendipity in Recommender Systems

In recommendation scenarios, serendipity is often used to refer to "an item, which the user had not seen before and would not even look for on their own, but when the user consumes this item, they enjoy it" [22]. Such interpretations have been operationalized by considering serendipity as a compound concept consisting of (different combinations of) base components such as unexpectedness, relevance, novelty, usefulness and diversity [e.g. 6, 23, 24, 25, 26]. These approaches, however, fail to acknowledge the "ontogenetic uncertainty" [27] of serendipity. This means that it is uncertain what kind of interactions the user will have with the environment (e.g., the recommender system) and which outcomes (e.g., serendipitous encounters) these interactions will lead to. It has indeed been demonstrated that serendipity may evolve differently in different contexts [20, 28]. In the particular context of recommender systems, this means that serendipity should be understood as a user experience rather than a mere offline evaluation metric such as diversity or novelty. As a result, any attempt to study serendipity in recommender systems benefits from a user-centric evaluation [11].

Moreover, the majority of prior work has emphasized the algorithmic component of recommender systems to facilitate serendipitous encounters, such as collaborative filtering or content-based approaches (see Ziarani and Ravanmehr [6] for an overview).
However, it has been established that other components of a recommender system, such as user interface elements, are more decisive for the success of a recommender than mere algorithmic changes [29]. Therefore, researchers have been advocating for a more comprehensive research paradigm on recommender systems, which goes beyond mere algorithmic improvements and contributes an integrated view on the user experience of recommenders [30, 5, 31]. In the study of serendipity in recommender systems, only a few works evaluate the impact of other elements of recommender systems on users' experiences of serendipity. Examples include the enrichment of the knowledge repository of the system [23], the use of linked open data [32] and different visualizations of the recommended items [33, 34]. The contribution of this paper is to support this line of work by presenting a repository of recommender systems' features (beyond the algorithm) that have the potential to foster serendipity and an experimental design to evaluate their impact.

3. A Feature Repository for Serendipity in RecSys

To improve our ability to design for serendipity, we propose a first-of-its-kind affordance feature repository for serendipity in recommender systems (Table 2). These affordance features represent "the structural elements of artifacts that provide affordances" [35] and could, for example, refer to the presentation structure of recommended items. This relates to what Knijnenburg and Willemsen [11] call Objective System Aspects (OSAs) in their user-centric evaluation framework for recommender systems (see also Section 4). Our repository is not limited to features that relate to the recommender algorithm itself, but also includes content-related features and other information access paradigms besides recommendation. We argue that all of these elements are crucial in a user-centric study of recommender systems, as they might impact users' interactions with the recommended items. For example, the ability to easily browse through the catalog or perform search queries could influence how users engage with the recommendations.

The benefit of our proposed affordance feature repository is that it builds on examples and insights from different domains (e.g., digital and physical libraries, movies, e-commerce). In that way, it can provide clues to designing new affordance features through analogical reasoning and support the future design for serendipity in recommender systems [35]. Moreover, the proposed repository allows us to systematically gather and catalog research that contributes empirical insights into the effects of these affordance features on serendipitous encounters. Additionally, it could uncover affordances that have been underrepresented in the existing literature and therefore open up new avenues for research.

The proposed feature repository is based on a literature review of related work, complemented with an affordance feature mapping of popular (recommender system) interfaces for different domains (e.g., IMDB, Amazon, GoodReads, LibraryThing, etc.). For the sake of clarity, we structured the features along three main categories: content, user interface, and information access. By doing so, we explicitly go beyond the dominant view of serendipity as an attribute of algorithms and instead emphasize the importance of the available metadata, how this data is presented and how users access and interact with the user interface.
This is in line with the emerging paradigm in recommender systems research that advocates a more integrated view on recommender systems [e.g. 5, 30]. Consequently, our repository already highlights multiple promising avenues for further work on serendipity in recommender systems that go beyond mere algorithmic improvements. In this feature repository, we have limited ourselves to affordance features commonly encountered in recommender system interfaces. In that sense, it is a preliminary and non-exhaustive enumeration, and future work could contribute novel features.

In the remainder of this section, we will briefly discuss each feature and illustrate it by means of real-world examples in Figure 1. The snippets in Figure 1 represent annotated examples highlighting the most apparent affordance features. To make it clear how the different affordance features might contribute to serendipity, Table 2 also lists the features' correspondence with the serendipity sub-affordances as discussed in Section 2.1. Due to space limitations, we will not discuss every relation in depth, but rather focus on the most salient examples that illustrate our line of thought.

3.1. Content

The first category deals with features that relate to the content (C). Each of these features contributes to Diversifiability as they represent the potential of the information environment to be diversified [14]. At this point, we identify four different content features that are most apparent in our affordance feature mapping (Figure 1).

The core metadata (C1) of an item describes its basic attributes, such as title, author, publisher, producer, abstract, release year, price, actor(s), platform, series, etc. Such metadata adds more fine-grained knowledge about the items, which increases the apparent Diversity of an item catalogue [14, 36, 33]. In a similar way, Diversity can be increased by using controlled vocabularies (C2). These are any type of taxonomy, thesaurus, ontology or hierarchical categorization scheme, with different levels of specificity, that prescribes which term(s) should be used to describe a specific item [37]. Moreover, they could contribute to Incompleteness by specifying broad categories, such as 'American Literature'.

Table 2
Preliminary affordance feature repository for serendipity in recommender systems.
(Each row lists the feature, the sub-affordance(s) it supports, and example snippets in Figure 1.)

CONTENT
- C1 Core metadata: Diversity (1a, 1c, 1f, 1j, 1k)
- C2 Controlled vocabulary: Diversity, Incompleteness (1m)
- C3 User-generated content: Diversity, Cross-contacts, Incompleteness, Multi-reachability (1g, 1k, 1l, 1n)
- C4 Multimedia: Diversity (1d, 1e, 1g, 1k)

USER INTERFACE
- U1 Metadata: Slowability (1a, 1c, 1f, 1j)
- U2 User-generated content: Slowability, Multi-reachability (1n)
- U3 Multimedia: Contrast (1d, 1e, 1g, 1k)
- U4 Global navigation: Accessibility, Exposure, Pointers (1i)
- U5 Presentation structure: Cross-contacts, Explorability (1c, 1d, 1e)
- U6 Headers: Pointers (1c, 1d, 1e, 1f, 1g)
- U7 Explanations: Explorability, Pointers (1c)
- U8 Emphasis: Exposure, Contrast (1d, 1e, 1k)
- U9 Pop-up: Slowability, Exposure, Contrast (1l)

INFORMATION ACCESS
Recommendation
- R1 Personalized: Accessibility, Pointers (1c)
- R2 Non-personalized: Cross-contacts, Accessibility, Pointers (1f, 1k)
- R3 Curated: Cross-contacts, Pointers (1d)
- R4 Item-to-item: Cross-contacts, Multi-reachability, Explorability, Pointers (1g)
- R5 User-to-user: Cross-contacts, Multi-reachability, Explorability, Pointers (1b)
Search
- S1 Search engine: Accessibility, Pointers (1h, 1i)
- S2 Auto-completion: Accessibility, Explorability, Pointers, Cross-contacts (1h)
Browsing
- B1 Hyperlinks: Accessibility, Multi-reachability, Explorability (1a, 1c, 1f, 1j, 1m)
- B2 Breadcrumb trails: Accessibility, Slowability, Pointers (1m)
- B3 Collections: Cross-contacts, Slowability, Pointers (1e, 1i)
- B4 User profiles: Cross-contacts, Slowability, Exposure, Pointers (1j)

The use of such broad categories is a typical feature of physical libraries and is found to promote serendipity [14, 38]; it can also be incorporated in digital environments to support serendipitous encounters.

User-generated content (C3) is content generated by users of a digital environment, such as tags, items, reviews, ratings, and many more. It can be considered a source of knowledge infusion in recommender systems [23] and contributes to folksonomies by "coupling individual perspective with the dynamics of human interaction" [12]. In this way, it adds novel dimensions to items and thereby increases Diversity. At the same time, this kind of content may be considered as 'traces' [14] of user behavior, and one can draw an analogy with the case of libraries where books left behind by other visitors (Incompleteness) may lead to serendipity [18]. In some recommender systems, users can also add links to existing dynamic hypertext structures that may develop small-world network features (Multi-reachability) [14]. Finally, multimedia (C4) refers to data that represents the content of an item, such as (full) text, audio, images, animations, or video. The availability of multimedia content also contributes to Diversity, although its main effect is likely to be situated at the level of user interface and information access. In fact, each of these four content features (C1–C4) has relevant implications for features related to the user interface and information access, as will be discussed in the next sections.
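To make this concrete, the sketch below encodes a handful of Table 2's entries as a small machine-readable catalog. The feature codes, names and sub-affordances are taken from the table; the dictionary layout and the lookup helper are merely one possible representation chosen for illustration.

```python
# Illustrative encoding of (part of) the repository in Table 2.
# Feature codes and affordance names follow the table; the data structure
# itself is one possible representation, not part of the repository proper.

FEATURES = {
    "C1": {"name": "Core metadata", "category": "content",
           "affordances": {"Diversity"}},
    "C2": {"name": "Controlled vocabulary", "category": "content",
           "affordances": {"Diversity", "Incompleteness"}},
    "U5": {"name": "Presentation structure", "category": "user interface",
           "affordances": {"Cross-contacts", "Explorability"}},
    "S2": {"name": "Auto-completion", "category": "information access",
           "affordances": {"Accessibility", "Explorability",
                           "Pointers", "Cross-contacts"}},
    # ... remaining features from Table 2 would be added analogously.
}

def features_for(affordance: str) -> list[str]:
    """Return the codes of all features that support a given sub-affordance."""
    return [code for code, f in FEATURES.items()
            if affordance in f["affordances"]]

print(features_for("Cross-contacts"))  # -> ['U5', 'S2']
```

Such an encoding would also make it straightforward to catalog empirical findings per feature, in line with the crowd-sourced repository we envision in Section 5.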
3.2. User Interface

User interface (U) elements mainly relate to Sensoriability as they primarily deal with the visual or auditive cues of the information environment. Nonetheless, they may also support affordances dealing with Diversifiability and Traversability. We follow the terminology laid out by Tidwell et al. [39] when describing these elements.

As stated above, the four content features (C1–C4) relate to the user interface as well, hence the recurrence of some features in both categories. Displaying metadata (U1) or user-generated content (U2) could invite the user to take a closer look at the item (Slowability), for example by reading the description or a review, especially lengthy ones. In recommender systems research, there exists some related work on the impact of showing metadata on user interaction (e.g., food nutrition labels and food choice [40]). Moreover, a vast body of work studies the impact of user interface elements on preference elicitation (see Jugovac and Jannach [31] for an overview) and it promises a worthwhile avenue to study the impact of showing ratings and reviews (U2) on users' interaction with unfamiliar items. Furthermore, displaying multimedia (U3), such as cover images or movie trailers, may impact the user interaction with the recommended items by making them stand out (Contrast). The latter is a well-known strategy to foster serendipity in offline information environments, such as showcasing book covers on tilted shelves [18].

Another feature is global navigation (U4), which covers elements in the user interface that are always visible regardless of the scrolling behavior. For example, the header of LibraryThing (including tabs such as 'Recommendations' and 'Reviews') is always displayed and thereby increases Exposure, amongst other affordances. The presentation structure (U5) refers to how items can be grouped together and ordered in different ways when presented to the user, such as using lists, carousels or grids [41, 39]. Presentation structure has been shown to influence user behavior when interacting with items [42, 43] and may impact serendipitous encounters as it could enable Cross-contacts (e.g., different movie genres in one list). Moreover, Jannach et al. [44] found that multi-list interfaces result in users exploring more options (Explorability) before making a decision; a sketch of such a multi-list manipulation follows at the end of this subsection. Furthermore, headers (U6) serve as Pointers by indicating the category of an item (e.g., 'Editorial lists'). Headers can also be used as a form of explanation (U7) when they provide additional information on why the item is displayed (e.g., 'Readers also enjoyed'). Explainability in recommender systems and its impact on user interactions is a widely studied topic [e.g. 45, 46, 31] and has been found to impact the acceptance of system suggestions [47].

Two final user interface elements in our preliminary affordance feature repository mainly contribute to Contrast and Exposure. Emphasis (U8) refers to making interface elements stand out by, e.g., putting them at the top, on the left, in one of the top corners, giving them high contrast and visual weight, or setting them off with white space [39]. Here, the "Gestalt principles" may be relevant as they represent a set of rules (proximity, similarity, continuity and closure) that describe the way humans perceive visual objects and when they seem to belong together [39]. Finally, pop-ups (U9) are interface elements known as effective attention grabbers, and are often associated with an interruption of the current task (Slowability) [48].
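As flagged above, the following minimal sketch illustrates how the presentation structure (U5) could be manipulated: a single ranked list is regrouped into genre-labeled carousels, a multi-list layout that lets items from different genres collide in one view (Cross-contacts). The item data and grouping rule are invented for the example.

```python
from collections import defaultdict

# Hypothetical ranked recommendations as (title, genre) pairs.
ranked = [
    ("The Martian", "sci-fi"), ("Gone Girl", "thriller"),
    ("Dune", "sci-fi"), ("Educated", "memoir"),
    ("The Shining", "horror"), ("Project Hail Mary", "sci-fi"),
]

def as_single_list(items, k=5):
    """Baseline condition: one ranked list with the top-k items."""
    return [title for title, _ in items[:k]]

def as_multi_list(items, rows=3, per_row=2):
    """Treatment condition: several genre-labeled carousels.
    Showing several genres on one page is meant to enable Cross-contacts."""
    by_genre = defaultdict(list)
    for title, genre in items:
        by_genre[genre].append(title)
    carousels = sorted(by_genre.items(), key=lambda kv: -len(kv[1]))[:rows]
    return {genre: titles[:per_row] for genre, titles in carousels}

print(as_single_list(ranked))
print(as_multi_list(ranked))
```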
3.3. Information Access

The final category of affordance features relates to information access. This category is subdivided into features that relate to recommendation (R), search (S) and browsing (B). In principle, search and browsing can also be personalized, but because of our focus on recommender systems, we will discuss the personalized vs. non-personalized distinction only in our discussion of recommendation.

3.3.1. Recommendation

All recommendations are important Pointers as they highlight specific items. As discussed in Section 2.2, current work on serendipity in recommender systems has mainly focused on algorithmic improvements. Since it is our explicit goal to emphasize the importance of understanding serendipity as a user experience, we highlight five popular high-level recommendation strategies, as seen through the lens of the user and the information environment (see Figure 1) rather than the algorithm as such.

First, recommendations can be personalized (R1). This includes the use of collaborative filtering, context-based, hybrid recommendation algorithms, and any other recommendation paradigm that provides unique recommendations to each user individually. Personalized recommendations can increase Accessibility of (long-tail) items that are of specific interest only to this user [32]. Non-personalized (R2) recommendations, on the other hand, are recommendations that are made to all users, regardless of their preferences. Commonly encountered examples would be 'most popular' or 'most recent' items, which might contain highly diverse items and lead to Cross-contacts. Moreover, non-personalized recommendations may serve as a good example of how particular features could also inhibit serendipity: by providing shortcuts to items (Accessibility) that are likely interesting to all users (e.g., breaking news items), the users may fail to notice other, more serendipitous items. It could therefore be interesting to test to what extent improved placement (e.g., at the bottom of the page) or absence of non-personalized recommendations can facilitate serendipity. In addition, another common recommendation strategy is segmentation: the creation of non-personalized recommendations for groups of users, rather than all users, often based on user demographics or psychographics. However, as segmentation could be situated in between personalized and non-personalized recommendation, and is often not distinguished from a user's point of view, we do not consider it as a separate feature in our repository.

Figure 1: Snippets of recommender interfaces illustrating affordance features listed in Table 2. [screenshots omitted]
(a) C1 Core metadata, U1 Metadata & B1 Hyperlinks (IMDB)
(b) R5 User-to-user recommendations (LibraryThing)
(c) R1 Personalized recommendations, U5 Presentation structure, U7 Explanations, C1 Core metadata, U1 Metadata, U6 Headers & B1 Hyperlinks (LibraryThing)
(d) R3 Curated recommendations, U3 & C4 Multimedia, U5 Presentation structure, U6 Headers & U8 Emphasis (IMDB)
(e) B3 Collections, U3 & C4 Multimedia, U5 Presentation structure, U6 Headers & U8 Emphasis (IMDB)
(f) R2 Non-personalized recommendations, C1 Core metadata, U1 Metadata, U6 Headers & B1 Hyperlinks (LibraryThing)
(g) R4 Item-to-item recommendations & U6 Headers, U3 & C4 Multimedia, C3 User-generated content (IMDB)
(h) S1 Search engine & S2 Autocompletion (Amazon)
(i) U4 Global navigation, S1 Search engine & B3 Collections (LibraryThing)
(j) B4 User profiles, C1 Core metadata, U1 Metadata & B1 Hyperlinks (LibraryThing)
(k) C4 & U3 Multimedia, C1 Core metadata, C3 User-generated content, R2 Non-personalized recommendations & U8 Emphasis (IMDB)
(l) U9 Pop-up & C3 User-generated content (GoodReads)
(m) C2 Controlled vocabulary, B2 Breadcrumb trails & B1 Hyperlinks (LibraryThing)
(n) C3 & U2 User-generated content (LibraryThing)
Furthermore, recommendations can also be manually or algorithmically curated (R3). Algorithmically curated recommendations can, for example, be generated by topic modeling algorithms that cluster items according to similarities in content, e.g., young-adult novels with strong female leads. Manually curated recommendations are provided by human curators, such as editors or other users. Similar to algorithmically curated recommendations, manually curated recommendations are often groups of items that are similar in one or more aspects. They contribute to Cross-contacts in the sense that they may result in a diverse set of items, for example by grouping books across different genres or authors. Interestingly, algorithmic and manual curation can be combined. One particular instance is a "curated recommender system" [49] in which a personalization algorithm is used to recommend (manually) curated lists of items. Kim et al. [49] argue that such curated recommendation leads to more serendipitous encounters, mainly because it may increase trust in the recommendation.

A final popular recommendation strategy is similarity. We further divide this strategy into two different paradigms, item-to-item and user-to-user similarity. Item-to-item (R4) recommendations typically highlight alternatives to the current item, for example by recommending books within the same genre but on a completely different topic. User-to-user (R5) recommendations, on the other hand, make recommendations based on similarities between user profiles. In that sense, both paradigms promote Explorability and Cross-contacts, and are particularly suited to foster Multi-reachability.

3.3.2. Search

A second aspect related to information access is a search engine (S1) that identifies relevant items in response to a user's search query by matching it against the content representations of all items in its database. In that way, search engines provide direct access to items (Accessibility) and serve as Pointers by narrowing down the amount of available information [12, 14]. Moreover, search becomes more efficient with autocompletion (S2), where a searcher is provided with relevant query suggestions in real time when entering their term(s) in the search box [39]. Additionally, autocompletion fosters Explorability and Cross-contacts by inviting the user to consider diverse options that are densely represented, such as 'transformer toys' and 'translucent sticky paper'.
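A minimal sketch of how such suggestions could be produced is given below, using plain prefix matching over a sorted vocabulary; the entries are invented and echo the example above, and a production system would additionally rank suggestions by, e.g., popularity.

```python
from bisect import bisect_left

# Invented vocabulary of item titles or past queries, kept sorted.
VOCAB = sorted([
    "trampoline", "transformer toys", "transistor radio",
    "translucent sticky paper", "travel adapter", "treadmill",
])

def autocomplete(prefix: str, k: int = 5) -> list[str]:
    """Return up to k vocabulary entries that start with the prefix."""
    i = bisect_left(VOCAB, prefix)  # index of the first entry >= prefix
    suggestions = []
    while i < len(VOCAB) and VOCAB[i].startswith(prefix) and len(suggestions) < k:
        suggestions.append(VOCAB[i])
        i += 1
    return suggestions

# Diverse completions of the same stem may invite Cross-contacts.
print(autocomplete("tran"))
```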
3.3.3. Browsing

The final set of features relates to browsing, as environments facilitating browsing are found to particularly support serendipitous encounters [12]. First of all, the previously discussed features of core metadata (C1), controlled vocabulary (C2), and user-generated content (C3) such as tags can be used to connect items that share the same metadata/controlled vocabulary/tags by turning them into hyperlinks (B1). Generally, this increases the Traversability of the item catalogue by encouraging Accessibility, Multi-reachability, and Explorability. Next, breadcrumb trails (B2) show the path from the starting page down through the navigational hierarchy to the selected page [39]. This feature also contributes to Traversability, for example by inviting the users to reflect upon their navigation (Slowability) and by increasing the Accessibility of previous pages. Moreover, collections (B3) are groups of items assembled manually by one or more users, for example a (shared) watchlist of movies or a personal catalog of your favorite childhood books. In that sense, collections are distinct from curated recommendations (R3) as they are not considered to be actual suggestions (for others) but rather a practice of information management [50]. However, similar to curated recommendation lists (R3), such collections may contribute to Cross-contacts, amongst others. Finally, user profiles (B4) provide an overview of all the items consumed, purchased or rated by a user. Such overviews support users in examining potentially interesting items (Slowability), similar to user-to-user recommendations (R5). Moreover, in case of the users' own profile, it might remind them of items they had forgotten about (e.g., their favourite childhood song), which could trigger experiences of serendipity when they (re)encounter it.

4. A Proposed Experimental Design

To better support and understand serendipitous experiences in recommender systems, we need to identify which of the previously discussed affordance features actually impact serendipity. However, there is no guarantee that all of the identified elements support serendipity to the same (or any) degree. Additionally, we have argued that serendipity should be understood as a user experience that may evolve differently in different contexts (see Section 2.1). Consequently, empirical research is needed to study which (combination of) elements lead(s) to serendipitous experiences and under what conditions. In this section, we propose a controlled experimental design where the (de)activation of the different features would correspond to the independent variable(s). The experiment design is described in such a way that it can be easily reproduced in different recommendation scenarios to contribute empirical insights in various settings. We start by explaining a possible experiment scenario to which we apply the practical guidelines for recommender system user experiments proposed by Knijnenburg and Willemsen [11]. Afterwards, we discuss the required materials to set up the experiment and we conclude by discussing some possible extensions. Our proposed experimental design should be seen as a first step and we invite discussion and suggestions for improving it.

4.1. Procedure

In our task design, we adopt the common (but not exclusive) notion that a serendipitous find relates to an unplanned yet interesting encounter [14]. For the purpose of this study, we distinguish between foreground and (inactive) background tasks and stipulate for the sake of simplicity that a serendipitous find can only be experienced when one is not looking for it. In other words, the find is unrelated to the foreground task of the user. Rather, it is an inactive background task of the user that triggers the recognition of the unplanned and interesting find, also known as background serendipity [51]. This distinction between an active foreground task and an inactive background task was first proposed by Erdelez [52] and later adopted by several others [e.g. 53, 19, 51]. In our experiment, participants are given a foreground task to complete, namely finding relevant books for a monthly book club meeting. Hereto, they will interact with the user interface of an online book cataloging service (e.g., similar to LibraryThing). Any books that are deemed relevant for this task can be added to a 'task' shortlist.
However, as participants are likely to stay on task, it is unlikely that we will find any serendipitous items in their final shortlist, as such items are by definition related to their background task(s). Because we cannot ever know all background tasks of our participants, we propose an approach similar to that of Qin et al. [19], where users are also provided a second shortlist for their 'personal favorites'. They are encouraged to add any book that they find interesting and wish to save for after the experiment. To incentivize them to do a good job, they would be sent this list by email at the end of their session. This setup increases the likelihood of capturing one or more serendipitous items in this list of personal favorites. After completing their task, users are presented with a questionnaire (see Section 4.2.4). In this survey, we gauge to what extent the participants report experiences of serendipity, as well as their overall experience with the user interface. Finally, at the end of the experiment, participants would be sent their list of personal favorites.

In order to be able to generate personalized recommendations for the participants, we need information about their preferences. This could be addressed by asking them to rate a number of items, genres, and authors beforehand, which could serve as input to the recommendation algorithm(s); a minimal sketch of this bootstrapping step follows at the end of this subsection. Alternatively, we could recruit users that have existing accounts on similar platforms (e.g., LibraryThing or GoodReads) and obtain their permission to use their interactions with these platforms to kickstart recommendations.

While participants should of course be introduced to the study and asked for informed consent, it remains an open question whether participants should be made aware of the experiment's focus on serendipity or not. Based on prior work, we suggest not informing participants about this specific research interest. Bogers et al. [53] showed that priming people about the concept of serendipity before an experiment, even without explicitly mentioning it as the experiment's focus, is likely to have a negative influence on participants experiencing serendipity. More specifically, they found that it is more likely to induce participants to stay on task instead of exhibiting divergent information behavior. Although the decision about disclosing this information is up to the researchers carrying out the experiment, we strongly encourage them to report this decision along with the findings, as it may impact the users' behavior.
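As an illustration of the preference bootstrapping mentioned above, the sketch below turns a few seed genre ratings into a ranked list via simple genre overlap. It is a stand-in for whatever recommendation algorithm(s) the experiment would actually use; all data is invented.

```python
# Seed ratings collected in a short pre-task survey (hypothetical).
seed_ratings = {"sci-fi": 5, "memoir": 2, "horror": 1}

catalog = [
    {"title": "Project Hail Mary", "genres": {"sci-fi"}},
    {"title": "Educated", "genres": {"memoir"}},
    {"title": "The Shining", "genres": {"horror"}},
    {"title": "Dark Matter", "genres": {"sci-fi", "thriller"}},
]

def score(item):
    """Average seed rating over the item's rated genres (0 if none match)."""
    known = [seed_ratings[g] for g in item["genres"] if g in seed_ratings]
    return sum(known) / len(known) if known else 0.0

ranked = sorted(catalog, key=score, reverse=True)
print([item["title"] for item in ranked])
```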
4.2. Experiment Design

We limit this discussion to the elements that are specific to our experiment design and refer to Knijnenburg and Willemsen [11] for more methodological details on conducting user-centric evaluations of recommender systems.

4.2.1. Research Model

Following the terminology put forward by Knijnenburg and Willemsen [11], the aim of this experiment is to measure the impact of Objective System Aspects (OSAs) on the user's Experience (EXP). More specifically, we are particularly interested in assessing the effect of the proposed affordance feature(s) (OSAs) on the degree of experienced serendipity (EXP) by users of the recommender system. Moreover, we need to keep track of the user's interactions with the system (INT) as it helps to "ground the user experience in observable behavior" [11]. For example, Taramigkou et al. [54] found that their experimental system led to more queries (INT) and serendipitous encounters (EXP) compared to the baseline system. Additionally, Knijnenburg and Willemsen [11] suggest that the effects of OSAs on EXP and INT are mediated by Subjective System Aspects (SSAs), or users' perception of these features, for example, whether the participants perceive the thumbnails of items or not. Depending on the exact research question and the resulting relevant aspects to include, various hypotheses can be formulated. As an example, we discuss the use of multimedia (U3) (i.e., thumbnails of book covers), with thumbnails shown vs. hidden as the two experimental conditions. This manipulation could give rise to hypotheses such as 'Adding thumbnails increases the number of serendipitous encounters (OSA→EXP)' or 'Users interact more often with items when thumbnails are shown (OSA→INT)'.

4.2.2. Participants

Participant selection is an important step and should be conducted carefully. We refer to Knijnenburg and Willemsen [11] for more detailed instructions on how to sample participants and determine the sample size. The latter requires an estimate of the expected effect size, which could be taken from previous work on serendipitous experiences such as Qin et al. [19]. Recruiting a sufficient number of participants for the controlled experiment is most likely easier to do for remote participation using services such as Prolific (https://www.prolific.co/) or Amazon Mechanical Turk (https://www.mturk.com/). In terms of recruitment criteria, participants should be expected to have at least an average level of experience with computers and the Internet. Moreover, they are assumed to have an interest in the domain in question, such as books if the recreated environment is a service similar to LibraryThing (see Section 4.1).

4.2.3. Experimental Manipulations

Various experimental manipulations can be drawn from the proposed affordance feature repository (Table 2). In the example above, displaying thumbnails of book covers (U3) would be the independent variable (OSA) with shown vs. not-shown thumbnails as its two conditions. Most features can indeed be tested by simply (de)activating them, resulting in one treatment and one baseline condition; a sketch of such condition assignment follows at the end of this subsection. Additionally, one could experiment with different experimental conditions, such as different presentation structures (U5) (e.g., multi-list vs. single-list interfaces) or types of personalized recommendations (R1) (e.g., content-based vs. collaborative filtering-based) and so on. We believe a between-group design would be the best option, because of possible learning effects with regard to the available content, making it harder to accurately measure both task success and serendipity after going through the first experimental condition. Moreover, investigating multiple elements at the same time—and thereby multiple independent variables—is possible, although the total number of independent variables should be kept relatively low to keep the number of required participants manageable in order to achieve a meaningful level of statistical power.
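The condition assignment itself could look like the sketch below: each experimental arm is a set of feature toggles corresponding to OSAs, and participants are randomized into arms deterministically so that returning participants keep their condition. Condition names and toggles are illustrative only.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Condition:
    """Feature toggles (OSAs) for one experimental arm (names illustrative)."""
    show_thumbnails: bool    # U3 Multimedia
    multi_list_layout: bool  # U5 Presentation structure

# A simple two-arm between-group design for the thumbnail manipulation (U3).
CONDITIONS = {
    "baseline":  Condition(show_thumbnails=False, multi_list_layout=False),
    "treatment": Condition(show_thumbnails=True,  multi_list_layout=False),
}

def assign(participant_id: str) -> str:
    """Seeded randomization: the same participant always gets the same arm."""
    rng = random.Random(participant_id)
    return rng.choice(sorted(CONDITIONS))

arm = assign("p-017")
print(arm, CONDITIONS[arm])
```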
4.2.4. Measurement

The main dependent variable would be the degree of experienced serendipity (EXP). To confirm whether the participant actually experienced serendipity, we should ask them for each of the items on their list of personal favorites whether it was an unplanned and interesting find to them, similar to the approach used by Björneborn [18]. In addition to going through their personal shortlist, it could be beneficial to use an established scale for measuring serendipity, such as the perception of serendipity scale developed by McCay-Peet et al. [17], and include this in a post-experiment questionnaire. This could be combined with the questions developed by Kotkov et al. [24], Lutz et al. [20] or Chen et al. [55]. Such a methodological triangulation would increase the chances of detecting actual serendipitous experiences by our participants. In addition to the perception of serendipity scale(s), we recommend recording interaction data (INT) that is as rich as possible, such as clicks, paths through the system, queries entered into the search box, and generated recommendation lists [e.g. 54]. Depending on the hypothesized research model (Section 4.2.1), questions about task difficulty, cognitive load, and subjective satisfaction with the system could also be included in the post-experiment questionnaire (EXP). Two common scales related to user-centric evaluation of recommender systems are the ones by Knijnenburg et al. [30] and Pu et al. [56].

4.2.5. Statistical Evaluation

For an extensive overview of statistical methods to evaluate user experiments with recommender systems, we again refer to Knijnenburg and Willemsen [11]. The research model and experimental manipulations discussed in the previous paragraphs can be tested with commonly used statistical tests, such as the independent (2-sample) t-test or ANOVA.
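For illustration, the snippet below runs such an independent two-sample t-test (Welch's variant) on invented serendipity-scale scores for a baseline and a treatment arm using SciPy; a real analysis would additionally check assumptions and report effect sizes.

```python
from scipy import stats

# Invented per-participant serendipity scale means for the two arms.
baseline  = [2.4, 3.1, 2.8, 2.2, 3.0, 2.6, 2.9, 2.5]
treatment = [3.2, 3.6, 2.9, 3.8, 3.1, 3.5, 3.3, 3.0]

# Independent two-sample t-test, not assuming equal variances.
t, p = stats.ttest_ind(treatment, baseline, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```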
4.3. Materials and Equipment

The core development effort related to our proposed experimental design lies in the digital environment that participants have to use. In principle, our proposed experiment could be carried out as an online A/B test with a real-world website. This would allow us to detect realistic user behavior with actual users. However, companies may be unwilling to 'break' the user experience of their users by turning off already established website features as part of the A/B test. The content accessible on the website may also change during the A/B test, which could make it more cumbersome to compare the different experimental conditions. Finally, it is also unlikely that engagement metrics alone (such as clicks or purchases) would be able to distinguish between serendipitous and non-serendipitous events. This would require a more complex evaluation setup. We therefore propose a more traditional controlled experiment that could be run either in the lab or remotely.

A first requirement is designing a realistic digital environment for our participants to navigate and explore, and to populate this environment with rich and diverse data that ideally would contain all four types of content features (C1–C4) as described in Table 2 so as to support Diversity. In our proposed experimental setting of finding books, the Amazon/LibraryThing collection (http://inex.mmci.uni-saarland.de/data/documentcollection.html) could be used. The user interface required for this experiment should be similar to those of existing services, such as LibraryThing or GoodReads in our case. However, to avoid undue influence of prior good or bad experiences with LibraryThing, the styling of the website (fonts, colors, etc.) might be changed to some degree. This is similar to the experimental system developed by Qin et al. [19] for their experiment comparing tag presentation formats. Another similar approach is the Mock Social Media Website Tool developed by Jagayat et al. [57] or the 3bij3 framework for news recommenders of Loecherbach and Trilling [58]. The benefit of a tool like the one by Jagayat et al. [57] is that it includes a component to present survey questionnaires in a user-friendly way, nicely integrated with the experiment tasks, making the process smooth for the participants. To achieve a similar user experience, researchers could use tools such as Gorilla (https://gorilla.sc/).

4.4. Extensions

Our proposed experiment could be extended and/or adjusted based on specific research questions and models. For example, as argued earlier, we believe that a focused foreground task provides the necessary contrast with the participants' background task(s) in order to identify experiences of serendipity. However, Makri et al. [59] argue that it should be possible to detect serendipity for both focused, narrow tasks as well as more broad, exploratory tasks. Such an exploratory task could possibly be added after the focused task has been completed by the participant.

Moreover, earlier work has shown that certain Personal Characteristics (PCs) such as Openness to Experience and Extraversion can in some situations have an influence on people's likelihood of experiencing serendipity [19, 59]. We could adapt these questions from Lee and Ashton [60] to include them in a pre-experiment questionnaire. Knijnenburg and Willemsen [11] describe how such personal characteristics can be included in the research model. Additionally, Knijnenburg and Willemsen [11] discuss how Situational Characteristics (SCs) could influence user experience and interaction. For example, users might interact differently with short format content and long format content (e.g., music vs. movies) [49]. Given that serendipity is found to evolve differently in different contexts, we highly encourage experiments across different recommendation scenarios. Other promising datasets could be a combination of the IMDB collection and MovieLens tags and ratings data (https://grouplens.org/datasets/movielens/). MovieLens itself would be another example of an environment in which such an experiment could be run.

Furthermore, one may investigate the trade-offs between serendipity and other user experiences of the recommender system, such as task difficulty. This could be assessed by studying, for example, whether users experience increased task difficulty, as evidenced by a noticeable increase in the clicks required to complete the foreground task. Finally, once the impact of an affordance feature on users' experience of serendipity has been assessed in a controlled experiment, these same affordance features can be tested in an observational study of a real-world recommendation environment. Such a study would require a different setup and we leave its discussion for future work.

5. Conclusions & Future Work

In this paper, we presented an approach to study serendipity in recommender systems that goes beyond the mere algorithm. By doing so, our paper addresses the shortcomings of current research that is characterized by a narrow view on serendipity in recommender systems and mainly perceives it as an evaluation metric of algorithmic performance. Our approach considers serendipity as a user experience and thereby emphasizes the importance of a user-centric and integrated view on recommender systems. This is built on an affordance approach to serendipity and thus understanding serendipity as a potential outcome of a user interaction with an environment.
Based on related work and an affordance feature mapping of popular recommender system interfaces, we proposed an affordance feature repository that provides a first overview of features that have the potential to foster serendipity. This repository includes aspects related to the available content, user interface, and information access, as each of these could impact users' serendipitous encounters in recommender systems. As a result, the possible design space for serendipity in recommender systems expands significantly. Therefore, we proposed a controlled experimental design for evaluating the influence of these features on the serendipitous encounters by users. We outlined a potential evaluation procedure and discussed possible extensions to the proposed design.

As argued throughout the paper, the proposed repository and experimental design are a first attempt to foster a more integrated view on serendipity in recommender systems. We invite others to provide feedback, suggest improvements, or point towards other serendipitous directions. In future work, we plan to conduct such experiments and evaluate the impact of some of these features on serendipitous encounters. In the long run, we aim to develop an online crowd-sourced affordance feature repository for serendipity in recommender systems that includes examples from specific domains, empirical findings and links to related work. By doing so, we aspire to contribute an open knowledge source that may serve as an inspiration to design for serendipity in recommender systems and thereby improve the community's ability to facilitate serendipitous encounters through these systems.

Acknowledgments

This work is supported in part by the Research Foundation Flanders under grant K203822N.

References

[1] M. Kaminskas, D. Bridge, Diversity, Serendipity, Novelty, and Coverage: A Survey and Empirical Analysis of Beyond-Accuracy Objectives in Recommender Systems, ACM Transactions on Interactive Intelligent Systems 7 (2016) 1–42.
[2] M. Ge, C. Delgado-Battenfeld, D. Jannach, Beyond accuracy: evaluating recommender systems by coverage and serendipity, in: Proceedings of the fourth ACM conference on Recommender systems, 2010, pp. 257–260.
[3] S. M. McNee, J. Riedl, J. A. Konstan, Being accurate is not enough: How accuracy metrics have hurt recommender systems, in: CHI'06 extended abstracts on Human factors in computing systems, 2006, pp. 1097–1101.
[4] L. Iaquinta, M. De Gemmis, P. Lops, G. Semeraro, M. Filannino, P. Molino, Introducing serendipity in a content-based recommender system, in: 2008 eighth international conference on hybrid intelligent systems, IEEE, 2008, pp. 168–173.
[5] D. Jannach, C. Bauer, Escaping the McNamara fallacy: towards more impactful recommender systems research, AI Magazine 41 (2020) 79–95.
[6] R. J. Ziarani, R. Ravanmehr, Serendipity in Recommender Systems: A Systematic Literature Review, Journal of Computer Science and Technology 36 (2021) 375–396. doi:10.1007/s11390-020-0135-9.
[7] U. Reviglio, Serendipity as an emerging design principle of the infosphere: challenges and opportunities, Ethics and Information Technology 21 (2019) 151–166.
[8] F. Lu, A. Dumitrache, D. Graus, Beyond Optimizing for Clicks: Incorporating Editorial Values in News Recommendation, Association for Computing Machinery, New York, NY, USA, 2020, pp. 145–153. URL: https://doi.org/10.1145/3340631.3394864.
[9] D. Bountouridis, J. Harambam, M. Makhortykh, M. Marrero, N. Tintarev, C. Hauff, Siren: A simulation framework for understanding the effects of recommender systems in online news environments, in: Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* '19, Association for Computing Machinery, New York, NY, USA, 2019, pp. 150–159. URL: https://doi.org/10.1145/3287560.3287583. doi:10.1145/3287560.3287583.
[10] A. Smets, Serendipity as a Shared Value in Urban Recommender Systems, PhD thesis, Vrije Universiteit Brussel, 2022.
[11] B. Knijnenburg, M. Willemsen, Evaluating recommender systems with user experiments, 2nd ed., Springer, Germany, 2015, pp. 309–352. doi:10.1007/978-1-4899-7637-6_9.
[12] S. Makri, T. M. Race, Serendipity in Current Digital Information Environments, Chandos Information Professional Series, Chandos Publishing, 2016, pp. 53–80. URL: https://www.sciencedirect.com/science/article/pii/B9781843347507000042. doi:10.1016/B978-1-84334-750-7.00004-2.
[13] H. Goldhor, The effect of prime display location on public library circulation of selected adult titles, The Library Quarterly 42 (1972) 371–389.
[14] L. Björneborn, Three Key Affordances for Serendipity: Toward a Framework Connecting Environmental and Personal Factors in Serendipitous Encounters, Journal of Documentation (2017). doi:10.1108/JD-07-2016-0097.
[15] D. A. Norman, The psychology of everyday things, Basic Books, 1988.
[16] S. Makri, A. Blandford, M. Woods, S. Sharples, D. Maxwell, "Making my own luck": Serendipity strategies and how to support them in digital information environments, Journal of the Association for Information Science and Technology 65 (2014) 2179–2194.
[17] L. McCay-Peet, E. G. Toms, E. K. Kelloway, Examination of Relationships among Serendipity, the Environment, and Individual Differences, Information Processing & Management 51 (2015) 391–412.
[18] L. Björneborn, Serendipity Dimensions and Users' Information Behaviour in the Physical Library Interface, Information Research 13 (2008) 13–4.
[19] C. Qin, Y. Liu, X. Ma, J. Chen, H. Liang, Designing for Serendipity in Online Knowledge Communities: An Investigation of Tag Presentation Formats and Openness to Experience, Journal of the Association for Information Science and Technology (2022) 1–12. doi:10.1002/asi.24640.
[20] C. Lutz, C. Pieter Hoffmann, M. Meckel, Online serendipity: A contextual differentiation of antecedents and outcomes, Journal of the Association for Information Science and Technology 68 (2017) 1698–1710.
[21] B. Grosse-Hering, J. Mason, D. Aliakseyeu, C. Bakker, P. Desmet, Slow design for meaningful interactions, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2013, pp. 3431–3440.
[22] D. Kotkov, An overview of serendipity in recommender systems, 2021. URL: https://theserendipitysociety.files.wordpress.com/2021/11/book-of-abstracts-serendipity-and-rs.pdf.
[23] M. d. Gemmis, P. Lops, G. Semeraro, C. Musto, An investigation on the serendipity problem in recommender systems, Information Processing & Management 51 (2015) 695–717. doi:10.1016/j.ipm.2015.06.008.
[24] D. Kotkov, J. A. Konstan, Q. Zhao, J. Veijalainen, Investigating Serendipity in Recommender Systems Based on Real User Feedback, in: SAC '18: Proceedings of the 33rd Annual ACM Symposium on Applied Computing, 2018, pp. 1341–1350.
[25] A. Smets, J. Vannieuwenhuyze, P. Ballon, Serendipity in the city: User evaluations of urban recommender systems, Journal of the Association for Information Science and Technology 73 (2021) 19–30.
[26] M. Ge, C. Delgado-Battenfeld, D. Jannach, Beyond accuracy: evaluating recommender systems by coverage and serendipity, in: Proceedings of the fourth ACM conference on Recommender systems - RecSys '10, ACM Press, Barcelona, Spain, 2010, p. 257. URL: http://portal.acm.org/citation.cfm?doid=1864708.1864761. doi:10.1145/1864708.1864761.
[27] M. de Reuver, A. van Wynsberghe, M. Janssen, I. van de Poel, Digital platforms and responsible innovation: expanding value sensitive design to overcome ontological uncertainty, Ethics and Information Technology 22 (2020) 257–267.
[28] X. Sun, S. Sharples, S. Makri, A user-centred mobile diary study approach to understanding serendipity in information research, Information Research 16 (2011) 16–3.
[29] D. Jannach, M. Jugovac, Measuring the business value of recommender systems, ACM Transactions on Management Information Systems (TMIS) 10 (2019) 1–23.
[30] B. P. Knijnenburg, M. C. Willemsen, Z. Gantner, H. Soncu, C. Newell, Explaining the user experience of recommender systems, User Modeling and User-Adapted Interaction 22 (2012) 441–504.
[31] M. Jugovac, D. Jannach, Interacting with recommenders—overview and research directions, ACM Transactions on Interactive Intelligent Systems 7 (2017) 1–46. doi:10.1145/3001837.
[32] M. Aziz, A. Wang, A. Pappu, H. Bouchard, Y. Zhao, B. Carterette, M. Lalmas, Leveraging semantic information to facilitate the discovery of underserved podcasts, in: Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021, pp. 3707–3716.
[33] M. Taramigkou, E. Bothos, D. Apostolou, G. Mentzas, Fostering serendipity in online information systems, in: 2013 International Conference on Engineering, Technology and Innovation (ICE) & IEEE International Technology Management Conference, IEEE, 2013, pp. 1–10. URL: http://ieeexplore.ieee.org/document/7352707/. doi:10.1109/ITMC.2013.7352707.
[34] A. H. Afridi, F. Outay, Triggers and connection-making for serendipity via user interface in recommender systems, Personal and Ubiquitous Computing 25 (2021) 77–92. doi:10.1007/s00779-020-01371-w.
[35] Y. S. Kim, J. H. Noh, S. R. Kim, et al., A case study for application of design for affordance methodology using affordance feature repositories, in: DS 75-5: Proceedings of the 19th International Conference on Engineering Design (ICED13), Design for Harmonies, Vol. 5: Design for X, Design to X, Seoul, Korea, 19–22.08.2013, 2013, pp. 011–020.
[36] L. McCay-Peet, E. Toms, Measuring the Dimensions of Serendipity in Digital Environments, Information Research 16 (2011) paper 483.
[37] S. G. Dextre Clarke, The Last 50 Years of Knowledge Organization: A Journey through my Personal Archives, Journal of Information Science 34 (2008) 427–437.
[38] D. Bawden, Information systems and the stimulation of creativity, Journal of Information Science 12 (1986) 203–216.
[39] J. Tidwell, C. Brewer, A. Valencia, Designing Interfaces: Patterns for Effective Interaction Design, 3rd ed., O'Reilly Media, 2020.
[40] A. El Majjodi, A. D. Starke, C. Trattner, Nudging towards health? Examining the merits of nutrition labels and personalization in a recipe recommender system, in: Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, 2022, pp. 48–56.
[41] J. B. Schafer, J. A. Konstan, J. Riedl, E-commerce recommendation applications, Data Mining and Knowledge Discovery 5 (2001) 115–153. doi:10.1023/A:1009804230409.
[42] A. Starke, M. Willemsen, C. Snijders, Effective User Interface Designs to Increase Energy-Efficient Behavior in a Rasch-based Energy Recommender System, in: RecSys '17: Proceedings of the Eleventh ACM Conference on Recommender Systems, 2017, pp. 65–73. doi:10.1145/3109859.3109902.
[43] L. Chen, P. Pu, Eye-tracking study of user behavior in recommender interfaces, in: International conference on user modeling, adaptation, and personalization, Springer, 2010, pp. 375–380.
[44] D. Jannach, M. Jesse, M. Jugovac, C. Trattner, Exploring multi-list user interfaces for similar-item recommendations, in: Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization, UMAP '21, Association for Computing Machinery, New York, NY, USA, 2021, pp. 224–228. URL: https://doi.org/10.1145/3450613.3456809. doi:10.1145/3450613.3456809.
[45] N. Tintarev, J. Masthoff, Evaluating the effectiveness of explanations for recommender systems, User Modeling and User-Adapted Interaction 22 (2012) 399–439.
[46] C.-H. Tsai, P. Brusilovsky, The effects of controllability and explainability in a social recommender system, User Modeling and User-Adapted Interaction 31 (2021) 591–627.
[47] B. P. Knijnenburg, S. Bostandjiev, J. O'Donovan, A. Kobsa, Inspectability and control in social recommenders, in: Proceedings of the sixth ACM conference on Recommender systems, 2012, pp. 43–50.
[48] S. Willermark, A. Sigríður Íslind, The polite pop-up: An experimental study of pop-up design characteristics and user experience, in: Proceedings of the 53rd Hawaii International Conference on System Sciences, 2020.
[49] H. M. Kim, B. Ghiasi, M. Spear, M. Laskowski, J. Li, Online serendipity: The case for curated recommender systems, Business Horizons 60 (2017) 613–620. doi:10.1016/j.bushor.2017.05.005.
[50] D. R. Karger, D. Quan, Collections: flexible, essential tools for information management, in: CHI'04 extended abstracts on Human factors in computing systems, 2004, pp. 1159–1162.
[51] T. Bogers, L. Björneborn, Micro-serendipity: Meaningful coincidences in everyday life shared on Twitter, in: Proceedings of the iConference 2013, iSchools, 2013, pp. 196–208. URL: http://hdl.handle.net/2142/36052.
[52] S. Erdelez, Investigation of Information Encountering in the Controlled Research Environment, Information Processing & Management 40 (2004) 1013–1025. doi:10.1016/j.ipm.2004.02.002.
[53] T. Bogers, R. R. Rasmussen, L. S. B. Jensen, Measuring Serendipity in the Lab: The Effects of Priming and Monitoring, in: Proceedings of the iConference 2013, iSchools, 2013, pp. 703–706.
[54] M. Taramigkou, D. Apostolou, G. Mentzas, Supporting creativity through the interactive exploratory search paradigm, International Journal of Human–Computer Interaction 33 (2017) 94–114.
[55] L. Chen, Y. Yang, N. Wang, K. Yang, Q. Yuan, How serendipity improves user satisfaction with recommendations? A large-scale user evaluation, in: The World Wide Web Conference, WWW '19, Association for Computing Machinery, 2019, pp. 240–250. URL: https://doi.org/10.1145/3308558.3313469. doi:10.1145/3308558.3313469.
[56] P. Pu, L. Chen, R. Hu, A user-centric evaluation framework for recommender systems, in: Proceedings of the fifth ACM conference on Recommender systems - RecSys '11, ACM Press, Chicago, Illinois, USA, 2011, p. 157. URL: http://dl.acm.org/citation.cfm?doid=2043932.2043962. doi:10.1145/2043932.2043962.
[57] A. Jagayat, G. Boparai, C. Pun, B. L. Choma, Mock social media website tool (1.0), 2021. URL: https://docs.studysocial.media.
[58] F. Loecherbach, D. Trilling, 3bij3 – developing a framework for researching recommender systems and their effects, Computational Communication Research 2 (2020) 53–79. doi:10.5117/CCR2020.1.003.LOEC.
[59] S. Makri, J. Bhuiya, J. Carthy, J. Owusu-Bonsu, Observing Serendipity in Digital Information Environments, ASIST '15: Proceedings of the Association for Information Science and Technology 52 (2015) 1–10.
[60] K. Lee, M. C. Ashton, Psychometric Properties of the HEXACO Personality Inventory, Multivariate Behavioral Research 39 (2004) 329–358.