How does the User's Knowledge of the Recommender Influence their Behavior?
Muheeb Faizan Ghori1, Arman Dehpanah1, Jonathan Gemmell1, Hamed Qahri-Saremi1,2 and Bamshad Mobasher1
1 School of Computing, DePaul University, Chicago, USA.
2 CIS Department, College of Business, Colorado State University, Fort Collins, USA.

Abstract
Recommender systems have become a ubiquitous part of modern web applications. They help users discover new and relevant items. Today's users, through years of interaction with these systems, have developed an inherent understanding of how recommender systems function, what their objectives are, and how the user might manipulate them. We describe this understanding as the Theory of the Recommender. In this study, we conducted semi-structured interviews with forty recommender system users to empirically explore the relevant factors influencing user behavior. Our findings, based on a rigorous thematic analysis of the collected data, suggest that users possess an intuitive and sophisticated understanding of the recommender system's behavior. We also found that users, based upon their understanding, attitude, and intentions, change their interactions to evoke desired recommender behavior. Finally, we discuss the potential implications of such user behavior on recommendation performance.

Keywords
Recommender systems, Mental models, Qualitative research, User modeling

1. Introduction
Recommender systems suggest relevant items to users in a variety of domains such as online retailers, streaming services, and social media platforms. These systems have become an essential tool in modern web applications, helping users navigate large and complex online environments. In domains like e-commerce, recommender systems help service providers boost their revenue and provide a competitive edge. These systems often leverage user information and interactions with the system to provide personalized recommendations that satisfy the needs and interests of the user.
Recommender systems have become pervasive in the last decade. Consequently, users find themselves interacting with recommenders on a regular basis. The system suggests relevant items to users that satisfy their needs and preferences. Users view these recommended items, consume items that catch their interest, and perhaps rate or leave feedback about these items. The system in turn models their responses and provides new recommendations. However, these repeated interactions may contribute to the user developing an inherent understanding of how recommender systems function.
Prior research demonstrates that users possess a cognitive model of how recommender systems work [1, 2, 3]. This cognitive model represents the users' interpretation of how the system operates, what its objectives are, and how it generates recommendations. For example, users may recognize that the system uses their demographic data and online interactions to build recommendations.
Similarly, they may also recognize that their preferences and interactions influence the recommendations they receive later. Users may also recognize the motives of the system, such as increasing revenue, marketing, and advertising. We hypothesize that such a sophisticated understanding of the system may influence how users choose to interact with it, leading them to modify their interactions to obtain desired recommender behavior.
A sophisticated understanding of the recommender system can have several implications. Kulesza et al. [1] showed that a user's mental model of the recommender system can be helpful in debugging and personalizing the intelligent agent. In this case, a user unhappy with the current set of recommendations may purposely search for and upvote items they have previously enjoyed in order to improve their user profile and correct the system's reasoning. This allows the system to revise user preferences and improve recommendations. Conversely, users may change their interactions based on certain situations or personal motivations, thereby manipulating the feedback that the recommender system receives. For example, users, when presented with a political viewpoint with which they disagree, may aggressively downvote a content creator on a video sharing service in order to signal to the system that they are uninterested in those viewpoints. Some users may delete their recent activity on a streaming service to avoid receiving related recommendations. Other users, sensitive to privacy concerns, may forgo the benefits of a recommender system and opt to view news stories in 'incognito' mode. However, such user behavior may result in an incorrect interpretation of user preferences by the system and have an adverse effect on recommendation performance.
In this research, we explore how users interact with recommenders based upon their cognitive model of the system. We hypothesize that a sophisticated understanding of the recommender system may influence how users choose to interact with it, thereby altering the feedback the system receives. Understanding such user behavior and the factors affecting it is crucial for several reasons, among them: (1) identifying user behavior and its impact on recommendation performance, and (2) designing systems that can leverage these behaviors to improve recommendation performance and the user's satisfaction with the system.
To test our hypothesis, we conduct a user study to elicit the users' perception and understanding of how the system operates, their attitude, and their interactions with the system along with the motivations behind them. To that end, we asked participants to describe recommender systems, how they work, and to provide reasoning as to why they may have received certain recommendations. We then investigate how different users respond to the recommendations to identify unconventional user behavior. Our results show that users possess a mature understanding of the system's functionality, allowing them to reason about and predict the recommender's behavior. We also found that users, based on their personal motivations, modify their interactions to steer the recommender system in desired directions. The insights obtained from this study provide a refined understanding of how users interact with recommender systems and the various factors that influence user behavior.

2. Related Work
Prior research on user perception of recommender systems has shown that users possess a cognitive model of how these systems work [3, 2].
Cognitive models or mental models can be described as the internal representations that users build based on their interactions with the target system [4, 5]. These models reflect the user's knowledge and beliefs about the system, allowing them to understand, reason about, and predict its behavior. Mental models vary in their fidelity. An ideal model of a target system must be accurate, consistent, and complete. However, Norman [4] observed that users' mental models of technological systems can be inaccurate, contain areas of uncertainty, or lack completeness. Thus, in order to successfully predict a system's behavior, the user's mental model must have some degree of technical validity. Despite the limitations of mental models, Doyle and Ford [6] concluded that they are enduring, accessible, and have great potential to account for human behavioral patterns and errors.
Research examining users' mental models in the context of recommender systems is sparse. Kulesza et al. [1] conducted an empirical study to explore the effects of users' mental model soundness on personalization in a music recommender system. They showed that users with a sound mental model of the recommender system's reasoning can help debug and personalize the intelligent agent. In another study, Kulesza et al. [7] explored ways to improve the users' mental model of the system and trust formation using explanations. They showed that making a recommender system's reasoning transparent using explanations helped users understand that reasoning and build trust in the system. In an exploratory study, Ngo et al. [2] elicited mental models of Netflix users and investigated their perception of transparency and control. The authors propose aligning the explanatory and interactive components of the system with the underlying recommendation algorithms and linking the system components to identified basic mental models to increase transparency and control. Eiband et al. [8] proposed a stage-based participatory approach for designing transparent recommender system interfaces. The study provides key insights for practitioners to integrate transparency into recommender system design, achieved by improving users' mental models of the recommender system through the use of explanation interfaces. Rader and Gray [9] investigated user understanding of algorithmic curation in Facebook's news feed. Analyzing survey responses of Facebook users, they found that users' beliefs about the system varied, and the survey participants demonstrated intuitive theories about how Facebook's news feed works. As with research on understanding mental models of recommender systems, these studies have not investigated the effect of users' knowledge on their behavior. On the other hand, research examining mental models in other domains has shown that a user's knowledge of a target system can influence their behavior [10, 11].
In this research, we explore users' perceptions and beliefs about how recommender systems work in a qualitative study. Next, we investigate the impact of users' mental models on their interactions with the system. Finally, we draw on the theory of planned behavior to understand the key determinants motivating such user behavior.
The theory of planned behavior [12, 13] states that the performance of a behavior is jointly driven by intentions and perceived behavioral control.
Intentions capture the motivational factors that influence behavior, such as the amount of effort and the degree to which an individual is willing to perform that behavior. Perceived behavioral control refers to the beliefs regarding the possession of the resources and opportunities required to engage in a behavior. Most notably, the perception of behavioral control plays an important role, impacting both the intentions and the actual behavior. In general, the theory of planned behavior details the determinants of an individual's decision to enact a particular behavior [14]. This theory has been widely used to understand and predict user behavior in a variety of domains, and to serve as a framework for behavior change interventions [15, 14]. Therefore, in this research, we examine how an individual's beliefs about the system's functionality, their attitudes, and their intentions affect online behavior.
The most popular methods to elicit mental models include diagramming exercises, think-aloud tasks, and verbal interviews. Carley and Palmquist [16] proposed that a representation of a mental model can be extracted from verbal text elicited through interviews. The identified verbal structure is a symbolic representation of an individual's cognition [16, 17, 18]. In line with this, we interviewed a sample of recommender system users to elicit their mental models using thematic analysis. Our work aims to examine the relationship between users' knowledge and their behavior, what impact such behavior might have on these systems, and how recommender systems might be designed to cope with, or even leverage, these behaviors.

3. Methodology
The purpose of this study is to identify and investigate the various factors influencing users' behavior toward recommender systems. To that end, we used thematic analysis, a well-known qualitative research method. We conducted in-depth interviews with participants to probe their understanding of the recommender system along with the rationale behind their online interactions.

3.1. Thematic Analysis
Thematic analysis is a widely used qualitative data analysis method for systematically identifying, analyzing, and interpreting themes within qualitative data [19]. The identified themes serve as a framework for organizing and reporting meaningful analytic observations relevant to the research question. Thematic analysis was well suited for our research: as specified by Braun and Clarke [20], this method is especially useful for examining the perspectives of different research participants, highlighting similarities and differences, and generating unanticipated insights. Thematic analysis has been successfully employed in the healthcare and information systems domains to uncover user perceptions of health applications and critical user experiences [21].

3.2. Participants
A total of 40 participants (24 female, 16 male) were recruited for the study using Reddit, a community-based discussion website. The age of the participants ranged from 19 to 45 years. Interviews typically took between 30 and 45 minutes, and each respondent received a compensation of USD 10 for their participation. The participants came from a wide range of professions including business owner, teacher, sales representative, engineer, and business analyst. Participants held diverse educational backgrounds and reported using recommenders for 4.79 hours a day on average.
The study focused mainly on applications that users frequently use on a daily basis (e.g., e-commerce and streaming platforms such as Amazon, YouTube, Netflix, and Spotify).

3.3. Study Procedure
In order to collect rich and detailed qualitative data, we conducted semi-structured interviews using Zoom, a video conferencing service [22]. Before each interview began, participants were briefed about the research and their informed consent was obtained as prescribed by the Institutional Review Board. The initial interview questions were designed to elicit the user's understanding of the recommender system and investigate their behavior. Accordingly, we probed the participants' perception of how recommenders work. Questions included: What is a recommender system? How does the system build recommendations? What information does it use? What are the goals and motivations of the system? Similarly, we asked participants to explain the system's rationale behind the various recommendations they receive on their personalized feed. Finally, the participants were prompted to describe their daily interactions with the system in detail. Participants answered these questions based on their own experience and understanding, citing examples from the recommenders they use regularly. By adapting the conceptual framework of the theory of planned behavior to our research context, coupled with the insights obtained through initial exploratory interviews with participants, we identified the key determinants affecting user behavior. The identified factors include the user's overall knowledge of recommenders, behavioral intentions, and attitudes towards the system. Hence, we investigated the relationships among the user's attitude towards the system, their perceptions and beliefs about how recommenders work, their online behavior, and the motivations behind it.

3.4. Data Analysis
We followed the six-phase approach to thematic analysis as specified by Braun and Clarke [23, 20]. Each interview was transcribed and analyzed using QSR NVivo 12 [24]. First, we performed multiple readings of the textual interview data to gain a comprehensive understanding of the content. In the second phase, we systematically analyzed the data by reading the participant responses analytically and critically to identify initial codes. Codes are semantic labels that represent the participant's interpretation of the data. Afterward, the identified codes were reviewed to find similarities and differences. We grouped similar codes into categories known as themes [23]. In the fourth phase, the discovered themes were further condensed into higher-level themes to ensure a distinct and coherent set. We recursively reviewed and revised the identified themes against the entire data set to verify that they adequately captured the data. In the next phase, we performed a deeper analysis to identify global themes. We further defined each theme in terms of its focus, scope, and purpose, so that each, in turn, builds on and develops its sub-themes. The resulting global themes were inspired by the conceptual framework of the theory of planned behavior. This recursive process of thematic analysis resulted in several distinct and coherent themes. In the last phase, we defined and named the different themes and developed a thematic map that describes the various factors influencing user behavior toward recommender systems.
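To make the coding hierarchy concrete, the sketch below uses hypothetical codes (an illustration only, not the study's actual NVivo codebook) to show how individual codes might roll up into sub-themes and then into global themes of the kind reported in Section 4.

```python
# Illustrative sketch with hypothetical codes: how open codes from interview
# transcripts can be grouped into sub-themes and global themes, mirroring the
# structure of the thematic map in Fig. 1. Not the study's actual codebook.
from collections import Counter

codebook = {
    "Attitude towards recommender system": {
        "Positive": ["useful", "saves time", "helps discovery"],
        "Negative": ["too many ads", "privacy concerns"],
        "Neutral": ["indifferent"],
        "Mixed": ["depends on the service"],
    },
    "Perceived reasoning for recommendations": {
        "User-item similarity": ["based on watch history", "similar users"],
        "Casual browsing": ["tracked idle scrolling"],
        "Accidental interactions": ["one-off click"],
    },
    "User behavior": {
        "Ignore or hide": ["scroll past", "hide ad"],
        "Provide feedback": ["like/dislike", "rate item"],
        "Delete information": ["clear watch history"],
    },
}

def codes_per_theme(book):
    """Count how many individual codes roll up into each global theme."""
    return Counter({theme: sum(len(codes) for codes in subs.values())
                    for theme, subs in book.items()})

for theme, count in codes_per_theme(codebook).most_common():
    print(f"{theme}: {count} codes")
```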
4. Results
The results of the thematic analysis are summarized in the form of themes along with their sub-themes and individual codes. Our thematic map presents a graphical summary of the various factors that influence user behavior toward recommender systems (Fig. 1). Five main themes emerged from the data analysis, namely the user's attitude towards the recommender system, perceived reasoning for recommendations, user behavior, perceived behavioral control, and perceived outcome. In the remainder of this section, we describe these themes in detail along with direct quotes and statements from the participants. While the themes are presented as discrete, some overlap of content exists between them.

Figure 1: Thematic map summarizing the various factors that influence user behavior.

4.1. Attitude towards Recommender System
Attitude refers to the user's overall feeling towards a recommender system [25]. In general, attitudes reflect the user's thoughts and opinions that determine their choices and behavior. User attitudes can vary based on several factors. Based on our analysis, we found that users' attitudes varied depending on the application, the nature of the recommendations they receive, perceived utility, and overall experience of interacting with the system. In general, participants expressed dissatisfaction towards various types of recommendations that did not cater to their current needs or interests. These recommendations, described in Section 4.2, include targeted advertisements (personalized and non-personalized), popular or trending recommendations, and irrelevant or partially relevant recommendations. Here, we divide this theme into four sub-themes, namely positive, negative, neutral, and mixed, and describe each in detail.

4.1.1. Positive
In general, users described the recommender system using the terms 'useful', 'helpful', and 'convenient'. Participants with a positive attitude towards the system described the various benefits of using recommender systems. They stated that the recommender system helps them find items they are looking for, discover new items, and offers a variety of options while saving them time and effort. One user described, "I think it's pretty good, [...] it's very useful for a lot of people because they don't have to always go searching for something. There's something there, recommended to them. Even if they're a new user there's things that they could look at and click on, and then it'll show them more related to that, just to get them started on the system" [P6]. Another user expressed, ".... It helps me find better deals. It helps me find out about new products and it helps me shop conveniently. So, I think it's doing a great job" [P10].

4.1.2. Negative
While most users conveyed a positive experience with recommender systems, a few users had a negative impression of the system. They expressed frustration with regard to targeted advertisements, sponsored content, and the invasion of their privacy. One user mentioned, "I think [...] that it's not because they care about the customer and want to [provide] a more personalized experience, the thing is it's just for their benefit, so they don't really take that into account. I mean the [ads] are all over your feed like it's just too much. It's excessive, it's obnoxious" [P19]. Another user stated, "I think it's very not private. I guess I am aware of what I signed up for.
The app itself, I just know that all of our information is going to advertisers and it's being sold off because even on YouTube, I notice when I get advertisements sometimes it's catered to me as well" [P26].

4.1.3. Neutral
A few users held a neutral stance on the topic. These users were aware of but indifferent to the advantages and drawbacks of using a recommender system. We found that users with a neutral stance typically spent less time on average using recommender systems than the others. One user expressed, "I guess I'm neutral, when it's helpful, it's nice but a lot of the times I already know what I'm looking for when I use something like Amazon or [Netflix]. So, I'm kind of just looking for what I need, I don't dislike it necessarily. And I know that I'm being tracked in ways I'm probably not aware of, but I guess, my general thought is that like millions of people use Netflix, so it can't be that bad" [P3].

4.1.4. Mixed
A subset of users held both positive and negative impressions of the recommender system. We also noticed that some users were uncertain or indecisive about their stance towards the system. A user described, "... It's dependent on the service because something like Netflix, [...] where they get their money from paid subscriptions then the recommendation service will be more geared towards trying to make sure you enjoy the product as much as possible. If it's a service that gets its money based off of ad revenue then ... it can be a negative to the user because things that grab your attention aren't necessarily good for you" [P12].

4.2. Perceived Reasoning for Recommendations
The data analyzed within this theme represents the user's perception of the recommender system's knowledge, reasoning, and motivation. Participants described the recommendations they received from the various applications using the terms 'useful', 'interesting', 'relevant', 'somewhat relevant', 'irrelevant', and 'annoying'. Specifically, we asked participants to explain the system's reasoning for the recommendations they regarded as 'irrelevant' or 'annoying'. Here, we describe their responses in detail.

4.2.1. User-Item Similarity
The majority of the users described that the items recommended to them were similar to what they had watched, browsed, or purchased in the past. They further explained that the system uses this data to pick item features in order to recommend new items. However, these new recommendations did not necessarily align with their tastes and preferences. One user described, "It is probably based upon [...] a certain character or a certain thing that I watched and they just pulled from there and assumed that I wanted to watch another certain type of a children's movie" [P10]. Another Facebook user described, "May be [Facebook] looks at certain keywords in the posts and comments that I am making. I tried looking through my statuses to see if there was anything that I can identify as to why Facebook thinks I am depressed" [P23]. On the other hand, some participants explained that the recommendations were based on the preferences of other 'similar' users. While some users perceived this idea solely based on the explanations of recommendations (e.g., "users who bought ... also bought ..."), other users described in detail that the system groups users based on similar interests and demographics to recommend items to the target user. One user expressed, "Sometimes when I get those kinds of [recommendations], I would see a ...
little note underneath that said "this was watched by people who watch this video". But that wouldn't necessarily mean that was something that I would also want to watch" [P1]. A Netflix user explained, "Maybe people with similar demographics as me are interested in them, like based on what other users of similar demographics like click on and watch and like spend time on. The recommendation system will pull that information and suggest that" [P37]. Similarly, another user described in detail, "I think it probably has to do with other people in the population that fall into the similar category as you, kind of get lumped together and they might have purchased the same thing or something similar and they purchase another item to go along with it or looked at another item. So, they make that suggestion based off of that or like I said people just previously making the purchase. So, these three things together" [P34]. While many participants expressed uncertainty regarding the specifics of how the system generates recommendations, these responses show that users possess an innate and general understanding of the different recommendation algorithms, including collaborative filtering, content-based filtering, and model-based methods [26, 27, 28].
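To make these algorithm families concrete, the minimal sketch below (a toy illustration only, not the systems participants actually used) contrasts the two signals participants described: a neighborhood-style collaborative signal ("users similar to you also watched this") and a content-based signal ("this is similar to items you watched").

```python
# Toy illustration of the two recommendation signals participants described.
# The data and features are made up; real systems are far more elaborate.
import numpy as np

# Interaction matrix: rows = users, columns = items (1 = watched/purchased).
R = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 0, 0, 0],
], dtype=float)

# Item features for the content-based signal (e.g., one-hot genre flags).
item_features = np.array([
    [1, 0, 0],   # item 0: comedy
    [1, 0, 0],   # item 1: comedy
    [0, 1, 0],   # item 2: drama
    [0, 0, 1],   # item 3: documentary
    [1, 1, 0],   # item 4: comedy/drama
], dtype=float)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def collaborative_scores(R, user):
    """'People similar to you also watched...': weight other users' histories
    by their similarity to the target user, then suppress items already seen."""
    sims = np.array([cosine(R[user], R[v]) if v != user else 0.0
                     for v in range(R.shape[0])])
    scores = sims @ R
    scores[R[user] > 0] = 0.0
    return scores

def content_based_scores(R, item_features, user):
    """'Similar to items you watched...': build a feature profile from the
    user's consumed items and score unseen items against that profile."""
    profile = R[user] @ item_features
    scores = np.array([cosine(profile, f) for f in item_features])
    scores[R[user] > 0] = 0.0
    return scores

print("collaborative :", np.round(collaborative_scores(R, user=0), 2))
print("content-based :", np.round(content_based_scores(R, item_features, user=0), 2))
```

Model-based methods such as matrix factorization [26] replace these direct similarity computations with learned latent factors, but the intuitions participants voiced map onto the same two signals.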
4.2.2. Casual Browsing
Users mentioned the idea of casual browsing, stating that sometimes they would mindlessly browse social media applications and retail websites when they feel bored. However, they realized that the system would track their activity, such as the items they looked at, searched for, or clicked on, and the time spent browsing different items, and use that information to recommend new items that they may not necessarily be interested in. One user explained, "... [On Amazon] I scroll through and look at the 'deals of the day' which have just about everything in there. So, I feel like the amount of time that I spend looking at products, ... I might click on it just to see what it is but it's not really something that I want I'm just kind of interested to see what it is used for. And I think they store that information and then use that to make those recommendations that are not necessarily applicable to me" [P34].

4.2.3. Accidental Interactions
Several participants mentioned that the recommendations were based on items they had accidentally clicked on or searched for in the past. Similarly, a few users reported receiving related recommendations right after watching a certain video once or buying a particular item one time. Lastly, users also mentioned that the system would recommend similar items even after they had purchased that particular product. One user described, "I may have accidentally clicked on a product that I didn't like. Therefore, that might have altered my interests according to the recommendation system" [P32]. Another user said, "I think it may have happened because I did click on one similar genre and then it just kept showing me that genre or something" [P7].

4.2.4. Advertisements, Marketing, and Sponsored Content
Participants in our study demonstrated an understanding of the goals and motivations of the recommender system. They realize the motives behind the recommendations. They recognize when the system is promoting a new song or pushing a certain product. A user described, "With Facebook, people who are paying to get their stuff out in front of more people. Their stuff pops up first. Even if it's not necessarily something I would actually like. They are paying for the ads" [P38]. Another user mentioned the notion of 'marketing', stating, "The [recommendations] may not be relevant, but they still serve it up because, this is one of their marketing functions. They are trying to cross promote and get you to purchase more or engage you in purchasing more. Or even putting that thought into your head that, 'Hey, did you think about buying a bag? Did you think about buying a mouse?'" [P39]. Similarly, one user suggested that the system is promoting certain items so they get recommended to all users, "[...] to boost some products and services and perhaps they recommend it to everyone. It's ... not just specifically to me" [P4].

4.2.5. Popular and Trending Recommendations
Users may often find items that are popular amongst all the other users across the platform irrelevant. These items may be products with the most sales, movies with the highest ratings, or videos with the most views. Similar to popular recommendations, trending recommendations focus on popularity within a particular time frame, for instance, a newly released product that suddenly surges in sales. One participant shared a similar concern, stating, "I think maybe ... that content is still popular amongst other users that are not like myself maybe like overall across all the users on an application there. So, they try to push me to try to watch it and see if I like it too. Even if [it is not related to my usual] habits" [P26]. Another user reasoned, "I think some of the videos that are recommended were very popular or trending videos that a lot of people had seen and were on my feed" [P1].

4.2.6. Change in Preferences or Mood
Users' interests in items may change over time. Similarly, users may have different preferences for items depending on their current situation. Participants conveyed a similar experience with recommendations. One user stated, "I was previously interested in them [items] but now I'm not interested [...] anymore .... the recommendation system may still has that type of item as I'm interested in them" [P37]. Another user expressed, "Yeah, YouTube would recommend me some clips of ... 'American Bad' but I'm not in the mood so even like I might have been a couple days ago" [P28].

4.2.7. Insufficient Data on Users
A critical challenge for any recommender system is recommending items to new users or users with an insufficient amount of activity on the platform [29]. In this case, the system lacks the valuable history of the user's interactions with the system on which to base the recommendations. Participants in our study reflected a similar understanding of this concept. Users mentioned that the system probably does not have sufficient interaction data to recommend items and is therefore trying to understand user preferences by recommending different genres. On the other hand, some users described the system as 'inferior', stating that the recommender system may not be as good. Participants used the term 'sliding scale' to describe the accuracy of recommendations, explaining that the recommendations may not be perfectly relevant. One user expressed, "I think that [...] the reason is maybe the system isn't as good, however, that looks at capturing what you are interested in, based on interaction on the platform. But also, I think it takes time to learn someone, and to learn someone's behavior. I think the more you interact on a platform, the better the recommendations, the more accurate they will be.
Therefore, if you are getting a lot of inaccurate stuff, maybe you just haven't used the platform that much" [P31]. Similarly, another reasoned, "Maybe ... it's trying to figure out what I liked. It's trying to understand the user trying to understand what type of products or what type of music or whatever, like user understanding" [P13].

4.2.8. Mixed User Preferences
A user who shares his account with his family members reasoned that he finds the recommendations irrelevant, stating, "[...] I share a Netflix account with another family member and there's recommendations because you know they are on the same account. You know using, searching and watching. So, it's kind of their preferences mixed in the recommendations" [P7].

4.3. User Behavior
The data analyzed within this theme represents how different users interact with recommender systems. Our analysis suggests that users interact with the system differently based on several factors. These factors include the user's behavioral intentions, their knowledge of the recommenders, and their attitude towards the system. We found that users with strong attitudes (positive, negative, and mixed) develop behavioral intentions. These behavioral intentions, together with the user's knowledge of the available system features, their perceived behavioral control, and their beliefs about action-interaction outcomes, guide their actual behavior. In this section, we present the different ways users respond to recommendations along with their intentions.

4.3.1. Ignore or Hide Recommendations
The majority of the users mentioned that they 'do nothing'. A user described, "I usually just ignore them or like, I use the app and I get like notifications and I know I can turn it off, but I am kind of lazy about stuff like that, so I just kind of discard it or dismiss it or like scroll past it" [P3]. On the other hand, a few users mentioned that they view their recommendations, stating that the recommendations have helped them discover something new and interesting in the past. A user expressed, "I look at it. Just see what it's all about and sometimes this is a good way to discover something I have never used or seen before also" [P4]. Some participants reported using the application features to explicitly communicate their dislikes. One user mentioned, "I will hit 'hide ad' and 'don't show notifications' of this ad or something like that." Our analysis shows that users are aware of the various applications' feedback mechanisms, and use them to signal their preferences to the system. One user mentioned, "I usually ignore them or if I don't want to see it in my recommended section, [On YouTube] I could click and say I'm not interested in this [...] then they usually take the video away" [P9]. A Facebook user expressed, "I click on the x icon [...] and then afterwards it [asks] why? I click irrelevant and then it says you will no longer receive recommendations like this in the future" [P17].

4.3.2. Refresh the Page for New Recommendations
A few participants mentioned refreshing the page until they see recommendations they would like. One user mentioned, "I usually just like refresh the page which always gives me new recommendations and I will just refresh it until I find something I like" [P28]. Another user described, "... [On YouTube], I usually just ignore it or like refresh to have ... a new set of recommended videos. For Amazon, I will just keep scrolling and move on.
Sometimes if I don't like what they are recommending, I just filter more to have like those type of things" [P34].

4.3.3. Provide Feedback
Data analyzed within this sub-theme shows that users communicate their preferences to the recommender system through various interactions. These interactions include liking or disliking a video, rating items, filling out surveys, creating a music playlist, etc. Users believe that these interactions allow the recommender system to recognize their actual preferences and improve the recommendations. Conversely, we also found that some users habitually upvote, downvote, or rate items to express their disposition without any behavioral motivation. A video streaming service user expressed, "I actually do [press like or dislike] a lot on YouTube because it helps to really predict what's going to play and not play. So, I hate when things play that I don't like" [P18]. A user described, "On Google play, if I'm listening to a radio that's very specific ... and I ... ask for workout music. It'll play the music but as soon as I click a thumbs down it'll stop the song, skips the songs and then like never play the song again. I click the thumbs down button or if there is anything that says like I don't want to see this anymore then I'll go ahead and do that" [P14]. In contrast, one user expressed, "I usually just add them to ... my favorite playlist. I like [press like] them". When asked if she had any motivation for those interactions, she further explained, "I never really thought about that, I think it probably does but I just always assumed as long as I click on a video to watch it then they would just like show me another video that's related" [P26].

4.3.4. Send Active Signals
A subset of users explained that sending active signals to the recommender system helped them avoid irrelevant recommendations. These signals include 'actively' searching for items, clicking on items, adding or removing items from a playlist, or changing one's preference profile to signal the recommender system. This differs from the 'provide feedback' sub-theme above: here, the user is intentionally spending a session sending interactive signals. They expressed that these actions enable the recommender system to realize their actual preferences and help steer the recommendations in the desired direction. One user stated, "Probably change the profile, ... like let them know [what kind of things you like]" [P30]. Another user mentioned sending signals to the recommender system, "I try to skip through the [recommendations], or I will actually actively pick something that is the direct opposite of it. So, [the system] would be like, oh okay you like MSNBC not Fox News" [P29]. Some users mentioned adding items to their playlist on a music streaming service to signal the system about their preferences. For instance, a user expressed, "I [press] like, dislike or 'favorite' [on Spotify], or I can interact in a different way [...] I think it keeps track of that" [P22].

4.3.5. Limit Interactions
A subset of users mentioned that they refrain from casually interacting with the recommender in a way that would incorrectly signal the system about their interests. Furthermore, they described that the system tracks their activity on the platform. Therefore, they refrain from interacting with any recommendations unless they align with their interests.
A user expressed, "I think a lot of your user interaction is tracked through the number of clicks, the number of searches you performed, types of searches you performed, ... the departments you have looked into. If [a user] does not like the recommendations, I guess they should just avoid searching for that product. ... I mean it's kind of in the user's control" [P25]. Similarly, another user described, "... If I search something new something out of what I revolve around maybe. I don't have a cat, I certainly am searching cat food for a friend, ... I feel like searching things kind of effects it, so I guess avoiding it" [P22].

4.3.6. Delete Information
Participants reported deleting items from their browsing history, search history, and playlists. Similarly, some users mentioned that they remove items from their 'watch later' list. When asked about the reason for these actions, users responded that the recommender system likely uses this information to generate new recommendations. One user described, "I go to my history and delete my search list and then I unsubscribe to some of the channels and then I remove a lot of videos from my watch later. That helps me like you know like by 50% may be" [P13].

4.3.7. Turn Off Data Collection
A few users mentioned the idea of turning off their browsing history or watch history. While some users were certain about the outcome of these behaviors, other users expressed this idea only vaguely. One user explained, "For something like ... internet websites, ... I go through stuff like my Google account settings to turn off all the personalized ad revenue and data collection because I personally do not enjoy having lots of data collected about me by faceless organizations" [P12]. Another user discussed, "I think you can turn off notifications. I don't know, I know on Amazon you can turn off the browsing history and then, I'm guessing [...] even if you don't see a button, you may be can like write to them. Um, I usually just ... ignore it versus trying to turn it off" [P3].

4.3.8. Unsubscribe
A few users of streaming services like YouTube and Spotify mentioned the idea of unsubscribing from artists and channels. They described that unsubscribing from certain channels or artists helped them avoid annoying and irrelevant recommendations. For instance, P15 stated, "On YouTube, I unsubscribe to the channel if I'm getting really annoying notifications about it". Another user stated, "... [Deleting search history] doesn't help really. It helps me when I unsubscribe to the channels and then delete it from my watch later" [P13].

4.3.9. Use Another Account
A couple of users also mentioned using a different account to avoid irrelevant recommendations. One user explained, "... I have 2 accounts, so, I just try using the other one for a while and leaving the other one passive" [P15].

4.4. Perceived Behavioral Control
Perceived behavioral control refers to the user's perception of the ease or difficulty of performing the behavior of interest. According to the theory of planned behavior, perceived behavioral control, together with behavioral intention, can be used directly to predict behavioral achievement. However, the likelihood of a given behavior is also governed, to some extent, by the resources and opportunities available to a user. Our analysis of the participants' responses shows that users, based on their understanding of and interactions with the system, have identified several resources that enable them to engage in the aforementioned behaviors.
These resources include the use of available system features to provide feedback to the recommender system, such as rating items, liking or disliking a video, filling out surveys, etc. However, users have also discovered unconventional methods to obtain desired recommender system behavior. These methods include turning data collection off, deleting user activity data like browsing history, search history, or purchase history, actively interacting with items of interest, and limiting interactions. Users' perceptions of behavioral control varied across the sample. Most notably, we found that users' engagement in these behaviors was largely impacted by their intentions. Some users, despite possessing knowledge of the available resources and action-interaction outcomes, chose not to carry out these behaviors based solely on their intentions, while other users reported carrying out these behaviors in a variety of situations.

4.5. Perceived Outcome
Perceived outcome refers to the users' beliefs about the expected outcome of their interactions with the system [12]. Such beliefs are formed based on the users' experience of interacting with the system and drawing inferences over time. Our analysis shows that users held various perceptions of the outcomes of their behavior. Based on the anecdotal evidence presented in Section 4.3, it can reasonably be inferred that some users attempt to steer the recommender system's behavior in desired directions. We observed that most participants in our study believed that providing feedback through various interactions, such as liking or disliking videos, rating products, and adding or deleting songs from a playlist, helps improve the quality and accuracy of their recommendations. Conversely, some users perceived that limiting interactions with the system and aggressively upvoting or downvoting items allows them to avoid irrelevant recommendations. Similarly, some users asserted that they could personalize the recommendations to their liking by controlling the information they allow the recommender system to perceive. Others reasoned that these behaviors enabled them to achieve a diverse and interesting recommendation feed. Some users held a positive outlook towards the system tracking their activity. They believe that the data is used to run the applications and improve recommendations. In contrast, other users believed that deleting user activity data on the application, turning off data collection features, or using another account would preserve their privacy.

5. Limitations
Our study has two notable limitations. First, subjects were recruited from Reddit. These individuals are likely more technologically savvy than the average internet user. Second, interview data were analyzed by coders with a background in recommender systems. However, to overcome researcher bias and ensure reliability, two researchers analyzed the data independently. Any coding discrepancies between the researchers were reconciled by consensus decisions. Despite these limitations, this study provides strong subjective evidence of the presence of unconventional user behavior across many domains of recommender systems. Finally, we discuss future work to further explore this research direction.

6. Discussion
Our findings, based on a rigorous thematic analysis, demonstrate that users possess an intuitive and sophisticated understanding of the recommender system's behavior.
We observed that user attitudes were influenced by the user's perception of the system's utility and their overall experience of interacting with it, including the intrinsic nature of the recommendations served to the user. The study showed that the user's conceptualization of how recommender systems work has a significant influence on their behavior. In summary, we found that user behavior is informed by the user's beliefs and knowledge of action-interaction outcomes, perceived behavioral control, and their intentions and attitude towards the system.
We analyzed user interaction with recommender systems, specifically how users respond to irrelevant recommendations (Section 4.3). Throughout the interviews, we observed three contrasting approaches to end-user debugging and control used by the participants [30, 1, 31]. In the first case, participants were not fully aware of all the resources available to them to provide feedback to the system. For instance, P26 did not realize the function of the like button; however, she described using the 'not interested' feature to express her preference: "I never really thought about that ['like' function on YouTube]. I just always assumed as long as I click on a video to watch it then they would just like show me another video that's related. [...] Yeah I think like sometimes YouTube recommends me scary videos that I just don't want to see it. There is usually an option for me to take it off so I [...] click that" [P26]. Therefore, users who do not possess knowledge of the system's functionality may not be able to enjoy the intended utility of recommenders. Similarly, some participants expressed a mature understanding of how feedback functions work and used them accurately to signal their preferences. P29 explained in detail, "On Pandora [...] basically by saying thumbs up, you're saying I want to hear more like this. Now, it's not necessarily by the same artist, they'll actually take a look at the components of the song. Like, does it have a Caribbean beat, is it a rap song or a musical? So, you have to be careful because sometimes if a band is doing a cover of something then you're like oh I don't really like that cover they just did, you thumbs down it. And then all of a sudden that band disappears from your playlist. So, apparently my wife informs me you can actually pick I don't want to hear this track versus thumbs downing that, but it is not intuitive and difficult to find." In contrast, a few participants described using unconventional interaction methods to avoid irrelevant recommendations, as described by P13, "[On YouTube] I've tried clearing my cookies, that didn't work. Well I would just not search for it for a while. I go to my history and delete my search list and then I unsubscribe to some of the channels and then I remove a lot of videos from my watch later. That helps me like you know like by 50%" [P13]. We speculate that such user behavior may have an adverse effect on recommendation performance and may indirectly affect the user's perceived usefulness, trust, and overall satisfaction with the system.
Our research has several implications. First, we encourage researchers to establish the link between the user's mental model of the recommender system and their online behavior. Since user behavior is both an input to recommendation algorithms and constrained by them, unconventional user behavior may potentially be detrimental to recommendation performance.
Second, designing systems that can identify and leverage these behaviors can significantly improve recommendation performance. Third, we suggest examining discrepancies between the users' mental model and the system's actual behavior to identify incorrect user beliefs and assumptions about the system's functionality; resolving the identified discrepancies can help inspire trust, confidence, and satisfaction with the system. Finally, we suggest using explanations to help users fully comprehend the functionality of feedback mechanisms, and implementing controllable interfaces that allow users to revise their preferences in an accessible and intuitive manner.

7. Conclusion and Future Work
In this paper, we identified and explored the various factors affecting user behavior toward recommender systems. To that end, we interviewed forty recommender system users from Reddit in a qualitative user study. Our analysis of their responses demonstrates that users' behavior towards the system can be influenced by their attitude towards the system, their perception and reasoning about how the system operates, and their motivations. Our findings contribute to a refined understanding of user behavior and demonstrate the relationships among the several factors affecting it. For future work, we plan to develop a comprehensive theoretical framework of the users' mental models of the system. We imagine a framework that will help recommender systems to identify and leverage these behaviors to enhance the users' experience and improve the system's performance. Finally, we plan to experimentally evaluate the impact of the identified user behaviors in a variety of domains.

References
[1] T. Kulesza, S. Stumpf, M. Burnett, I. Kwan, Tell me more?: The effects of mental model soundness on personalizing an intelligent agent, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, 2012, pp. 1–10.
[2] T. Ngo, J. Kunkel, J. Ziegler, Exploring mental models for transparent and controllable recommender systems: A qualitative study, in: Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization, 2020, pp. 183–191.
[3] M. F. Ghori, A. Dehpanah, J. Gemmell, B. Mobasher, Does the user have a theory of the recommender? A pilot study, 2019.
[4] D. A. Norman, Some observations on mental models, Mental Models 7 (1983) 7–14.
[5] D. E. Rumelhart, D. A. Norman, Representation in memory, 1983.
[6] J. K. Doyle, D. N. Ford, Mental models concepts for system dynamics research, System Dynamics Review: The Journal of the System Dynamics Society 14 (1998) 3–29.
[7] T. Kulesza, S. Stumpf, M. Burnett, S. Yang, I. Kwan, W.-K. Wong, Too much, too little, or just right? Ways explanations impact end users' mental models, in: 2013 IEEE Symposium on Visual Languages and Human Centric Computing, IEEE, 2013, pp. 3–10.
[8] M. Eiband, H. Schneider, M. Bilandzic, J. Fazekas-Con, M. Haug, H. Hussmann, Bringing transparency design into practice, in: 23rd International Conference on Intelligent User Interfaces, 2018, pp. 211–223.
[9] E. Rader, R. Gray, Understanding user beliefs about algorithmic curation in the Facebook news feed, in: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015, pp. 173–182.
[10] Y. Zhang, The influence of mental models on undergraduate students' searching behavior on the web, Information Processing & Management 44 (2008) 1330–1345.
[11] R. Kang, L. Dabbish, N. Fruchter, S.
Kiesler, "My data just goes everywhere:" User mental models of the internet and implications for privacy and security, in: Eleventh Symposium on Usable Privacy and Security (SOUPS 2015), 2015, pp. 39–52.
[12] I. Ajzen, et al., The theory of planned behavior, Organizational Behavior and Human Decision Processes 50 (1991) 179–211.
[13] I. Ajzen, From intentions to actions: A theory of planned behavior, in: Action Control, Springer, 1985, pp. 11–39.
[14] M. Conner, C. J. Armitage, Extending the theory of planned behavior: A review and avenues for further research, Journal of Applied Social Psychology 28 (1998) 1429–1464.
[15] G. Godin, G. Kok, The theory of planned behavior: A review of its applications to health-related behaviors, American Journal of Health Promotion 11 (1996) 87–98.
[16] K. Carley, M. Palmquist, Extracting, representing, and analyzing mental models, Social Forces 70 (1992) 601–636.
[17] G. Fauconnier, Mental Spaces: Aspects of Meaning Construction in Natural Language, Cambridge University Press, 1994.
[18] J. F. Sowa, Conceptual Structures: Information Processing in Mind and Machine, Addison-Wesley Longman Publishing Co., Inc., 1984.
[19] R. E. Boyatzis, Transforming Qualitative Information: Thematic Analysis and Code Development, Sage, 1998.
[20] V. Braun, V. Clarke, Using thematic analysis in psychology, Qualitative Research in Psychology 3 (2006) 77–101.
[21] T. Kari, S. Koivunen, L. Frank, M. Makkonen, P. Moilanen, Critical experiences during the implementation of a self-tracking technology, in: PACIS 2016: Proceedings of the 20th Pacific Asia Conference on Information Systems, ISBN 9789860491029, Association for Information Systems, 2016.
[22] M. M. Archibald, R. C. Ambagtsheer, M. G. Casey, M. Lawless, Using Zoom videoconferencing for qualitative data collection: Perceptions and experiences of researchers and participants, International Journal of Qualitative Methods 18 (2019) 1609406919874596.
[23] V. Braun, V. Clarke, Thematic analysis, in: APA Handbook of Research Methods in Psychology, Volume 2, 2012, pp. 57–71.
[24] L. Johnston, Software and method: Reflections on teaching and using QSR NVivo in doctoral research, International Journal of Social Research Methodology 9 (2006) 379–391.
[25] P. Pu, L. Chen, R. Hu, A user-centric evaluation framework for recommender systems, in: Proceedings of the Fifth ACM Conference on Recommender Systems, 2011, pp. 157–164.
[26] Y. Koren, R. Bell, C. Volinsky, Matrix factorization techniques for recommender systems, Computer 42 (2009) 30–37.
[27] B. Sarwar, G. Karypis, J. Konstan, J. Riedl, Application of dimensionality reduction in recommender system – a case study, Technical Report, University of Minnesota, Department of Computer Science, 2000.
[28] F. Ricci, L. Rokach, B. Shapira, Introduction to recommender systems handbook, in: Recommender Systems Handbook, Springer, 2011, pp. 1–35.
[29] A. I. Schein, A. Popescul, L. H. Ungar, D. M. Pennock, Methods and metrics for cold-start recommendations, in: Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2002, pp. 253–260.
[30] N. Tintarev, J. Masthoff, Designing and evaluating explanations for recommender systems, in: Recommender Systems Handbook, Springer, 2011, pp. 479–510.
[31] T. Kulesza, S. Stumpf, M. Burnett, W.-K. Wong, Y. Riche, T. Moore, I. Oberst, A. Shinsel, K.
McIntosh, Explanatory debugging: Supporting end-user debugging of machine-learned programs, in: 2010 IEEE Symposium on Visual Languages and Human-Centric Computing, IEEE, 2010, pp. 41–48.