The Immunity of Users' Item Selection from Serial Position Effects in Multi-Attribute Item Recommendation Scenarios

Thi Ngoc Trang Tran¹, Carmen Isabella Baumann², Alexander Felfernig¹ and Viet Man Le¹
¹ Institute of Software Technology, Graz University of Technology, Austria
² Graz University of Technology, Austria

Abstract
Serial position effects are triggered in recommendation scenarios where users focus on evaluating items shown at the beginning and at the end of a list. In this paper, we analyze these effects in the context of multi-attribute item recommendation scenarios where the recommended items are presented to users in the form of a list of relevant attributes. We conducted a user study in different item domains to examine whether users' item selection is affected by the order of the attributes of the recommended items presented to them. The experimental results show that the order of the attributes does not affect users' item selection. When selecting a recommended item, users tend to focus on evaluating the values of the attributes that reflect their preferences for the desired item but do not care about the order of the attributes. This finding brings us to the conclusion that, in the context of multi-attribute item recommendation scenarios, the selection of a recommended item from a list of candidate items is immune to serial position effects.

Keywords
Recommender Systems, Human Decision Making, Decision Biases, Serial Position Effects, Multi-Attribute Items, Item Selection

IntRS '21: Joint Workshop on Interfaces and Human Decision Making for Recommender Systems (2021), Amsterdam, September 25 or October 1, 2021.
ttrang@ist.tugraz.at (T. N. T. Tran); carmen.baumann@student.tugraz.at (C. I. Baumann); alexander.felfernig@ist.tugraz.at (A. Felfernig); vietman.le@ist.tugraz.at (V. M. Le)
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

1. Introduction

Serial position effects are decision biases triggered when items are presented in the form of a list [1, 2]. These biases usually occur in single-user recommendation scenarios, where users tend to focus on evaluating items shown at the beginning and at the end of a list [3]. Serial position effects have also been shown to influence the decision-making behavior of groups of users in sequential decision scenarios where a group of users has to make a series of decisions [4].

In this paper, we further analyze the impact of serial position effects in the context of multi-attribute items, i.e., items that are characterized by a list of attributes. We find that existing studies investigate serial position effects only with a focus on the position of the recommended items themselves [5, 6, 7], whereas the influence of such effects on users' item choices in scenarios where the recommended items are described by a list of relevant attributes has not been studied yet. With this study, we go one step further by investigating correlations between the order of the recommended items' attributes and the item selection behavior of users. In this context, we consider a recommendation scenario in the digital camera domain where a camera is described by a list of attributes such as sensor, megapixels, image resolution, storage, zoom, price, weight, battery, GPS, and face detection. Each attribute is assigned a specific value (e.g., sensor = 17.3 × 13, megapixels = 35, image resolution = 5148 × 3888, storage = 128 GB, zoom = 20×, price = 879 Euros, weight = 425 g, battery = BLS-50 lithium-ion battery, GPS = yes, and face detection = yes). A user has specified his/her requirements for the attributes of the desired item, for instance: "I am looking for a camera that can take photos with at least 21 megapixels, the weight should be lower than 600 grams, and the price should not exceed 1200 Euros." Based on the user's requirements, the system selects several items and shows them to the user. Each item is presented in the form of a list of attributes and corresponding values. The user then selects the item that best suits his/her specified requirements. In such a scenario, we are interested in examining whether the order of the attributes triggers different choices concerning the recommended items.
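To make this scenario more tangible, the following minimal Python sketch shows one way such multi-attribute items and user requirements could be represented and filtered. It is purely illustrative and not part of the study: the item values and all identifiers (cameras, requirements, satisfies) are made up, while the thresholds follow the example requirements above.

```python
# Illustrative sketch only: items as attribute-value dictionaries,
# filtered against the user's stated requirements.
cameras = [
    {"name": "Camera 1", "megapixels": 24, "weight": 425, "price": 879},
    {"name": "Camera 2", "megapixels": 20, "weight": 610, "price": 1100},
    {"name": "Camera 3", "megapixels": 35, "weight": 480, "price": 1150},
]

# Requirements from the example: at least 21 megapixels, lighter than
# 600 grams, and a price of at most 1200 Euros.
requirements = {
    "megapixels": lambda v: v >= 21,
    "weight": lambda v: v < 600,
    "price": lambda v: v <= 1200,
}

def satisfies(item, reqs):
    """Return True if the item fulfills every stated requirement."""
    return all(check(item[attr]) for attr, check in reqs.items())

recommended = [c for c in cameras if satisfies(c, requirements)]
print([c["name"] for c in recommended])  # Camera 1 and Camera 3 remain
```

Each remaining alternative would then be shown to the user as its full list of attribute-value pairs; whether the order of that list influences the final choice is exactly the question investigated in this paper.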
The contribution of our paper is to identify a way of presenting the attributes of recommended items to users (in the context of multi-attribute item domains) that makes item selection easier and more convenient and thereby speeds up users' decision-making processes.

The remainder of the paper is organized as follows. In Section 2 and Section 3, we present a summary of related work and discuss our research question, the corresponding recommendation scenarios, and the variants of attribute order. In Section 4, we present the essential steps of the user study conducted to answer the research question. The results and discussions regarding the research question are presented in Section 5. Finally, we conclude the paper and discuss open issues for future work in Section 6.

2. Related Work

Serial position effects (also known as primacy/recency effects) describe the tendency of a user to recall the items shown at the beginning and at the end of a list more often than those in the middle [3, 5]. These effects can also change the selection behavior of users when interacting with recommender systems. For instance, in personnel decision making, Highhouse and Gallo [6] found that candidates interviewed at the end of a recruitment process have a higher probability of being selected than other candidates. Stettinger et al. [7] investigated serial position effects in the restaurant domain by analyzing users' restaurant reviews. The authors show that different arrangements of the same arguments can lead to significantly different perceptions of restaurant attractiveness.

Serial position effects also affect group recommendation scenarios. Tran et al. [4] investigated the influence of these effects when the same group of users has to make a sequence of decisions in different item domains (e.g., low-involvement and high-involvement item domains). The authors analyzed whether the order of decision tasks leads to different decision-making strategies among group members. The experimental results show that group members' decision strategies for high-involvement items are kept, i.e., re-used in follow-up low-involvement item decisions (but not vice versa).

Serial position effects can also be exploited to create better user interfaces for recommender systems. For instance, in an e-learning recommender system, these effects can increase the frequency of interaction with questions/learning topics. The questions/learning topics can be recommended based on the learner's training performance or the questions' difficulty level [8].
For instance, difficult questions answered incorrectly by the learner in previous training rounds should be recommended to him/her in the following rounds. In this context, one potential way to apply serial position effects is to place the most relevant questions at the beginning or at the end of the recommendation list. This way, these questions have a high probability of being accessed by the learner. In e-commerce applications, serial position effects have been applied by well-known companies such as Apple, Electronic Arts, and Nike to increase the frequency of accessing items or relevant information [9].

Existing research focuses on investigating serial position effects in the context of no-attribute item domains but does not take into account scenarios where the order of the attributes describing the recommended items can play a role in users' item selection. To the best of our knowledge, there is no study in which serial position effects are examined in multi-attribute recommender systems where the recommended items are shown to the target user together with their relevant attributes. Studies on multi-attribute recommender systems do exist [10, 11, 12]; however, they focus on recommendation generation by analyzing the preferences of users for items based on the relevant attributes. For instance, a user may like Camera A because it has a high resolution, large storage, and an attractive price. In this context, multi-attribute recommender systems provide a two-phase recommendation process in which the top-N items are retrieved based on the user's preferences for the relevant attributes (the first phase) and then ordered according to a specific attribute (the second phase). In contrast to the existing studies, our study analyzes the item selection behavior of users in the second phase. We propose a two-dimensional grid to represent the recommended items, in which the horizontal axis shows the recommended items and the vertical axis lists all item attributes along with their values. With this representation, we want to investigate whether the order of the attributes impacts users' item selection. In the following sections, we discuss our research question, the recommendation scenarios in different item domains, and our user study design in more detail.

3. Research Question, Recommendation Scenarios, and Variants of Attribute Order

3.1. Research Question

Let us return to the scenario of Section 1, in which a user selects a desired item characterized by a list of attributes and has specified his/her requirements for these attributes. In such a scenario, we want to answer the following research question: "Does the order of the attributes of the recommended items affect the item selection behavior of users?". In this context, we are interested in examining the correlation between the order of the attributes of the recommended items shown to a user and his/her item selection behavior. In particular, we want to know whether users tend to evaluate attributes shown at the beginning and at the end of the list and skip attributes shown in the middle of the list when selecting an item. Answering this research question helps to determine whether serial position effects exist in the context of multi-attribute item recommendation scenarios.
If this is the case, these effects could be exploited for position nudging, i.e., presenting the attributes in a particular order that makes it easier and more convenient for users to select recommended items [13, 14]. Answering the question also helps to find ways to counteract the impact of serial position effects on users' item selection.

3.2. Recommendation Scenarios

To answer the research question, we analyze serial position effects in different item domains. We propose two recommendation scenarios related to two item domains, Airbnb rooms and digital cameras. In the Airbnb room domain, we assume a scenario where a user is looking for an Airbnb room for a two-day trip at a low cost. The room in this scenario represents a low-involvement item associated with low cost and low decision-making effort. In the digital camera domain, we assume a scenario where a user is looking for a professional camera for his/her job. The camera in this scenario represents a high-involvement item associated with high cost and high decision-making effort. The details of these recommendation scenarios are presented in the following:

Airbnb room recommendation scenario: "Assume you are using a recommender system to look for an Airbnb room. You want to make a two-day trip to the mountains. You need to have a relaxing trip, and the room should therefore be soundproof. The room size should be at least 12 m². Besides, the price should not be over 45 Euros per night."

Digital camera recommendation scenario: "Assume you are using a recommender system to look for a digital camera. You are active on social media and working as a food blogger. You want to take photos with a size of at least 21 megapixels. The display should not be larger than 3 inches. The camera should not be heavier than 600 grams so that you can easily transport it. Since you are a beginner in this field, the camera should not cost more than 1500 Euros."

3.3. Variants of Attribute Order

With the proposed recommendation scenarios, we want to analyze the item selection behavior of users when the recommended items are presented as lists of attributes in different orders. We assume that, in each domain, the recommender system suggests five alternatives whose attributes meet the user's specified requirements concerning the desired item. Each alternative is represented by a list of attributes and corresponding values. To analyze the impact of serial position effects, we need to propose different variants of attribute order. One question arising in this context is how to place the attributes in the list. According to Wong [9], one effective way to exploit serial position effects is to show relevant information at the beginning and at the end of a list. The relevant information could be attributes that are important or familiar to the user in the item domain. Inspired by this idea, for each item domain, we propose two variants (variant 1 and variant 2) in which the attributes of the recommended items are placed in the list depending on their familiarity. The familiarity of the attributes was derived from related work in the room and digital camera domains [15, 16, 17]. The familiar and unfamiliar attributes for each domain are summarized in Table 1. In variant 1, familiar attributes are shown at the beginning and at the end of the attribute list, whereas unfamiliar attributes are placed in the middle; within each part of the list, the attributes are shown in random order. Conversely, variant 2 shows familiar attributes in the middle and unfamiliar ones at the beginning and at the end of the list. The total number of attributes in each of these variants is ten.
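To illustrate how such an order can be assembled, the following Python sketch is a hypothetical helper of ours (not the authors' implementation): it places one attribute group at the beginning and end of the list and the other group in the middle, shuffling within each part as described above. The attribute subsets are abbreviated from Table 1; the function and variable names are assumptions.

```python
import random

# Hypothetical sketch: assembling a variant's attribute order from the
# familiar/unfamiliar split of Table 1 (subsets abbreviated for brevity).
FAMILIAR_ROOM = ["staff", "cleanliness", "room service", "room size", "price"]
UNFAMILIAR_ROOM = ["smoking", "internet", "modernity", "mini bar", "quietness"]

def build_variant(familiar, unfamiliar, familiar_at_edges, seed=None):
    """Variants 1/3: familiar attributes at the beginning and end, unfamiliar
    in the middle; variants 2/4: the other way round. Attributes are shuffled
    within each part, as described in Section 3.3."""
    rng = random.Random(seed)
    edge_part = list(familiar) if familiar_at_edges else list(unfamiliar)
    middle_part = list(unfamiliar) if familiar_at_edges else list(familiar)
    rng.shuffle(edge_part)
    rng.shuffle(middle_part)
    split = len(edge_part) // 2
    return edge_part[:split] + middle_part + edge_part[split:]

variant_1 = build_variant(FAMILIAR_ROOM, UNFAMILIAR_ROOM, familiar_at_edges=True, seed=42)
variant_2 = build_variant(FAMILIAR_ROOM, UNFAMILIAR_ROOM, familiar_at_edges=False, seed=42)
```

The same construction, applied to eight familiar and seven unfamiliar attributes per domain (see Table 1), yields the 15-attribute orders of the variants introduced next.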
To further analyze serial position effects with a longer attribute list, in each item domain we propose two additional variants (variant 3 and variant 4) with the same structure as variant 1 and variant 2, but with 15 attributes each (see the structure of the variants in Figure 1). We assume that a larger number of attributes could increase the likelihood that serial position effects are triggered. We settled on ten and 15 attributes based on the following criteria: (1) the number of attributes should be large enough for serial position effects to be analyzable, and (2) the participants of the user study should not be overwhelmed by too many attributes. The choice of ten and 15 attributes balances these two criteria.

Table 1
Familiar and unfamiliar attributes of items in the Airbnb room and digital camera domains [15, 16, 17].

  The Airbnb room domain                        The digital camera domain
  Familiar            Unfamiliar                Familiar            Unfamiliar
  staff               smoking                   model               video capture
  cleanliness         internet                  sensor              weight
  room service        modernity                 megapixels          battery
  room size           mini bar                  display             HDMI
  view                balcony                   image resolution    GPS
  parking facilities  quietness                 storage             face detection
  bathroom amenities  TV                        zoom                USB
  price                                         price

Figure 1: [A] The structure of Variant 1 and Variant 3. [B] The structure of Variant 2 and Variant 4. Variant 1 and Variant 2 contain ten attributes each; Variant 3 and Variant 4 contain 15 attributes each.

4. User Study Design

To answer the research question, we conducted an online user study with computer-science students (bachelor and master students) who participated in our courses at Graz University of Technology, Austria. In total, there were 198 participants aged 18 to 25 (male: 79.65%, female: 20.35%). The large difference between these proportions stems from the unequal numbers of male and female students in our courses. The user study was designed and conducted in the following steps:

Step 1 - Propose recommendation scenarios: We proposed two recommendation scenarios for the two item domains, Airbnb rooms and digital cameras, as presented in Section 3.2. For each recommendation scenario, we selected five alternatives (Alternative A ... Alternative E) whose attributes meet the requirements mentioned in the scenario. The recommended items (alternatives) were shown to the participants in random order. These recommendation scenarios were needed to provide the requirements on the basis of which the participants could select a desired item; they helped to avoid a situation in which participants had no criteria for selecting an item and therefore simply picked one at random.

Step 2 - Propose different variants of attribute order: In each domain, we proposed four variants (variant 1 ... variant 4) showing four different orders of the attributes of the recommended items (see the variant description in Section 3.3). In the Airbnb room domain, the ten attributes shown in variants 1 and 2 are cleanliness, staff rating, price, smoking, quietness, modernity, mini bar, internet, room service, and room size.
Besides these ten attributes, variants 3 and 4 in the room domain additionally show five other attributes: bathroom amenities, parking, TV, balcony, and view. In the digital camera domain, variants 1 and 2 show ten attributes: zoom, price, image resolution, HDMI, battery, video, GPS, weight, megapixels, and display. Variants 3 and 4 additionally show five other attributes: model, sensor, storage, face detection, and USB.

Step 3 - Distribute variants to participants: The study was conducted using a between-subjects design, i.e., each participant received exactly one recommendation scenario and one variant of attribute order. The user interfaces showing the recommended items and the corresponding attributes are depicted in Figure 2. The number of participants for each variant in each item domain is shown in Table 2. Each participant read the recommendation scenario and assumed the role of the user in it. The participant was then asked to select the item that was most appropriate from his/her point of view.

Table 2
The number of participants for each variant in the Airbnb room and digital camera item domains.

              The Airbnb room domain    The digital camera domain
  variant 1            28                          27
  variant 2            21                          30
  variant 3            25                          18
  variant 4            26                          23

Step 4 - Conduct the user study with an eye-tracking device: In addition to the online user study (with 198 participants), we invited 25 students to our institute to perform the user study directly on our computer with an eye-tracking device (Tobii Pro T60 XL). To avoid potential biases, these students had not participated in the online user study (i.e., all of them saw the study for the first time). This eye-tracking user study helped us to better observe how the participants evaluated the values of the attributes of the recommended items.

Figure 2: User interfaces showing the lists of recommended room/camera alternatives and the corresponding attributes. The highlighted alternatives are more likely to be selected by the participants.

5. Data Analysis Results and Discussions

Method: To answer the research question, in each item domain we first collected the items (alternatives) selected by the participants for a specific variant of attribute order. Thus, in each item domain, we obtained four data sets, one for each variant. Thereafter, we performed cross-tabulation analyses between the variants (e.g., variant 1 vs. variant 2, variant 3 vs. variant 4) in both domains. In these analyses, each table reports the variants in the columns, the item choices in the rows, and the corresponding number (frequency) of participants in the cells (see Figure 3). Finally, we ran Chi-Square tests (α = 0.05) to find out whether there are correlations between the item selection behavior of users and the order of the attributes.

Results: The cross-tabulation analyses and Chi-Square test results show that there were no correlations between the item selection of the participants and the order of the attributes (p > 0.05; see Figure 3 and Table 3). In both domains, the item selection of the participants was independent of the order of the attributes shown to them. In other words, serial position effects did not occur in the context of multi-attribute item recommendation scenarios.
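For illustration, such a cross-tabulation and Chi-Square test can be reproduced along the following lines with SciPy. This is a sketch, not the study's analysis script: the column totals follow Table 2 for the Airbnb room domain (variant 1: 28 participants, variant 2: 21 participants), but the per-alternative frequencies are invented.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = item choices (Alternative A..E),
# columns = attribute-order variants (variant 1 vs. variant 2).
# Column totals follow Table 2 (28 and 21 participants); the per-cell
# frequencies are invented for illustration only.
observed = np.array([
    [9, 7],   # Alternative A
    [2, 1],   # Alternative B
    [7, 5],   # Alternative C
    [8, 6],   # Alternative D
    [2, 2],   # Alternative E
])

chi2, p_value, dof, expected = chi2_contingency(observed)

# With alpha = 0.05, a p-value above 0.05 indicates no significant association
# between the attribute order and the participants' item selection.
print(f"chi2 = {chi2:.3f}, dof = {dof}, p-value = {p_value:.3f}")
```

In the study, the same test was applied to each of the variant pairs reported in Table 3.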
This finding is further supported by the eye-tracking data. According to the heat-map data, the participants focused on checking the attributes mentioned in the recommendation scenarios and then compared the values of these attributes in order to find the best option (see Figure 4).

Figure 3: The cross-tabulation analyses between the variants (variant 1 vs. variant 2 and variant 3 vs. variant 4) in the Airbnb room and digital camera domains.

Table 3
Based on the cross-tabulation analyses shown in Figure 3, Chi-Square tests (α = 0.05) were conducted. This table shows the p-values of the Chi-Square tests between different variants (Var).

            The Airbnb room domain                    The digital camera domain
            10 attributes      15 attributes          10 attributes      15 attributes
            Var 1 vs. Var 2    Var 3 vs. Var 4        Var 1 vs. Var 2    Var 3 vs. Var 4
  p-value        0.141              0.759                  0.297              0.915

Discussions: Let us take a look at the variants with ten attributes in the digital camera domain. The analyses of these variants show that the order of the attributes did not affect how the participants chose the recommended item. Indeed, looking at the heat-map data of variants 1 and 2 in Figure 4, we recognized that the participants paid attention to the values of the attributes mentioned in the recommendation scenario (i.e., megapixels, display, weight, and price) and then compared them with each other. As a result, Alternative A and Alternative C were selected more frequently by the participants, since they satisfy all the requirements mentioned in the recommendation scenario (high image resolution, low weight, and low price). In a similar fashion, Alternative D (in the room domain) was selected by the majority of the participants regardless of the order of the attributes of the room alternatives; this room fulfills all the requirements mentioned in the recommendation scenario and was therefore preferred over the other alternatives.

These results show that, when selecting an item, the participants focused on evaluating the relevant attributes without caring about the order in which these attributes were shown to them. This brings us to the conclusion that serial position effects are not triggered in multi-attribute item recommendation scenarios, where users have to select recommended items characterized by a (long) list of attributes. This finding contrasts with existing studies, which mainly consider serial position effects in the context of no-attribute item domains. In the context of multi-attribute item recommendation scenarios, our study shows that, when the recommended items are presented in the form of their attribute lists, the item selection of users is immune to serial position effects. This also suggests a way of designing user interfaces for presenting recommended items that counteracts such effects: user interfaces showing the attribute lists of the recommended items can help to mitigate the impact of serial position effects on users' item selection.

Figure 4: Examples of the heat-map data of the eye-tracking user study captured for different variants of attribute order. The participants focused on evaluating the values of the attributes mentioned in the recommendation scenario.

6. Conclusion and Future Work

The paper has discussed recommendation scenarios in two multi-attribute item domains, Airbnb rooms and digital cameras, in which recommended items are shown to a user as a list of attributes.
In each domain, we examined whether serial position effects affect the item selection of users. The data analysis results show that there are no correlations between users' item selection and the order of the attributes. During the item selection process, users focus on evaluating the values of the relevant attributes and do not care about the order in which the attributes are presented to them. This brings us to the conclusion that serial position effects are not triggered in the context of multi-attribute item recommendation scenarios.

One limitation of the paper lies in the small sample sizes (on average, 25 participants per variant) and the large imbalance between the male participants (around 80%) and the female participants (around 20%). Therefore, in future work, we will recruit more participants to obtain representative samples, which are the prerequisite for more convincing analysis results. Besides, we will select participant groups in which the numbers of males and females are balanced. Another limitation of our work is related to the selected item domains. Although the Airbnb room domain is considered a lower-involvement domain compared to the digital camera domain, it is not always low-stakes for all users. There are, for example, scenarios where users are anxious about traveling and want to double-check the room with respect to different attributes before choosing. In such scenarios, deciding on an Airbnb room can require high decision-making effort, even though the price is low. In future work, we will conduct our user study with further item domains ranging from very low- to very high-involvement item domains (e.g., movies and restaurants as (very) low-involvement item domains; financial services and houses/apartments as (very) high-involvement item domains) to achieve more adequate observations from the item domain perspective. We assume that in different multi-attribute item domains, serial position effects could have different impacts on users' item selection.

References

[1] M. Mandl, A. Felfernig, E. Teppan, M. Schubert, Consumer decision making in knowledge-based recommendation, Journal of Intelligent Information Systems 37 (2011) 1–22. doi:10.1007/978-3-642-04875-3_12.
[2] E. Teppan, M. Zanker, Decision biases in recommender systems, Journal of Internet Commerce 14 (2015) 255–275. doi:10.1080/15332861.2015.1018703.
[3] A. Felfernig, G. Friedrich, B. Gula, M. Hitz, T. Kruggel, G. Leitner, R. Melcher, D. Riepan, S. Strauss, E. Teppan, O. Vitouch, Persuasive recommendation: Serial position effects in knowledge-based recommender systems, in: Y. de Kort, W. IJsselsteijn, C. Midden, B. Eggen, B. J. Fogg (Eds.), Persuasive Technology, Springer Berlin Heidelberg, Berlin, Heidelberg, 2007, pp. 283–294. doi:10.1007/978-3-540-77006-0_34.
[4] T. N. T. Tran, M. Atas, A. Felfernig, R. Samer, M. Stettinger, Investigating serial position effects in sequential group decision making, in: Proceedings of the 26th Conference on User Modeling, Adaptation and Personalization, UMAP '18, ACM, New York, NY, USA, 2018, pp. 239–243. doi:10.1145/3209219.3209255.
[5] A. Felfernig, Biases in decision making, in: Proceedings of the International Workshop on Decision Making and Recommender Systems 2014, CEUR, Bolzano, Italy, 2014, pp. 32–37. doi:10.1007/978-3-319-75067-5_8.
[6] S. Highhouse, A. Gallo, Order effects in personnel decision making, Human Performance 10 (1997) 31–46. doi:10.1207/s15327043hup1001_2.
[7] M. Stettinger, A. Felfernig, G. Leitner, S. Reiterer, M. Jeran, Counteracting serial position effects in the Choicla group decision support environment, in: Proceedings of the 20th International Conference on Intelligent User Interfaces, IUI '15, Association for Computing Machinery, New York, NY, USA, 2015, pp. 148–157. doi:10.1007/978-3-319-20267-9_10.
[8] M. Stettinger, T. N. T. Tran, I. Pribik, G. Leitner, A. Felfernig, R. Samer, M. Atas, M. Wundara, KnowledgeCheckr: Intelligent techniques for counteracting forgetting, in: Proceedings of the 24th European Conference on Artificial Intelligence - PAIS ECAI 2020, IOS Press, Santiago de Compostela, Spain, 2020, pp. 1–6.
[9] E. Wong, Serial position effect: How to create better user interfaces, https://www.interaction-design.org/literature/article/serial-position-effect-how-to-create-better-user-interfaces, 2020.
[10] W. H. Chen, C. C. Hsu, Y. A. Lai, V. Liu, M. Y. Yeh, S. D. Lin, Attribute-aware recommender system based on collaborative filtering: Survey and classification, Frontiers in Big Data 2 (2020) 49. doi:10.3389/fdata.2019.00049.
[11] F. Hdioud, B. Frikh, B. Ouhbi, Multi-criteria recommender systems based on multi-attribute decision making, in: Proceedings of the International Conference on Information Integration and Web-Based Applications & Services, IIWAS '13, Association for Computing Machinery, New York, NY, USA, 2013, pp. 203–210. doi:10.1145/2539150.2539176.
[12] C. Yu, L. Yan, L. Kecheng, Multi-attribute collaborative filtering recommendation based on improved group decision-making, International Journal of Computers Communication & Control 10 (2015) 746–759. doi:10.15837/ijccc.2015.5.1379.
[13] M. Jesse, D. Jannach, Digital nudging with recommender systems: Survey and future directions, CoRR abs/2011.03413 (2020). arXiv:2011.03413.
[14] C. R. Sunstein, Nudging: a very short guide, Business Economics 54 (2019) 127–129. doi:10.1007/s10603-014-9273-1.
[15] S. Bag, M. Tiwari, F. Chan, Predicting the consumer's purchase intention of durable goods: An attribute-level analysis, Journal of Business Research 94 (2019) 408–419. doi:10.1016/j.jbusres.2017.11.
[16] S. Dolnicar, T. Otter, Which hotel attributes matter? A review of previous and a framework for future research, Faculty of Commerce - Papers 1 (2003) 1–24.
[17] S. Jang, T. Liu, K. Ji Hye, H. Yang, Understanding important hotel attributes from the consumer perspective over time, Australasian Marketing Journal (AMJ) 26 (2018). doi:10.1016/j.ausmj.2018.02.001.