Conflict Management for Constraint-based Recommendation

Franz Wotawa, Martin Stettinger, Florian Reinfrank, Gerald Ninaus, Alexander Felfernig
Institute for Software Technology, Graz University of Technology
Inffeldgasse 16b/II, 8010 Graz, Austria
{firstname.lastname}@ist.tugraz.at

We thank the anonymous reviewers for their helpful comments. Authors are ordered in reverse alphabetical order.

Abstract

Constraint-based recommendation systems are well established in several domains such as cars, computers, and financial services. Recommendation tasks in these systems are based on sets of product constraints and customer preferences. Customer preferences reduce the number of products which are relevant for the customer. In such scenarios it may happen that the set of customer preferences is inconsistent with the set of constraints in the recommendation system. In order to repair an inconsistency, the customer is informed about possible ways to adapt his/her preferences. There are different possibilities to present this information to the customer: a) via preferred diagnoses, b) via preferred conflicts, and c) via similar products. On the basis of the results of an empirical study we show that diagnoses, conflicts, and similar products are evaluated differently by users in terms of understandability, user satisfaction, and conflict resolution effort.

1 Introduction

The number of e-commerce web sites and the quantity of offered products and services is increasing enormously [1]. This has triggered the demand for intelligent techniques that improve the accessibility of complex item assortments for users. One approach to identify relevant products for each customer are recommendation systems [12]. We can differentiate between collaborative (e.g., www.amazon.com [12]), content-based (e.g., www.youtube.com [12]), critiquing-based (e.g., www.movielens.org [4]), and constraint-based systems (e.g., www.my-productadvisor.com [6]). The favored type of recommendation system depends on the domain in which the system will be used. For example, in highly structured domains where almost all information about a product is available in a structured form, constraint-based systems are often the most valuable recommendation approach.

Each recommendation approach has its own challenges. For example, collaborative systems have to deal with the cold-start problem [12]. For some content-based systems it is hard to identify related items [16]. Constraint-based systems can not offer products to users in every case. For example, a recommendation system contains notebooks with 2GB up to 8GB RAM, but the notebooks with 8GB RAM cost more than 1,000EUR. The user wants to buy a notebook with 8GB RAM at a price lower than 300EUR. The union of the product-related constraints with the customer's preferences can not be fulfilled, i.e., the customer preferences are inconsistent with the given set of product-specific constraints. In such situations, a constraint-based recommendation system can help users by proposing change operations that restore the consistency between the user preferences and the product-related constraints.

In this paper we present four different scenarios to support the user in finding a way out of the 'no solution could be found' dilemma. The first approach is to show which user preferences lead to an empty result set. For example, the combination of the preferences "notebook with 8GB RAM" and "price lower than 200EUR" is not satisfiable. Given this information, the user can choose which of the preferences is less important and remove it. (1) We denote a set of preferences that is unsatisfiable (inconsistent with the given set of product-related constraints) as a conflict. (2) Alternatively, the system is also able to show change operations which resolve all conflicts in the current customer preferences. Such change operations are denoted as diagnoses. (3) We can also show diagnoses and explain them by providing information about the underlying conflicts. (4) If the user is not interested in conflicts and diagnoses, we are also able to show similar products by using a utility function which ranks the products according to the user's preferences.

The major goal of this paper is to analyze in which way inconsistencies should be presented to users. To this end, we conducted a study at the Graz University of Technology and the University of Klagenfurt.
With this empirical study we provide recommendations for presenting inconsistencies in constraint-based recommendation scenarios to users.

The remainder of this paper is organized as follows. Section 2 gives an introduction to constraint-based recommendation systems, shows a working example, and provides an overview of inconsistency management techniques and utility calculation for products. Section 3 describes our online application for the empirical study, lists our hypotheses, and shows their evaluation. Section 4 discusses relevant aspects and Section 5 concludes the paper with a summary and issues for future work.

2 Constraint-based recommendation systems

In our approach we exploit constraint satisfaction problems (CSPs) for representing products and customer preferences [20]. CSPs are a major modeling technique for knowledge bases [3; 7]. A CSP is represented by a triple (V, D, C) where V is a set of variables, for example:

V = {vname, vCPU, vRAM, vHDD, vLCD, vprice}

D is a set of domains dom(vi) where each domain describes the possible assignments of a variable, for example:

D = {dom(vname) = {cheap, media, easy, turbo},
     dom(vCPU) = {dualcore, quadcore},
     dom(vRAM) = {4GB, 6GB, 8GB},
     dom(vHDD) = {400GB, 500GB, 750GB},
     dom(vLCD) = {14, 15, 17},
     dom(vprice) = {199, 399, 599}}

The set C describes constraints which reduce the product space. Constraints can be product constraints c ∈ CKB, where all products are combined disjunctively, such that c0 ∨ c1 ∨ ... ∨ cn ∈ CKB. A conjunctive set of customer preferences c ∈ CR describes the customer's preferences, and CKB ∧ CR = C. Next, we insert four products into CKB and two customer preferences into CR.

c0: vname = cheap ∧ vCPU = dualcore ∧ vRAM = 4GB ∧ vHDD = 400GB ∧ vLCD = 15 ∧ vprice = 199
c1: vname = media ∧ vCPU = dualcore ∧ vRAM = 8GB ∧ vHDD = 750GB ∧ vLCD = 17 ∧ vprice = 599
c2: vname = easy ∧ vCPU = quadcore ∧ vRAM = 4GB ∧ vHDD = 500GB ∧ vLCD = 14 ∧ vprice = 399
c3: vname = turbo ∧ vCPU = quadcore ∧ vRAM = 8GB ∧ vHDD = 750GB ∧ vLCD = 15 ∧ vprice = 599
c4: vCPU = dualcore ∈ CR
c5: vRAM ≥ 6GB ∈ CR

We now try to get all valid instances of the constraint-based recommendation task. A result (solution or instance) of such a recommendation task is characterized by Definition 1.

Definition 1: A complete consistent instance is a model where each variable in the knowledge base has an assignment, i.e., ∀v ∈ V: v ≠ ∅, and all assignments are consistent with the constraints in C.

In our case, the product c1 fits all customer constraints (preferences). Now, let's assume that the customer has one more preference and adds the following constraint:

c6: vLCD = 14 ∈ CR

The new recommendation task leads to an inconsistency, i.e., Definition 1 can not be fulfilled. We only consider the constraints in CR as conflicting constraints and assume that the products in CKB have a valid representation.
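The example knowledge base and the consistency check above can be sketched in a few lines of Python. This is a minimal illustration only; the data layout and names (PRODUCTS, PREFERENCES, consistent) are ours, not part of the paper's formalism:

```python
# C_KB: the four example notebooks as complete variable assignments
PRODUCTS = {
    "c0": {"cpu": "dualcore", "ram": 4, "hdd": 400, "lcd": 15, "price": 199},
    "c1": {"cpu": "dualcore", "ram": 8, "hdd": 750, "lcd": 17, "price": 599},
    "c2": {"cpu": "quadcore", "ram": 4, "hdd": 500, "lcd": 14, "price": 399},
    "c3": {"cpu": "quadcore", "ram": 8, "hdd": 750, "lcd": 15, "price": 599},
}

# C_R: the customer preferences as predicates over a product
PREFERENCES = {
    "c4": lambda p: p["cpu"] == "dualcore",
    "c5": lambda p: p["ram"] >= 6,
    "c6": lambda p: p["lcd"] == 14,
}

def consistent(prefs):
    """A preference set is consistent with C_KB iff at least one product satisfies it."""
    return any(all(PREFERENCES[c](p) for c in prefs) for p in PRODUCTS.values())

print(consistent(["c4", "c5"]))        # → True  (c1 is a solution)
print(consistent(["c4", "c5", "c6"]))  # → False (adding c6 makes C_R inconsistent)
```

Since every product is a complete assignment, checking consistency reduces to testing whether any product satisfies all given preferences.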
Definition 2: A conflict set is a set of constraints CS ⊆ CR s.t. CS ∪ CKB is inconsistent.

In the example, CR is inconsistent with CKB. Because potentially non-minimal conflict sets are not helpful for users, we try to reduce the number of constraints in conflict sets CS as much as possible (see Definition 3) and introduce the term minimal conflict set.

Definition 3: A minimal conflict set is given iff CS is a conflict set (see Definition 2) and there does not exist a conflict set CS′ with CS′ ⊂ CS.

There are algorithms for calculating minimal conflict sets in inconsistent constraint sets [13]. In our scenario, a conflict detection algorithm calculates the set CS = {c4, c6} as a minimal conflict set. Since conflict detection algorithms typically return one minimal conflict set at a time (see, e.g., Junker [13]), we use Reiter's HSDAG to calculate all minimal conflicts [17]. In our example we get two different minimal conflict sets: CS1 = {c4, c6}, CS2 = {c5, c6}.

Not all constraint sets have the same importance for each user. For example, if the CPU is more important for a user than the RAM, the user will probably prefer conflict sets which do not contain the CPU. We can calculate preferred minimal conflicts by ordering the constraints such that the preferred constraints are at the end of the list. For example, the constraint ordering (c6, c5, c4) leads to the conflict set {c6, c5} [9].
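For a knowledge base as small as the running example, all minimal conflict sets can be found by brute-force enumeration. The sketch below assumes consistency checking is a simple lookup over the four products; production systems would use QuickXplain [13] within an HSDAG [17] instead:

```python
from itertools import combinations

# Example products (cpu, ram, lcd) for c0..c3 and the preferences c4..c6 from Section 2
PRODUCTS = [("dualcore", 4, 15), ("dualcore", 8, 17),
            ("quadcore", 4, 14), ("quadcore", 8, 15)]
PREFERENCES = {"c4": lambda cpu, ram, lcd: cpu == "dualcore",
               "c5": lambda cpu, ram, lcd: ram >= 6,
               "c6": lambda cpu, ram, lcd: lcd == 14}

def consistent(prefs):
    """True iff at least one product in C_KB satisfies all given preferences."""
    return any(all(PREFERENCES[c](*p) for c in prefs) for p in PRODUCTS)

def all_minimal_conflicts(prefs):
    """Enumerate all minimal conflict sets (Definition 3) by brute force."""
    prefs, conflicts = list(prefs), []
    for size in range(1, len(prefs) + 1):
        for subset in map(set, combinations(prefs, size)):
            # keep only inconsistent sets containing no smaller conflict
            if not consistent(subset) and not any(c <= subset for c in conflicts):
                conflicts.append(subset)
    return conflicts

print([sorted(c) for c in all_minimal_conflicts(["c4", "c5", "c6"])])
# → [['c4', 'c6'], ['c5', 'c6']]  (CS1 and CS2 from the running example)
```

Enumerating subsets by increasing size guarantees that every conflict kept is minimal, but this is exponential in |CR| and only practical for tiny preference sets.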
Resolving conflicts can be done in two different ways. First, we can remove constraints from a conflict set and possibly receive further conflicts (e.g., removing the constraint c4 leads to the conflict set CS = {c5, c6}). Second, we can determine a set of constraints whose removal resolves all conflicts in the given set of user preferences. We denote such sets diagnoses [8; 17]. By removing a set of constraints ∆ from the set of user preferences, we receive at least one valid instance (solution). A formal definition of a diagnosis is the following (see Definition 4):

Definition 4: A set of constraints ∆ ⊆ CR is denoted as diagnosis if (CR \ ∆) ∪ CKB is consistent.

In our example, the removal of the complete set CR restores consistency since CKB is consistent. As the removal of all constraints probably doesn't satisfy the customer, we try to detect minimal diagnoses (see Definition 5), which will be used in Scenarios 1 and 3 as explanations for the inconsistency (Scenario 3 uses the minimal conflicts as an explanation of the diagnoses).

Definition 5: A set of constraints ∆ ⊆ CR is denoted as minimal diagnosis iff it is a diagnosis (see Definition 4) and there does not exist a diagnosis ∆′ with ∆′ ⊂ ∆.

The example notebook recommendation system contains two different minimal diagnoses: ∆1 = {c4, c5}, ∆2 = {c6}. We can calculate them by using a diagnosis detection algorithm [19] within the HSDAG for calculating all possible diagnoses [17] and order the diagnoses based on the ordering of the constraints in CR [9].

Next, we calculate all minimal conflicts (Scenarios 2 and 3) and minimal diagnoses (Scenarios 1 and 3) for each customer. So far it is not considered which of the conflict and diagnosis sets contain preference constraints that are relevant for the customer. For example, if the CPU is more important for the customer than the LCD size and the RAM (i.e., relevance(vCPU) = 3, relevance(vLCD) = 2, relevance(vRAM) = 1), we order the conflicts and diagnoses based on the relevance of the customer preferences. A conflict / diagnosis containing only preferences with low relevance is called a preferred minimal conflict / diagnosis [8; 10]. In our example, the user has the possibility to assign a relevance to each customer constraint with 1 ≤ relevance(vi) ≤ n, where n is the number of all variables. We used this information in our empirical study to get preferred minimal conflicts and preferred minimal diagnoses. For a detailed discussion of algorithms supporting the determination of preferred conflicts and diagnoses we refer the reader to Felfernig and Schubert [8].
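Given the two minimal conflict sets from above, the minimal diagnoses can be computed as their minimal hitting sets, following Reiter [17]. The sketch below again uses brute-force enumeration for the small example rather than a full HSDAG implementation:

```python
from itertools import combinations

# The two minimal conflict sets of the running example: CS1 = {c4, c6}, CS2 = {c5, c6}
CONFLICTS = [{"c4", "c6"}, {"c5", "c6"}]

def all_minimal_diagnoses(prefs, conflicts):
    """Minimal diagnoses (Definition 5) are the minimal hitting sets of all
    minimal conflicts (Reiter [17]); enumerated by brute force here."""
    prefs, diagnoses = list(prefs), []
    for size in range(1, len(prefs) + 1):
        for subset in map(set, combinations(prefs, size)):
            hits_every_conflict = all(subset & c for c in conflicts)
            if hits_every_conflict and not any(d <= subset for d in diagnoses):
                diagnoses.append(subset)
    return diagnoses

print([sorted(d) for d in all_minimal_diagnoses(["c4", "c5", "c6"], CONFLICTS)])
# → [['c6'], ['c4', 'c5']]  (∆2 and ∆1 from the text)
```

Removing ∆2 = {c6} leaves {c4, c5}, which product c1 satisfies; removing ∆1 = {c4, c5} leaves {c6}, which product c2 satisfies, so both sets are indeed diagnoses in the sense of Definition 4.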
We are also able to evaluate similarities between products and the customer preferences, which will be used in the fourth scenario of our empirical study. If the customer preferences can not be fulfilled, we can calculate the similarity by using the fitness function given in Equation 1.

fit(p, CR) = Σ_{c ∈ CR} u(p, c) × ω(maxrelevance, c)   (1)

In Equation 1, p denotes a product and CR is the set of customer preferences. For each customer preference we calculate the utility value u(p, c) and the weighting ω(maxrelevance, c). For the utility values, we use McSherry's similarity metrics for each variable [15]. For example, a lower price value is better (less is better: customer value / product value), a higher RAM value is better (more is better: product value / customer value), and for the optical drive a nearer value is better (nearer is better: a value in [0, 1]). The weighting function ω(maxrelevance, c) evaluates a weight for the constraint c by calculating the relative importance relevance(c) / maxrelevance.

Figure 1: Notebook recommendation: definition and weighting of user preferences. Each relevance can only be selected once.

In the example in Figure 1 the weighting function for the product variable CPU is 5/6. Table 1 gives an overview of the fitness values of all example products (see Section 2). Note that the application rescaled all fitness values to percentile values: the best product receives the number of fulfilled preferences divided by the number of all the user's preferences. In our example the product c1 matches two of three preferences (2/3 = 66%). The second best value is devaluated by the relative difference between the fitness values (0.66 × 0.460/0.535 = 57%).

product   fitness   percentile
c0        0.460     57%
c1        0.535     66%
c2        0.222     27%
c3        0.303     37%

Table 1: Fitness values for the example knowledge base.

For example, for the product c0 and the customer preferences CR = {c4, c5, c6}, the fitness value is calculated as (1 × 3/3) + (4/6 × 1/3) + (14/15 × 2/3).
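Equation 1 can be sketched as follows. The utility values and relevances are the illustrative numbers for product c0 from the example above; the result is the raw weighted sum, and we make no attempt to reproduce the normalized values of Table 1, whose rescaling is only sketched in the text:

```python
def fitness(utilities, relevances):
    """Equation 1: fit(p, C_R) = sum of u(p, c) * relevance(c) / max_relevance."""
    max_relevance = max(relevances.values())
    return sum(u * relevances[c] / max_relevance for c, u in utilities.items())

# Utilities of product c0 w.r.t. the preferences, as given in the text:
# c4 (CPU matches): 1, c5 (more-is-better RAM): 4/6, c6 (nearer-is-better LCD): 14/15
u_c0 = {"c4": 1.0, "c5": 4 / 6, "c6": 14 / 15}
relevances = {"c4": 3, "c5": 1, "c6": 2}  # CPU most relevant, RAM least

print(round(fitness(u_c0, relevances), 3))  # → 1.844 (raw weighted sum)
```

With maxrelevance = 3, the weights 3/3, 1/3, and 2/3 reproduce exactly the three terms of the c0 example above.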
3 Empirical Study

How users of recommendation systems deal with conflicts, diagnoses, and fitness values is evaluated in this section. We describe our online notebook recommendation system, define hypotheses, and evaluate and discuss them based on an empirical study.

3.1 Notebook Recommendation System

In the preferences screen (see Figure 1) the user is asked for at least three preferences which are described in terms of product variables. Each of the specified preferences must be weighted on a six-point scale.

The next step was to remove all products c ∈ CKB which are consistent with the user preferences CR, to assure that the participants were confronted with a situation where their preferences are inconsistent with the underlying product assortment, i.e., CR is inconsistent with CKB.

Following this, participants received a visualization of the conflict. Each participant was assigned to one of four scenarios (see Table 2). In the first scenario the participants got minimal diagnoses as change recommendations (see Figure 2). Scenario 2 presents minimal conflicts to the participants (see Figure 3). Scenario 3 contains both minimal diagnoses and minimal conflicts as explanations (see Figure 4). Scenario 4 shows the fitness values for all products (see Figure 5).

         Scenario 1   Scenario 2   Scenario 3   Scenario 4
Step 1   ------------------ Insert preferences ------------------
Step 2   apply        dissolve     apply        -
         diagnoses    conflicts    diagnoses
Step 3   ------------------ Select a product -------------------
Step 4   ------------------ Answer a questionnaire -------------

Table 2: Overview of the user activities per scenario.

For the differentiation between experts and novices we used two questions in the questionnaire at the end of the study: the first asked for a self-assessment and the second asked for expert knowledge. In our study 111 participants are experts and 90 participants are novices.

Next, we try to find the best approach for presenting inconsistencies in constraint-based recommendation systems. For the evaluation we measured three general characteristics: a) the time which is needed to repair a conflict, b) the understandability of conflicts and diagnoses, and c) the satisfaction with the 'no solution could be found' dilemma.

3.2 Hypotheses

After having selected a diagnosis (in Scenarios 1 and 3), the participant (user) receives a list of notebooks. In Scenario 2 the user has to remove preferences until a product can be recommended, because we removed all products which fit the preferences. We call the number of preferences which have to be removed until the user receives products interaction cycles. For example, an interaction cycle count of two means that the user removed two of her preferences until products could be presented. Therefore we expect that the time which is necessary for resolving the conflict will be lower when diagnoses are presented to the participant:

Hypothesis 1: Study participants will solve inconsistencies faster when they receive diagnoses.

The study participants received all diagnoses in a preferred order (see Section 2). We expect that the first diagnosis will be selected most frequently.

Hypothesis 2: The first diagnosis will be selected by the majority of the users for adapting their preferences.

A conflict occurs if a set of preferences can not be fulfilled (see Definitions 2 and 3). Scenario 3 uses the minimal conflict sets (see Definition 3) as a description of the minimal diagnoses (see Definition 5). We expect a positive impact of such explanations on understandability:

Hypothesis 3: Participants will understand their conflicts more easily if they receive explanations.

Figure 2: Presentation of 1 to n diagnoses.
When the participants don't receive products after having entered their preferences, the satisfaction with the recommendation system will decrease. We expect that the satisfaction with the product assortment of our recommendation system will be higher if products are offered (Scenario 4), even if they don't fulfill all of the participants' preferences.

Hypothesis 4: The participants will have a higher satisfaction with the product assortment when they receive fitness values (Scenario 4, see Figure 5).

Figure 3: Presentation of 1 to n conflicts.

Due to the stability of preferences, participants are less willing to adapt their preferences. When the recommendation system asks for more than one adaptation of preferences, the participants will have a lower satisfaction with the system. This leads to the following hypothesis:

Hypothesis 5: More interaction cycles lead to a lower satisfaction with the anomaly support.

3.3 Evaluation

For evaluating our hypotheses, we conducted a study at the TU Graz and the University of Klagenfurt. 240 users participated in our study. The students' average age is 25 years (std. dev.: 5.52 years). The participants are studying technical sciences (117), cultural sciences (63), economics (29), and other sciences (31). We tested our results with a two-tailed Mann-Whitney U-test and removed all participants with contradictory answers to the SUS (system usability scale) questionnaire [2]. Finally, we divided the 201 remaining participations into the scenarios with diagnoses (n = 56), conflicts (n = 50), diagnoses and conflicts (n = 38), and the fitness function (n = 57).

Figure 4: Presentation of 1 to n diagnoses and conflicts.
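The two-tailed Mann-Whitney U-test mentioned above compares two independent samples via rank sums. A minimal sketch of the U statistic is given below; in practice one would use a statistics library (e.g. scipy.stats.mannwhitneyu), which also yields the p-value:

```python
def mann_whitney_u(a, b):
    """U statistic of the Mann-Whitney test for two independent samples
    (average ranks for ties; the p-value lookup is omitted here)."""
    combined = a + b
    order = sorted(range(len(combined)), key=lambda i: combined[i])
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(order):
        j = i
        # extend j over the group of tied values starting at position i
        while j + 1 < len(order) and combined[order[j + 1]] == combined[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1  # average 1-based rank
        i = j + 1
    r1 = sum(ranks[:len(a)])                   # rank sum of the first sample
    u1 = r1 - len(a) * (len(a) + 1) / 2
    return min(u1, len(a) * len(b) - u1)

print(mann_whitney_u([1, 2], [3, 4]))  # → 0.0 (complete separation)
```

Small U values indicate strong separation between the two samples; the test is a natural fit for the ordinal Likert-scale answers collected in the study.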
Hypothesis 1 focuses on the time which is required to resolve inconsistencies. Therefore, we measured the time between the first conflict notification and the product presentation (see Table 3).

Figure 5: Presentation of fitness values.

Scenario                  1 (D)   2 (C)   3 (D&C)   4 (Fit)
conflict solving time     16.64   21.16   20.05      0.00
product selection time    26.09   27.52   18.72     43.82
total                     42.73   48.68   38.77     43.82

Table 3: Average time (in sec.) to resolve inconsistencies and to select a product (D = diagnoses, C = conflicts, Fit = fitness).

The result shows that the time for removing conflicts with diagnoses is lower (16.64 sec.) than with conflicts (21.16 sec.) or with diagnoses and a corresponding explanation (20.05 sec.). This is because only one interaction cycle is needed to resolve an inconsistency with a diagnosis, whereas 1.66 interaction cycles are required on average to resolve inconsistencies with conflicts. Reading the explanation of a diagnosis also increased the time to resolve an inconsistency (20.05 sec.) compared to diagnoses without explanations (p < 0.1). The time for resolving the conflicts is 0 in Scenario 4 since they aren't resolved there. These results confirm Hypothesis 1. We also investigated the influence of the number of conflicts and diagnoses (see Table 4).

The time to select a product was nearly the same in the scenarios with diagnoses (Scenario 1) and conflicts (Scenario 2). The third scenario performs best in terms of the time required to select a product (18.72 sec.). This can be explained by the fact that dealing with both diagnoses and conflicts helps to gain a deeper understanding of the problem. Participants in the fourth scenario required 43.82 sec. for selecting a product. The higher effort can be explained by the missing explanation of the conflict; the participants may be confused that not all preferences are fulfilled by the offered products. All differences in the product selection time are statistically significant (p < 0.001).

Hypothesis 2 looks at the ordering of preferred diagnoses and conflicts. We measured the position of the selected conflict / diagnosis (see Figure 6). Note that only those participants from Scenarios 1 and 3 are considered whose number of offered diagnoses is greater than one.
Figure 6: Ranking of selected diagnosis / conflict.

# of presented            n    satisfaction   repair time
diagnoses / conflicts
1 diagnosis               11   4.55           11.18 sec.
2 diagnoses               11   4.14           10.71 sec.
> 2 diagnoses             38   4.37           19.32 sec.
1 conflict                56   4.09           22.29 sec.
2 conflicts               23   4.04           45.48 sec.
> 2 conflicts              4   1.75           62.00 sec.

Table 4: Average satisfaction and repair time with regard to the number of presented conflicts / diagnoses.

We can confirm Hypothesis 2 since 81 of the participants (82.65%) selected the first diagnosis. The second diagnosis was selected by 11 participants (11.22%), the third one by 5 participants (5.10%), and the fourth recommendation by one participant (1.02%). Reasons for applying the first diagnosis are that the first diagnosis contains only unimportant preferences, the primacy effect [5], and preference reversals [22].

For evaluating Hypothesis 3 we asked the participants of Scenarios 1-3 whether the diagnoses/conflicts were understandable. Answers were given on a 5-point Likert scale (5 represents the highest understandability). Results show that the highest understandability is given when diagnoses are presented (Scenario 1), followed by diagnoses explained with conflicts (Scenario 3) and conflicts (Scenario 2, see Table 5). The difference between the understandability of conflicts (4.40, Scenario 2) and the other scenarios (Scenario 1 with 4.55 and Scenario 3 with 4.45) is statistically significant (p < 0.05). The degree of understandability is higher for experts than for novices (p > 0.1).

Scenario   1 (D)   2 (C)   3 (D&C)
Total      4.55    4.40    4.45
Experts    4.62    4.38    4.67
Novices    4.46    4.42    4.18

Table 5: Understandability of conflicts and diagnoses.
We can partially confirm Hypothesis 3: experts have a higher understanding of the conflict when conflicts and diagnoses are presented, while novices can not deal with that much information. Due to the different cognitive processes (trial-and-error of novices versus analytical processing of experts [11]) it is easier to deal with diagnoses when the cognitive process is more analytical. When participants use a trial-and-error process and don't expect the visualization of conflicts, it is harder for them to adapt the preferences.

Hypothesis 4 evaluates the satisfaction with the recommended products. The average values range from 2.62 up to 3.30 (see Table 6), which is rather low and can be explained by the removal of all valid products at the beginning of the process.

Scenario   1 (D)   2 (C)   3 (D&C)   4 (Fit)
Total      2.62    3.30    2.80      3.30
Experts    2.44    3.12    2.33      3.19
Novices    2.88    3.50    3.35      3.48

Table 6: Satisfaction with the product assortment.

The results show that conflicts (Scenario 2) and the fitness function (Scenario 4) lead to the highest satisfaction with the product assortment. A differentiation between experts and novices does not influence the significance. Because conflicts and fitness values lead to the same satisfaction, we can not confirm Hypothesis 4. An interesting result is also that novices have an overall higher satisfaction with the product assortment compared to experts. This can be explained by the fact that novices are more content to get any products recommended at all, whereas experts know that there are products which fit their preferences.
Hypothesis 5 is evaluated on the basis of Table 7. There is a significant difference when participants had more than two interaction cycles. A statistically significant difference between experts and novices could not be established. A differentiation between the interaction cycles of diagnoses and conflicts also doesn't lead to a significant difference, neither between all interaction cycles nor between conflict and diagnosis visualization. We can confirm Hypothesis 5.

# interaction cycles   n    satisfaction
1                      34   4.44
2                      10   4.30
3                       3   2.67
≥ 4                     3   3.00

Table 7: Satisfaction with the presented conflicts with regard to interaction cycles.

A comparison between the number of conflicts / diagnoses and satisfaction, understandability, or time to resolve the inconsistency is not statistically significant.

4 Discussion

This paper gives an overview of conflict management in constraint-based recommendation systems. When we can not present products which fit the user's preferences, the user has to adapt her preferences. Such preference reversals often result in a low satisfaction of users. The degree of dissatisfaction depends on how often the preferences have been fulfilled in the past [22].

If users have positive experiences with their preferences, it can happen that the participants have well-established anchoring effects [21]. In such scenarios the participants may have stable preferences, and preference reversals are necessary to get notebooks. It can be more problematic if many conflicts / diagnoses are shown, because a representation of all conflicts / diagnoses may lead to a manifestation of the current preferences, so that the user is less willing to accept any conflicts / diagnoses. Such an effect is called status-quo bias [14; 18].

Another important aspect is the cognitive processing task. While novices tend to use trial-and-error processes, experts tend to use heuristic and analytic cognitive processes [11]. That means that novices tend to adapt their preferences until they receive products. Our results confirm this process since the satisfaction of novices is high if they can adjust their preferences arbitrarily or receive similar products (see Hypothesis 4). Experts, on the other hand, try to understand the modifications and analyze them. Therefore, they prefer the visualization of diagnoses (see Hypothesis 3).
5 Conclusion

This paper shows how different visualization strategies for conflicts can be used within constraint-based recommendation systems. We have presented the state-of-the-art in detecting all minimal preferred diagnoses and conflicts, calculated fitness values for products, and introduced hypotheses for conflict management which we evaluated with an empirical study. The result of this evaluation is that the optimal strategy for the visualization of inconsistencies depends on the optimization goal: the visualization of diagnoses leads to a low interaction effort, whereas the visualization of conflicts and fitness functions leads to a higher satisfaction.

A major focus of our future work will be the inclusion of decision-psychological effects such as framing, priming, and decoy effects into our studies. In this context we want to answer the question whether these phenomena also exist in the context of conflict detection and resolution scenarios.

References

[1] Ivan Arribas, Francisco Perez, and Emili Tortosa-Ausina. Measuring international economic integration: Theory and evidence of globalization. World Development, 37(1):127-145, 2009.
[2] John Brooke. SUS: a quick and dirty usability scale. In Patrick Jordan, B. Thomas, Bernard Weerdmeester, and Ian Lyall McClelland, editors, Usability Evaluation in Industry. Taylor and Francis, 1986.
[3] Robin Burke. Knowledge-based recommender systems. In Encyclopedia of Library and Information Systems. Marcel Dekker, 2000.
[4] Li Chen and Pearl Pu. Evaluating critiquing-based recommender agents. In Proceedings of the 21st National Conference on Artificial Intelligence - Volume 1, AAAI'06, pages 157-162. AAAI Press, 2006.
[5] Alexander Felfernig et al. Persuasive recommendation: Serial position effects in knowledge-based recommender systems. In Yvonne Kort, Wijnand IJsselsteijn, Cees Midden, Berry Eggen, and B.J. Fogg, editors, Persuasive Technology, volume 4744 of Lecture Notes in Computer Science, pages 283-294. Springer Berlin Heidelberg, 2007.
[6] Alexander Felfernig and Robin Burke. Constraint-based recommender systems: technologies and research issues. In Proceedings of the 10th International Conference on Electronic Commerce, ICEC '08, pages 3:1-3:10, New York, NY, USA, 2008. ACM.
[7] Alexander Felfernig, Gerhard Friedrich, Dietmar Jannach, and Markus Stumptner. Consistency-based diagnosis of configuration knowledge bases. Artificial Intelligence, 152(2):213-234, 2004.
[8] Alexander Felfernig and Monika Schubert. Personalized diagnoses for inconsistent user requirements. AI EDAM, 25(2):175-183, 2011.
[9] Alexander Felfernig, Monika Schubert, and Stefan Reiterer. Personalized diagnosis for over-constrained problems. In IJCAI, pages 1990-1996, 2013.
[10] Alexander Felfernig, Monika Schubert, and Christoph Zehentner. An efficient diagnosis algorithm for inconsistent constraint sets. AI EDAM, 26(1):53-62, 2012.
[11] Jon-Chao Hong and Ming-Chou Liu. A study on thinking strategy between experts and novices of computer games. Computers in Human Behavior, 19:245-258, 2003.
[12] Dietmar Jannach, Markus Zanker, Alexander Felfernig, and Gerhard Friedrich. Recommender Systems: An Introduction. Cambridge University Press, 2010.
[13] Ulrich Junker. QuickXplain: preferred explanations and relaxations for over-constrained problems. In Proceedings of the 19th National Conference on Artificial Intelligence, AAAI'04, pages 167-172. AAAI Press, 2004.
[14] Daniel Kahneman, Jack Knetsch, and Richard H. Thaler. Anomalies: The endowment effect, loss aversion, and status quo bias. The Journal of Economic Perspectives, 5:193-206, 1991.
[15] David McSherry. Similarity and compromise. In Proceedings of the Fifth International Conference on Case-Based Reasoning, pages 291-305. Springer, 2003.
[16] Michael Pazzani and Daniel Billsus. Content-based recommendation systems. In Peter Brusilovsky, Alfred Kobsa, and Wolfgang Nejdl, editors, The Adaptive Web, volume 4321 of Lecture Notes in Computer Science, pages 325-341. Springer Berlin / Heidelberg, 2007.
[17] Raymond Reiter. A theory of diagnosis from first principles. Artificial Intelligence, 32(1):57-95, 1987.
[18] William Samuelson and Richard Zeckhauser. Status quo bias in decision making. Journal of Risk and Uncertainty, 1:7-59, 1988.
[19] Monika Schubert and Alexander Felfernig. A diagnosis algorithm for inconsistent constraint sets. In Proceedings of the 21st International Workshop on the Principles of Diagnosis, 2010.
[20] Edward Tsang. Foundations of Constraint Satisfaction. Academic Press, 1993.
[21] Amos Tversky and Daniel Kahneman. Judgment under uncertainty: Heuristics and biases. Science, 185(4157):1124-1131, 1974.
[22] Amos Tversky, Paul Slovic, and Daniel Kahneman. The causes of preference reversal. American Economic Review, 80(1):204-217, March 1990.