<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Effects of Online Recommendations on Consumers' Willingness to Pay</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gediminas Adomavicius</string-name>
          <email>gedas@umn.edu</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jesse Bockstedt</string-name>
          <email>bockstedt@email.arizona.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shawn Curley</string-name>
          <email>curley@umn.edu</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jingjing Zhang</string-name>
          <email>jjzhang@indiana.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Indiana University</institution>
          ,
          <addr-line>Bloomington, IN</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Arizona</institution>
          ,
          <addr-line>Tucson, AZ</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Minnesota</institution>
          ,
          <addr-line>Minneapolis, MN</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <fpage>40</fpage>
      <lpage>45</lpage>
      <abstract>
        <p>We present the results of two controlled behavioral studies on the effects of online recommendations on consumers' economic behavior. In the first study, we found strong evidence that participants' willingness to pay was significantly affected by randomly assigned song recommendations, even when controlling for participants' preferences and demographics. In the second study, we presented participants with actual system-generated recommendations that were intentionally perturbed (i.e., significant error was introduced) and observed similar effects on willingness to pay. The results have significant implications for the design and application of recommender systems as well as for e-commerce practice.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>
        Recommender systems have become commonplace in online
purchasing environments. Much research in information systems
and computer science has focused on algorithmic design and
improving recommender systems’ performance
        <xref ref-type="bibr" rid="ref2">(see Adomavicius
&amp; Tuzhilin 2005 for a review)</xref>
        . However, little research has
explored the impact of recommender systems on consumer
behavior from an economic or decision-making perspective.
Considering how important recommender systems have become
in helping consumers reduce search costs to make purchase
decisions, it is necessary to understand how online recommender
systems influence purchases.
      </p>
      <p>In this paper, we investigate the relationship between
recommender systems and consumers’ economic behavior.
Drawing on theory from behavioral economics, judgment and
decision-making, and marketing, we hypothesize that online
recommendations1 significantly pull a consumer’s willingness to
pay in the direction of the recommendation. We test our
hypotheses using two controlled behavioral experiments on the
recommendation and sale of digital songs. In the first study, we
find strong evidence that randomly generated recommendations
(i.e., not based on user preferences) significantly impact
consumers’ willingness to pay, even when we control for user
preferences for the song, demographic and consumption-related
factors, and individual-level heterogeneity. In the second study,
we observe similar effects when participants are shown actual
system-generated recommendations that have been intentionally
perturbed.</p>
      <p>1 In this paper, for ease of exposition, we use the term
“recommendations” in a broad sense. Any rating that the consumer
receives purportedly from a recommendation system, even if negative
(e.g., 1 star on a five-star scale), is termed a recommendation of
the system.</p>
      <p>Paper presented at the 2012 Decisions@RecSys workshop in conjunction with the
6th ACM conference on Recommender Systems. Copyright © 2012 for the
individual papers by the papers' authors. Copying permitted for private and
academic purposes. This volume is published and copyrighted by its editors.
</p>
    </sec>
    <sec id="sec-2">
      <title>2. LITERATURE REVIEW AND HYPOTHESES</title>
      <p>
        Behavioral research has indicated that judgments can be
constructed upon request and, consequently, are often influenced
by elements of the environment. One such influence arises from
the use of an anchoring-and-adjustment heuristic
        <xref ref-type="bibr" rid="ref10 ref6">(Tversky and
Kahneman 1974; see review by Chapman and Johnson 2002)</xref>
        , the
focus of the current study. Using this heuristic, the decision
maker begins with an initial value and adjusts it as needed to
arrive at the final judgment. A systematic bias has been observed
with this process in that decision makers tend to arrive at a
judgment that is skewed toward the initial anchor.
      </p>
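      <p>The insufficient-adjustment bias described above can be illustrated numerically (a toy sketch of our own; the adjustment rate k is an assumed parameter, not an empirical estimate):</p>

```python
# Toy model of the anchoring-and-adjustment heuristic (illustrative
# only; the adjustment rate k below is an assumption, not an estimate).
def anchored_judgment(anchor, true_value, k=0.6):
    """Start at the anchor, then adjust only a fraction k of the way
    toward the decision maker's underlying value; k below 1 models the
    insufficient adjustment observed empirically."""
    return anchor + k * (true_value - anchor)

# A consumer whose underlying rating of a song is 3.0 stars:
high = anchored_judgment(anchor=4.5, true_value=3.0)  # judgment skews high
low = anchored_judgment(anchor=1.5, true_value=3.0)   # judgment skews low
assert low < 3.0 < high
```

      <p>With k = 0.6, the high anchor yields a judgment of 3.6 stars and the low anchor 2.4 stars for the same underlying 3.0-star preference, which is the skew-toward-the-anchor pattern discussed above.</p>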
      <p>
        Past studies have largely been performed using tasks for which a
verifiable outcome is being judged, leading to a bias measured
against an objective performance standard
        <xref ref-type="bibr" rid="ref6">(e.g., see review by
Chapman and Johnson 2002)</xref>
        . In the recommendation setting, the
judgment is a subjective preference and is not verifiable against an
objective standard. This aspect of the recommendation setting is
one of the task elements illustrated in Figure 1, where accuracy is
measured as a comparison between the rating prediction and the
consumer’s actual rating, a subjective outcome. Also illustrated
in Figure 1 is the feedback system involved in the use of
recommender systems. Predicted ratings (recommendations) are
systematically tied to the consumer’s perceptions of products.
Therefore, providing consumers with a predicted “system rating”
can potentially introduce anchoring biases that significantly
influence their subsequent ratings of items.
      </p>
      <p>One of the few papers identified in the mainstream anchoring
literature that has looked directly at anchoring effects in
preference construction is that of Schkade and Johnson (1989).
However, their work studied preferences between abstract,
stylized, simple (two-outcome) lotteries. This preference situation
is far removed from the more realistic situation that we address in
this work. More similar to our setting, Ariely et al. (2003)
observed anchoring in bids provided by students participating in
auctions of consumer products (e.g., wine, books, chocolates) in a
classroom setting. However, participants were not allowed to
sample the goods, an issue we address in this study.</p>
      <p>[Figure 1. The recommender system feedback loop: the recommender system (consumer preference estimation) supplies predicted ratings (expressing recommendations for unknown items) to the consumer (preference formation / purchasing behavior / consumption), whose actual ratings (expressing preferences for consumed items) feed back into the system; accuracy compares the rating prediction with the consumer’s actual rating.]</p>
      <sec id="sec-3-3">
        <p>Very little research has explored how the cues provided by
recommender systems influence online consumer behavior.
Cosley et al. (2003) dealt with a related but significantly different
anchoring phenomenon in the context of recommender systems.
They explored the effects of system-generated recommendations
on user re-ratings of movies. They found that users showed high
test-retest consistency when being asked to re-rate a movie with
no prediction provided. However, when users were asked to
re-rate a movie while being shown a “predicted” rating that was
altered upward or downward from their original rating by a single
fixed amount of one rating point (providing a high or a low
anchor), users tended to give higher or lower ratings, respectively
(compared to a control group receiving accurate original ratings).
This showed that anchoring could affect consumers’ ratings based
on preference recall, for movies seen in the past and now being
evaluated.</p>
        <p>Adomavicius et al. (2011) looked at a similar effect in an even
more controlled setting, in which the consumer preference ratings
for items were elicited at the time of item consumption. Even
without a delay between consumption and elicited preference,
anchoring effects were observed. The predicted ratings, when
perturbed to be higher or lower, affected the consumer ratings to
move in the same direction. The effects on consumer ratings are
potentially important for a number of reasons, e.g., as identified
by Cosley et al. (2003): (1) Biases can contaminate the inputs of
the recommender system, reducing its effectiveness. (2) Biases
can artificially improve the resulting accuracy, providing a
distorted view of the system’s performance. (3) Biases might
allow agents to manipulate the system so that it operates in their
favor. The direct effect of recommendations on consumer behavior
therefore remains an important and open research question.
However, in addition to the preference formation and
consumption issues, there is also the purchasing decision of the
consumer, as shown in Figure 1. Aside from the effects on
ratings, there is the important question of the possibility of
anchoring effects on economic behavior. Hence, the primary
focus of this research is to determine how anchoring effects
created by online recommendations impact consumers’ economic
behavior as measured by their willingness to pay. Based on the
prior research, we expect there to be similar effects on economic
behavior as observed with consumer ratings and preferences.
Specifically, we first hypothesize that recommendations will
significantly impact consumers’ economic behavior by pulling
their willingness to pay in the direction of the recommendation,
regardless of the accuracy of the recommendation.</p>
        <p>
Hypothesis 1: Participants exposed to randomly generated
artificially high (low) recommendations for a product will
exhibit a higher (lower) willingness to pay for that product.</p>
        <p>A common issue for recommender systems is error (often
measured by RMSE) in predicted ratings. This is evidenced by
Netflix’s recent competition for a better recommendation
algorithm with the goal of reducing prediction error by 10%
          <xref ref-type="bibr" rid="ref5">(Bennett and Lanning 2007)</xref>
          . If anchoring biases can be generated
by recommendations, then accuracy of recommender systems
becomes all the more important. Therefore, we wish to explore
the potential anchoring effects introduced when real
recommendations (i.e., based on the state-of-the-art recommender
systems algorithms) are erroneous. We hypothesize that
significant errors in real recommendations can have similar effects
on consumers’ behavior as captured by their willingness to pay for
products.
        </p>
        <p>Hypothesis 2: Participants exposed to a recommendation
that contains significant error in an upward (downward)
direction will exhibit a higher (lower) willingness to pay for
the product.</p>
        <p>We test these hypotheses with two controlled behavioral
studies, discussed next.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>3. STUDY 1: RECOMMENDATIONS AND WILLINGNESS-TO-PAY</title>
      <p>Study 1 was designed to test Hypothesis 1 and establish whether
or not randomly generated recommendations could significantly
impact a consumer’s willingness to pay.</p>
    </sec>
    <sec id="sec-6">
      <title>3.1. Procedure</title>
      <p>Both studies presented in this paper were conducted in the
same behavioral research lab at a large public North American
university, and participants were recruited from the university’s
research participant pool. Participants were paid a $10 fee plus a
$5 endowment that was used in the experimental procedure
(discussed below). Summary statistics on the participant pool for
both Study 1 and Study 2 are presented in Table 1. Seven
participants were dropped from Study 1 because of response
issues, leaving data on 42 participants for analysis.</p>
      <p>The experimental procedure for Study 1 consisted of three main
tasks, all of which were conducted on a web-based application
using personal computers with headphones and dividers between
participants. In the first task, participants were asked to provide
ratings for at least 50 popular music songs on a scale from one to
five stars with half-star increments. The songs presented for the
initial rating task were randomly selected from a pool of 200
popular songs, which was generated by taking the songs ranked in
the bottom half of the year-end Billboard 100 charts from 2006
and 2009.2 For each song, the artist name(s), song title, duration,
album name, and a 30-second sample were provided. The
objective of the song-rating task was to capture music preferences
from the participants so that recommendations could later be
generated using a recommendation algorithm (in Study 2 and
post-hoc analysis of Study 1, as discussed later).</p>
      <p>In the second task, a different list of songs was presented (with the
same information for each song as in the first task) from the same
set of 200 songs. For each song, the participant was asked
whether or not they owned the song. Songs that were owned were
excluded from the third task, in which willingness-to-pay
judgments were obtained. When the participants identified at
least 40 songs that they did not own, the third task was initiated.
</p>
      <p>In the third main task of Study 1, participants completed a
within-subjects experiment where the treatment was the star rating of the
song recommendation and the dependent variable was willingness
to pay for the songs. In the experiment, participants were
presented with 40 songs that they did not own, which included a
star rating recommendation, artist name(s), song title, duration,
album name, and a 30-second sample for each song. Ten of the 40
songs were presented with a randomly generated low
recommendation between one and two stars (drawn from a
uniform distribution; all recommendations were presented with a
one decimal place precision, e.g., 1.3 stars), ten were presented
with a randomly generated high recommendation between four
and five stars, ten were presented with a randomly generated
midrange recommendation between 2.5 and 3.5 stars, and ten were
presented with no recommendation to act as a control. The 30
songs presented with recommendations were randomly ordered,
and the 10 control songs were presented last.</p>
      <p>2 The Billboard 100 provides a list of popular songs released in each year.
The top half of each year’s list was not used, to reduce the number of songs
in our database that participants would already own.</p>
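      <p>The stimulus assignment just described can be sketched as follows (our reconstruction in Python; the function and variable names are ours, not the study’s code):</p>

```python
import random

# Sketch of the Study 1 stimulus assignment: 40 unowned songs, ten per
# group; shown ratings are drawn uniformly within each band and rounded
# to one decimal place, treated songs shuffled, controls shown last.
BANDS = {"low": (1.0, 2.0), "mid": (2.5, 3.5), "high": (4.0, 5.0)}

def assign_stimuli(unowned_songs, per_group=10, seed=None):
    rng = random.Random(seed)
    songs = rng.sample(unowned_songs, 4 * per_group)
    treated = []
    for i, band in enumerate(BANDS):
        lo, hi = BANDS[band]
        for song in songs[i * per_group:(i + 1) * per_group]:
            treated.append((song, band, round(rng.uniform(lo, hi), 1)))
    rng.shuffle(treated)  # the 30 treated songs in random order
    control = [(s, "control", None) for s in songs[3 * per_group:]]
    return treated + control  # the 10 control songs are presented last
```
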
      <p>To capture willingness to pay, we employed the
incentive-compatible Becker-DeGroot-Marschak method (BDM)
commonly used in experimental economics (Becker et al. 1964).
For each song presented during the third task of the study,
participants were asked to declare a price they were willing to pay
between zero and 99 cents. Participants were informed that five
songs selected at random at the end of the study would be
assigned random prices, based on a uniform distribution, between
one and 99 cents. For each of these five songs, the participant
was required to purchase the song using money from their $5
endowment at the randomly assigned price if it was equal to or
below their declared willingness to pay. Participants were
presented with a detailed explanation of the BDM method so that
they understood that the procedure incentivizes accurate reporting
of their prices, and were required to take a short quiz on the
method and endowment distribution before starting the study.
At the conclusion of the study, they completed a short survey
collecting demographic and other individual information for use
in the analyses. The participation fee and the endowment minus
fees paid for the required purchases were distributed to
participants in cash. MP3 versions of the songs purchased by
participants were “gifted” to them through Amazon.com
within approximately 12 hours after the study concluded.</p>
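      <p>The incentive property of the BDM mechanism can be verified with a short expected-surplus computation (our illustration; the 50-cent value is a hypothetical example):</p>

```python
# BDM sketch: a random price p (uniform on 1..99 cents) is drawn; the
# participant must buy iff p is at or below the declared price, paying
# p (not the declaration). Truthful declaration maximizes surplus.
def expected_surplus(declared, value_cents):
    # Buying at price p yields surplus value_cents - p; otherwise zero.
    return sum(value_cents - p for p in range(1, 100) if p <= declared) / 99

v = 50  # hypothetical true value of a song, in cents
truthful = expected_surplus(v, v)
assert truthful == max(expected_surplus(d, v) for d in range(100))
assert expected_surplus(40, v) < truthful  # under-reporting forgoes good deals
assert expected_surplus(60, v) < truthful  # over-reporting risks overpaying
```

      <p>Because the price paid is the random draw rather than the declared amount, deviating from one’s true value in either direction can only reduce expected surplus, which is why the method incentivizes accurate reporting.</p>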
    </sec>
    <sec id="sec-7">
      <title>3.2. Analysis and Results</title>
      <p>We start by presenting a plot of the aggregate means of
willingness to pay for each of the treatment groups in Figure 2.
Note that, although there were three treatment groups, the actual
ratings shown to the participants were randomly assigned star
ratings from within the corresponding treatment group range (low:
1.0-2.0 stars, mid: 2.5-3.5 stars, high: 4.0-5.0 stars).</p>
      <p>As an initial analysis, we performed a repeated-measures ANOVA,
as shown in Table 2, demonstrating a statistically significant
effect of the shown rating on willingness to pay. Since the overall
treatment effect was significant, we followed with pair-wise
contrasts using t-tests across treatment levels and against the
control group as shown in Table 3. All three treatment conditions
significantly differed, showing a clear, positive effect of the
treatment on economic behavior.</p>
      <p>
        To provide additional depth for our analysis, we used a panel data
regression model to explore the relationship between the shown
star rating (continuous variable) and willingness to pay, while
controlling for participant level factors. A Hausman test was
conducted, and a random effects model was deemed appropriate,
which also allowed us to account for participant level covariates
in the analysis. The dependent variable, i.e., willingness to pay,
was measured on an integer scale between 0 and 99 and skewed
toward the lower end of the scale. This is representative of typical
count data; therefore, a Poisson regression was used
(overdispersion was not an issue). The main independent variable
was the shown star rating of the recommendation, which was
continuous between one and five stars. Control variables for
several demographic and consumer-related factors were included,
which were captured in the survey at the end of the study.
Additionally, we controlled for the participants’ preferences by
calculating an actual predicted star rating recommendation for
each song (on a five-star scale with one decimal place precision), post hoc,
using the popular and widely-used item-based collaborative
filtering algorithm (IBCF)
        <xref ref-type="bibr" rid="ref8">(Sarwar et al. 2001)</xref>
        .3 By including
this predicted rating (which was not shown to the participant
during the study) in the analysis, we are able to determine if the
random recommendations had an impact on willingness to pay
above and beyond the participant’s predicted preferences.
The resulting Poisson regression model is shown below, where
WTPij is the reported willingness to pay for participant i on song j,
ShownRatingij is the recommendation star rating shown to
participant i for song j, PredictedRatingij is the predicted
recommendation star rating for participant i on song j, and
Controlsi is a vector of demographic and consumer-related
variables for participant i.
log(WTPij) = b0 + b1(ShownRatingij) + b2(PredictedRatingij)
+ b3(Controlsi) + ui + εij
The controls included in the model were gender (binary), age
(integer), school level (undergrad yes/no binary), prior experience
with recommendation systems (yes/no binary), preference ratings
(interval five-point scale) for the music genres country, rock, hip
hop, and pop, the number of songs owned (interval five-point scale),
frequency of music purchases (interval five-point scale), whether
they thought recommendations in the study were accurate (interval
five-point scale), and whether they thought the recommendations
were useful (interval five-point scale). The composite error term
(ui + εij) includes the individual participant effect ui and the
standard disturbance term εij.
      </p>
      <p>3 Several recommendation algorithms were evaluated based on the Study
1 training data, and IBCF was selected for use in both studies because it
had the highest predictive accuracy.</p>
      <p>The results of the regression are shown in Table 4. Note that the
control observations were not included, since they had null values
for the main independent variable ShownRating.</p>
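      <p>For readers unfamiliar with IBCF, the prediction step in the spirit of Sarwar et al. (2001) can be sketched as follows (a minimal cosine-similarity variant; this is our illustration, not the implementation used in the studies):</p>

```python
import math

# Minimal item-based collaborative filtering (IBCF) sketch: cosine
# similarity between item rating vectors over co-rating users, then a
# similarity-weighted average of the target user's known ratings.
def item_similarity(ratings, i, j):
    """Cosine similarity between items i and j; `ratings` maps
    user -> {item: stars}."""
    common = [u for u in ratings if i in ratings[u] and j in ratings[u]]
    if not common:
        return 0.0
    dot = sum(ratings[u][i] * ratings[u][j] for u in common)
    ni = math.sqrt(sum(ratings[u][i] ** 2 for u in common))
    nj = math.sqrt(sum(ratings[u][j] ** 2 for u in common))
    return dot / (ni * nj) if ni and nj else 0.0

def predict_rating(ratings, user, item):
    """Predict `user`'s star rating of `item` from the items they rated."""
    sims = [(item_similarity(ratings, item, j), r)
            for j, r in ratings[user].items() if j != item]
    denom = sum(abs(s) for s, _ in sims)
    if denom == 0:
        return None
    return round(sum(s * r for s, r in sims) / denom, 1)
```
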
      <p>The results of our analysis for Study 1 provide strong support for
Hypothesis 1 and demonstrate clearly that there is a significant
effect of recommendations on consumers’ economic behavior.
Specifically, we have shown that even randomly generated
recommendations with no basis on user preferences can impact
consumers’ perceptions of a product and, thus, their willingness to
pay. The regression analysis goes further and controls for
participant level factors and, most importantly, the participant’s
predicted preferences for the product being recommended. As can
be seen in Table 4, after controlling for all these factors, a one unit
change in the shown rating results in a 0.168 change (in the same
direction) in the log of the expected willingness to pay (in cents).
As an example, assuming a consumer has a willingness to pay of
$0.50 for a specific song and is given a recommendation,
increasing the recommendation star rating by one star would
increase the consumer’s willingness to pay to $0.59.</p>
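      <p>The effect-size arithmetic above can be checked directly (a worked example using the coefficient and dollar figures reported in the text):</p>

```python
import math

# A Poisson (log-link) coefficient of 0.168 on the shown rating implies
# a multiplicative effect of exp(0.168) per star on expected WTP.
wtp_cents = 50                 # the $0.50 example from the text
per_star = math.exp(0.168)     # roughly an 18% increase per star
assert round(wtp_cents * per_star) == 59   # $0.50 rises to about $0.59
```
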
    </sec>
    <sec id="sec-8">
      <title>4. STUDY 2: ERRORS IN RECOMMENDATIONS</title>
      <p>The goal of Study 2 was to extend the results of Study 1 by testing
Hypothesis 2 and exploring the impact of significant error in true
recommendations on consumers’ willingness to pay. As
discussed below, the design of this study is intended to test for
similar effects to those in Study 1, but in a more realistic setting with
recommender-system-generated, real-time recommendations.</p>
    </sec>
    <sec id="sec-10">
      <title>4.1. Procedure</title>
      <p>Participants in Study 2 used the same facilities and were recruited
from the same pool as in Study 1; however, there was no overlap
in participants across the two studies. The same participation fee
and endowment used in Study 1 were provided to participants in
Study 2. Fifteen participants were removed from the analysis in Study
2 because of issues in their responses, leaving data on 55
participants for analysis.</p>
      <p>
        Study 2 was also a within-subjects design with perturbation of the
recommendation star rating as the treatment and willingness to
pay as the dependent variable. The main tasks for Study 2 were
virtually identical to those in Study 1. The only differences
between the studies were the treatments and the process for
assigning stimuli to the participants in the recommendation task of
the study. In Study 2, all participants completed the initial
song-rating and song-ownership tasks as in Study 1. Next, real song
recommendations were calculated based on the participants’
preferences, which were then perturbed (i.e., excess error was
introduced to each recommendation) to generate the shown
recommendation ratings. In other words, unlike Study 1 in which
random recommendations were presented to participants, in Study
2 participants were presented with perturbed versions of their
actual personalized recommendations. Perturbations of -1.5 stars,
-1 star, -0.5 stars, 0 stars, +0.5 stars, +1 star, and +1.5 stars were
added to the actual recommendations, representing seven
treatment levels. The perturbed recommendation shown to the
participant was constrained to be between one and five stars;
therefore, perturbations were pseudo-randomly assigned to ensure
that the sum of the actual recommendation and the perturbation
would fit within the allowed rating scale. The recommendations
were calculated using the item-based collaborative filtering
(IBCF) algorithm
        <xref ref-type="bibr" rid="ref8">(Sarwar et al. 2001)</xref>
        , and the ratings data from
Study 1 was used as training data.
      </p>
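      <p>The constrained perturbation assignment can be sketched as follows (our reconstruction; the exact assignment logic is an assumption consistent with the description above):</p>

```python
import random

# Sketch of the Study 2 perturbation assignment: each of the seven
# levels is used five times (35 slots), and a level may only be paired
# with a song whose perturbed rating stays on the 1-5 star scale.
LEVELS = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]

def assign_perturbations(predicted, seed=None):
    """`predicted` maps song -> IBCF-predicted stars; returns
    song -> (perturbation, shown_rating)."""
    rng = random.Random(seed)
    pool = LEVELS * 5          # 35 treatment slots, five per level
    rng.shuffle(pool)
    out = {}
    for song, pred in predicted.items():
        # take the first remaining level that keeps the shown rating in [1, 5]
        feasible = [p for p in pool if 1.0 <= pred + p <= 5.0]
        if not feasible:
            continue
        p = feasible[0]
        pool.remove(p)
        out[song] = (p, round(pred + p, 1))
    return out
```
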
      <p>Each participant was presented with 35 perturbed, personalized
song recommendations, five from each of the seven treatment
levels. The song recommendations were presented in a random
order. Participants were asked to provide their willingness to pay
for each song, which was captured using the same BDM
technique as in Study 1. The final survey, payouts, and song
distribution were also conducted in the same manner as in Study
1.
4.2.</p>
    </sec>
    <sec id="sec-11">
      <title>Analysis and Results</title>
      <p>For Study 2, we focus on the regression analysis to determine the
relationship between error in a recommendation and willingness
to pay. We follow a similar approach as in Study 1 and model
this relationship using a Poisson random effects regression model.
The distribution of willingness to pay data in Study 2 was similar
to that of Study 1, overdispersion was not an issue, and the results
of a Hausman test for fixed versus random effects suggested that a
random effects model was appropriate. We control for the
participants’ preferences using the predicted rating for each song
in the study (i.e., the recommendation rating prior to
perturbation), which was calculated using the IBCF algorithm.
Furthermore, the same set of control variables used in Study 1 was
included in our regression model for Study 2. The resulting
regression model is presented below, where the main difference
from the model used in Study 1 is the inclusion of Perturbationij
(i.e., the error introduced for the recommendation of song j to
participant i) as the main independent variable.
log(WTPij) = b0 + b1(Perturbationij) + b2(PredictedRatingij)
+ b3(Controlsi) + ui + εij
The results are presented in Table 5.
The results of Study 2 provide strong support for Hypothesis 2
and extend the results of Study 1 in two important ways. First,
Study 2 provides more realism to the analysis, since it utilizes real
recommendations generated using an actual real-time
recommender system. Second, rather than randomly assigning
recommendations as in Study 1, in Study 2 the recommendations
presented to participants were calculated based on their
preferences and then perturbed to introduce realistic levels of
system error. Thus, considering the fact that all recommender
systems have some level of error in their recommendations, Study
2 contributes by demonstrating the potential impact of these
errors. As seen in Table 5, while controlling for preferences and
other factors, a one unit perturbation in the actual rating results in
a 0.115 change in the log of the participant’s willingness to pay.
As an example, assuming a consumer has a willingness to pay of
$0.50 for a given song, perturbing the system’s recommendation
positively by one star would increase the consumer’s willingness
to pay to $0.56.</p>
    </sec>
    <sec id="sec-12">
      <title>5. CONCLUSIONS</title>
      <p>Through a randomized trial design, Study 1 provided strong
evidence that willingness to pay can be affected by online
recommendations. Participants presented with random recommendations
were influenced even when controlling for demographic factors
and general preferences. Study 2 extended these results to
demonstrate that the same effects exist for real recommendations
that contain errors, which were calculated using the
state-of-the-art recommendation algorithms used in practice.</p>
      <p>There are significant implications of the results presented. First,
the results raise new issues on the design of recommender
systems. If recommender systems can generate biases in
consumer decision-making, do the algorithms need to be adjusted
to compensate for such biases? Furthermore, since recommender
systems use a feedback loop based on consumer purchase
decisions, do recommender systems need to be calibrated to
handle biased input? Second, biases in decision-making based on
online recommendations can potentially be used to the advantage
of e-commerce companies, and retailers can potentially become
more strategic in their use of recommender systems as a means of
increasing profit and marketing to consumers. Third, consumers
may need to become more cognizant of the potential decision
making biases introduced through online recommendations. Just
as savvy consumers understand the impacts of advertising,
discounting, and pricing strategies, they may also need to consider
the potential impact of recommendations on their purchasing
decisions.</p>
    </sec>
    <sec id="sec-13">
      <title>ACKNOWLEDGMENT</title>
      <p>This work is supported in part by the National Science Foundation
grant IIS-0546443.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Adomavicius</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bockstedt</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Curley</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2011</year>
          .
          <article-title>Recommender Systems, Consumer Preferences, and Anchoring Effects</article-title>
          .
          <source>Proceedings of the RecSys 2011 Workshop on Human Decision Making in Recommender Systems (Decisions@RecSys 2011)</source>
          , Chicago, IL, October 27, pp.
          <fpage>35</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Adomavicius</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Tuzhilin</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>Towards the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions</article-title>
          .
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          ,
          <volume>17</volume>
          (
          <issue>6</issue>
          ) pp.
          <fpage>734</fpage>
          -
          <lpage>749</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Ariely</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Loewenstein</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Prelec</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2003</year>
          .
          <article-title>“Coherent arbitrariness”: Stable demand curves without stable preferences</article-title>
          ,
          <source>Quarterly Journal of Economics</source>
          , (
          <volume>118</volume>
          )
          , pp.
          <fpage>73</fpage>
          -
          <lpage>105</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Becker</surname>
            ,
            <given-names>G.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>DeGroot</surname>
            ,
            <given-names>M.H.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Marschak</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>1964</year>
          .
          <article-title>Measuring utility by a single-response sequential method</article-title>
          .
          <source>Behavioral Science</source>
          ,
          <volume>9</volume>
          (
          <issue>3</issue>
          ), pp.
          <fpage>226</fpage>
          -
          <lpage>232</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Bennett</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Lanning</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2007</year>
          .
          <article-title>The Netflix Prize</article-title>
          .
          <source>KDD Cup and Workshop</source>
          . [www.netflixprize.com].
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Chapman</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Johnson</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <year>2002</year>
          .
          <article-title>Incorporating the irrelevant: anchors in judgments of belief and value</article-title>
          .
          <source>Heuristics and Biases: The Psychology of Intuitive Judgment</source>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gilovich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Griffin</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Kahneman</surname>
          </string-name>
          (eds.), Cambridge University Press, Cambridge, pp.
          <fpage>120</fpage>
          -
          <lpage>138</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Cosley</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lam</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Albert</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Konstan</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2003</year>
          .
          <article-title>Is seeing believing? How recommender interfaces affect users' opinions</article-title>
          .
          <source>CHI 2003 Conference</source>
          , Fort Lauderdale, FL.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Sarwar</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Karypis</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Konstan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2001</year>
          .
          <article-title>Item-based collaborative filtering recommendation algorithms</article-title>
          .
          <source>10th Annual World Wide Web Conference (WWW10)</source>
          , May 1-5, Hong Kong.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Schkade</surname>
            ,
            <given-names>D.A.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Johnson</surname>
            ,
            <given-names>E.J.</given-names>
          </string-name>
          <year>1989</year>
          .
          <article-title>Cognitive processes in preference reversals</article-title>
          .
          <source>Organizational Behavior and Human Decision Processes</source>
          , (
          <volume>44</volume>
          ), pp.
          <fpage>203</fpage>
          -
          <lpage>231</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Tversky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Kahneman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>1974</year>
          .
          <article-title>Judgment under uncertainty: Heuristics and biases</article-title>
          .
          <source>Science</source>
          , (
          <volume>185</volume>
          ), pp.
          <fpage>1124</fpage>
          -
          <lpage>1131</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>