<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Can Trailers Help to Alleviate Popularity Bias in Choice-Based Preference Elicitation?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mark P. Graus</string-name>
          <email>m.p.graus@tue.nl</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Martijn C. Willemsen</string-name>
          <email>m.c.willemsen@tue.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Eindhoven University of Technology</institution>
          ,
          <addr-line>IPO 0.17, PB 513, 5600 MB, Eindhoven</addr-line>
          ,
          <country country="NL">Netherlands</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Eindhoven University of Technology</institution>
          ,
          <addr-line>IPO 0.20, PB 513, 5600 MB, Eindhoven</addr-line>
          ,
          <country country="NL">Netherlands</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>…asking users for feedback on items they experienced a longer time ago may require them to rely on memory. This can lead to unreliable feedback [2]. One way to reduce the effort can be found in choice-based preference elicitation. Where most recommender systems ask users to provide a number of ratings on items (explicit feedback), recommender systems applying choice-based preference elicitation ask the user to make a number of choices (implicit feedback). Using implicit feedback to produce personalized rankings has been shown to provide better-fitting prediction models than using explicit feedback [8]. In recent user studies, users of collaborative filtering systems were provided with choice-based preference elicitation [4, 6]. Where in the more standard rating-based preference elicitation people are asked to rate the items they know, in choice-based preference elicitation they are asked to choose the item that best matches their preference from a list. In our own work, this alternative has been shown to require less effort than rating.</p>
        <p>CCS Concepts: Information systems → Recommender systems; Human-centered computing → Human computer interaction (HCI); User studies; User models.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>Choice-Based Preference Elicitation</title>
      <p>Previous research showed that choice-based preference elicitation can be successfully used to reduce effort during the user cold start, resulting in improved user satisfaction with the recommender system. However, it has also been shown to result in highly popular recommendations. In the present study we investigate whether trailers reduce this bias towards popular recommendations by informing the user and enticing her to choose less popular movies. In a user study we show that users who watched trailers chose relatively less popular movies, and how trailers affected the overall user experience with the recommender system.</p>
      <p>New user cold start is one of the central problems in recommender systems. It occurs when a user starts using a recommender system: as there is no information for this user to base recommendations on, the recommender system requires her to provide feedback in order to receive recommendations. This quite often requires significant effort from the user.</p>
      <p>In addition, as users watch only a certain amount of movies over any time period, asking users to provide a set amount of feedback may require them to provide feedback on items that they have experienced a longer time ago, which will require them to rely on memory.</p>
      <p>IntRS 2016, September 16, 2016, Boston, MA, USA. © 2016 ACM.</p>
    </sec>
    <sec id="sec-3">
      <title>Memory Effects in Recommender Systems</title>
      <p>
        Memory effects could be a possible explanation for this bias towards popular movies that results in people receiving overly popular recommendations. In rating-based recommender systems, memory effects have been shown to influence how users provide feedback. Bollen et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] have demonstrated that ratings given closer to the time the movie was actually watched tend to be more extreme than ratings for movies that have been watched a longer time ago. They argue that this is because people forget information about the movies required to rate them, which has consequences for the reliability of the input provided. This same effect could result in users choosing items that they recognize in a choice-based preference elicitation task: it is more likely that people remember more popular movies than less popular movies.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Trailers as source of extra information</title>
      <p>The current study investigates whether this bias towards picking popular movies can be alleviated by giving users additional information to make more informed choices.</p>
      <p>In order to both minimize the effort required and maximize the reliability of the input given during the new user cold start situation, we propose to use choice-based preference elicitation and provide the user with additional information to give her the means to make more informed choices.</p>
      <p>In most recommender systems users can already rely on meta-information such as genre, cast and a synopsis. A possible additional source of information about a movie can be found in trailers. Trailers may help the user in two ways. Firstly, trailers can help a user refresh her memory to provide reliable feedback, alleviating the potential memory problems described in the previous section. Secondly, even for movies that a user has not seen yet, a trailer can be used to evaluate whether or not a movie is worth watching. This is an advantage of choice-based preference elicitation over rating-based preference elicitation, because in rating-based systems users typically only rate (and provide information on) movies they have actually watched.</p>
    </sec>
    <sec id="sec-5">
      <title>Research Question and hypotheses</title>
      <p>The present research aims to investigate how providing additional information in the form of trailers during choice-based preference elicitation affects the interaction in terms of both objective behavior and subjective user experience.</p>
      <p>In terms of objective behavior we hypothesize that trailers
allow users to make more informed choices and rely less on
popularity when making these choices. In other words, we
expect the possibility to watch trailers to reduce the
popularity of the items a user chooses.</p>
      <p>In terms of user experience we expect trailers to provide the user with more information, which is expected to be reflected in the perceived informativeness of the system. As we expect trailers to motivate users to select less popular movies, we expect perceived recommendation novelty (the opposite of popularity) and diversity to increase. Both novelty and diversity may affect system and choice satisfaction.</p>
      <p>
        It is hard to formulate expectations about the direction of the effect of trailers on user satisfaction. We expect user satisfaction in this setting to consist of system satisfaction (i.e. "how well does this system help me") and choice satisfaction ("how happy am I with the item that I choose based on this system"). In previous research, novelty and system satisfaction were shown to be negatively correlated [
        <xref ref-type="bibr" rid="ref4 ref9">9, 4</xref>
        ]. On the other hand, trailers might make users open to less popular movies, and as such novelty could have a positive effect on choice satisfaction. Additionally, previous studies have shown that system satisfaction positively influences choice satisfaction [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Having the possibility to watch trailers may result in increased system satisfaction and thus choice satisfaction. Considering all these effects, it is hard to foresee in what way trailers will affect the user experience.
      </p>
      <p>The expected effects are shown in Fig. 2 below, with the directions of the hypothesized effects indicated where possible.</p>
    </sec>
    <sec id="sec-6">
      <title>METHOD</title>
      <p>A system was developed to address the research questions through an online study. Participants were invited to browse to a website where they could access our recommender system. Upon entering the website, participants were randomly assigned to one of two experimental conditions: the trailer condition, where participants were given the possibility to watch trailers, and the non-trailer condition, where participants could not watch those trailers. They were subsequently shown an introduction page with an informed consent form and a brief explanation of the task at hand.</p>
      <p>
        After the explanation, the preference elicitation phase started (see Fig. 1 for a screenshot), where the experimental manipulation came into effect. Participants in the trailer condition were able to see trailers, whereas participants in the non-trailer condition were not. Applying the same methodology as in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], participants were presented with a set of 10 movies to choose from. The participants in the trailer condition were informed about how they could watch trailers for the recommended movies. Participants were asked to evaluate the list and select the movie they would like to watch.
      </p>
      <p>
        After choosing, the system would incorporate the choice and provide the participant with a new set of recommendations. Participants were assigned a null vector upon entering, after which each choice was incorporated by the recommender system in four steps, described in more detail in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Firstly, the user vector in the matrix factorization model was moved in the direction of the chosen item. Secondly, new rating predictions were calculated. Thirdly, the proportion of movies with the lowest predicted ratings was discarded. Fourthly, a new choice set was calculated by taking the maximally diversified set from the remaining movies. Diversification was done through a greedy selection algorithm [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] with the goal of minimizing intra-list similarity [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] by maximizing the sum of the distances in the matrix factorization space between recommended items.
      </p>
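      <p>The four update steps can be sketched as follows. This is a minimal illustration, not the authors' implementation: the latent dimensionality, step size, discard proportion and all names (incorporate_choice, item_factors, keep_fraction) are assumptions made for the sketch.</p>
      <preformat>
```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the trained matrix factorization model;
# size and contents are made up for illustration.
n_items, n_factors = 2500, 20
item_factors = rng.normal(size=(n_items, n_factors))

def incorporate_choice(user_vector, chosen_item, candidates,
                       step=0.5, keep_fraction=0.9, set_size=10):
    """Sketch of the four-step choice incorporation described in the text."""
    # Step 1: move the user vector in the direction of the chosen item.
    user_vector = user_vector + step * (item_factors[chosen_item] - user_vector)

    # Step 2: calculate new rating predictions (dot products in latent space).
    candidates = [i for i in candidates if i != chosen_item]
    scores = item_factors[candidates] @ user_vector

    # Step 3: discard the proportion of movies with the lowest predictions.
    n_keep = max(set_size, int(keep_fraction * len(candidates)))
    order = np.argsort(-scores)
    survivors = [candidates[j] for j in order[:n_keep]]

    # Step 4: greedily build a maximally diversified choice set by
    # maximizing the summed pairwise distance in the latent space.
    chosen = [survivors[0]]
    while len(chosen) != set_size:
        best = max((i for i in survivors if i not in chosen),
                   key=lambda i: sum(np.linalg.norm(item_factors[i] - item_factors[c])
                                     for c in chosen))
        chosen.append(best)
    return user_vector, chosen

# A participant starts with a null vector and makes a first choice.
user_vector = np.zeros(n_factors)
user_vector, choice_set = incorporate_choice(user_vector, chosen_item=5,
                                             candidates=list(range(200)))
```
      </preformat>
      <p>In the study the candidate pool shrank after every choice; here a fixed 200-movie pool stands in for it.</p>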
      <p>After 9 such choices, the user would see an explanation of how the choices they made would be used to calculate the final set of recommendations. The screen with the final recommendations was identical to the previous screens except for the explanation. The final recommendations consisted of the Top-10 movies based on the last calculated user vector. People were asked to make the final choice from this list, after which they were invited to complete a survey designed to measure the user experience.</p>
      <p>The interface allowed users in the trailer condition to watch trailers by hovering over the presented movie covers. The trailers were retrieved through The Movie Database (https://www.themoviedb.org/). After hovering for 2 seconds, a video player would appear in allocated space in the interface. Each trailer for which a user pressed the play button was stored as a view.</p>
    </sec>
    <sec id="sec-7">
      <title>Recommender Algorithm</title>
      <p>The recommendations were predicted through a matrix factorization model trained on ratings for the 2500 most rated movies in the 10M MovieLens dataset. The final dataset consisted of 69k users, 2500 items and 8.82M ratings. The performance metrics of the model were up to standard (MAE: 0.61358, RMSE: 0.79643, measured through 5-fold cross-validation).</p>
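      <p>For reference, the two reported error metrics can be computed from held-out predictions as follows; this is a generic sketch with made-up ratings, not output from the actual model or folds.</p>
      <preformat>
```python
import numpy as np

def mae_rmse(y_true, y_pred):
    """Mean absolute error and root mean squared error of rating predictions."""
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(err))), float(np.sqrt(np.mean(err ** 2)))

# Illustrative held-out ratings versus predictions for one fold:
mae, rmse = mae_rmse([4.0, 3.0, 5.0], [3.5, 3.0, 4.0])
```
      </preformat>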
    </sec>
    <sec id="sec-8">
      <title>Participants</title>
      <p>In total, 89 participants made at least one choice in the system. Participants were recruited from different courses in the department and were entered in a raffle for one of 5 gift cards. No demographic information was asked. Out of the 89 participants, 50 were in the condition where no trailers could be watched and 39 were able to watch trailers. The people who were able to do so watched on average 10.38 trailers (median = 10, SD = 9.69).</p>
      <p>In total, 74 participants completed the survey. After inspection, data from 3 participants was removed because they completed the survey unrealistically fast. A total of 71 responses (40 from participants who did not have the possibility to watch trailers, 31 from participants who did) was thus used to study the effects on user experience.</p>
    </sec>
    <sec id="sec-9">
      <title>Measures</title>
      <p>In order to test our hypotheses we measured aspects of behavior and developed a survey to measure user experience. In terms of behavior we measured the popularity of the movies people chose and whether or not they watched any trailers. Popularity is defined as the rank order based on the number of ratings in the original MovieLens dataset. The movies are ranked from the most rated (1) to the least rated (2500).</p>
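      <p>This popularity rank amounts to counting ratings per movie and ranking the counts. A minimal sketch with a toy ratings table: the column names follow the MovieLens layout, but the data itself is made up.</p>
      <preformat>
```python
import pandas as pd

# Toy ratings in the MovieLens (userId, movieId, rating) layout.
ratings = pd.DataFrame({
    "userId":  [1, 1, 2, 2, 3, 3, 3],
    "movieId": [10, 20, 10, 30, 10, 20, 30],
    "rating":  [4.0, 3.5, 5.0, 2.0, 4.5, 4.0, 3.0],
})

# Rank movies from most rated (1) to least rated (here 3; 2500 in the study).
counts = ratings.groupby("movieId").size()
popularity_rank = counts.rank(method="first", ascending=False).astype(int)
```
      </preformat>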
      <p>
        We investigate the user experience following the evaluation framework from Knijnenburg et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In line with the research questions, we developed a survey with the aim of measuring 5 aspects of user experience: perceived informativeness, perceived novelty, perceived diversity, system satisfaction and choice satisfaction. The items used are shown in Table 1. All items were submitted to a confirmatory factor analysis (CFA). The CFA used repeated ordinal dependent variables and a weighted least squares estimator, estimating 5 factors. Items with low factor loadings, high cross-loadings, or high residual correlations were removed from the analysis. The factor loadings for the novelty construct were not sufficiently high, so it was dropped from the final factor analysis.
      </p>
    </sec>
    <sec id="sec-10">
      <title>RESULTS</title>
      <p>The results section will first describe how trailers affect the choices users make. After that, the analysis of the survey data will provide insight into how trailers affect the user experience.</p>
    </sec>
    <sec id="sec-11">
      <title>Behavior</title>
      <p>The effects on user behavior are expected to be two-fold. Firstly, as trailers allow the user to make more informed choices, we expect the individual chosen items to be less popular for people watching trailers. In other words, movies chosen by users who watch trailers are expected to have lower popularity ranks. Secondly, when people make less popular choices throughout the interaction with the system, we expect the individual choice sets to be less popular as a whole. For users who watch trailers, the average popularity rank of choice sets is expected to be lower.</p>
      <p>[Figure 3: relative popularity of the chosen item per choice set (2.5 through 10.0), split by whether participants watched trailers.]</p>
      <p>An alternative way to study this e ect is by looking at the
relative popularity of the choices users make, instead of the
absolute popularity. To do this we calculated for each choice
the di erence between the popularity rank of the chosen item
and the average popularity rank of the items in the set. If
this number is positive, the chosen item is above average in
terms of popularity, if it is negative, the chosen item is below
average in terms of popularity.</p>
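      <p>That relative measure can be written down directly. The sign convention below follows the stated interpretation (positive = above-average popularity), which, with rank 1 being the most rated movie, means subtracting the chosen item's rank from the set average; the function name and example ranks are made up.</p>
      <preformat>
```python
def relative_popularity(chosen_rank, set_ranks):
    """Average popularity rank of the set minus the chosen item's rank.

    Positive: the chosen movie is more popular than the set average
    (it has a smaller rank, and rank 1 is the most rated movie).
    """
    return sum(set_ranks) / len(set_ranks) - chosen_rank

# Choosing the rank-200 movie from a set whose ranks average 1000:
score = relative_popularity(200, [200, 800, 1000, 1200, 1800])
```
      </preformat>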
      <p>Although there was no difference across experimental conditions, the plot in Figure 3 shows that for participants who actually watched trailers (i.e. people in the trailer condition who watched at least one trailer) the relative popularity of the chosen item decreases after around 5 choices compared to participants who did not watch trailers (i.e. people in the non-trailer condition or in the trailer condition who did not watch any trailers). In a repeated measures ANOVA this effect proves to be significantly lower (F(1, 87) = 6.992, p &lt; 0.01) for users who watched trailers. Watching trailers thus made users choose relatively less popular movies.</p>
      <p>In order to understand the consequences for the user experience, we investigate the survey data.</p>
    </sec>
    <sec id="sec-12">
      <title>User Experience</title>
      <p>The subjective constructs from the CFA were organized into a path model using Structural Equation Modeling. The resulting model had good model fit (χ²(66) = 1052.974, p &lt; 0.001; CFI = .997; TLI = .996; RMSEA = .029, 90% CI [0.000, 0.084]). The corresponding path model is shown in Figure 4.</p>
      <p>
        Different from earlier studies [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], we did not find that system satisfaction influences choice satisfaction directly. Moreover, system and choice satisfaction are not strongly related. A possible explanation for this could be that in this study the distinction between the preference elicitation task and the recommendation stage is less clear than in previous studies. As every choice task has the same appearance as a set of recommendations (despite the clear explanation), the choice task from the final list of recommendations might not have been perceived as much different from the choice tasks during the preference elicitation phase. System Satisfaction in turn is positively influenced by Informativeness. In addition, the more people experience Informativeness, the less they perceive Diversity. Opposed to previous studies, we find that higher diversity results in lower Choice Satisfaction.
      </p>
      <p>In order to investigate the overall effects of the trailers, we additionally consider the marginal means. The trailers affect the user experience in a number of ways. Firstly, providing trailers is experienced as an increase in the informativeness of the system (statistically significant: β = 0.664, t(69) = 3.142, p &lt; 0.01), as can be seen in the path model (Figure 4) and the marginal means (Figure 5).</p>
      <p>It also results in an increased perceived diversity, but this effect is counteracted by the decrease resulting from the increased informativeness. This indirect effect of the manipulation on diversity through perceived informativeness results in the non-significant effect we observe in Figure 5. As far as system satisfaction is concerned, trailers actually decrease system satisfaction. But similar to perceived diversity, this direct effect of trailers on system satisfaction is counteracted by the positive effect of the increased informativeness on system satisfaction.</p>
    </sec>
    <sec id="sec-13">
      <title>CONCLUSION AND DISCUSSION</title>
      <p>The present study aimed to decrease the tendency to use popularity as a heuristic in a choice-based preference elicitation task by providing users with the means to make informed choices.</p>
      <p>The analysis of user behavior showed that people watching trailers are more inclined to pick relatively less popular items. By investigating the user experience we found that, aside from the impact on the decisions users make, the user experience was influenced. Informativeness of the system increased with the possibility of watching trailers. While no significant differences were found on the other aspects of user experience, the path model provides insight into the positive and negative consequences of providing trailers, consisting of increased informativeness and diversity, but decreased system satisfaction.</p>
    </sec>
    <sec id="sec-14">
      <title>Limitations</title>
      <p>
        One of the limitations is that the effect of trailers on the user experience with a recommender system is not tested against a more standard approach to preference elicitation. As users expressed that rating items costs more effort than choosing [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], providing them with trailers during rating tasks may make the task cost too much effort, and subsequently users may decide not to look at trailers. Nonetheless, comparing the effects of using trailers in choice-based versus rating-based preference elicitation can be valuable future research.
      </p>
      <p>One aspect of behavior worth investigating based on the findings in this study is in what stage of the preference elicitation task users watched trailers. The way data was stored in the current dataset does not allow us to investigate, for example, whether people watch more trailers in the beginning or towards the end of the study, which could provide more fine-grained insight into how trailers influence the choices people make. Future research should incorporate not only whether or not people watch trailers, but also when they do so. It would be particularly interesting to see if users use trailers differently in the choice from the final recommendations compared to the choices during the preference elicitation task.</p>
      <p>The effect of popularity on choice satisfaction needs to be investigated in more detail. Previous studies have shown that the popularity of recommendations has a positive influence on choice satisfaction in lab settings, but whether or not this effect remains in the long run needs to be investigated. It is possible that popularity can be used as a heuristic when evaluating a recommender system, but that longer-term interaction is actually harmed by high popularity.</p>
    </sec>
    <sec id="sec-15">
      <title>Acknowledgments</title>
      <p>We thank Olivier van Duuren, Niek van Sleeuwen, Bianca Ligt, Danielle Niestadt, Shivam Rastogi and Suraj Iyer for programming the user interface and performing the experiment as part of their student research project.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] L. Bledaite and F. Ricci. Pairwise preferences elicitation and exploitation for conversational collaborative filtering. In Proceedings of the 26th ACM Conference on Hypertext &amp; Social Media, HT '15, pages 231-236, New York, NY, USA, 2015. ACM.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] D. Bollen, M. Graus, and M. C. Willemsen. Remembering the stars? In Proceedings of the sixth ACM conference on Recommender systems - RecSys '12, page 217, New York, New York, USA, 2012. ACM Press.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] P. Castells, N. J. Hurley, and S. Vargas. Novelty and Diversity in Recommender Systems. In Recommender Systems Handbook, volume 54, pages 881-918. Springer US, Boston, MA, 2015.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] M. P. Graus and M. C. Willemsen. Improving the User Experience during Cold Start through Choice-Based Preference Elicitation. In Proceedings of the 9th ACM Conference on Recommender Systems - RecSys '15, pages 273-276, New York, New York, USA, 2015. ACM Press.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] B. Knijnenburg, M. Willemsen, Z. Gantner, H. Soncu, and C. Newell. Explaining the user experience of recommender systems. User Modeling and User-Adapted Interaction, 22(4-5):441-504, 2012.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] B. Loepp, T. Hussein, and J. Ziegler. Choice-based preference elicitation for collaborative filtering recommender systems. In Proceedings of the 32nd annual ACM conference on Human factors in computing systems - CHI '14, pages 3085-3094, 2014.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] E. Minack, W. Siberski, and W. Nejdl. Incremental diversification for very large sets. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval - SIGIR '11, page 585, 2011.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI '09, pages 452-461, Arlington, Virginia, United States, 2009. AUAI Press.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] M. C. Willemsen, M. P. Graus, and B. P. Knijnenburg. Understanding the role of latent feature diversification on choice difficulty and satisfaction. UMUAI, under revision.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>