<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Reproducing and Prototyping Recommender Systems in R</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ludovik Coba</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Panagiotis Symeonidis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Markus Zanker</string-name>
          <email>Markus.Zanker@unibz.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Free University of Bozen-Bolzano</institution>
          ,
          <addr-line>39100, Bozen-Bolzano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper we describe rrecsys, an open source extension package in R for rapid prototyping and intuitive assessment of recommender system algorithms. Due to its wide variety of implemented packages and functionalities, the R language represents a popular choice for many tasks in Data Analysis. This package replicates the most popular collaborative filtering algorithms for rating and binary data, and we compare results with the Java-based LensKit implementation for the purpose of benchmarking the implementation. Therefore this work can also be seen as a contribution in the context of replication of algorithm implementations and reproduction of evaluation results. Users can easily tune available implementations or develop their own algorithms and assess them according to the standard methodology for offline evaluation. Thus this package should represent an easily accessible environment for research and teaching purposes in the field of recommender systems.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>R represents a popular choice in Data Analytics and Machine Learning. The
software has low setup cost and offers a large selection of packages and
functionalities to enhance and prototype algorithms with compact code and good
visualization tools. Thus R represents a suitable environment for exploring the
field of recommender systems. We therefore present and contribute a novel R
package that reproduces several of the most popular recommender algorithms for
Likert-scaled as well as binary rating values. The functionality of this framework
is mainly focused on prototyping and educational purposes due to the compact
code representation and the interactive way of invoking and visualizing results
in R. We took advantage of the large selection of packages in R to implement
the algorithms included in our package. We achieve competitive performance due
to highly vectorized and mixed R/C++ implementations.</p>
      <p>
        We introduce rrecsys [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ] by presenting a general overview of the
implemented algorithms and the evaluation methodology. A few code examples
are included in an appendix. Furthermore, we compare the results of
rrecsys with those of the LensKit library. We also include evidence on runtime
performance that documents the efficient implementation of the algorithms in R.
      </p>
      <p>rrecsys Package: rrecsys has a modular structure and includes expansion
capabilities. The core of the package comprises implementations of several
popular algorithms, an evaluation component, and auxiliary parts for data
analysis and convergence detection. Next we concisely describe the included
algorithms.</p>
    </sec>
    <sec id="sec-2">
      <title>Algorithms</title>
      <p>Baseline and Popularity: The included baseline predictors are the global
mean rating (Global Average), the item's mean rating (Item Average), and the
user's mean rating (User Average), as well as an unpersonalized Most Popular
method that determines item popularity based on the total number of (positive)
ratings.</p>
      <p>
        Item Based K-nearest neighbors: Given a target user and her positively
rated items, the algorithm identifies the k most similar items for each target item
and ranks them according to aggregated similarities with the different targets,
as described by Sarwar et al. [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. As similarity measures we provide the cosine
similarity and the adjusted cosine similarity.
      </p>
      <p>Items a and b are considered as rating vectors a and b in the user space.
Cosine similarity measures the cosine of the angle between those vectors:</p>
      <p>sim(a, b) = cos(a, b) = (a · b) / (‖a‖ ‖b‖)   (1)</p>
      <p>The adjusted cosine similarity is computed by offsetting the user average on
each co-rated pair of the two item vectors. If Ia and Ib are the sets of users that
rated item a and item b respectively, the adjusted cosine similarity is measured
as:</p>
      <p>sim(a, b) = Σ_{u ∈ Ia ∩ Ib} (R_ua − R̄_u)(R_ub − R̄_u) /
( √Σ_{u ∈ Ia ∩ Ib} (R_ua − R̄_u)² · √Σ_{u ∈ Ia ∩ Ib} (R_ub − R̄_u)² )   (2)</p>
      <p>where R̄_u is the average rating of user u, computed as:</p>
      <p>R̄_u = ( Σ_{i ∈ Iu} R_ui ) / |Iu|   (3)</p>
      <p>where Iu is the set of items rated by user u.</p>
      <p>
        Once similarities among all items are computed, a neighborhood can be
formed by choosing the items with the highest similarity values. Predictions for
a target user u and item a are calculated as the weighted sum [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]:
      </p>
      <p>p_{u,a} = Σ_{all similar items k} ( s_{a,k} · R_{u,k} ) / Σ_{all similar items k} |s_{a,k}|   (4)</p>
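      <p>For illustration, the item-based adjusted cosine similarity and weighted sum
prediction described above can be sketched numerically. The following Python
snippet is purely illustrative; it is not rrecsys code, and the toy rating matrix
and helper names are ours:</p>
      <p>
```python
import numpy as np

# Toy user-item rating matrix (0 = missing); rows are users, columns are items.
R = np.array([[5.0, 3.0, 4.0],
              [4.0, 0.0, 5.0],
              [1.0, 2.0, 0.0]])

def adjusted_cosine(R, a, b):
    """Adjusted cosine similarity between item columns a and b."""
    co = np.logical_and(R[:, a] > 0, R[:, b] > 0)   # users who rated both items
    user_mean = R.sum(axis=1) / np.maximum((R > 0).sum(axis=1), 1)
    da = R[co, a] - user_mean[co]                   # ratings offset by user means
    db = R[co, b] - user_mean[co]
    denom = np.sqrt((da ** 2).sum()) * np.sqrt((db ** 2).sum())
    return (da * db).sum() / denom if denom > 0 else 0.0

def predict_item_based(R, u, a, k=2):
    """Weighted-sum prediction of user u's rating for item a over k neighbors."""
    sims = np.array([adjusted_cosine(R, a, j) if j != a else -np.inf
                     for j in range(R.shape[1])])
    rated = np.where(R[u] > 0)[0]                   # items user u has rated
    neigh = sorted(rated, key=lambda j: sims[j], reverse=True)[:k]
    num = sum(sims[j] * R[u, j] for j in neigh)
    den = sum(abs(sims[j]) for j in neigh)
    return num / den if den > 0 else 0.0
```
      </p>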
      <p>
        User Based K-nearest neighbors: Herlocker et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] proposed an
algorithm that finds similarities among users instead of among items. In our
implementation we consider the cosine similarity and the Pearson correlation.
      </p>
      <p>Users u and v are considered as rating vectors u and v in the item space.
Cosine similarity measures the cosine of the angle between those vectors, using Formula 1.</p>
      <p>Instead, the Pearson correlation is measured by offsetting the user averages
on co-rated pairs among the user vectors:</p>
      <p>sim(u, v) = Pearson(u, v) = Σ_{i ∈ Iu ∩ Iv} (R_ui − R̄_u)(R_vi − R̄_v) /
( √Σ_{i ∈ Iu ∩ Iv} (R_ui − R̄_u)² · √Σ_{i ∈ Iu ∩ Iv} (R_vi − R̄_v)² )   (5)</p>
      <p>where R̄_u and R̄_v are the average ratings of user u and user v respectively,
computed as in Formula 3.</p>
      <p>
        Once similarities among all users are computed, a neighborhood can be
formed by choosing the users with the highest similarity values. Predictions for
a target user u and item a are calculated as the weighted sum [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]:
      </p>
      <p>p_{u,a} = R̄_u + Σ_{all similar users k} s_{u,k} · ( R_{k,a} − R̄_k ) / Σ_{all similar users k} |s_{u,k}|   (6)</p>
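      <p>The user-based variant with the Pearson correlation can likewise be sketched
numerically. Again, this is an illustrative Python snippet, not rrecsys code; the
toy matrix and function names are ours:</p>
      <p>
```python
import numpy as np

# Toy rating matrix (0 = missing); rows are users, columns are items.
R = np.array([[5.0, 3.0, 4.0, 4.0],
              [3.0, 1.0, 2.0, 3.0],
              [4.0, 3.0, 4.0, 3.0]])

def pearson(R, u, v):
    """Pearson correlation between users u and v on their co-rated items."""
    co = np.logical_and(R[u] > 0, R[v] > 0)
    mu = R[u][R[u] > 0].mean()                 # overall average of user u
    mv = R[v][R[v] > 0].mean()                 # overall average of user v
    du, dv = R[u, co] - mu, R[v, co] - mv
    denom = np.sqrt((du ** 2).sum()) * np.sqrt((dv ** 2).sum())
    return (du * dv).sum() / denom if denom > 0 else 0.0

def predict_user_based(R, u, a, k=2):
    """Mean-offset weighted sum over the k most similar users who rated item a."""
    means = np.array([R[x][R[x] > 0].mean() for x in range(R.shape[0])])
    raters = [v for v in range(R.shape[0]) if v != u and R[v, a] > 0]
    neigh = sorted(raters, key=lambda v: pearson(R, u, v), reverse=True)[:k]
    num = sum(pearson(R, u, v) * (R[v, a] - means[v]) for v in neigh)
    den = sum(abs(pearson(R, u, v)) for v in neigh)
    return means[u] + num / den if den > 0 else means[u]
```
      </p>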
      <p>
        Weighted Slope One: proposed by Lemire et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], it performs the prediction
of a missing rating r̂_ui for user u on item i as the following average:
      </p>
      <p>r̂_ui = Σ_{∀r_uj} ( dev_ij + r_uj ) · c_ij / Σ_{∀r_uj} c_ij   (7)</p>
      <p>The average rating deviation dev_ij between co-rated items is defined by:</p>
      <p>dev_ij = Σ_u ( r_ui − r_uj ) / c_ij   (8)</p>
      <p>where c_ij is the number of users who co-rated items i and j, and r_ui is
an existing rating of user u on item i. Weighted Slope One takes into
account both the information from users who rated the same item and the number
of observed ratings.</p>
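      <p>The two averages above can be sketched as follows. This is an illustrative
Python snippet, not the rrecsys implementation; the toy matrix is ours:</p>
      <p>
```python
import numpy as np

# Toy rating matrix (0 = missing).
R = np.array([[5.0, 3.0, 2.0],
              [3.0, 4.0, 0.0],
              [0.0, 2.0, 5.0]])

def deviation(R, i, j):
    """Average deviation dev_ij over users who rated both i and j, and their count."""
    co = np.logical_and(R[:, i] > 0, R[:, j] > 0)
    c = int(co.sum())
    dev = (R[co, i] - R[co, j]).sum() / c if c > 0 else 0.0
    return dev, c

def slope_one(R, u, i):
    """Weighted Slope One prediction of user u's rating for item i."""
    num = den = 0.0
    for j in np.where(R[u] > 0)[0]:           # items user u has rated
        if j == i:
            continue
        dev, c = deviation(R, i, j)
        num += (dev + R[u, j]) * c            # deviation-adjusted rating, weighted
        den += c                              # by the number of co-raters
    return num / den if den > 0 else 0.0
```
      </p>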
      <p>
        Simon Funk's SVD: Matrix factorization methods are used in
recommender systems to derive a set of latent factors from the user-item rating
matrix and to characterize both users and items by such vectors of factors. The
user-item interactions are modeled as inner products of the latent factor space [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
Accordingly, each item i is associated with a vector of factors V_i, and each
user u is associated with a vector of factors U_u. An approximation of the rating
of a user u on an item i can be derived as the inner product of their factor
vectors:
      </p>
      <p>R̂_ui = μ + b_i + b_u + U_u · V_iᵀ   (9)</p>
      <p>where μ is the overall average rating and b_u and b_i indicate the deviations
due to user u and item i from the mean rating.</p>
      <p>The U (user) and V (item) factor matrices are cropped to k features and
initialized at small values. Each feature is trained until convergence (where
convergence specifies the number of updates to be computed on a feature before
considering it converged; it can be either chosen by the user or calculated
automatically by the package). On each loop the algorithm predicts R̂_ui,
calculates the error, and updates the factors as follows:</p>
      <p>e_ui = R_ui − R̂_ui   (10)</p>
      <p>V_ik ← V_ik + γ · ( e_ui · U_uk − λ · V_ik )   (11)</p>
      <p>U_uk ← U_uk + γ · ( e_ui · V_ik − λ · U_uk )   (12)</p>
      <p>The attribute γ represents the learning rate, while λ corresponds to the
regularization term.</p>
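      <p>The regularized stochastic gradient updates above can be sketched numerically.
The following Python snippet is illustrative only (biases are omitted, and the toy
data and parameter values are ours, not package defaults):</p>
      <p>
```python
import numpy as np

# Toy rating matrix (0 = missing); parameters are illustrative.
rng = np.random.default_rng(0)
R = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 4.0],
              [1.0, 1.0, 5.0]])
n_users, n_items = R.shape
k, lr, reg = 2, 0.02, 0.02                 # features, learning rate, regularization

U = rng.normal(0, 0.1, (n_users, k))       # user factors, small random init
V = rng.normal(0, 0.1, (n_items, k))       # item factors, small random init
observed = [(u, i) for u in range(n_users) for i in range(n_items) if R[u, i] > 0]

for epoch in range(500):
    for u, i in observed:
        e = R[u, i] - U[u].dot(V[i])       # prediction error (biases omitted)
        U[u], V[i] = (U[u] + lr * (e * V[i] - reg * U[u]),
                      V[i] + lr * (e * U[u] - reg * V[i]))

# Training RMSE after fitting; it should be far below the initial error.
rmse = np.sqrt(np.mean([(R[u, i] - U[u].dot(V[i])) ** 2 for u, i in observed]))
```
      </p>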
      <p>In addition, the following two algorithms address the One Class Collaborative
Filtering problem (OCCF).</p>
      <p>
        Bayesian Personalized Ranking: The algorithm has been introduced by
[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. It turns the OCCF setting into a ranking problem by implicitly assuming that users
prefer items they have already interacted with over items they have not. Instead of applying
rating prediction techniques, BPR ranks candidate items for a user without
calculating a "virtual" rating. The overall goal of the algorithm is to find a
personalized total ranking &gt;_u ⊂ I² for any user u ∈ Users and pairs of items
(i, j) ∈ I² that meet the properties of a total order (totality, anti-symmetry,
transitivity).
      </p>
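      <p>The pairwise criterion is commonly optimized with stochastic gradient ascent
over sampled (user, positive item, negative item) triples. The following Python
snippet is an illustrative sketch of such a step; the sampling scheme, data, and
parameter values are our assumptions, not the rrecsys implementation:</p>
      <p>
```python
import numpy as np

# Implicit feedback matrix: 1 where the user interacted with the item.
rng = np.random.default_rng(1)
X = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]])
n_users, n_items, k = 3, 4, 2
U = rng.normal(0, 0.1, (n_users, k))
V = rng.normal(0, 0.1, (n_items, k))
lr, reg = 0.05, 0.01

for step in range(4000):
    u = rng.integers(n_users)
    i = rng.choice(np.where(X[u] == 1)[0])   # a positive (interacted) item
    j = rng.choice(np.where(X[u] == 0)[0])   # a sampled negative item
    x_uij = U[u].dot(V[i] - V[j])            # score difference for the pair (i, j)
    g = 1.0 / (1.0 + np.exp(x_uij))          # gradient factor of ln sigmoid(x_uij)
    U[u] += lr * (g * (V[i] - V[j]) - reg * U[u])
    V[i] += lr * (g * U[u] - reg * V[i])
    V[j] += lr * (-g * U[u] - reg * V[j])
```
      </p>
      <p>After training, positive items should tend to score higher than sampled
negatives for each user.</p>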
    </sec>
    <sec id="sec-3">
      <title>Weighted Alternated Least Squares</title>
      <p>
        We compute a low-rank approximation matrix R̂ = (R̂_ij)_{m×n} = U · Vᵀ, where U and V are the usual feature
matrices cropped to k features, as introduced by [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. The weighted low-rank method aims to
determine R̂ such that it minimizes the Frobenius loss of the following objective
function:
      </p>
      <p>L(R̂) = L(U, V) = Σ_ij W_ij · ( R_ij − U_i · Vⱼᵀ )² + λ · ( ‖U_i‖²_F + ‖V_j‖²_F )   (13)</p>
      <p>The regularization term weighted by λ is added to prevent over-fitting. The
expression ‖·‖_F denotes the Frobenius norm. The alternating least squares
optimization process solves the partial derivatives of L with respect to each
entry of U and V, i.e. ∂L(U, V)/∂U_i = 0 with fixed V and ∂L(U, V)/∂V_j = 0 with
fixed U, to compute U_i and V_j. U and V are initialized with random Gaussian
numbers with mean zero and small standard deviation and are updated until
convergence. The matrix W = (W_ij)_{m×n} ∈ ℝ₊^{m×n} is a non-negative weight
matrix that assigns confidence values to observations (hence the name weighted ALS).</p>
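      <p>The alternating updates can be sketched with closed-form regularized
least-squares solves. This Python snippet is illustrative only; the toy data,
weights, and parameter values are ours:</p>
      <p>
```python
import numpy as np

rng = np.random.default_rng(2)
# Toy implicit matrix; observed cells get high confidence, unobserved low.
R = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0]])
W = np.where(R > 0, 1.0, 0.1)            # confidence weight matrix
m, n, k, lam = 3, 3, 2, 0.02
U = rng.normal(0, 0.1, (m, k))
V = rng.normal(0, 0.1, (n, k))

for sweep in range(20):
    for i in range(m):                   # solve dL/dU_i = 0 with V fixed
        Wi = np.diag(W[i])
        U[i] = np.linalg.solve(V.T.dot(Wi).dot(V) + lam * np.eye(k),
                               V.T.dot(Wi).dot(R[i]))
    for j in range(n):                   # solve dL/dV_j = 0 with U fixed
        Wj = np.diag(W[:, j])
        V[j] = np.linalg.solve(U.T.dot(Wj).dot(U) + lam * np.eye(k),
                               U.T.dot(Wj).dot(R[:, j]))

# Weighted Frobenius loss plus regularization, as in the objective above.
loss = (W * (R - U.dot(V.T)) ** 2).sum() + lam * ((U ** 2).sum() + (V ** 2).sum())
```
      </p>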
    </sec>
    <sec id="sec-4">
      <title>Recommendation List</title>
      <p>
        The library currently provides two different methodologies for computing
the top-N recommendation. The first is Highest Predicted Ratings (HPR),
which proposes a list sorted by the highest rating values computed by an
algorithm. The second method is Most Frequent (MF), which determines the
top-N list based on the most frequent items available in the neighborhood of a
user or item. This methodology is known to produce better performance than
HPR [
        <xref ref-type="bibr" rid="ref15 ref9">9, 15</xref>
        ].
      </p>
    </sec>
    <sec id="sec-5">
      <title>Evaluation</title>
      <p>The evaluation module is based on the k-fold cross-validation method. A
stratified random selection procedure is applied when dividing the rated items of each
user into k folds, such that each user is uniformly represented in each fold, i.e. the
number of ratings of each user in any fold differs by at most one. For k-fold cross-validation
each of the k disjoint fractions of the ratings is used k − 1 times for
training (i.e. Rtrain) and once for testing (i.e. Rtest). Practically, ratings in
Rtest are set as missing in the original dataset and predictions/recommendations
are compared to Rtest to compute the performance measures.</p>
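      <p>The per-user stratified fold assignment described above can be sketched as
follows. This is an illustrative Python snippet, not the evalModel implementation;
the data structure and names are ours:</p>
      <p>
```python
import numpy as np

rng = np.random.default_rng(3)

def stratified_folds(ratings_per_user, k):
    """Assign each user's rated items to k folds so that, per user,
    fold sizes differ by at most one."""
    folds = {f: [] for f in range(k)}
    for user, items in ratings_per_user.items():
        items = list(items)
        rng.shuffle(items)                       # random order within each user
        for idx, item in enumerate(items):
            folds[idx % k].append((user, item))  # round-robin keeps folds balanced
    return folds

ratings = {"u1": [1, 2, 3, 4, 5], "u2": [1, 3, 7], "u3": [2, 4, 6, 8]}
folds = stratified_folds(ratings, k=2)
```
      </p>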
      <p>
        We included the most popular performance metrics according to the survey
in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. These are mean absolute error (MAE), root mean squared error (RMSE),
Precision, Recall, F1, True and False Positives, True and False Negatives,
normalized discounted cumulative gain (NDCG), rank score, area under the ROC
curve (AUC), and catalog coverage [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <sec id="sec-5-1">
        <title>Experimental results</title>
        <p>
          In this section, we compare our rrecsys library with the popular LensKit [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] Java
library.
        </p>
        <p>
          In Figure 1, we compare both libraries in terms of RMSE and MAE, using
5-fold cross-validation. LensKit and rrecsys were both configured with the same
algorithms and evaluation methodology. We used the MovieLens100K dataset
[
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] for these experiments, as the purpose is not only the results per se, but to
demonstrate the reproducibility of the results derived from LensKit.
        </p>
        <p>For the experiments in Figure 1 on FunkSVD, we have tuned its
parameters as follows: the latent space is set to 100 features, the learning rate
is set to 0.001, and the regularization term is set to 0.015. In the case of the
item-based k-nearest neighbor algorithm, we have set the number of nearest neighbors
to 100, with the adjusted cosine similarity as the similarity measure. In the case of
the user-based k-nearest neighbor algorithm, we have set the number of nearest
neighbors to 100, and used the Pearson correlation as similarity measure.</p>
        <p>
          As shown in Figure 1, all the reported results demonstrate our ability to
reproduce and replicate the results of LensKit in terms of
both RMSE and MAE, while other libraries failed to do so [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. The insignificant
differences with LensKit are the result of the random distribution of items into
the k folds. Since BPR and wALS are not implemented in LensKit, we cannot
compare results for them.
        </p>
        <p>In Table 1 we show the performance of rrecsys based on the latest
implementations with R/C++ code on the MovieLens100K dataset, running on the
same machine. It is noticeable that rrecsys performs similarly to LensKit. We
compared only optimized algorithms. In the future we will provide optimized
implementations of more state-of-the-art algorithms.</p>
        <p>[Figure 1: RMSE and MAE comparison of rrecsys and LensKit.]</p>
        <p>In this section, we introduce an executable script in R for running some of the
functionalities of rrecsys in order to demonstrate its intuitive use. Please note
that due to space limitations in this paper we do not describe all commands in
detail. The library is distributed with a full range of vignettes and a manual
describing all available functionalities¹.</p>
        <p>The package is published on the Comprehensive R Archive Network (CRAN);
download, installation and loading of the package therefore require the
execution of the following two calls:
install.packages("rrecsys")
library(rrecsys)</p>
        <p>Once the package is loaded, the dataset MovieLens Latest² will be available
within the environment. A setup of the data is required to define possible limits
and the structure of the dataset. Users can explore the dataset by checking the
number of ratings or its sparsity, or even modify it to contain a specific
number of ratings for each item/user.
data("mlLatest100k")
m &lt;- defineData(mlLatest100k, minimum = 1, maximum = 5,
halfStar = TRUE)
sparsity(m); numRatings(m); rowRatings(m); colRatings(m)
# Crop the dataset to contain at least 200 ratings on each user
# and 10 ratings on each item.
smallmlLatest &lt;- m[rowRatings(m) &gt;= 200, colRatings(m) &gt; 10]</p>
        <p>¹ https://cran.r-project.org/package=rrecsys
² We redistribute the MovieLens Latest datasets for demonstration purposes only.
Please note that these datasets change over time and are not appropriate for
reporting experimental results.</p>
        <p>The following code shows how to train a model (e.g., ub10) on an algorithm
(e.g., UBKNN), which can be used for either rating prediction (e.g., p) or item
recommendation (e.g., rHPR and rMF).
ub10 &lt;- rrecsys(smallmlLatest, "UBKNN", neigh = 10, simFunct = 1)
p &lt;- predict(ub10)
rHPR &lt;- recommendHPR(ub10, topN = 10)
#pt is the positive threshold for recommending an item.
rMF &lt;- recommendMF(ub10, topN = 10, pt = 3)</p>
        <p>The following code shows how we generate the k folds. The same fold distribution
can be used to evaluate different algorithms.
folds &lt;- evalModel(smallmlLatest, folds = 2)
# Recommendation evaluation.
evalRec(folds, "UBKNN", topN = 10, goodRating = 3,
simFunct = 2, recAlg = 1)</p>
        <p>The output of the evaluation function looks as follows:
# Prediction evaluation.
&gt; evalPred(folds, "funksvd", k = 10)
# Output:
              MAE     RMSE  globalMAE  globalRMSE      Time
1-fold  0.8565404 1.056737  0.9175207    1.161164  2.419723
2-fold  0.8473669 1.043400  0.8959667    1.131292  2.669417
Average 0.8519536 1.050069  0.9067437    1.146228  2.544570</p>
      </sec>
      <sec id="sec-5-2">
        <title>Conclusions</title>
        <p>This paper contributed a recently released package for prototyping and
interactively demonstrating recommendation algorithms in R. It comes with a broad
range of implemented standard algorithms for Likert-scaled and binary
ratings. The reported results demonstrate that it reproduces the results of the
Java-based LensKit toolkit. We hope that this effort will be of use to the
field of recommender systems and the large R user community.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Coba</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zanker</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <article-title>: rrecsys: an r-package for prototyping recommendation algorithms</article-title>
          . In: Guy,
          <string-name>
            <given-names>I.</given-names>
            ,
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <surname>A</surname>
          </string-name>
          . (eds.)
          <source>Poster Track of the 10th ACM Conference on Recommender Systems (RecSys</source>
          <year>2016</year>
          )
          <article-title>(RecSysPosters)</article-title>
          .
          <source>No. 1688 in CEUR Workshop Proceedings</source>
          , Aachen (
          <year>2016</year>
          ), http://ceur-ws.
          <source>org/</source>
          Vol-
          <volume>1688</volume>
          /#paper-
          <fpage>12</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Coba</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zanker</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Replication and reproduction in recommender systems research evidence from a case-study with the rrecsys library</article-title>
          .
          <source>In: 30th International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE</source>
          <year>2017</year>
          , Arras, France, June,
          <year>2017</year>
          , Proceedings. Springer International Publishing,
          <string-name>
            <surname>Cham</surname>
          </string-name>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Ekstrand</surname>
            ,
            <given-names>M.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ludwig</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kolb</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>J.T.</given-names>
          </string-name>
          :
          <article-title>Lenskit: A modular recommender framework</article-title>
          .
          <source>In: Proceedings of the Fifth ACM Conference on Recommender Systems</source>
          . pp.
          <volume>349</volume>
          {
          <fpage>350</fpage>
          . RecSys '11,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA (
          <year>2011</year>
          ), http://doi.acm.
          <source>org/10</source>
          .1145/2043932.2044001
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Funk</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <source>Netflix Update: Try This at Home</source>
          (
          <year>2006</year>
          ), http://sifter.org/ simon/journal/20061211.html
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Gunawardana</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shani</surname>
          </string-name>
          , G.:
          <article-title>A Survey of Accuracy Evaluation Metrics of Recommendation Tasks</article-title>
          .
          <source>The Journal of Machine Learning Research</source>
          <volume>10</volume>
          ,
          <volume>2935</volume>
          {
          <fpage>2962</fpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Harper</surname>
            ,
            <given-names>F.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Konstan</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          :
          <article-title>The movielens datasets: History and context</article-title>
          .
          <source>ACM Trans. Interact. Intell. Syst</source>
          .
          <volume>5</volume>
          (
          <issue>4</issue>
          ),
          <volume>19</volume>
          :1{
          <fpage>19</fpage>
          :19 (Dec
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Herlocker</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Konstan</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Borchers</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>An algorithmic framework for performing collaborative filtering</article-title>
          .
          <source>In: Proceedings of the 22Nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval</source>
          . pp.
          <volume>230</volume>
          {
          <fpage>237</fpage>
          . SIGIR '99,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA (
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Jannach</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zanker</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ge</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , Groning, M.: 13th International Conference on E-Commerce and
          <article-title>Web Technologies, chap</article-title>
          .
          <source>Recommender Systems in Computer Science and Information Systems { A Landscape of Research</source>
          , pp.
          <volume>76</volume>
          {
          <fpage>87</fpage>
          . Springer Berlin Heidelberg, Berlin, Heidelberg (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Karypis</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Evaluation of item-based top-n recommendation algorithms</article-title>
          .
          <source>In: Proceedings of the Tenth International Conference on Information and Knowledge Management</source>
          . pp.
          <volume>247</volume>
          {
          <fpage>254</fpage>
          . CIKM '01,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Lemire</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maclachlan</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Slope one predictors for online rating-based collaborative ltering</article-title>
          .
          <source>In: SDM</source>
          . vol.
          <volume>5</volume>
          , pp.
          <volume>1</volume>
          {
          <issue>5</issue>
          .
          <string-name>
            <surname>SIAM</surname>
          </string-name>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Pan</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cao</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>N.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lukose</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scholz</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          :
          <article-title>One-class collaborative filtering</article-title>
          .
          <source>Data Mining</source>
          ,
          <year>2008</year>
          . ICDM'08. Eighth IEEE International Conference on pp.
          <volume>502</volume>
          {
          <issue>511</issue>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Rendle</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Freudenthaler</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gantner</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          , Schmidt-thieme, L.:
          <article-title>BPR : Bayesian Personalized Ranking from Implicit Feedback</article-title>
          .
          <source>Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence</source>
          ,
          <volume>452</volume>
          {
          <fpage>461</fpage>
          (
          <year>2009</year>
          ), http://dl.acm.org/citation.cfm?id=
          <fpage>1795167</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Said</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bellogín</surname>
          </string-name>
          , A.:
          <article-title>Comparative Recommender System Evaluation: Benchmarking Recommendation Frameworks</article-title>
          . RecSys pp.
          <volume>129</volume>
          {
          <issue>136</issue>
          (
          <year>2014</year>
          ), http://dx.doi.org/10.1145/2645710.2645746
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Sarwar</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Karypis</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Konstan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riedl</surname>
          </string-name>
          , J.:
          <article-title>Item-based collaborative filtering recommendation algorithms</article-title>
          .
          <source>In: 10th Int. Conference on the World Wide Web</source>
          . pp.
          <volume>285</volume>
          {
          <issue>295</issue>
          (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Symeonidis</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nanopoulos</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Papadopoulos</surname>
            ,
            <given-names>A.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manolopoulos</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Collaborative recommender systems: Combining effectiveness and efficiency</article-title>
          .
          <source>Expert Syst. Appl</source>
          .
          <volume>34</volume>
          (
          <issue>4</issue>
          ),
          <volume>2995</volume>
          {3013 (May
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>