=Paper=
{{Paper
|id=Vol-1917/paper18
|storemode=property
|title=A Latent-Feature Plackett-Luce Model for Dyad Ranking Completion
|pdfUrl=https://ceur-ws.org/Vol-1917/paper18.pdf
|volume=Vol-1917
|authors=Dirk Schäfer
|dblpUrl=https://dblp.org/rec/conf/lwa/Schafer17
}}
==A Latent-Feature Plackett-Luce Model for Dyad Ranking Completion==
A Latent-Feature Plackett-Luce Model for
Dyad Ranking Completion
Dirk Schäfer
dirk.schaefer@jivas.de
Abstract. Dyad ranking is a specific type of preference learning problem,
namely the problem of learning a model that is capable of ranking a
set of feature vector pairs, called dyads. In this paper a situation is
considered where the feature vectors consist of identifiers only. A new
method based on learning latent features is introduced for this purpose.
The method is evaluated on synthetic data and applied to the problem of
ranking from implicit feedback.
Keywords: Preference Learning, Dyad Ranking, Plackett-Luce Model,
Matrix Factorization, Implicit Feedback
1 Introduction
Preference learning is an emerging subfield of machine learning, which deals with
the induction of preference models from observed or revealed preference informa-
tion [3]. Such models are typically used for prediction purposes, for example to
predict context-dependent preferences of individuals on various choice alterna-
tives. Depending on the representation of preferences, individuals, alternatives,
and contexts, a large variety of preference models are conceivable, and many
such models have already been studied in the literature.
A specific problem within the realm of preference learning is the problem of
label ranking, which consists of learning a model that maps instances to rank-
ings (total orders) over a finite set of predefined alternatives [11]. An instance,
which defines the context of the preference relation, is typically characterized
in terms of a set of attributes or features; for example, an instance could be a
person described by properties such as sex, age, income, etc. As opposed to this,
the alternatives to be ranked, e.g., the political parties of a country, are only
identified by their name (label), while not being characterized in terms of any
properties or features.
In [9], dyad ranking is introduced as a practically motivated generalization of
the label ranking problem. In dyad ranking, not only the instances but also the
alternatives are represented in terms of attributes—a dyad is a pair consisting
of an instance and an alternative. Moreover, for learning in the setting of dyad
ranking, an extension of the Plackett-Luce model, a statistical model for rank
data, was proposed. The approach was based on modeling the utility scores of
dyads via a bilinear product of the feature vectors of the instance and the
alternative, respectively.
In this paper a problem is addressed which is in a sense diametrical to the
above-mentioned settings, because instances and alternatives are not described
by any attributes. The reasons for this could be that observable attributes
do not relate well to the preference information, that they contain many
missing values, or that they do not exist at all. Data preprocessing and
feature engineering may fail in these cases. Often, however, instances and
alternatives are identified by a unique ID, especially when they are organized
in a relational database. Otherwise, training examples from a finite set can
always be enumerated and a unique ID can be assigned. For this scenario a new
variation of the bilinear Plackett-Luce model is introduced, called the
Latent-Feature Plackett-Luce model (LFPL). In contrast to the original
formulation of the bilinear Plackett-Luce model, it can deal with rankings
over dyads which are specified by identifiers only.
The rest of the paper is organized as follows. First a formal description of
the dyad ranking completion problem is given in Section 2. The LFPL method
to solve this problem based on latent-features is described in Section 3. The
relation between dyad ranking completion and the application of learning to
rank on implicit feedback data is described in Section 4. An overview of related
methods is provided in Section 5. Experimental results are presented in Section
6, prior to concluding in Section 7.
2 Problem Setting
2.1 Dyad Ranking
The problem of dyad ranking is to learn a model that accepts as input any set
of (new) dyads and produces as output a ranking of them. A dyad is a pair of
feature vectors z = (x, y) ∈ Z = X × Y, where the feature vectors are from two
(but not necessarily different) domains X and Y.
Figure 1 shows the basic learning workflow: a learning algorithm creates a
model from a finite set of training examples. The trained model can afterwards
be used for producing rankings on (new) dyads. The training set D = \{\rho_n\}_{n=1}^{N}
consists of a set of examples \rho_n (1 \le n \le N), where each example is a dyad ranking

\[ \rho_n : \; z^{(1)} \succ z^{(2)} \succ \ldots \succ z^{(M_n)} , \qquad M_n \ge 2 , \tag{1} \]

of length M_n, where M_n can vary from example to example.
2.2 Dyad Ranking Completion
The domains from which the training dyads and their members stem are denoted
by R ⊂ X and C ⊂ Y (indicating (r)ows and (c)olumns). Depending on the
occurrence of dyad members within the training set, one can characterize the
dyads which need to be ranked. Table 1 provides an overview of the types of
new dyads which can be encountered at the prediction phase. Dyad ranking
completion deals with dyads of type 1, i.e., dyads whose members
[Figure 1: in the training phase, a learner L induces a model M from dyad
rankings such as \rho_1 : z^{(4)} \succ z^{(1)} \succ z^{(3)} \succ z^{(2)};
in the prediction phase, M maps a set of new dyads z^{(1)}, \ldots, z^{(5)}
to a ranking such as z^{(3)} \succ z^{(1)} \succ z^{(4)} \succ z^{(5)} \succ z^{(2)}.]
Fig. 1. Generic supervised learning task in dyad ranking.
have been encountered in the training phase individually, but not yet jointly.
They define the gaps in the R × C matrix; filling these gaps by ranking all of
them, or subsets of them, characterizes the task.
Table 1. Characterization of dyads depending on whether their members have been
encountered in the training phase individually but not conjointly.

Dyad Type | Member 1 Encountered? | Member 2 Encountered? | Prediction Domain
1         | yes                   | yes                   | R × C
2         | yes                   | no                    | R × Y
3         | no                    | yes                   | X × C
4         | no                    | no                    | (X \ R) × (Y \ C)
Ranking with Identifiers Dyads whose members are not described by attributes
can still be handled, either by using existing database IDs or by assigning
natural numbers to the members. When such dyads are encountered in the training
phase, any new dyads in the prediction phase must necessarily be of type 1.
3 A Plackett-Luce Model with Latent Features
3.1 Plackett-Luce Model
The Plackett-Luce (PL) model is a parameterized probability distribution on
the set of all rankings over a set of alternatives y_1, \ldots, y_K. It is
specified by a parameter vector v = (v_1, v_2, \ldots, v_K) \in \mathbb{R}^K_+,
in which v_i accounts for the (latent) utility or "skill" of the option y_i.
The probability assigned by the PL model to a ranking \pi is given by

\[
\mathbf{P}(\pi \mid v) \;=\; \prod_{i=1}^{K} \frac{v_{\pi(i)}}{v_{\pi(i)} + v_{\pi(i+1)} + \ldots + v_{\pi(K)}} \;=\; \prod_{i=1}^{K-1} \frac{v_{\pi(i)}}{\sum_{j=i}^{K} v_{\pi(j)}} . \tag{2}
\]

Obviously, the Plackett-Luce model is only determined up to a positive
multiplicative constant, i.e., \mathbf{P}(\pi \mid v) \equiv \mathbf{P}(\pi \mid s \cdot v) for all s > 0.
An appealing property of the PL model is that its marginal probabilities
(probabilities of rankings on subsets of alternatives) are easy to compute and
can be expressed in closed form. More specifically, the marginal of a PL model
on M < K alternatives y_{i(1)}, \ldots, y_{i(M)} is again a PL model with
parameters v_{i(1)}, \ldots, v_{i(M)}.
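For illustration, the PL probability (2) can be evaluated directly from a skill
vector; the following is a minimal NumPy sketch (the function name and the
example skills are illustrative choices, not part of the original method):

```python
import numpy as np

def pl_probability(v, pi):
    """Probability of the ranking pi under a Plackett-Luce model, cf. Eq. (2).

    v  : array of positive skill parameters, one per alternative
    pi : ranking given as a sequence of alternative indices, best first
    """
    v_ranked = v[np.asarray(pi)]             # skills in ranking order
    # tails[i] = sum of the skills of the alternatives ranked at position i or lower
    tails = np.cumsum(v_ranked[::-1])[::-1]
    # stage-wise choice probabilities v_{pi(i)} / sum_{j>=i} v_{pi(j)}
    return float(np.prod(v_ranked[:-1] / tails[:-1]))

# Example with K = 3 alternatives: P(y1 > y3 > y2) = 3/6 * 2/3 = 1/3
v = np.array([3.0, 1.0, 2.0])
print(pl_probability(v, [0, 2, 1]))
```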
Bilinear Plackett-Luce Model The bilinear model (BilinPL) [9] is based on
a functional formulation of the skill parameters as follows,

\[ v(z) = v(x, y) = \exp\left( x^{\top} W y \right) , \tag{3} \]

which is obtained from a linear model \exp(\langle w, \Phi(x, y) \rangle) by
defining the joint-feature map \Phi as the Kronecker (tensor) product

\[ \Phi(x, y) = x \otimes y = \left( x_1 y_1 ,\, x_1 y_2 ,\, \ldots ,\, x_r y_c \right) = \mathrm{vec}\left( x y^{\top} \right) , \tag{4} \]

which is a vector of length p = r \cdot c consisting of all pairwise products of the
components of x and y, also known as cross-products. Thus, the inner product
\langle w, \Phi(x, y) \rangle can be rewritten as a bilinear form x^{\top} W y with an r \times c matrix
W = (w_{i,j}); the entry w_{i,j} can be considered as the weight of the interaction
term x_i y_j.
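The correspondence between the Kronecker features and the bilinear form can be
checked numerically; the following sketch (with illustrative random data)
verifies \langle w, \Phi(x, y) \rangle = x^{\top} W y:

```python
import numpy as np

rng = np.random.default_rng(0)
r, c = 4, 3
x, y = rng.normal(size=r), rng.normal(size=c)
W = rng.normal(size=(r, c))
w = W.reshape(-1)                  # vec(W) in the same row-major order as Phi

phi = np.kron(x, y)                # Phi(x, y) = x (x) y: all cross-products x_i * y_j
assert np.allclose(w @ phi, x @ W @ y)   # <w, Phi(x, y)> = x^T W y

v = np.exp(x @ W @ y)              # skill of the dyad z = (x, y), cf. Eq. (3)
```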
3.2 Latent-Feature based PL Model
To circumvent the Bilinear PL model's dependence on explicit features, a novel
extension of the Plackett-Luce model is proposed. It is based on latent
features and is called LFPL. For this model the PL log-skill parameter for a
dyad z = (i, j) is defined as \log v(z) = U_i V_j^{\top}.
The following notation is used. The indices i and j refer to the entities
associated with the first and the second dyad member, whereas U and V are
matrices with U \in \mathbb{R}^{|R| \times K} and V \in \mathbb{R}^{|C| \times K}, where K denotes the dimensionality
of the latent feature vectors, and |R| and |C| denote the number of entities in
R and C, respectively. Both R and C refer to special one-dimensional feature
spaces made up of natural numbers. The notation U_k refers to the k-th row
vector of the matrix U, whereas U_{k,i} refers to the i-th latent factor of the
k-th row vector.
The conditional probability of observing a dyad ranking with the parameters
\theta = \{U, V\} can then be stated as

\[
\mathbf{P}(\pi_n \mid \rho_n, \theta) \;=\; \prod_{k=1}^{M_n - 1} \frac{\exp\left[ U_{z_n(k,1)} V_{z_n(k,2)}^{\top} \right]}{\sum_{l=k}^{M_n} \exp\left[ U_{z_n(l,1)} V_{z_n(l,2)}^{\top} \right]} . \tag{5}
\]

The function z_n(k, i) in the likelihood (5) refers to the i-th member of a dyad
z^{(j)} observed in sample \rho_n, where i \in \{1, 2\}. The index k refers to the dyad z_n^{(j)}
which is put at the k-th rank of \pi_n, s.t. \pi_n(k) = j.
The model parameters \theta = \{U, V\} can be found by minimizing the negative
log-likelihood (NLL) of the data,

\[ \ell = \sum_{n=1}^{N} \ell_n . \tag{6} \]

In (6) the NLL of a single example (dyad ranking) is given by

\[
\ell_n = - \log \mathbf{P}(\pi_n \mid \rho_n, \theta) \tag{7}
\]
\[
= \sum_{k=1}^{M_n - 1} \log\left( \sum_{l=k}^{M_n} \exp\left[ U_{z_n(l,1)} V_{z_n(l,2)}^{\top} \right] \right) \;-\; \sum_{k=1}^{M_n - 1} U_{z_n(k,1)} V_{z_n(k,2)}^{\top} . \tag{8}
\]
The overall objective which is to be optimized is hence of the form

\[ \ell(\theta, D) + \Omega(\theta) , \tag{9} \]

where \Omega(\theta) is a regularization term for controlling the model capacity or,
likewise, a countermeasure against over-fitting. Note that the objective (9) is
not convex and its minimization requires special considerations.
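As an illustration of Eqs. (6)–(8), the NLL of a single dyad ranking can be
computed directly from the latent matrices; the following is a minimal NumPy
sketch, in which the representation of a ranking as a list of index pairs
(i, j), best dyad first, is an assumption made here for illustration:

```python
import numpy as np

def lfpl_nll(U, V, ranking):
    """NLL of one dyad ranking under LFPL, cf. Eqs. (5)-(8).

    ranking: dyads (i, j) in preference order, best first;
    the skill of dyad (i, j) is exp(U[i] @ V[j]).
    """
    log_skills = np.array([U[i] @ V[j] for i, j in ranking])
    nll = 0.0
    for k in range(len(ranking) - 1):
        # stage k: log-sum-exp over the still unranked dyads, minus the winner's log-skill
        nll += np.log(np.sum(np.exp(log_skills[k:]))) - log_skills[k]
    return nll

rng = np.random.default_rng(1)
U, V = rng.normal(size=(5, 3)), rng.normal(size=(7, 3))
print(lfpl_nll(U, V, [(0, 2), (0, 5), (3, 1)]))  # illustrative IDs
```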
Learning Algorithm As learning algorithm, the first-order optimization
technique of alternating gradient descent is used. With this approach the model
weights are updated by keeping one of the matrices U and V fixed, respectively.
The first derivatives are given by

\[
\frac{\partial \ell_n}{\partial U_{s,i}} \;=\; \sum_{k=1}^{M_n - 1} \frac{\sum_{l=k}^{M_n} \mathbf{1}_{\{s = z_n(l,1)\}}\, V_{z_n(l,2),i} \cdot \exp\left[ U_{z_n(l,1)} V_{z_n(l,2)}^{\top} \right]}{\sum_{l=k}^{M_n} \exp\left[ U_{z_n(l,1)} V_{z_n(l,2)}^{\top} \right]} \;-\; \sum_{k=1}^{M_n - 1} \mathbf{1}_{\{s = z_n(k,1)\}}\, V_{z_n(k,2),i} , \tag{10}
\]

with s \in R_n and 1 \le i \le K. Here R_n refers to the set of entities represented by
numerical IDs that occur as first dyad members in the sample n. For example,
for a dyad ranking \rho_n : (x_3, y_2) \succ (x_3, y_{144}) \succ (x_5, y_9), R_n would be \{3, 5\} and
C_n = \{2, 9, 144\}. The derivatives for V can be stated similarly by treating the
components of U as constants:

\[
\frac{\partial \ell_n}{\partial V_{t,i}} \;=\; \sum_{k=1}^{M_n - 1} \frac{\sum_{l=k}^{M_n} \mathbf{1}_{\{t = z_n(l,2)\}}\, U_{z_n(l,1),i} \cdot \exp\left[ U_{z_n(l,1)} V_{z_n(l,2)}^{\top} \right]}{\sum_{l=k}^{M_n} \exp\left[ U_{z_n(l,1)} V_{z_n(l,2)}^{\top} \right]} \;-\; \sum_{k=1}^{M_n - 1} \mathbf{1}_{\{t = z_n(k,2)\}}\, U_{z_n(k,1),i} , \tag{11}
\]

with t \in C_n and 1 \le i \le K.
Algorithm 1 uses these expressions together with the AdaGrad strategy for
adaptive learning rate updates (any other learning rate update strategy for
gradient descent could be used as well). AdaGrad provides the advantage of
freeing the user from determining the initial learning rate values and update
schedules while being simple to implement [2]. The actual implementation
utilizes a quadratic regularization term

\[ \Omega(\theta) = \frac{1}{2} \lambda \left( \lVert U \rVert_F^2 + \lVert V \rVert_F^2 \right) , \tag{12} \]

where \lVert \cdot \rVert_F refers to the Frobenius matrix norm. This approach is theoretically
justified by considering the task as a problem of learning low-rank matrices
with a convex relaxation, a relaxation that is based on nuclear (trace) norm
regularization [1].
Algorithm 1 LFPL Alternating Gradient Descent with AdaGrad
Require: Initial matrices U \in \mathbb{R}^{|R| \times K}, V \in \mathbb{R}^{|C| \times K}, learning rate \gamma, number of
iterations T, dyad rankings D_{Tr} = \{(\rho, \pi)_n\}.
1: function TRAIN_LFPL(D_{Tr}, U, V, T)
2:   for i \leftarrow 1 to T do
3:     \nabla U \leftarrow Formula (10) on all samples (\rho, \pi)_n, 1 \le n \le N
4:     H_U \leftarrow H_U + (\nabla U)^2
5:     U \leftarrow U - \gamma (H_U)^{-1/2} \nabla U
6:     \nabla V \leftarrow Formula (11) on all samples (\rho, \pi)_n, 1 \le n \le N
7:     H_V \leftarrow H_V + (\nabla V)^2
8:     V \leftarrow V - \gamma (H_V)^{-1/2} \nabla V
9:   end for
10:  return (U, V)
11: end function
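For concreteness, a NumPy sketch of Algorithm 1 is given below; the gradient
routine implements Eqs. (10) and (11) in row-vector form, and the
hyperparameter defaults (\gamma, \lambda, T) are illustrative choices, not
taken from the paper:

```python
import numpy as np

def nll_grads(U, V, ranking):
    """Gradients of the NLL of one dyad ranking w.r.t. U and V, cf. Eqs. (10)-(11).
    ranking: list of dyads (i, j), best first."""
    rows = np.array([i for i, _ in ranking])
    cols = np.array([j for _, j in ranking])
    s = np.exp(np.einsum('mk,mk->m', U[rows], V[cols]))  # skills exp(U_i V_j^T)
    gU, gV = np.zeros_like(U), np.zeros_like(V)
    M = len(ranking)
    for k in range(M - 1):
        p = s[k:] / s[k:].sum()              # soft weights of the still unranked dyads
        for l in range(k, M):
            gU[rows[l]] += p[l - k] * V[cols[l]]
            gV[cols[l]] += p[l - k] * U[rows[l]]
        gU[rows[k]] -= V[cols[k]]            # subtracted term of the stage-k winner
        gV[cols[k]] -= U[rows[k]]
    return gU, gV

def train_lfpl(rankings, U, V, T=500, gamma=0.5, lam=0.01, eps=1e-8):
    """Alternating gradient descent with AdaGrad steps (sketch of Algorithm 1)."""
    H_U, H_V = np.zeros_like(U), np.zeros_like(V)
    for _ in range(T):
        gU = sum(nll_grads(U, V, rho)[0] for rho in rankings) + lam * U  # V fixed
        H_U += gU ** 2
        U = U - gamma * gU / np.sqrt(H_U + eps)   # per-entry adaptive step size
        gV = sum(nll_grads(U, V, rho)[1] for rho in rankings) + lam * V  # U fixed
        H_V += gV ** 2
        V = V - gamma * gV / np.sqrt(H_V + eps)
    return U, V
```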
3.3 Predictions with LFPL
With LFPL it is possible to solve the dyad ranking completion problem from
Section 2.2 by making prediction involving dyads of type 1 (c.f. Table 1). They
are all of the form z (k) = (ik , jk ), where ik , jk ∈ N and both indices ik and jk ap-
peared independently at other dyads during training but not jointly together yet.
The most probable ranking can be predicted by calculating v (k) = exp(U ik V > jk )
and then arranging the dyads in descending order to the scores.
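This prediction step may be sketched as follows (the helper name rank_dyads is
hypothetical):

```python
import numpy as np

def rank_dyads(U, V, dyads):
    """Rank type-1 dyads (i_k, j_k) by their predicted skills exp(U_i V_j^T);
    exp is monotone, so sorting by the log-skills U_i V_j^T suffices."""
    scores = np.array([U[i] @ V[j] for i, j in dyads])
    order = np.argsort(-scores)          # descending scores = most probable ranking
    return [dyads[k] for k in order]

# e.g. rank_dyads(U, V, [(0, 3), (2, 1), (4, 4)])
```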
3.4 Connection with the Bilinear PL Model
There are two ways to relate the LFPL model to the Bilinear PL model. The first
relation can be seen by setting W = U V^{\top}: by describing each entity i by means
of a label embedding e_i (e.g., via 1-of-K encoding), it is possible to cast (5)
into the Bilinear PL model formulation (3). The second relation can be seen by
treating the latent feature vectors in the rows of U and V as object features
and the bilinear weight matrix W as the identity matrix I_K. Both relations
justify classifying LFPL as a bilinear model, too. But in contrast to BilinPL
(Section 3.1), the LFPL model does not require the engineering of explicit
features; it learns latent feature vectors under the same bilinearity
assumption instead. Without considering W to be of lower rank, it is not
possible for the BilinPL approach to generalize to unseen dyad rankings of
type 1 where dyad members are described by identifiers only.
4 Learning from Implicit Feedback
Implicit feedback is probably the most basic kind of preference statement. It
refers to the selection of a single item from a set of choice alternatives at a
time. A reasonable interpretation of that action is that the selected item is
probably preferred over the other items present at the time of selection. This
kind of data can be represented in a rectangular schema of rows and columns,
e.g., where rows and columns represent users and items, respectively. Cells in
that schema indicate whether an interaction between a particular row and column
happened. In what follows we denote row indices as i_k \in R and column indices
as j_l \in C. Formally, implicit feedback data is defined as a subset of indices
F \subset R \times C. An index pair (i, j) in F can be interpreted as an action that
happened between entities i and j. The goal is to provide a ranking of dyads
(i, j) for which no interactions have been observed so far, i.e., (i, j) \notin F.
Without loss of generality we turn now to the special case of LFPL for
contextual implicit feedback. Let the set of items on which interactions have
been recorded for a context i be denoted as C_i^+ := \{j \in C \mid (i, j) \in F\}. We
treat this scenario as a completion problem which we want to solve with dyad
ranking. Figure 2 illustrates the conversion of original implicit feedback data
to contextual dyad rankings. For each row i_k (or user) a separate dyadic
preference graph is generated. All graphs combined form a training data set D_F,
which is

\[ D_F = \left\{ (i, j_a) \succ (i, j_b) \mid j_a \in C_i^+ \wedge j_b \in C \setminus C_i^+ \right\} . \]
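For illustration, the construction of D_F from the raw index pairs might be
sketched as follows (function and variable names are illustrative choices):

```python
def dyadic_preferences(F, items):
    """Build the pairwise training set D_F from implicit feedback F,
    a set of observed (row, column) index pairs."""
    prefs = []
    for i in {r for r, _ in F}:
        C_plus = {j for r, j in F if r == i}      # items interacted with in context i
        for ja in C_plus:
            for jb in items - C_plus:             # items without observed interaction
                prefs.append(((i, ja), (i, jb)))  # dyad (i, ja) preferred over (i, jb)
    return prefs

print(dyadic_preferences({(0, 0), (0, 3), (1, 1)}, items={0, 1, 2, 3}))
```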
To overcome the problem of over-fitting on highly frequent items, the learning
procedure needs a slight modification. The new procedure uses sampling from
D_F, similar to BPRMF [7], but reuses the existing Algorithm 1 for this purpose.
Instead of acting on single training examples, it generates subsets of examples
(batches) of size L. This new approach is called LFPL/IF (for implicit
feedback) and is stated as Algorithm 2. As stopping criterion, either the
convergence of the performance on a hold-out set or a predefined maximal number
of iterations can be used.
[Figure 2: left side, an implicit feedback matrix over rows i_1, \ldots, i_5 and
columns j_1, \ldots, j_4 ('+' = observed interaction, '?' = unobserved):
i_1: + ? ? +
i_2: + + ? +
i_3: ? ? + ?
i_4: ? + ? +
i_5: ? + ? ?
Right side: one dyadic preference graph per row, e.g., (i_1, j_a) \succ (i_1, j_b)
for j_a \in C_1^+ and j_b \notin C_1^+.]
Fig. 2. Conversion of implicit feedback data (left side) to dyad rankings (right side).
Algorithm 2 LFPL/IF Learning Algorithm
Require: Implicit feedback data D_F, number of latent factors K, batch size L
1: Initialize U \in \mathbb{R}^{|R| \times K}, V \in \mathbb{R}^{|C| \times K}
2: repeat
3:   S \leftarrow draw L many dyadic preferences from D_F
4:   Update LFPL(S, U, V, 1)        ▷ Algorithm 1
5: until Stopping Criterion
6: return U, V
Gradients of LFPL/IF The calculations of the gradient matrices \nabla U in Eq.
(10) and \nabla V in Eq. (11) of Algorithm 1 simplify drastically in the case of
implicit feedback data. Let j_a, j_b denote the indices of items, where the
corresponding item j_a is preferred over j_b. Let furthermore i be the index of
a user and 1 \le k \le K be a latent feature dimension. The NLL of a training
example is then given by

\[ \ell_n = \log\left( \exp(U_i V_{j_a}^{\top}) + \exp(U_i V_{j_b}^{\top}) \right) - U_i V_{j_a}^{\top} . \]

And the derivatives can be calculated using the following expressions:

\[
\frac{\partial \ell_n}{\partial U_{i,k}} = \frac{V_{j_a,k} \exp(U_i V_{j_a}^{\top}) + V_{j_b,k} \exp(U_i V_{j_b}^{\top})}{\exp(U_i V_{j_a}^{\top}) + \exp(U_i V_{j_b}^{\top})} - V_{j_a,k} , \tag{13}
\]
\[
\frac{\partial \ell_n}{\partial V_{j_a,k}} = \frac{U_{i,k} \exp(U_i V_{j_a}^{\top})}{\exp(U_i V_{j_a}^{\top}) + \exp(U_i V_{j_b}^{\top})} - U_{i,k} , \tag{14}
\]
\[
\frac{\partial \ell_n}{\partial V_{j_b,k}} = \frac{U_{i,k} \exp(U_i V_{j_b}^{\top})}{\exp(U_i V_{j_a}^{\top}) + \exp(U_i V_{j_b}^{\top})} . \tag{15}
\]
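These expressions can be condensed: writing q = \exp(U_i V_{j_b}^{\top}) / (\exp(U_i V_{j_a}^{\top}) + \exp(U_i V_{j_b}^{\top})),
Eq. (13) becomes q (V_{j_b} - V_{j_a}), Eq. (14) becomes -q\, U_i, and Eq. (15) becomes q\, U_i.
The following sketch (the algebraic simplification and the names are ours, for
illustration) computes all three gradients at once:

```python
import numpy as np

def pairwise_grads(U, V, i, ja, jb):
    """Gradients of l_n for one preference (i, ja) > (i, jb), cf. Eqs. (13)-(15)."""
    a, b = U[i] @ V[ja], U[i] @ V[jb]
    q = 1.0 / (1.0 + np.exp(a - b))   # = exp(b) / (exp(a) + exp(b))
    gU_i  = q * (V[jb] - V[ja])       # Eq. (13), condensed
    gV_ja = -q * U[i]                 # Eq. (14)
    gV_jb = q * U[i]                  # Eq. (15)
    return gU_i, gV_ja, gV_jb
```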
5 Related Work
Plackett-Luce Networks (PLNet) is a method for dyad ranking that is able to
learn joint-feature representations [10]. Being based on a neural network, it
can express non-linear relationships between dyads and rankings. It is capable
of solving the dyad ranking completion problem too, but it is not designed for
the special case where dyad inputs are expressed by identifiers only.
Probabilistic Matrix Factorization (PMF) [8] is a classical preference
completion method for preferences in the form of ratings. A substantial
property of PMF is the learning and prediction of quantitative instead of
qualitative preferences by filling the gaps of an incomplete rating matrix.
Although LFPL and PMF differ fundamentally in this point, they also share some
aspects. Firstly, both approaches are based on the same model class, i.e.,
matrices U and V are multiplied together to give a real-valued matrix of
dimensionality N \times M. In both cases, higher matrix values indicate stronger
preferences. Secondly, both approaches put Gaussian priors over U and V to
control model capacity.
The methods BPRMF and WRMF are standard methods for learning on implicit
feedback data. The LFPL method is closely related to BPRMF (the matrix
factorization variant of Bayesian Personalized Ranking), although the former
acts on dyad rankings of the form (i, j_a) \succ (i, j_b) and the latter on triplets
j_a \succ_i j_b [7]. Both approaches share the same model class and also the
assumption of Gaussian priors. LFPL generalizes BPRMF by supporting rankings of
arbitrary lengths instead of pairwise preferences (i.e., rankings of length 2),
and by providing contextual and non-contextual variants. WRMF is an approach
that is also based on matrix factorization [6]. It shares the same model class
with LFPL and BPRMF, but in contrast to them it is a pointwise approach (one
item at a time) instead of a pairwise approach (two items at a time).
6 Experiments
Ranking Completion (Synthetic Data) The advantage of LFPL over BilinPL
and PLNet is shown for the problem of ranking completion with identifiers.
Given a set of row entities R and a set of column entities C, which are
specified by their IDs only, and a training set of incomplete contextualized
rankings over C, where contexts are provided by entities from R, the task is to
create a model which can be used to predict arbitrary rankings for a set of new
dyads in R \times C.
One thousand rankings of length 10 were sampled from the LFPL model
distribution. A 10-fold CV experiment was conducted in which 900 of them were
used for training and 100 for testing in each run. The LFPL model used for the
generation of rankings was determined by the matrices U \in \mathbb{R}^{|R| \times K} and V \in \mathbb{R}^{|C| \times K},
which were generated by sampling from the normal distribution, with K = 5 the
number of latent factors. To apply the dyads with identifiers to BilinPL and
PLNet, the natural numbers used as identifiers were converted into vector
format using 1-of-K encoding. PLNet was configured to contain one hidden layer
with 10 neurons. The results given in Table 2 confirm the hypothesis that
BilinPL and PLNet do not generalize well beyond the observed preferences.
Especially PLNet is prone to over-fitting. LFPL, in contrast to BilinPL, is
able to share model parameters under the assumption that preferences are
representable with a lower-rank matrix W.
Table 2. Results of the synthetic preference completion problem. Performance values
in terms of Kendall's \tau are obtained via 10-fold CV on 1000 rankings, |X_i| = 10.

|R| \times |C|    | 40 \times 50  | 60 \times 100 | 80 \times 150
Completeness (%) | 2.026       | 0.225       | 0.056
BilinPL          | .844 \pm .025 | .724 \pm .018 | .770 \pm .023
LFPL             | .973 \pm .006 | .950 \pm .014 | .923 \pm .009
PLNet            | .721 \pm .023 | .611 \pm .031 | .493 \pm .032
Implicit Feedback (Real Data) In this experiment the capabilities of LFPL/IF
on implicit feedback data were tested. The MovieLens (1M) data set was utilized
due to its availability and high degree of popularity [5]. It comprises 6040
users who provided a total of about 1 million explicit ratings on 3900 movies.
Similar to the experimental setup of [7], the rating data set was converted
into implicit feedback data. Each rating 1 \le r \le 5 is interpreted as feedback,
and the task corresponds to estimating how likely a user would rate the
residual entries.
For the evaluation we repeated the following leave-one-out procedure: one
random rating entry per user is removed from the data. These entries are not
considered for training but for benchmarking the trained model afterwards.
Training and test sets are thus disjoint subsets of user-associated item
indices, i.e., C_i^{Tr}, C_i^{Te} \subset C_i^+ and C_i^{Tr} \cap C_i^{Te} = \emptyset for 1 \le i \le |U|. For the
evaluation of the models the average AUC is used,

\[
AUC = \frac{1}{|U|} \sum_{u} \frac{1}{|E(u)|} \sum_{(j_a, j_b) \in E(u)} I\left[ \hat{v}(u, j_a) > \hat{v}(u, j_b) \right] ,
\]

in which the evaluation pairs E(u) are given by

\[
E(u) = \left\{ (j_a, j_b) \mid j_a \in C_u^{Te} \wedge j_b \notin \left( C_u^{Tr} \cup C_u^{Te} \right) \right\} .
\]
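A direct computation of this evaluation measure might look as follows (the
data structures, dicts mapping users to item sets and a score callback v_hat,
are assumptions made here for illustration):

```python
import numpy as np

def average_auc(v_hat, C_train, C_test, items):
    """Average AUC over users; v_hat(u, j) returns the predicted score of
    item j for user u, C_train / C_test map users to their training /
    held-out item sets, items is the set of all item indices."""
    per_user = []
    for u, test_items in C_test.items():
        seen = C_train[u] | test_items
        E_u = [(ja, jb) for ja in test_items for jb in items - seen]
        if E_u:
            hits = sum(v_hat(u, ja) > v_hat(u, jb) for ja, jb in E_u)
            per_user.append(hits / len(E_u))
    return float(np.mean(per_user))
```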
As baseline methods, random guessing and the user-independent most-popular
approach are used. Most-popular determines \hat{v}(u, i) as the number of users that
interacted with item i. The implementations of the BPRMF and WRMF methods were
taken from the software package MyMediaLite 3.11 [4] and their default
parameters were adopted. For LFPL the parameters were: dimension of the latent
feature space set to 10, regularization values fixed to 0.01, number of
iterations 5000, and batch size 2000.
The results provided in Figure 3 match our expectation that LFPL/IF performs
similarly to BPRMF and WRMF.
[Figure 3: bar chart of average AUC values: BPRMF 0.926 \pm 0.002, LFPL/IF
0.920 \pm 0.001, WRMF 0.917 \pm 0.002, Most Popular 0.860 \pm 0.002, Random at
chance level.]
Fig. 3. Results of the MovieLens 1M implicit feedback experiment.
All three methods outperform the baseline methods by a large margin. In direct
comparison, LFPL/IF is slightly superior to WRMF and inferior to BPRMF in terms
of AUC performance.
7 Conclusion
We presented a new dyad ranking approach based on the Plackett-Luce model with
latent feature vectors, applicable to the dyad ranking completion problem. The
advantage of LFPL is its applicability in scenarios where no information on
dyad members is available except their unique identifiers. The problem of
learning and prediction given implicit feedback data could be addressed as an
application of dyad ranking completion. The learning algorithm is based on
alternating gradient descent to deal with the non-convexity of the objective
function. An obvious disadvantage is that predictions with latent features can
be carried out in a straightforward way only on dyads within R \times C. LFPL in its
current form is also a bilinear model and may be limited in modeling more
complicated non-linear relationships between dyads and preferences.
References
1. Carlo Ciliberto, Dimitris Stamos, and Massimiliano Pontil. Reexamining low rank
matrix factorization for trace norm regularization. CoRR, abs/1706.08934, 2017.
2. John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for
online learning and stochastic optimization. Journal of Machine Learning Research,
12:2121–2159, 2011.
3. Johannes Fürnkranz and Eyke Hüllermeier. Preference learning: An introduction.
In Johannes Fürnkranz and Eyke Hüllermeier, editors, Preference Learning, pages
1–17. Springer-Verlag, 2010.
4. Zeno Gantner, Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme.
MyMediaLite: A free recommender system library. In Proceedings of the 5th ACM
Conference on Recommender Systems (RecSys 2011), 2011.
5. F. Maxwell Harper and Joseph A. Konstan. The MovieLens datasets: History and
context. ACM Trans. Interact. Intell. Syst., 5(4):19:1–19:19, 2015.
6. Yifan Hu, Yehuda Koren, and Chris Volinsky. Collaborative filtering for implicit
feedback datasets. In Proceedings of the 2008 Eighth IEEE International Confer-
ence on Data Mining, ICDM ’08, pages 263–272. IEEE Computer Society, 2008.
7. Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme.
BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of
the 25th Conference on Uncertainty in Artificial Intelligence, pages 452–461.
AUAI Press, 2009.
8. Ruslan Salakhutdinov and Andriy Mnih. Probabilistic matrix factorization. In
Advances in Neural Information Processing Systems, volume 20, 2008.
9. Dirk Schäfer and Eyke Hüllermeier. Dyad ranking using a bilinear Plackett-Luce
model. In Proceedings ECML/PKDD–2015, European Conference on Machine
Learning and Knowledge Discovery in Databases, Porto, Portugal, 2015. Springer.
10. Dirk Schäfer and Eyke Hüllermeier. Plackett-Luce networks for dyad ranking. In
Workshop LWDA, “Lernen, Wissen, Daten, Analysen”, Potsdam, Germany, 2016.
11. S. Vembu and T. Gärtner. Label ranking algorithms: A survey. In J. Fürnkranz
and E. Hüllermeier, editors, Preference Learning. Springer-Verlag, 2011.