<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>ACM RecSys</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Exploiting Popularity and Similarity for Link Recommendation in Twitter Networks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jun Zou</string-name>
          <email>junzou@gatech.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Faramarz Fekri</string-name>
          <email>fekri@ece.gatech.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Georgia Institute of Technology</institution>
          ,
          <addr-line>Atlanta, GA 30332</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2014</year>
      </pub-date>
      <volume>10</volume>
      <abstract>
        <p>Twitter functions both as a social network and an information network, where users follow other users to make social connections as well as to receive information. Both popularity and similarity are important factors that drive the growth of the Twitter network. In this paper, we propose two approaches to exploiting both popularity and similarity for link recommendation. The first approach employs the rank aggregation technique to combine rankings generated by popularity-based and similarity-based recommendation algorithms. The second approach adapts the collaborative filtering algorithms to incorporate popularity in addition to similarity. The empirical evaluation results on real-world datasets confirm that combining popularity and similarity improves the recommendation performance.</p>
      </abstract>
      <kwd-group>
        <kwd>Link recommendation</kwd>
        <kwd>Twitter</kwd>
        <kwd>Popularity</kwd>
        <kwd>Similarity</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
<p>Twitter is a popular online platform for social
networking and information sharing. Twitter users can follow other
users to receive messages, or tweets, from them. However,
given their limited attention and time, most users prefer
to follow only the most relevant users. Due to the huge
number of users on Twitter, recommendation algorithms are
needed to help users automatically discover interesting new
users to follow.</p>
      <p>
        Many existing link recommendation algorithms for
social networks focus on the link structure [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The simplest example is to recommend the most
popular users with the largest number of connections. Other
common algorithms first weigh each link by some
importance score, and rank the nodes according to the sum of
importance scores of their links, e.g., the PageRank
algorithm [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Twitter provides the user recommendation
service called WTF (“Who To Follow”) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], which was built
on SALSA (Stochastic Approach for Link-Structure
Analysis) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], a random-walk algorithm similar to PageRank.
We refer to these algorithms generally as “popularity-based” or
“weighted-popularity-based” algorithms.
      </p>
      <p>
        Yet, unlike users of other social networks such as Facebook,
many Twitter users follow other users not only to establish
social connections but also to receive information of interest
to them. Hence, it is promising to exploit the similarity between
Twitter users for recommendation, i.e., to recommend other users
similar to the followees already followed by the follower, or
to recommend other users who are similar to the follower.
In [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], the authors proposed a content-based algorithm that
matches user interests by directly analyzing the text of user
tweets. In [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], the authors proposed a collaborative filtering
algorithm using the matrix factorization technique to learn
latent user interests from user feedback, e.g., whether to follow
a user or not.
      </p>
      <p>
        Indeed, a recent study has shown that popularity and
similarity are two important factors that drive the growth
of a variety of networks, including the Internet and social
networks [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In this paper, we compare various
popularity-based algorithms and similarity-based algorithms for
Twitter user recommendation, and propose two approaches to
exploiting both popularity and similarity. The first approach
is to employ rank aggregation techniques [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]; the second
approach is to adapt the collaborative filtering algorithms to
incorporate popularity in addition to similarity.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. POPULARITY VERSUS SIMILARITY</title>
      <p>In this section, we describe various popularity-based
algorithms and similarity-based algorithms.</p>
    </sec>
    <sec id="sec-3">
      <title>2.1 Problem Description</title>
      <p>
        To recommend new users for the follower to follow, the
recommendation algorithm generates a personalized ranked
list of users. However, due to the huge number of users in
Twitter, it is too costly to compute a network-wide ranking of
users for making personalized recommendations to each
follower. Many algorithms instead recommend users from
the local network that centers around the follower [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The
empirical study in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] revealed that 90% of new links in
Twitter-like microblogging networks go to users two hops
away from the follower, i.e., followees of the follower’s
followees.
      </p>
      <p>As such, in the rest of this paper, we focus on
recommending users from the 2-hop users for each follower. We denote
the follower for whom users are to be recommended as the source s, the set
of its initial followees as F1, and the set of all followees of F1
as F2. We also denote by F+(s) the subset of F2 that source
s follows as the network grows. The recommendation problem
is essentially to compute a ranked list of users from F2 for
source s.</p>
    </sec>
    <sec id="sec-4">
      <title>2.2 Recommendation Algorithms</title>
      <sec id="sec-4-1">
        <title>2.2.1 Popularity-based algorithms</title>
        <p>
          For the popularity-based or weighted-popularity-based
algorithms, we consider the Adamic-Adar score [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] and SALSA [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].
The Adamic-Adar score is a simple and effective measure for link
recommendation. We adopt it for Twitter user
recommendation, and compute the score for a 2-hop user u ∈ F2 of
source s as
        </p>
        <p>AA(s, u) = Σ_{z ∈ P(u) ∩ F1} 1/|R(z)|,   (1)
where P(u) denotes the set of followers of user u, and R(z)
denotes the set of followees of user z.</p>
        <p>In the SALSA algorithm, we construct a bipartite graph
G by assigning F1 as “hubs” and F2 as “authorities”. The
algorithm performs two distinct forward-backward random
walks, starting from the “authorities” side or the “hubs” side.
The scores of the authorities and hubs are related as
ai = Σ_{k: (k,i) ∈ G} hk/|R(k)|,   hk = Σ_{i: (k,i) ∈ G} ai/|P(i)|,   (2)
where ai and hk denote the scores of authority node i and
hub node k, respectively, and (k, i) denotes an edge in the
bipartite graph G. The users in F2 (“authorities”) are ranked
by their authority scores.</p>
      </sec>
      <sec id="sec-4-2">
        <title>2.2.2 Similarity-based algorithms</title>
        <p>
          For the similarity-based algorithms, we focus on the
collaborative filtering recommendation algorithms using the
observed follower-followee relationships between users as
implicit feedback, including the neighborhood method and the
MF-BPR (Matrix Factorization with Bayesian Personalized
Ranking) method [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. The basic principle is to recommend
to source s the followees of the F1 users who share similar
interests with source s. In the terminology of traditional
user-item recommender systems, the set of “users” corresponds
to U = {s} ∪ F1, and the set of “items” corresponds to F2.
Moreover, the established follower-followee relationships
between U and F2 are treated as observed implicit feedback
on the “items” in F2 from the “users” in U. The collaborative
filtering approach exploits this collective implicit feedback
to generate recommendations.
        </p>
        <p>In the neighborhood method, we compute the similarity
between source s and each user u ∈ F1 using Jaccard’s
coefficient as
J(s, u) = |F+(u) ∩ F+(s)| / |F+(u) ∪ F+(s)|,   (3)
where F+(u) denotes the subset of F2 followed by user u.
The score on user i ∈ F2 is then predicted as
R(s, i) = Σ_{u ∈ F1} J(s, u) R(u, i),   (4)
where R(u, i) = 1 if u follows i and R(u, i) = 0 otherwise.</p>
        <p>The MF-BPR method learns a model for correctly ranking
pairs of users in F2. It is assumed that if user u follows user
i but not user j, for i, j ∈ F2, then user u prefers user i over
j. We represent such a relationship as the triple (u, i, j). The
training data is created as D = {(u, i, j) | i ∈ F+(u), j ∈ F2 \ F+(u)}.
We associate each user u with a K-dimensional latent vector
wu, and each i ∈ F2 with a K-dimensional latent vector vi.
Let W and V represent the collections of all wu and all vi,
respectively. The probability that user u prefers i over j is
defined as
P(i &gt;u j | W, V) = 1 / (1 + e^{−(X(u,i) − X(u,j))}),   (5)
where X(u, i) = ⟨wu, vi⟩, and ⟨·, ·⟩ denotes the dot product.
We further introduce a normal distribution N(0, λ^{−1} IK) as
the prior distribution for each wu and vi, where λ &gt; 0
and IK is the K × K identity matrix. The latent vectors are
learnt by maximizing the posterior probability, which can
be formulated as
max_{W,V} Π_{(u,i,j) ∈ D} P(i &gt;u j | W, V) P(W) P(V).   (6)</p>
        <p>
          We apply the stochastic gradient-descent algorithm with
bootstrap sampling as in [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] to solve this optimization
problem. Writing σ(x) = 1/(1 + e^{−x}) and x̂ = X(u, i) − X(u, j),
the latent vectors are updated for each sampled triple (u, i, j) ∈ D
with learning rate μ as follows:
wu ← wu + μ ( (1 − σ(x̂)) (vi − vj) − λ wu ),   (7)
vi ← vi + μ ( (1 − σ(x̂)) wu − λ vi ),   (8)
vj ← vj + μ ( −(1 − σ(x̂)) wu − λ vj ).   (9)
        </p>
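        <p>The MF-BPR gradient updates described above can be sketched in a few lines of NumPy; this is a minimal single-triple illustration under hypothetical toy dimensions, not the authors' implementation.</p>

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_step(w_u, v_i, v_j, mu=0.001, lam=0.01):
    """One stochastic gradient step of MF-BPR for a triple (u, i, j)."""
    x_hat = w_u @ v_i - w_u @ v_j      # X(u, i) - X(u, j)
    g = 1.0 - sigmoid(x_hat)           # common gradient weight
    w_u2 = w_u + mu * (g * (v_i - v_j) - lam * w_u)   # update for w_u
    v_i2 = v_i + mu * (g * w_u - lam * v_i)           # update for v_i
    v_j2 = v_j + mu * (-g * w_u - lam * v_j)          # update for v_j
    return w_u2, v_i2, v_j2

rng = np.random.default_rng(0)
w_u, v_i, v_j = rng.normal(0.0, 0.1, (3, 20))   # K = 20 latent factors
for _ in range(1000):
    w_u, v_i, v_j = bpr_step(w_u, v_i, v_j)
# Repeated updates on the triple drive the model to rank i above j for u.
print(w_u @ v_i - w_u @ v_j > 0)  # True
```

        <p>In practice, the triples are bootstrap-sampled from D rather than fixed as in this sketch.</p>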
      </sec>
    </sec>
    <sec id="sec-5">
      <title>3. PROPOSED LINK RECOMMENDATION</title>
      <p>We propose two approaches to exploiting both
popularity and similarity: rank aggregation and popularity-biased
collaborative filtering. The rank aggregation approach
combines the recommendation results of the popularity-based
algorithm and the similarity-based algorithm, while each
individual algorithm generates recommendations independently.
It is worth noting that the scores predicted by different
recommendation algorithms can differ greatly in both range
and form, and are therefore not suitable for direct aggregation.
Hence, we consider only order-based rank aggregation,
which combines rankings that can easily be derived by
sorting users by their predicted scores.</p>
      <p>The popularity-biased collaborative filtering approach
directly adapts the original collaborative filtering algorithms
to incorporate popularity in addition to similarity. We
introduce two specific cases of the neighborhood algorithm and
the MF-BPR algorithm.</p>
    </sec>
    <sec id="sec-6">
      <title>3.1 Rank Aggregation</title>
      <p>
        Rank aggregation combines several ranking lists to
generate a “consensus” ranking. It has been widely applied in Web
search and document retrieval, e.g., meta-search engines and
multi-criteria search. There are various rank aggregation
algorithms such as Borda’s method, median rank aggregation,
and Markov chain methods. We choose the Markov chain
method for its superior performance. In particular, we focus
on the MC3 method among the four Markov chain methods
proposed in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>Suppose we have two ranked lists of users, denoted by
τ1 and τ2, which are generated by the popularity-based
algorithm and the similarity-based algorithm, respectively.
The Markov chain rank aggregation method assigns a unique
state to each user, and specifies the transition
matrix as follows: if the current state is user i, randomly pick
a ranking list τ with probabilities P(τ = τ1) = α and
P(τ = τ2) = 1 − α, then uniformly pick a user j from τ,
and go to j if j &gt;τ i, else stay at i. Here, j &gt;τ i means the
list τ ranks j closer to the top than i. Let Tk represent the
transition matrix for the Markov chain derived from the
individual ranking list τk, k = 1, 2, where Tk(i, j) is the transition
probability from state i to j:
Tk(i, j) = 1/|F2|, if j &gt;τk i;   1 − |{j | j &gt;τk i}|/|F2|, if j = i;   0, otherwise.   (10)
The transition matrix T of the rank aggregation Markov
chain can be written as</p>
      <p>T = αT1 + (1 − α)T2.   (11)</p>
      <p>The aggregated ranking is obtained by ranking users
according to the stationary probabilities of the Markov chain.</p>
      <p>By varying the parameter α between 0 and 1, we can bias
the aggregated ranking towards similarity or popularity, with
α = 0 for pure similarity and α = 1 for pure popularity. Further,
α can be personalized for individual users to better predict their
behavior and thus meet individual preferences.</p>
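      <p>The Markov chain aggregation described above can be sketched as follows; the toy rankings are hypothetical, and the stationary probabilities are obtained by simple power iteration.</p>

```python
import numpy as np

def mc_transition(ranking, users):
    """Transition matrix Tk of eq. (10) for one ranked list (index 0 = top)."""
    n = len(users)
    pos = {u: r for r, u in enumerate(ranking)}
    T = np.zeros((n, n))
    for a, i in enumerate(users):
        better = [b for b, j in enumerate(users) if pos[i] > pos[j]]  # j ranked above i
        for b in better:
            T[a, b] = 1.0 / n                  # move to a user ranked above i
        T[a, a] = 1.0 - len(better) / n        # otherwise stay at i
    return T

def mc3_aggregate(tau1, tau2, alpha=0.5, iters=200):
    users = sorted(set(tau1) | set(tau2))
    T = alpha * mc_transition(tau1, users) + (1.0 - alpha) * mc_transition(tau2, users)
    p = np.full(len(users), 1.0 / len(users))
    for _ in range(iters):                     # power iteration to the stationary distribution
        p = p @ T
    return [u for _, u in sorted(zip(-p, users))]

tau1 = ["c", "a", "b", "d"]   # hypothetical popularity-based ranking
tau2 = ["a", "b", "c", "d"]   # hypothetical similarity-based ranking
print(mc3_aggregate(tau1, tau2))  # ['a', 'c', 'b', 'd']
```

      <p>Users ranked high in both lists accumulate the most stationary probability, while a user at the bottom of both lists (here d) receives almost none.</p>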
    </sec>
    <sec id="sec-7">
      <title>3.2 Popularity-biased Collaborative Filtering</title>
      <p>We adapt the two cases of the neighborhood method and
the matrix factorization method, which are the most
popular collaborative filtering recommendation algorithms, for
popularity-biased collaborative filtering. It should be noted
that the specific techniques proposed here might not be
directly applicable to other collaborative filtering algorithms,
as they may have very different prediction models.
Nevertheless, we stress the viability and benefits of incorporating
popularity into collaborative filtering link recommendation.</p>
      <sec id="sec-7-1">
        <title>3.2.1 The popularity-biased neighborhood method</title>
        <p>The neighborhood method predicts the score on user i ∈
F2 for source s using (4), where the feedback R(u, i) from
user u in F1 (set to 1 for following i and 0 for not following
i) is weighted by the similarity of user u to source s.
However, the similarity computed using Jaccard’s coefficient
assigns zero to users who have no common followees
with the source. To bias the neighborhood method towards
popularity, we modify the similarity measure as follows:
Jp(s, u) = (|F+(u) ∩ F+(s)| + c) / |F+(u) ∪ F+(s)|,   (12)
where c &gt; 0 ensures that every user u ∈ F1 is counted with
Jp(s, u) &gt; 0. In this way, the popular users who are followed
by many users in F1, though not necessarily highly similar
to the source, can have a higher predicted score than other
users followed by only a few users with high similarity to the
source.</p>
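        <p>The effect of the constant c in the smoothed similarity above can be seen in a small sketch; the followee sets are hypothetical.</p>

```python
# Popularity-biased Jaccard similarity of eq. (12): the constant c > 0
# gives every u in F1 a positive weight, so a followee that is popular among
# F1 can accumulate score even from users dissimilar to the source.
def jaccard_pop(fu, fs, c=1.0):
    union = len(fu.union(fs))
    return (len(fu.intersection(fs)) + c) / union if union else 0.0

fs = {"x", "y"}                 # followees of source s
fu_similar = {"x", "y", "z"}    # u1 shares followees with s
fu_disjoint = {"p", "q"}        # u2 shares none

print(round(jaccard_pop(fu_similar, fs, c=0.0), 3))   # 0.667  (plain Jaccard)
print(round(jaccard_pop(fu_disjoint, fs, c=0.0), 3))  # 0.0    (u2 contributes nothing)
print(round(jaccard_pop(fu_disjoint, fs, c=1.0), 3))  # 0.25   (u2 now counted)
```

        <p>With c = 0 the measure reduces to the plain Jaccard coefficient of eq. (3).</p>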
      </sec>
      <sec id="sec-7-2">
        <title>3.2.2 The popularity-biased MF-BPR method</title>
        <p>The MF-BPR method represents each user i ∈ F2 using
a latent vector, as described in Sec. 2.2.2. The similarity of
user interests is modelled by the angle between latent
vectors. We now adapt the model to also encode the
popularity of each user in the length, or magnitude, of the latent
vector. We assume a prior distribution N(γ1, λ^{−1} IK) for
the latent vectors, where 1 denotes a vector with all entries
one, and randomly initialize the latent vectors following the
normal distribution N(γ1, β IK), where γ needs to be
chosen relatively large. Using the stochastic gradient-descent
learning algorithm with σ(x) = 1/(1 + e^{−x}) and x̂ = X(u, i) − X(u, j),
the latent vectors are updated for each triple (u, i, j) ∈ D as follows:
wu ← wu + μ ( (1 − σ(x̂)) (vi − vj) − λ (wu − γ1) ),   (13)
vi ← vi + μ ( (1 − σ(x̂)) wu − λ (vi − γ1) ),   (14)
vj ← vj + μ ( −(1 − σ(x̂)) wu − λ (vj − γ1) ).   (15)
The latent vector vi of user i is updated through (14), where
the angle between vi and wu is expected to be small, as they
are both close to the vector γ1 if γ is relatively large. Hence,
vi increases in length after each update. As popular users
are updated more often in the stochastic optimization
process, their associated latent vectors will have larger length.</p>
        <p>The score on user i ∈ F2 for source s is predicted by
X(s, i) = ⟨ws, vi⟩ = cos(θsi) |ws| |vi|,   (16)
where θsi is the angle between ws and vi, which reflects the
similarity of interests between user i and source s. Moreover,
with the popularity-biased MF-BPR method, the magnitude
|vi| is large for popular users. Therefore, both similarity and
popularity are taken into account. Also, increasing γ biases
the algorithm further towards popularity.</p>
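        <p>A minimal sketch of the popularity-biased variant follows; the dimensions and constants are hypothetical toy choices. Initializing the vectors around γ1 and regularizing towards γ1 lets the length of a repeatedly, positively updated vector grow.</p>

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pop_bpr_step(w_u, v_i, v_j, gamma, mu=0.001, lam=0.01):
    """Popularity-biased updates: as plain MF-BPR, but regularized towards gamma * 1."""
    one = np.ones_like(w_u)
    g = 1.0 - sigmoid(w_u @ v_i - w_u @ v_j)
    w_u2 = w_u + mu * (g * (v_i - v_j) - lam * (w_u - gamma * one))
    v_i2 = v_i + mu * (g * w_u - lam * (v_i - gamma * one))
    v_j2 = v_j + mu * (-g * w_u - lam * (v_j - gamma * one))
    return w_u2, v_i2, v_j2

rng = np.random.default_rng(1)
K, gamma, beta = 20, 1.0, 0.01
w_u, v_i, v_j = gamma + rng.normal(0.0, np.sqrt(beta), (3, K))  # N(gamma*1, beta*I)
norm_before = np.linalg.norm(v_i)
for _ in range(500):
    w_u, v_i, v_j = pop_bpr_step(w_u, v_i, v_j, gamma)
# The positively updated vector v_i grows in length, encoding popularity.
print(np.linalg.norm(v_i) > norm_before)  # True
```
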
      </sec>
    </sec>
    <sec id="sec-8">
      <title>4. EVALUATION</title>
      <p>In this section, we compare the empirical performance of
all recommendation algorithms on the Twitter dataset and
the Tencent Weibo dataset. Tencent Weibo is a Twitter-like
online microblogging service widely used in China. The
Twitter dataset was downloaded from Twitter.com using the
Twitter API. The Tencent Weibo dataset is a subset of the
dataset provided by KDD Cup 2012 Track 1. The source
users are selected such that they have at least 100 followees.
For each source user, we use its earliest 50 followees as the
initial set F1, and construct the set F2 from the followees of
F1. The statistics of the two datasets are shown in Table 1,
where the number of F2 users is the total of F2 users over
all source users, and the number of edges refers to the total
number of follower-followee relationships between users.</p>
      <p>In the experiment, we use 70% of the subset of F2
followed by each source user for training, 15% for validation,
and 15% for testing. The KDD Cup 2012 Track 1 dataset is
available at http://www.kddcup2012.org/c/kddcup2012-track1.
To reduce computational complexity, we further prune the
F2 users that are followed by fewer than 5 users, as they are
very unlikely to be followed by the source users. The metrics
used for evaluating recommendation performance are AUC
(Area Under the ROC Curve) and P@10 (Precision at top 10).
The AUC metric measures the algorithm’s overall ability to
rank positive instances above negative instances, whereas the
P@10 metric emphasizes the quality of the top 10 recommended
instances. We report results of all algorithms on the testing set.</p>
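      <p>For clarity, the two metrics can be computed as follows for a single source user; the scores and held-out positives here are hypothetical.</p>

```python
# AUC: fraction of (positive, negative) pairs ranked correctly (ties count 0.5).
def auc(scores, positives):
    pos = [s for u, s in scores.items() if u in positives]
    neg = [s for u, s in scores.items() if u not in positives]
    correct = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return correct / (len(pos) * len(neg))

# P@k: fraction of the top-k recommended users that are positives.
def precision_at_k(scores, positives, k=10):
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return sum(u in positives for u in top) / k

scores = {f"u{n}": 1.0 / (n + 1) for n in range(20)}  # u0 is ranked first
positives = {"u0", "u1", "u5", "u15"}                 # held-out followees
print(round(auc(scores, positives), 3))       # 0.766
print(precision_at_k(scores, positives, 10))  # 0.3
```

      <p>Per-user metrics are then averaged over all source users.</p>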
    </sec>
    <sec id="sec-9">
      <title>4.1 Popularity versus Similarity</title>
      <p>We first compare the performance of the popularity-based
algorithms (Adamic-Adar and SALSA) and the similarity-based
algorithms (Neighborhood and MF-BPR), as shown
in Table 2. The parameters of the MF-BPR algorithm are
set as K = 20, λ = 0.01 and μ = 0.001. The results show
that the popularity-based algorithms generally achieve
better AUC than the similarity-based algorithms. However,
the similarity-based algorithms achieve quite good P@10
performance. Overall, the results suggest that while
Twitter users tend to follow popular users, they also like to
follow users with interests similar to their own. Hence, there is
a trade-off between popularity and similarity.</p>
    </sec>
    <sec id="sec-10">
      <title>4.2 Combining Popularity and Similarity</title>
      <p>For the rank aggregation approach, we experiment with
different combinations of popularity-based and similarity-based
algorithms, including AA &amp; Neighb (Adamic-Adar
&amp; Neighborhood), AA &amp; MF-BPR (Adamic-Adar &amp;
MF-BPR), SLS &amp; Neighb (SALSA &amp; Neighborhood), and SLS &amp;
MF-BPR (SALSA &amp; MF-BPR). The parameter α is selected
independently for each individual user such that the AUC is
maximized on the validation set for that user.</p>
      <p>For the popularity-biased collaborative filtering approach,
we evaluate the Pop-Neighb (Popularity-biased Neighborhood)
algorithm and the Pop-MF-BPR (Popularity-biased
MF-BPR) algorithm. The parameter c is set to 1 for
Pop-Neighb, and the parameters for Pop-MF-BPR are set as
γ = 1 and β = 0.01, while the other parameters are set the
same as in the original MF-BPR algorithm.</p>
      <p>The results in Table 3 show that the popularity-biased
MF-BPR algorithm achieves the best performance in terms
of both AUC and P@10. Comparing the results of the rank
aggregation algorithms with those of the individual algorithms
in Table 2, we can see that rank aggregation combines
the advantages of the popularity-based algorithm and the
similarity-based algorithm. The popularity-biased neighborhood
method also improves over the original neighborhood
method. In summary, the evaluation results confirm that
combining popularity and similarity can improve the overall
recommendation performance in terms of AUC as well as
the quality of the top-ranked recommendations.</p>
    </sec>
    <sec id="sec-11">
      <title>5. CONCLUSIONS</title>
      <p>We proposed two approaches, the rank aggregation
approach and the popularity-biased collaborative filtering
approach, to exploiting both popularity and similarity for
Twitter user recommendation. Through experimental evaluation
on two real-world datasets, we showed that the rank
aggregation algorithms can combine the advantages of
popularity-based and similarity-based algorithms, and that the
popularity-biased collaborative filtering algorithms improve upon the
original algorithms in terms of both AUC and P@10.</p>
    </sec>
    <sec id="sec-12">
      <title>6. ACKNOWLEDGMENTS</title>
      <p>This material is based upon work supported by the
National Science Foundation under Grant No. IIS-1115199.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Dwork</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Naor</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Sivakumar</surname>
          </string-name>
          .
          <article-title>Rank aggregation methods for the web</article-title>
          .
          <source>In WWW'01</source>
          , pages
          <fpage>613</fpage>
          -
          <lpage>622</lpage>
          , May
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Goel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Wang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Zadeh</surname>
          </string-name>
          .
          <article-title>WTF: The who to follow service at Twitter</article-title>
          .
          <source>In WWW'13</source>
          , pages
          <fpage>505</fpage>
          -
          <lpage>514</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Hannon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bennett</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Smyth</surname>
          </string-name>
          .
          <article-title>Recommending Twitter users to follow using content and collaborative filtering approaches</article-title>
          .
          <source>In RecSys'10</source>
          , pages
          <fpage>199</fpage>
          -
          <lpage>206</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Lempel</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Moran</surname>
          </string-name>
          .
          <article-title>SALSA: The stochastic approach for link-structure analysis</article-title>
          .
          <source>ACM Trans. on Info. Syst.</source>
          ,
          <volume>19</volume>
          (
          <issue>2</issue>
          ):
          <fpage>131</fpage>
          -
          <lpage>160</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Liben-Nowell</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Kleinberg</surname>
          </string-name>
          .
          <article-title>The link prediction problem for social networks</article-title>
          .
          <source>In CIKM'03</source>
          , pages
          <fpage>556</fpage>
          -
          <lpage>559</lpage>
          ,
          <year>November 2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Menon</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Elkan</surname>
          </string-name>
          .
          <article-title>Link prediction via matrix factorization</article-title>
          .
          <source>In ECML PKDD'11</source>
          , pages
          <fpage>437</fpage>
          -
          <lpage>452</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Page</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Brin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Motwani</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Winograd</surname>
          </string-name>
          .
          <article-title>The PageRank citation ranking: Bringing order to the Web</article-title>
          .
          <source>Technical report</source>
          , Stanford InfoLab,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>F.</given-names>
            <surname>Papadopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kitsak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Serrano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Boguna</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Krioukov</surname>
          </string-name>
          .
          <article-title>Popularity versus similarity in growing networks</article-title>
          .
          <source>Nature</source>
          ,
          <volume>489</volume>
          :
          <fpage>537</fpage>
          -
          <lpage>540</lpage>
          ,
          <year>September 2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Rendle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Freudenthaler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Gantner</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Schmidt-Thieme</surname>
          </string-name>
          .
          <article-title>BPR: Bayesian personalized ranking from implicit feedback</article-title>
          .
          <source>In UAI'09</source>
          , pages
          <fpage>452</fpage>
          -
          <lpage>461</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Xiong</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B. D.</given-names>
            <surname>Davison</surname>
          </string-name>
          .
          <article-title>Link formation analysis in microblogs</article-title>
          .
          <source>In SIGIR'11</source>
          , pages
          <fpage>1235</fpage>
          -
          <lpage>1236</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>