<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>September</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Personalized Location Recommendation by Aggregating Multiple Recommenders in Diversity</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ziyu Lu</string-name>
          <email>zylu@cs.hku.hk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hao Wang</string-name>
          <email>wanghao@nju.edu.cn</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nikos Mamoulis</string-name>
          <email>nikos@cs.hku.hk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Wenting Tu</string-name>
          <email>wttu@cs.hku.hk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>David W. Cheung</string-name>
          <email>dcheung@cs.hku.hk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dept. of Computer Science, The Univ. of Hong Kong</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>State Key Lab. for Novel Software Technology, Nanjing University</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2015</year>
      </pub-date>
      <volume>19</volume>
      <issue>2015</issue>
      <abstract>
        <p>Location recommendation is an important feature of social network applications and location-based services. Most existing studies focus on developing a single method or model for all users. By analyzing real location-based social networks, in this paper we reveal that users' decisions on place visits depend on multiple factors, and that different users may be affected differently by these factors. We design a location recommendation framework that combines results from various recommenders, each considering different factors. For each individual user, our framework estimates the underlying influence of each factor on her. Based on this estimation, we aggregate the suggestions from the different recommenders to derive personalized recommendations. Experiments on Foursquare and Gowalla show that our proposed method outperforms the state-of-the-art methods on location recommendation.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Categories and Subject Descriptors</title>
      <sec id="sec-1-1">
        <title>H.3.3 [Information Search and Retrieval]: Information Filtering</title>
        <p>Location recommendation, personalization, aggregation</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>INTRODUCTION</title>
      <p>Due to the proliferation of mobile devices, location-based social
networks (LBSNs) have emerged. Some of these social networks
are dedicated to location sharing (e.g., Gowalla, Foursquare, etc.)
while others provide location-based features together with other
social networking services (e.g., Facebook, Twitter, etc.). In either
case, users can share with their friends the experience of visiting
locations (such as restaurants, view spots, etc.). Location
recommendation is very important for these applications, because it
provides better user experience and may thus contribute to business
promotion in cyber-physical space.</p>
      <p>
        This paper focuses on check-in records of LBSN users. An
LBSN user may check-in at different locations from time to time,
by explicit check-in operations via mobile applications or by
implicit actions such as posting photos with location annotations.
Therefore, over time, a user generates a sequence of check-in
records. It is important to distinguish user-location check-ins from
their analogues in classic recommender systems (e.g., user-item
ratings [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]). As Hu et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] have pointed out, a rating is explicit,
indicating directly whether and how a user likes an item, whereas a
check-in record is implicit with some unique characteristics:
1. There is no negative feedback in check-in records, i.e.,
check-ins do not indicate whether a user dislikes a location.
2. The check-in frequency, even after normalization, is not a
reliable indicator of how much a user likes a location.
      </p>
      <p>
        Despite any differences between ratings and check-ins,
conventional collaborative filtering (CF) methods (e.g., [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]) are
computationally applicable to check-in data. Recently, many new
approaches have been proposed, making use of social, geographical,
temporal, and semantic information of LBSN data (e.g., [
        <xref ref-type="bibr" rid="ref10 ref18 ref26 ref7">7, 10, 18,
26</xref>
        ] and others discussed later in Section 2). However, most
existing methods take a unified perspective on the recommendation
problem, even though some of them do consider more than one aspect
of the check-in data. In other words, most existing methods focus
on developing one single method or model for all users.
      </p>
      <p>
        In this paper we argue, and verify by analyzing real LBSN
datasets, that users' decisions on where to go depend on multiple
factors, and that different users may be affected differently by these
factors. For example, some users are easily influenced by their friends
and do not care much about the places themselves, while others are
more independent and pay more attention to the places. Therefore,
we aim to personalize location recommenders. To this end, we propose
a general framework to combine multiple recommenders that are
potentially useful. A pair-based optimization problem is formulated
and solved to identify the underlying preferences of each user. The
contributions of this paper are as follows:
By analyzing two real LBSN datasets, Foursquare [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and
Gowalla [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], we reveal the diversity among
recommendations made by different recommenders and the diversity of
user preferences over recommenders.
      </p>
      <p>We propose a framework for location recommendation.
Under our framework we are able to personalize location
recommenders to better serve the users.</p>
      <p>We evaluate our method using Foursquare and Gowalla data.
The results show that our method outperforms the
state-of-the-art location recommenders.</p>
      <p>The rest of this paper is organized as follows. By analyzing
Foursquare and Gowalla data, Section 2 shows the diversity in
recommenders and user preferences. Motivated by certain
observations, Section 3 describes LURA, our proposed framework, which
is extensively evaluated in Section 4. Finally, Section 5 presents
some related work and Section 6 concludes the paper.</p>
    </sec>
    <sec id="sec-3">
      <title>DATA AND RECOMMENDERS</title>
      <p>In this section we first introduce the datasets and formally define
the location recommendation problem. Then, we select 11
representative recommenders which, to the best of our knowledge, cover
all the factors that the state-of-the-art methods consider in
location recommendation. By analyzing their performance on different
users, we demonstrate the diversity of (i) recommendations from
the representative recommenders and (ii) users’ check-in behaviors.
</p>
    </sec>
    <sec id="sec-4">
      <title>Datasets &amp; Recommendation Problem</title>
      <p>
        We use Foursquare [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and Gowalla [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] datasets. In both
datasets, there are users (U), locations (L), and check-ins, each in
the form of a tuple (u, ℓ, t) recording the event that user u
visited location ℓ at time t. Note that from the tuples we can infer
the check-in frequency c_{u,ℓ} of user u at location ℓ. In many
recommenders, the matrix C = (c_{u,ℓ})_{u ∈ U, ℓ ∈ L} is the primary
data source. Associated with each location there is a longitude and
latitude coordinate. Friendships are represented as undirected,
unweighted pairs of users. In addition, for each location we have
collected its semantic information (i.e., its category) from the
Foursquare API (https://developer.foursquare.com/).
      </p>
      <p>We filtered the datasets to include only users that have at least 10
check-in records, and locations that are visited at least twice. The
resulting Foursquare dataset has 11,326 users, 96,002 locations,
and 2,199,782 check-in records over 364 days; Gowalla is a bit
larger, containing 74,725 users, 767,936 locations, and 5,829,873
check-in records over a period of 626 days.</p>
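      <p>The filtering criterion above can be read as an iterative fixed-point computation: dropping a user lowers the visit counts of her locations, which may in turn invalidate other users. A minimal Python sketch of this reading (the function name, thresholds, and tuple layout here are ours, for illustration only):</p>

```python
from collections import Counter

def filter_checkins(checkins, min_user_checkins=10, min_loc_visits=2):
    """Keep only users with enough check-ins and locations visited
    often enough; iterate until no further tuple is removed.
    checkins: list of (user, location, time) tuples."""
    data = list(checkins)
    while True:
        user_cnt = Counter(u for u, _, _ in data)
        loc_cnt = Counter(l for _, l, _ in data)
        kept = [(u, l, t) for (u, l, t) in data
                if user_cnt[u] >= min_user_checkins
                and loc_cnt[l] >= min_loc_visits]
        if len(kept) == len(data):  # fixed point reached
            return kept
        data = kept
```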
      <sec id="sec-4-1">
        <title>Top-N Location Recommendation</title>
        <p>Given a user u, the top-N location recommendation problem
is to suggest to u a list of N locations, previously unvisited by u,
with the expectation that u will be intrigued to visit (some of)
these locations.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Representative Recommenders</title>
      <p>We select 11 representative recommenders that consider social,
geographical, temporal, as well as semantic aspects of the data.
</p>
      <sec id="sec-5-1">
        <title>User-based CF Methods</title>
        <p>User-based CF methods assume that similar users have similar
preferences over locations; thus the score of location ℓ for user u
is computed as the similarity-weighted average of other users' visits
to ℓ:

score(u, ℓ) = Σ_{v ∈ U} w_{u,v} · c_{v,ℓ} / Σ_{v ∈ U} w_{u,v}.

The N locations with the largest scores are recommended to user u.
Different weighting schemes (i.e., choices of w_{u,v}) yield different
recommenders (R1-R5).</p>
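        <p>For illustration, the user-based scoring formula can be implemented directly over a sparse check-in matrix (a minimal sketch with cosine weights, corresponding to R1 below; the function names and data layout, with C mapping each user to her row of check-in frequencies, are ours):</p>

```python
import math

def cosine(x, y):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(v * y.get(k, 0.0) for k, v in x.items())
    nx = math.sqrt(sum(v * v for v in x.values()))
    ny = math.sqrt(sum(v * v for v in y.values()))
    return dot / (nx * ny) if nx and ny else 0.0

def ucf_score(u, loc, C):
    """score(u, l) = sum_v w_uv * c_vl / sum_v w_uv over all other users v.
    C: {user: {location: check-in frequency}}."""
    num = den = 0.0
    for v, row in C.items():
        if v == u:
            continue
        w = cosine(C[u], row)
        num += w * row.get(loc, 0.0)
        den += w
    return num / den if den else 0.0
```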
        <p>
          R1: User-based CF (UCF). w_{u,v} could be the cosine similarity
between users u and v, i.e., w_{u,v} = cos(c_u, c_v), where c_u and
c_v are the corresponding rows of the check-in matrix C. This is the
conventional user-based CF method [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
        </p>
        <p>
          R2: Friend-based CF (FCF). Similarity between users could be
reflected by their common friends. For friends u and v, let w_{u,v} =
Jaccard(F_u, F_v), where F_u refers to the set of friends of user u
and Jaccard(·, ·) computes the Jaccard index. This recommender
was proposed by Konstas et al. [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ].
        </p>
        <sec id="sec-5-1-1">
          <title>R3: Friend-location CF (FLCF)</title>
          <p>
            In addition to common friends, commonly visited locations
may as well reflect the closeness of two friends: w_{u,v} =
λ · Jaccard(F_u, F_v) + (1 − λ) · Jaccard(L_u, L_v), where u and v
are friends and L_u is the set of locations that user u has visited [
            <xref ref-type="bibr" rid="ref14">14</xref>
            ]. The parameter λ ∈ [0, 1] weighs common friends against
common locations. Setting λ = 0.1 achieves the best performance for
this method on our data.
          </p>
          <p>
            R4: Geo-distance CF (GCF). The rationale of GCF is that
nearby friends are more influential than faraway ones. This intuition
is used by Ying et al. [
            <xref ref-type="bibr" rid="ref28">28</xref>
            ], where the weight w_{u,v} is a rescaling of the
geographical distance between u and v:

w_{u,v} = 1 − geodist(u, v) / max_{w ∈ F_u} geodist(u, w).

The location of each user can be inferred from her most frequently
visited locations.</p>
          <p>
            R5: Category CF (CCF). Consider users as keywords and
location categories as documents; between user u and category C there
is a relevance score rel(u, C) (e.g., TF-IDF). To measure the
similarity between two users u and v, Bao et al. [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ]
consider the sum of minimum relevance scores over all categories,
i.e., sim(u, v) = Σ_C min{rel(u, C), rel(v, C)}. This value is then
penalized by the difference between the users' randomness in
preferences, so the weight w_{u,v} is

w_{u,v} = sim(u, v) / (1 + |ent(u) − ent(v)|),

where ent(·) is the entropy of a user's preference over
categories.</p>
        </sec>
      </sec>
      <sec id="sec-5-2">
        <title>Item-based CF Methods</title>
        <p>Item-based CF methods take a “transposed” view of the data,
i.e., similarities are computed between locations instead of between
users. When the weight w_{ℓ,ℓ′} between two locations is properly
defined, the score of user u for location ℓ can be computed as

score(u, ℓ) = Σ_{ℓ′ ∈ L} w_{ℓ,ℓ′} · c_{u,ℓ′} / Σ_{ℓ′ ∈ L} w_{ℓ,ℓ′}.</p>
        <p>
          R6: Item-based CF (ICF). Similar to UCF, w_{ℓ,ℓ′} could be the
cosine similarity cos(c_ℓ, c_ℓ′), where c_ℓ and c_ℓ′ are the
corresponding columns of the check-in matrix C [
          <xref ref-type="bibr" rid="ref1 ref21">1, 21</xref>
          ].
        </p>
        <p>
          R7: Time-weighted CF (TCF). Recent check-in records are
relatively more reliable if we consider the evolution of user
preferences. Ding and Li [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] use an exponential discounting function f(t) = e^{−λt} to
introduce a temporal bias into w_{ℓ,ℓ′}. Namely, w_{ℓ,ℓ′} =
cos(c_ℓ, c_ℓ′) · f(t_{u,ℓ′}), where t_{u,ℓ′} is the time of the
user's check-in at ℓ′ and λ is the decay rate; we set λ to the value
that achieves the best performance for TCF on our data.
        </p>
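        <p>As an illustration of the item-based family, the following sketch combines the cosine weights of ICF with an exponential temporal discount as in TCF (the names, the data layout, and the decay rate lam are ours; as a simplification, the sums run only over locations the user has visited, since zero-frequency terms contribute nothing to the numerator):</p>

```python
import math

def time_weight(age_days, lam):
    """Exponential discounting f(t) = exp(-lam * t) on the age of a check-in."""
    return math.exp(-lam * age_days)

def tcf_score(u, loc, C, last_visit_age, lam=0.1):
    """Item-based score with time-decayed cosine weights.
    C: {location: {user: freq}} (columns of the check-in matrix);
    last_visit_age: {(user, location): age of the check-in in days}.
    Note: restricting the sums to the user's visited locations is a
    simplification of the full denominator."""
    def cosine(x, y):
        dot = sum(v * y.get(k, 0.0) for k, v in x.items())
        nx = math.sqrt(sum(v * v for v in x.values()))
        ny = math.sqrt(sum(v * v for v in y.values()))
        return dot / (nx * ny) if nx and ny else 0.0

    num = den = 0.0
    for other, col in C.items():
        if other == loc or u not in col:
            continue
        w = cosine(C[loc], col) * time_weight(last_visit_age[(u, other)], lam)
        num += w * col[u]
        den += w
    return num / den if den else 0.0
```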
      </sec>
      <sec id="sec-5-3">
        <title>Probabilistic Methods</title>
        <p>Besides the CF family, another methodology of making
recommendations is to estimate the probability of user u visiting location
`, conditioned on the check-in history of u, i.e.,</p>
        <p>score(u, ℓ) = Pr{ℓ | L_u}.
The most probable N locations are recommended to the user. The
next three recommenders (R8-R10) attempt to estimate Pr{ℓ | L_u}.</p>
        <p>R8: Power-law model (PLM). Ye et al. [26] study the
distances between pairs of visited locations and observe that they
follow a power-law distribution. The probability of visiting a new
location ℓ is then estimated from the user's check-in history L_u as
Pr{ℓ | L_u} = Π_{ℓ′ ∈ L_u} Pr[d(ℓ, ℓ′)], where d(ℓ, ℓ′) is the
distance between ℓ and ℓ′ and Pr[·] is the fitted power-law
distribution.</p>
        <p>R9: Kernel density model (KDM). For a user with past
check-ins at locations L_u = {ℓ1, ℓ2, ..., ℓn}, this recommender
estimates the probability of visiting a new location ℓ using kernel
density estimation [30]:

Pr(ℓ | L_u) = (1/n) Σ_{j=1}^{n} K_h(d(ℓ, ℓj)),

where K_h is a kernel function scaled by a bandwidth h.</p>
        <p>R10: 2D kernel density model. Instead of distances between
locations, this variant considers differences between latitudes and
longitudes:

Pr(ℓ | L_u) = (1/n) Σ_{j=1}^{n} K_h(lat(ℓ) − lat(ℓj), lon(ℓ) − lon(ℓj)),

where lat(·) and lon(·) give the latitude and longitude of a
location, and K_h is a 2D scaled kernel, with h = n^{−1/(d+4)} =
n^{−1/6}.</p>
        <p>[Figure 1: Distribution of the diversity values:
(a) Foursquare; (b) Gowalla.]</p>
        <p>[Figure 2: Diversity in user preferences: (a) centers of
the clusters; (b) statistics of the clusters.]</p>
        <p>2.2.4 Matrix Factorization. The last recommender belongs to
the matrix factorization (MF) family. MF attempts to find latent
structures of the check-in matrix C. In particular, MF tries to find
a pair of low-rank matrices P and Q (i.e., user and location
profiles) such that Ĉ = PQ is a good estimation / approximation of
C. Note that a zero-valued entry c_{u,ℓ} in C does not necessarily
indicate that u dislikes ℓ.</p>
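        <p>For illustration, the kernel density estimate over geographic coordinates can be sketched as follows (the text specifies only a 2D scaled kernel with bandwidth h = n^{−1/6}, so the Gaussian kernel and all names here are our own assumptions; coordinates are in degrees):</p>

```python
import math

def kde_prob(loc, visited, bandwidth=None):
    """Pr(l | L_u) via a 2D Gaussian kernel over (lat, lon) offsets,
    with Silverman-style bandwidth h = n^(-1/(d+4)) = n^(-1/6) for d = 2.
    loc: (lat, lon); visited: list of (lat, lon) past check-in locations."""
    n = len(visited)
    h = bandwidth if bandwidth is not None else n ** (-1.0 / 6.0)
    total = 0.0
    for lat_j, lon_j in visited:
        dx = (loc[0] - lat_j) / h
        dy = (loc[1] - lon_j) / h
        # scaled 2D standard-normal kernel: K_h(u) = K(u / h) / h^2
        total += math.exp(-0.5 * (dx * dx + dy * dy)) / (2.0 * math.pi * h * h)
    return total / n
```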
        <p>
          R11: Implicit matrix factorization (IMF). Proposed by Hu et
al. [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], this is a modification of the conventional MF for implicit
user feedback (such as check-ins).
        </p>
        <p>2.3 Diversity in Recommendations. We now show that the 11
recommenders (R1-R11) generate very different recommendations. We use
the first 300 days of Foursquare and the first 420 days of Gowalla to
construct the 11 recommenders, involving 3,942 and 1,307 users
respectively. We compare the top-10 lists of locations suggested by
the different recommenders. For two such length-10 recommendation
lists, the Jaccard index can be used to measure their similarity.
Therefore, for each user, we measure the aggregated diversity of the
11 recommenders by pairwise aggregation [11]:

diversity(L1, L2, ..., Ln) = (2 / (n(n − 1))) · Σ_{i&lt;j} (1 − Jaccard(Li, Lj)).

Intuitively, this value will be close to 0 if the Li's are all highly
similar, and close to 1 if they are pairwise different.</p>
        <p>Figure 1 shows the distribution of the diversity values. In
Foursquare, the diversity values are all between 0.79 and 0.99, with
an average of 0.92, whereas in Gowalla the minimum, maximum,
and average values are 0.58, 0.99, and 0.93 respectively. This
indicates that the 11 recommenders generate very different results.</p>
        <p>The above analysis supports our claims: (i) different
recommenders make very different recommendations and (ii) different
users have different behavioral patterns in visiting locations. This
basically indicates that a good recommender for one person is not
necessarily good for another. Combining different recommenders may
produce better recommendations, since multiple recommenders together
may cover a wider range of factors that potentially affect users'
behaviors.</p>
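      <p>The pairwise diversity measure is a direct computation over the top-N lists (a short Python transcription of the formula; the function names are ours):</p>

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard index of two recommendation lists (as sets)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def diversity(lists):
    """diversity(L1,...,Ln) = 2/(n(n-1)) * sum over pairs of (1 - Jaccard)."""
    n = len(lists)
    return (2.0 / (n * (n - 1))) * sum(
        1.0 - jaccard(li, lj) for li, lj in combinations(lists, 2))
```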
      </sec>
    </sec>
    <sec id="sec-6">
      <title>2.4 Diversity in User Preferences</title>
      <p>Users may also have different preferences over the aspects on
which recommenders are based (i.e., friendship, geographical
distance, etc.). To understand user preferences, we test the 11
recommenders using Foursquare (Days 1-300 for training and Days
301-360 for testing); this experiment involves 4,694 Foursquare
users. To present their preferences, each user is represented as a
sequence of 11 values, where the i-th value is the performance of
recommender Ri on that user. We then group the users into clusters
with the K-means algorithm; the number of clusters is chosen to get
the best modularity from the resulting clustering. Figure 2(a) shows
the centers of the clusters, each an 11-dimensional vector presented
in parallel coordinates, and Figure 2(b) shows some statistics of
each cluster. For example, the entry "1, 42.55%, 0.67" means that
cluster 1 contains 42.55% of the users and that the variance within
the cluster is 0.67. From this experiment we see that the 11
recommenders perform differently on the different clusters.</p>
      <p>Note that sometimes a recommender may fail to generate a list
of length 10. For example, FCF (R2) requires that the target user
has some friends, but loners do exist in LBSNs. In such cases, we
complement the length-10 list with the most popular locations.</p>
    </sec>
    <sec id="sec-7">
      <title>3. OUR METHOD</title>
      <p>In this section we explain LURA, our framework for combining
different location recommenders. (The name “LURA” comes from
its execution cycle of Learn-Update-Recommend-Aggregate.)
Algorithm 1 outlines the steps of LURA, which can be repeated
every Δt time units (e.g., every 3 days or 1 week), as long as there
is a need for new recommendations and there are sufficient
data. LURA is based on two important elements: recommenders
and user preferences; every time LURA runs, it keeps these
elements up to date. Specifically, if invoked at time t, LURA first
learns the user's current preferences α_u^t(i) (i.e., weights) over
the different recommenders; this is done by testing the
recommendations made at time t − Δt against the actual check-ins
during the period (t − Δt, t] (Line 1). Then, LURA updates the
recommenders to make use of all the data available (Line 2). After
that, LURA makes recommendations to users by aggregating the results
of the component recommenders (Lines 3-4).</p>
      <p>Algorithm 1 LURA(G_t, R^{t−Δt}, u)    (at time t)</p>
      <p>Input: G_t: the snapshot of the LBSN up to the t-th (current)
day; R^{t−Δt}: a set of n recommenders, trained using G_{t−Δt}; u: a
user.
Output: N recommended locations for user u at time t.
(At time t − Δt, each recommender R_i^{t−Δt} has recommended N
locations to u, R_i^{t−Δt}(u) = ⟨ℓ_j^{t−Δt}⟩_{j=1,2,...,N}.)</p>
      <p>1: Learn user u's current preferences, α_u^t(i), on each
recommender R_i^{t−Δt}, based on the recommendations R_i^{t−Δt}(u)
and u's check-in facts during the time period (t − Δt, t].
2: Update R^{t−Δt} to R^t (or rebuild from scratch) using G_t.
3: Recommend: each recommender R_i^t recommends N locations to u, in
the form of N location-score pairs,
R_i^t(u) = ⟨ℓ_j^t, R_i^t(u, ℓ_j^t)⟩_{j=1,2,...,N}.
4: Aggregate the recommendations R^t(u) = {R_i^t(u)}_{i=1,2,...,n}
using the weights α_u^t = (α_u^t(i))_{i=1,2,...,n} to generate the
final recommendation of N locations to u.</p>
      <p>Although maintaining and executing the individual recommenders
(Lines 2-3) is straightforward, LURA's novelty lies in the learning
of the user preferences α_u^t (Section 3.1) and in the strategies for
aggregating the outputs R_i^t(u) of the different recommenders
(Section 3.2). In general, the aggregation is a linear combination:

s^t(u, ℓ; α_u^t, φ) = Σ_{i=1}^{n} α_u^t(i) · φ(R_i^t(u, ℓ)).    (1)

Here R_i^t(u, ℓ) is the score of the user-location pair (u, ℓ)
estimated by recommender R_i^t, and φ(·) is a strategy for handling
individual scores. When there is no ambiguity on α_u^t or φ in the
context, we may omit one or both of them to simplify the
notation.</p>
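      <p>Formula 1 is a weighted sum over the component recommenders; a minimal sketch follows (the function names and data layout are ours; φ defaults to the identity here, with concrete strategies discussed in Section 3.2):</p>

```python
def aggregate_scores(weights, rec_scores, phi=lambda s: s):
    """Formula (1): s(u, l) = sum_i alpha(i) * phi(R_i(u, l)).
    weights: per-recommender weights alpha(i) for this user;
    rec_scores: one {location: score} dict per recommender."""
    locs = set().union(*rec_scores)
    return {l: sum(a * phi(r.get(l, 0.0))
                   for a, r in zip(weights, rec_scores))
            for l in locs}

def top_n(agg, n):
    """Return the n locations with the largest aggregated scores."""
    return sorted(agg, key=agg.get, reverse=True)[:n]
```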
    </sec>
    <sec id="sec-8">
      <title>Learning User Preferences</title>
      <p>At time t, LURA has a set of recommenders R^{t−Δt} =
{R_1^{t−Δt}, R_2^{t−Δt}, ..., R_n^{t−Δt}}, and each R_i^{t−Δt} has
suggested N locations to user u at time t − Δt. New data during the
period (t − Δt, t] are used to evaluate the quality of each R_i^{t−Δt}
with regard to user u. This evaluation eventually results in weights
α_u^t(i), which are then used to guide the aggregation process.</p>
      <p>As user check-in data are implicit, we utilize a pairwise
methodology for preference learning. Consider a user u and her
visiting history up to time t, L_u^t; let L^{t+} = L_u^t \ L_u^{t−Δt}
and L^{t−} = {ℓ | ℓ ∉ L_u^t}. For two locations ℓ ∈ L^{t+} and
ℓ′ ∈ L^{t−}, to some extent it is reasonable to assume that u prefers
ℓ to ℓ′ (denoted as ℓ &gt;_u ℓ′). Clearly, a good recommendation
should be highly consistent with &gt;_u. This means, for the linear
aggregation of Formula 1, pairs (ℓ, ℓ′) ∈ P_u^t = L^{t+} × L^{t−} can
be used to tune the weights α_u^t(i). We consider maximizing the
likelihood Pr(P_u^t | α_u^t), which is the probability of observing
P_u^t given the preference weights α_u^t. Assume that the pairs
(ℓ, ℓ′) ∈ P_u^t are independent; then

Pr(P_u^t | α_u^t) = Π_{(ℓ,ℓ′) ∈ P_u^t} Pr(ℓ &gt;_u ℓ′ | α_u^t).</p>
      <p>(It is worth mentioning that a trivial implementation of LURA
is to put equal weights on the component recommenders. This, however,
usually leads to very bad recommendations due to the diversities we
studied in Section 2 (Figures 1-2). Indeed, the essence of learning
is to identify the good recommenders out of a large population of bad
ones, with regard to some individual user.)</p>
      <p>Computing Pr(ℓ &gt;_u ℓ′ | α_u^t) is nontrivial, but an
intuition is that this probability should be proportional to the
difference between the scores s(u, ℓ; α_u^t) and s(u, ℓ′; α_u^t): the
larger the difference s(u, ℓ; α_u^t) − s(u, ℓ′; α_u^t), the more
confident we will be on concluding ℓ &gt;_u ℓ′. Based on this idea, we
set

Pr(ℓ &gt;_u ℓ′ | α_u^t) = σ(d_{ℓ,ℓ′}),  where
d_{ℓ,ℓ′} = s^{t−Δt}(u, ℓ; α_u^t) − s^{t−Δt}(u, ℓ′; α_u^t)    (2)

and σ(·) is the logistic sigmoid function. The resulting
log-likelihood is maximized by stochastic gradient ascent: in each
iteration j, a sampled pair (ℓ, ℓ′) moves the weights α_u^t along the
gradient ∇^{(j)} d_{ℓ,ℓ′} with a learning rate η ∈ (0, 1) controlling
the step size, where the gradient is

∇^{(j)} d_{ℓ,ℓ′} = [ φ(R_1^{t−Δt}(u, ℓ)) − φ(R_1^{t−Δt}(u, ℓ′)),
φ(R_2^{t−Δt}(u, ℓ)) − φ(R_2^{t−Δt}(u, ℓ′)),
...,
φ(R_n^{t−Δt}(u, ℓ)) − φ(R_n^{t−Δt}(u, ℓ′)) ]ᵀ.    (3)

After M independent iterations, Algorithm 2 terminates with a
personalized weight vector α_u^t (Line 6).</p>
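      <p>The learning step can be sketched as BPR-style stochastic gradient ascent over sampled preference pairs (a simplified sketch: the logistic likelihood, the uniform pair sampling, the function names, and the hyper-parameter values are our own assumptions, not the paper's exact Algorithm 2):</p>

```python
import math
import random

def learn_weights(phi_scores, pos, neg, n_iter=200, eta=0.05, seed=0):
    """Learn per-user recommender weights alpha by maximizing
    sum over sampled pairs of log sigmoid(s(l) - s(l')).
    phi_scores: {location: [phi(R_1(u,l)), ..., phi(R_n(u,l))]};
    pos: locations newly visited in (t - dt, t]; neg: unvisited locations."""
    rng = random.Random(seed)
    n = len(next(iter(phi_scores.values())))
    alpha = [1.0 / n] * n
    for _ in range(n_iter):
        l, lp = rng.choice(pos), rng.choice(neg)
        grad = [a - b for a, b in zip(phi_scores[l], phi_scores[lp])]
        d = sum(w * g for w, g in zip(alpha, grad))
        coef = 1.0 - 1.0 / (1.0 + math.exp(-d))  # d/dd log sigmoid(d)
        alpha = [w + eta * coef * g for w, g in zip(alpha, grad)]
    return alpha
```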
      <p>
        The quality of the samples (Line 3) is of essential importance
in Algorithm 2. Intuitively, if a sampled pair (ℓ, ℓ′) is such that
φ(R_i^{t−Δt}(u, ℓ)) ≈ φ(R_i^{t−Δt}(u, ℓ′)) for all R_i^{t−Δt} (i.e.,
no recommender has a distinct preference between ℓ and ℓ′), then it
does not provide much information for learning (i.e.,
∇^{(j)} d_{ℓ,ℓ′} ≈ 0). Therefore, we use the strategies proposed by
Rendle and Freudenthaler [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] to sample informative pairs. Since the size of L^{t+} is
typically small, the number of samples K is usually proportional to
|L^{t+}|, e.g., K = K_0 · |L^{t+}| for some integer K_0 &gt; 1.
Therefore, sampling in L^{t+} simply means sampling each ℓ ∈ L^{t+}
uniformly at random (i.e., K_0 times on average), and the strategies
for sampling informative pairs in fact focus on the set L^{t−}.
      </p>
      <p>Random sampling (RS). This is the basic strategy. Locations in
L^{t−} are selected uniformly at random, i.e.,

Pr(ℓ′ | u) = 1 / |L^{t−}|.</p>
      <p>Static sampling (SS). This strategy favors popular locations,
i.e., locations with many visitors have higher chances of being
selected. Specifically,

Pr(ℓ′ | u) ∝ exp(−rank(ℓ′) / λ),  λ &gt; 0,

where rank(·) is the “1, 2, 3, 4, ...” ranking of L^{t−} by check-in
frequencies; smaller numbers are assigned to more popular
locations.</p>
      <p>Adaptive sampling (AS). When sampling, this strategy gives
higher chances to locations with higher scores (i.e., locations
considered promising by the recommenders):

Pr(ℓ′ | u) ∝ exp(−rank(u, ℓ′) / λ),  λ &gt; 0,

where rank(u, ℓ′) is the “1, 2, 3, 4, ...” ranking of ℓ′ based on the
score s^{t−Δt}(u, ℓ′). Such promising yet unvisited locations may be
more informative in identifying a user's preferences.</p>
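      <p>Both SS and AS boil down to sampling with probability decaying exponentially in the rank; a small sketch of such a sampler (names and the λ value are illustrative; in SS the ranking is by popularity, in AS by the current score):</p>

```python
import math
import random

def rank_sampler(ranked_locs, lam=10.0, seed=0):
    """Return a sampler with Pr(l') proportional to exp(-rank(l')/lam);
    ranked_locs must be ordered so that position 0 has rank 1."""
    rng = random.Random(seed)
    weights = [math.exp(-(r + 1) / lam) for r in range(len(ranked_locs))]
    total = sum(weights)

    def sample():
        x = rng.random() * total
        for loc, w in zip(ranked_locs, weights):
            x -= w
            if x <= 0:
                return loc
        return ranked_locs[-1]  # numerical safety net

    return sample
```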
    </sec>
    <sec id="sec-9">
      <title>Recommendation Aggregation</title>
      <p>The last step in our LURA framework is to aggregate the
recommendations from all the component recommenders, and then
provide the user u with a final list of N items (Formula 1). We
consider the following two different aggregation strategies.</p>
      <p>Score-based aggregation (SA). This strategy uses the scaled
score of the user-item pair (u, ℓ) estimated by each recommender (at
time t); φ(R_i^t(u, ℓ)) is defined as

φ(R_i^t(u, ℓ)) = R_i^t(u, ℓ) / max_{ℓ′ ∈ L} R_i^t(u, ℓ′),
i = 1, 2, ..., n.</p>
      <sec id="sec-9-1">
        <title>Rank-based aggregation (RA)</title>
        <p>
          This strategy considers the
ranked position of a location ℓ. Given a ranked list of N locations
ℓ_1, ℓ_2, …, ℓ_N, this strategy assigns higher scores to top locations.
In particular, φ(·) is defined as
φ(R_t^i(u, ℓ)) = 1 − (1/N)(rank_i(u, ℓ) − 1),   i = 1, 2, …, n,
where rank_i(u, ℓ) is the ranked position of location ℓ in the
recommendation list provided to u by R_t^i. This strategy is a common
variant of Borda count [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ].
        </p>
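        <p>The two scalings can be sketched as follows (an illustration under our own dictionary-based layout; the helper names are not from the paper):</p>

```python
def scale_scores_sa(scores):
    # Score-based aggregation (SA): divide each recommender score by
    # the maximum score over the candidate locations.
    top = max(scores.values())
    return {loc: s / top for loc, s in scores.items()}

def scale_scores_ra(ranked_list, n):
    # Rank-based aggregation (RA): the location at 1-based rank r
    # receives 1 - (r - 1) / n, a Borda-count-style score.
    return {loc: 1.0 - idx / n for idx, loc in enumerate(ranked_list)}
```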
      </sec>
    </sec>
    <sec id="sec-10">
      <title>EXPERIMENTS</title>
      <p>In this section, we evaluate LURA on Foursquare and Gowalla
datasets. In Section 4.1 we explain the experiment setup as well as
the evaluation metrics. Then, in Section 4.2 we study how different
implementations, i.e., different sampling strategies (Section 3.1)
and aggregation strategies (Section 3.2), affect the performance of
LURA. After that, in Sections 4.3 and 4.4, we compare the best
implementations of LURA with (i) their own component
recommenders, and (ii) other advanced recommenders, respectively.
</p>
    </sec>
    <sec id="sec-11">
      <title>4.1 Setup</title>
      <p>The experiments involve three time periods: (i) a base period
T_base^t = [1, t − Δt], in which the data are used for constructing
component recommenders; (ii) a learning period T_learn^t = (t − Δt, t],
for learning user preferences, updating component recommenders,
and constructing a LURA recommender; and (iii) a testing period
T_test^t = (t, t + Δt] for evaluating the recommenders. Competitor
methods that do not require any learning of user preferences are
built using all data in T_base^t ∪ T_learn^t = [1, t], so that any comparison
between them and LURA is fair.</p>
      <p>To evaluate the performance of a recommender R_t, for each
user u we compare the top-N recommendation R_t(u) =
{ℓ_1, ℓ_2, …, ℓ_N} with L_u^{(t, t+Δt]}, the actual set of locations
visited by u during the testing period T_test^t. In the following, we use
t = 300 for Foursquare and t = 420 for Gowalla; Δt is set to
60 for both datasets. We omit the experiments on varying t due
to the lack of space. All the results at different t are similar, given
sufficient data in the period (t − Δt, t].</p>
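      <p>Concretely, the split of a check-in history into the three periods can be sketched as follows (Δt written as dt; the tuple layout of a check-in is a hypothetical choice of ours):</p>

```python
def split_periods(checkins, t, dt):
    # checkins: iterable of (user, location, timestamp) tuples.
    base = [c for c in checkins if c[2] <= t - dt]       # T_base  = [1, t - dt]
    learn = [c for c in checkins if t - dt < c[2] <= t]  # T_learn = (t - dt, t]
    test = [c for c in checkins if t < c[2] <= t + dt]   # T_test  = (t, t + dt]
    return base, learn, test
```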
      <p>
        Evaluation metrics. Two metrics are commonly used for
location recommendation: Precision@N and Recall@N [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. Let
H_u^t = |R_t(u) ∩ L_u^{(t, t+Δt]}| be the number of correctly predicted
locations with regard to user u (i.e., the number of successfully
predicted locations). Then,
Precision@N = (Σ_{u∈A} H_u^t) / (N · |A|),
Recall@N = (Σ_{u∈A} H_u^t) / (Σ_{u∈A} |L_u^{(t, t+Δt]}|),
where A is the set of active users. For a user u to be active, she must
(i) visit at least 5 new locations in the learning period T_learn^t, so that
LURA can infer her preference, and (ii) visit at least 1 new location
in the testing period T_test^t, so that the evaluation is nontrivial.
      </p>
    </sec>
    <sec id="sec-12">
      <title>4.2 Different Implementations of LURA</title>
      <p>We first study different implementations of LURA, i.e., we
compare different sampling and aggregation strategies presented in
Sections 3.1 and 3.2, and see how they affect the performance of
LURA. We have 3 sampling strategies and 2 aggregation strategies,
thus in total we have 6 different implementations of LURA.</p>
      <p>Figure 3 shows the comparison among these 6
implementations with respect to varying N . As we can see, the performance
of the different implementations is nearly the same, with adaptive
sampling (AS) being slightly better than the other two sampling
strategies in most cases. On the other hand, we observe
that the performance of the aggregation strategies is data-dependent:
on Foursquare, rank-based aggregation (RA) is slightly better than
score-based aggregation (SA), while on Gowalla it is the other way
round. This could be due to the fact that Gowalla has relatively
more data for learning user preferences (29,996 check-ins for 1,307
active users, compared to 38,688 check-ins for 3,942 active users
in Foursquare), and therefore the final scores are more accurate.</p>
      <p>For both Foursquare and Gowalla, precision and recall values
are all at the level of 1%-10%. Such performance of location
recommendation is due to data sparsity, i.e., out of a huge collection
of locations (96,002 in Foursquare and 767,936 in Gowalla) most
people merely visit several to dozens of them.</p>
    </sec>
    <sec id="sec-13">
      <title>4.3 Comparing with the 11 Component Recommenders</title>
      <p>We compare LURA with its 11 component recommenders. We
use the best sampling and aggregation strategies for LURA, i.e.,
we use LURA-ASSA and LURA-ASRA based on the findings in
Section 4.2. The comparison results are shown in Figure 4.</p>
      <p>Both LURA-ASSA and LURA-ASRA outperform all the other
methods in all cases. This justifies that combining different
recommenders may provide better recommendations. Among the 11
component recommenders of LURA, UCF always performs the
best. This means that the behaviors of most users are best reflected
by other similar users. Nonetheless, compared to UCF, LURA can
be better on Foursquare by up to 11.82% in precision and 11.72%
in recall; on Gowalla these numbers are 8.53% and 8.52%,
respectively.</p>
      <p>Considering the small absolute values of precision and recall, we
also carry out tests of statistical significance. Specifically, we run
a paired t-test between the numbers of hits of LURA and UCF, the
best-performing component recommender. The p-values are both
less than 0.01 (2.99 × 10⁻⁴ for Foursquare and 1.79 × 10⁻⁴ for
Gowalla), indicating that the improvement of LURA over UCF is
statistically significant.</p>
    </sec>
    <sec id="sec-14">
      <title>4.4 Comparison with Other Methods</title>
      <p>We also compare LURA with other existing methods that adopt
similar ideas to what we use for LURA.</p>
      <p>
        USG [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. This is a linear combination of UCF, FLCF, and PLM,
i.e., the component recommenders R1, R3, and R8 of LURA.
      </p>
      <p>
        USG(u, ℓ) = (1 − α − β) · φ(R_1(u, ℓ)) + α · φ(R_3(u, ℓ)) + β · φ(R_8(u, ℓ)),
where α and β are parameters, and φ(·) is a rescaling of scores as
used for LURA (Section 3.2). To construct USG at time t,
we follow what [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] has proposed: we randomly select 70% of the
data before time t to construct the three component recommenders,
UCF, FLCF, and PLM, and then use the remaining 30% to determine the
parameters α and β. The final parameters are α = β = 0.1 for
Foursquare, and α = 0.1, β = 0.2 for Gowalla. These
parameters are also consistent with what [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] has suggested. Note that the
parameter settings are the same for all users, i.e., not personalized.
      </p>
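      <p>The linear combination above reduces to a one-line score; a sketch, with the tuned Foursquare parameters as illustrative defaults:</p>

```python
def usg_score(phi_ucf, phi_flcf, phi_plm, alpha=0.1, beta=0.1):
    # USG: weighted sum of the three scaled component scores;
    # alpha and beta are shared by all users (not personalized).
    return (1 - alpha - beta) * phi_ucf + alpha * phi_flcf + beta * phi_plm
```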
      <p>
        iGLSR [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ]. This method combines GCF and KDM, i.e., the
component recommenders R4 and R9 of LURA, having a clear
focus on the geographical relationships between locations.
Different from USG and LURA which aggregate scores linearly, iGLSR
adopts the following combination:
      </p>
      <p>iGLSR(u, ℓ) = φ(R_4(u, ℓ)) · φ(R_9(u, ℓ)),
where φ(·) is as in USG and LURA.</p>
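      <p>In code, the contrast with USG's linear form is simply a product (sketch; argument names are ours):</p>

```python
def iglsr_score(phi_gcf, phi_kdm):
    # iGLSR: multiplicative combination of the scaled GCF and KDM scores,
    # so a near-zero score from either component vetoes the location.
    return phi_gcf * phi_kdm
```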
      <p>
        RankBoost [
        <xref ref-type="bibr" rid="ref9">9</xref>
]. This is a method to combine a given set of weak
recommenders. For a target user u, the training data for RankBoost
is P_u^t = L_t^+ × L_t^-, as what is fed to LURA (Section 3.1).
RankBoost takes K iterations to get a “boosted” recommender. During
the k-th iteration, it aims at maximizing the discrimination between
L_t^+ and L_t^-,
      </p>
      <p>Z_k = Σ_{(ℓ, ℓ′) ∈ P_u^t} D_k(ℓ, ℓ′) · (φ(h_k(u, ℓ)) − φ(h_k(u, ℓ′))),
where D_k is a weight distribution over location pairs in P_u^t,
measuring how important the pair (ℓ, ℓ′) currently is; φ(·) is for scaling
recommendation scores as introduced in Section 3.2; and h_k is the
selected weak recommender during the k-th iteration. While
pursuing this goal, RankBoost estimates a weight α_k for the selected
recommender h_k and updates the distribution D_k. Eventually,
recommendations are made based on the following scoring function:
RankBoost(u, ℓ) = Σ_{k=1}^{K} α_k · φ(h_k(u, ℓ)).
Similar to LURA, RankBoost also has two options for φ(·):
score-based (SA) and rank-based (RA). In our experiments we consider
both options, i.e., we have RankBoost-SA and RankBoost-RA as
two different implementations of RankBoost.</p>
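      <p>The final RankBoost scorer is then a weighted sum over the K selected weak recommenders; a sketch, where alphas stands for the learned weights α_k:</p>

```python
def rankboost_score(weak_scores, alphas):
    # weak_scores: the scaled scores phi(h_k(u, l)) for k = 1..K.
    # alphas: the weights learned for the K weak recommenders.
    return sum(a * s for a, s in zip(alphas, weak_scores))
```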
      <p>
        BPRMF [
        <xref ref-type="bibr" rid="ref20">20</xref>
]. This method also takes a pairwise view of
locations, i.e., it considers (ℓ, ℓ′) ∈ P_u^t as training samples. Given the
check-in frequency matrix at time t, C_t, BPRMF manages to
maximize the posterior probability Pr(Θ | P_u^t), where Θ is a
completion of the matrix C_t (i.e., Θ has an estimate for every unknown
(u, ℓ) entry in C_t). BPRMF transforms this posterior probability
using Bayes’ theorem, and optimizes Pr(P_u^t | Θ) · Pr(Θ) using
a stochastic gradient descent method.
      </p>
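      <p>One stochastic gradient step of the BPR objective (maximizing ln σ(x) with x the score difference between a preferred and a less-preferred location) can be sketched as follows; the dict-of-lists factor layout and all names are our own simplification, not the BPRMF implementation:</p>

```python
import math

def bpr_sgd_step(U, V, u, i, j, lr=0.05, reg=0.01):
    # U: user latent vectors; V: location latent vectors.
    # Triple (u, i, j): user u prefers location i over location j.
    x = sum(pu * (vi - vj) for pu, vi, vj in zip(U[u], V[i], V[j]))
    g = 1.0 / (1.0 + math.exp(x))  # sigma(-x), gradient of ln sigma(x)
    for f in range(len(U[u])):
        uf, vif, vjf = U[u][f], V[i][f], V[j][f]
        U[u][f] += lr * (g * (vif - vjf) - reg * uf)
        V[i][f] += lr * (g * uf - reg * vif)
        V[j][f] += lr * (-g * uf - reg * vjf)
```

      <p>Each step nudges the factors so that the score of the preferred location rises relative to the other one.</p>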
      <p>
        SBPR [
        <xref ref-type="bibr" rid="ref31">31</xref>
]. SBPR integrates social information into BPRMF. It
divides items into three disjoint sets: positive items I^+, negative
items I^-, and social items I^S. It assumes that items in I^+ are more
preferable than those in I^S, while those in I^- are the least
preferable. Two partial orders are thus used in a BPR-like framework.
      
      <p>
        GeoMF [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. GeoMF integrates geographical information into
weighted MF methods [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Specifically, it assumes that a
location may have a Gaussian impact in its neighboring area, which is
considered as weights in an MF framework.
      </p>
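      <p>The Gaussian influence assumption can be sketched as a kernel weight (sigma is an illustrative bandwidth of ours, not a value from the paper):</p>

```python
import math

def gaussian_influence(dist_km, sigma=1.0):
    # Weight of a location's influence on a point dist_km away; it decays
    # like a normal kernel and enters the weighted-MF objective as a weight.
    return math.exp(-dist_km ** 2 / (2 * sigma ** 2))
```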
      <p>The comparison results are shown in Figure 5. USG and
RankBoost consistently perform the best among methods other than
LURA. LURA-ASSA and LURA-ASRA are clearly superior to
USG and RankBoost. In Foursquare, LURA can outperform the
best of USG and RankBoost by up to 7.16% in precision and 6.22%
in recall; in Gowalla, these numbers are 7.35% and 8.12%
respectively. Compared to USG which adopts unified parameters for
all users, the performance improvement of LURA comes from its
awareness of users’ preferences over recommenders.</p>
      <p>Similar to what we have done in Section 4.3, we also run a
paired t-test on the numbers of hits of LURA and USG, the
best-performing competitor. The p-values are both less than 0.05 (0.021
for Foursquare and 0.005 for Gowalla), implying that LURA’s
improvement over existing methods is also statistically significant.</p>
    </sec>
    <sec id="sec-15">
      <title>RELATED WORK</title>
    </sec>
    <sec id="sec-16">
      <title>5.1 Other POI Recommendation Methods</title>
      <p>
        GPS data from mobile services are also used for location
recommendation. Zhang et al. [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ] analyzed GPS trajectories to discover
points of interests for recommendation. Zhang et al. [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ] studied
location-and-activity recommendation using GPS data, where
activities could be various human behaviors such as shopping,
watching movies, etc. Leung et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] proposed a co-clustering
framework for location recommendation based on user-activity-location
tripartite graphs. Some recent studies view the location
recommendation problem from the perspective of topic modeling (e.g., [
        <xref ref-type="bibr" rid="ref17 ref27">17, 27</xref>
        ]).
In general, with additional information such as location contents,
user-provided tips, etc., these methods build topic models for users
and locations, based on which the likelihood of a user visiting a
location can be estimated. In addition, Yuan et al. [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ]
considered time-dependent location recommendation, arguing that the
locations recommended to a user at lunch time should be different
from the ones recommended at leisure time.
      </p>
    </sec>
    <sec id="sec-17">
      <title>5.2 Recommender Ensemble</title>
      <p>
        Ensemble techniques have been used to combine several simple
recommenders to form a stronger one. To name a few, Bar et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]
studied different ways to combine simple CF methods, including
bagging, boosting, fusion, and randomness injection; Schclar et
al. [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] considered AdaBoost regression on a neighbor-based CF
method. Experiments on movie ratings data showed that these
ensemble methods had improved performance. Besides, Tiemann and
Pauws [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] considered ensembles of content-based methods, which
succeeded in recommending music and news.
      </p>
      <p>These ensemble methods attempt to minimize the root mean
square error (RMSE) on ratings data. However, as we have
emphasized earlier in this paper, check-in data are implicit, and thus
pursuing numerical estimations of check-in frequencies will
essentially be a distraction from the goal of location recommendation.
5.3</p>
      <p>(Personalized) Learning to Rank</p>
      <p>
        The problem of learning to rank is to determine a ranking among
a set of items. Given some training data (e.g., observed user
clickthroughs that imply preference over search results), the problem
is to find a ranking function that best agrees with the training data,
which can be done by machine learning techniques (e.g., [
        <xref ref-type="bibr" rid="ref13 ref4 ref5">4, 5, 13</xref>
        ]).
Recently, Wang et al. [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ] and Song et al. [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] studied personalized
learning to rank, where personalized rankings were adapted from a
global user-independent ranking.
      </p>
      <p>The problem of (personalized) learning to rank is related to our
problem in general, in that they both pursue the best ranking of
items. However, our problem is to infer user preference over
different recommenders, which are themselves ranking functions over
items (locations); the above-mentioned learning to rank techniques
are thus not directly applicable to solve our problem.</p>
    </sec>
    <sec id="sec-18">
      <title>CONCLUSION</title>
      <p>In this paper we study the problem of location
recommendation. We first investigate two real-life LBSN datasets,
discovering diversity in (i) the recommendations generated by representative
recommenders, and (ii) user preferences over the recommenders.
Based on these observations, we consider personalized location
recommendation by inferring user preferences over multiple
recommenders. We propose a LURA framework to achieve our goal
and test it with extensive experiments. The results show that LURA
achieves significant performance improvements over existing
recommendation methods. In the future, we plan to investigate other
possible ways to aggregate recommenders.</p>
    </sec>
    <sec id="sec-19">
      <title>ACKNOWLEDGMENTS</title>
      <p>The authors would like to acknowledge the support from Hong
Kong RGC (Grant No. HKU715413E) and the National Natural
Science Foundation of China (Grant No. 61432008).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G.</given-names>
            <surname>Adomavicius</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Tuzhilin</surname>
          </string-name>
          .
          <article-title>Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions</article-title>
          .
          <source>TKDE</source>
          ,
          <volume>17</volume>
          (
          <issue>6</issue>
          ):
          <fpage>734</fpage>
          -
          <lpage>749</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zheng</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Mokbel</surname>
          </string-name>
          .
          <article-title>Location-based and preference-aware recommendation using sparse geo-social networking data</article-title>
          .
          <source>In SIGSPATIAL GIS</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Rokach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Shani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Shapira</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Schclar</surname>
          </string-name>
          .
          <article-title>Improving simple collaborative filtering models using ensemble methods</article-title>
          .
          <source>In MCS</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Burges</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ragno</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Le</surname>
          </string-name>
          .
          <article-title>Learning to rank with nonsmooth cost functions</article-title>
          .
          <source>In NIPS</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C.</given-names>
            <surname>Burges</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Shaked</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Renshaw</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lazier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Deeds</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Hamilton</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Hullender</surname>
          </string-name>
          .
          <article-title>Learning to rank using gradient descent</article-title>
          .
          <source>In ICML</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E.</given-names>
            <surname>Cho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Myers</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Leskovec</surname>
          </string-name>
          .
          <article-title>Friendship and mobility: User movement in location-based social networks</article-title>
          .
          <source>In KDD</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ding</surname>
          </string-name>
          and
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          .
          <article-title>Time weight collaborative filtering</article-title>
          .
          <source>In CIKM</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>C.</given-names>
            <surname>Dwork</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Naor</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Sivakumar</surname>
          </string-name>
          .
          <article-title>Rank aggregation methods for the web</article-title>
          .
          <source>In WWW</source>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Freund</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Iyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. E.</given-names>
            <surname>Schapire</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Singer</surname>
          </string-name>
          .
          <article-title>An efficient boosting algorithm for combining preferences</article-title>
          .
          <source>JMLR</source>
          ,
          <volume>4</volume>
          :
          <fpage>933</fpage>
          -
          <lpage>969</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>H.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Liu</surname>
          </string-name>
          .
          <article-title>gSCorr: Modeling geo-social correlations for new check-ins on location-based social networks</article-title>
          .
          <source>In CIKM</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Halvey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Punitha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hannah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Villa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hopfgartner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Goyal</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Jose</surname>
          </string-name>
          .
          <article-title>Diversity, assortment, dissimilarity, variety: A study of diversity measures using low level features for video retrieval</article-title>
          .
          <source>In ECIR</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Koren</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Volinsky</surname>
          </string-name>
          .
          <article-title>Collaborative filtering for implicit feedback datasets</article-title>
          .
          <source>In ICDM</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>T.</given-names>
            <surname>Joachims</surname>
          </string-name>
          .
          <article-title>Optimizing search engines using clickthrough data</article-title>
          .
          <source>In KDD</source>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>I.</given-names>
            <surname>Konstas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Stathopoulos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Jose</surname>
          </string-name>
          .
          <article-title>On social networks and collaborative recommendation</article-title>
          .
          <source>In SIGIR</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>K. W.-T.</given-names>
            <surname>Leung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. L.</given-names>
            <surname>Lee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.-C.</given-names>
            <surname>Lee</surname>
          </string-name>
          .
          <article-title>CLR: A collaborative location recommendation framework based on co-clustering</article-title>
          .
          <source>In SIGIR</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>D.</given-names>
            <surname>Lian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Chen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Rui</surname>
          </string-name>
          .
          <article-title>GeoMF: Joint geographical modeling and matrix factorization for point-of-interest recommendation</article-title>
          .
          <source>In KDD</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>B.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yao</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Xiong</surname>
          </string-name>
          .
          <article-title>Learning geographical preferences for point-of-interest recommendation</article-title>
          .
          <source>In KDD</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Aberer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Miao</surname>
          </string-name>
          .
          <article-title>Personalized point-of-interest recommendation by mining users' preference transition</article-title>
          .
          <source>In CIKM</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>S.</given-names>
            <surname>Rendle</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Freudenthaler</surname>
          </string-name>
          .
          <article-title>Improving pairwise learning for item recommendation from implicit feedback</article-title>
          .
          <source>In WSDM</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>S.</given-names>
            <surname>Rendle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Freudenthaler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Gantner</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Schmidt-Thieme</surname>
          </string-name>
          .
          <article-title>BPR: Bayesian personalized ranking from implicit feedback</article-title>
          .
          <source>In UAI</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>B.</given-names>
            <surname>Sarwar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Karypis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Konstan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Riedl</surname>
          </string-name>
          .
          <article-title>Item-based collaborative filtering recommendation algorithms</article-title>
          .
          <source>In WWW</source>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>A.</given-names>
            <surname>Schclar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tsikinovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Rokach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Meisels</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Antwarg</surname>
          </string-name>
          .
          <article-title>Ensemble methods for improving the performance of neighborhood-based collaborative filtering</article-title>
          .
          <source>In RecSys</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>X.</given-names>
            <surname>He</surname>
          </string-name>
          .
          <article-title>Adapting deep RankNet for personalized search</article-title>
          .
          <source>In WSDM</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tiemann</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Pauws</surname>
          </string-name>
          .
          <article-title>Towards ensemble learning for hybrid music recommendation</article-title>
          .
          <source>In RecSys</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.-W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. W.</given-names>
            <surname>White</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.</given-names>
            <surname>Chu</surname>
          </string-name>
          .
          <article-title>Personalized ranking model adaptation for web search</article-title>
          .
          <source>In SIGIR</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.-C.</given-names>
            <surname>Lee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.-L.</given-names>
            <surname>Lee</surname>
          </string-name>
          .
          <article-title>Exploiting geographical influence for collaborative point-of-interest recommendation</article-title>
          .
          <source>In SIGIR</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>H.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Cui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Hu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Chen</surname>
          </string-name>
          .
          <article-title>LCARS: A location-content-aware recommender system</article-title>
          .
          <source>In KDD</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>J. J.-C.</given-names>
            <surname>Ying</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. H.-C.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.-N.</given-names>
            <surname>Kuo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V. S.</given-names>
            <surname>Tseng</surname>
          </string-name>
          .
          <article-title>Urban point-of-interest recommendation by mining user check-in behaviors</article-title>
          .
          <source>In UrbComp</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Cong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sun</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N. M.</given-names>
            <surname>Thalmann</surname>
          </string-name>
          .
          <article-title>Time-aware point-of-interest recommendation</article-title>
          .
          <source>In SIGIR</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>J.-D.</given-names>
            <surname>Zhang</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.-Y.</given-names>
            <surname>Chow</surname>
          </string-name>
          .
          <article-title>iGSLR: Personalized geo-social location recommendation - a kernel density estimation approach</article-title>
          .
          <source>In SIGSPATIAL GIS</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>T.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. J.</given-names>
            <surname>McAuley</surname>
          </string-name>
          , and
          <string-name>
            <given-names>I.</given-names>
            <surname>King</surname>
          </string-name>
          .
          <article-title>Leveraging social connections to improve personalized ranking for collaborative filtering</article-title>
          .
          <source>In CIKM</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>V. W.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Xie</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Q.</given-names>
            <surname>Yang</surname>
          </string-name>
          .
          <article-title>Collaborative location and activity recommendations with GPS history data</article-title>
          .
          <source>In WWW</source>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Xie</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.-Y.</given-names>
            <surname>Ma</surname>
          </string-name>
          .
          <article-title>Mining interesting locations and travel sequences from GPS trajectories</article-title>
          .
          <source>In WWW</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>