<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>October</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>A Contextual Modeling Approach to Context-Aware Recommender Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Umberto Panniello</string-name>
          <email>u.panniello@poliba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michele Gorgoglione</string-name>
          <email>m.gorgoglione@poliba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Polytechnic of Bari</institution>
          ,
          <addr-line>Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2011</year>
      </pub-date>
      <volume>23</volume>
      <issue>2011</issue>
      <abstract>
        <p>Methods for generating context-aware recommendations have been classified into the pre-filtering, post-filtering and contextual modeling approaches. This paper proposes a novel type of contextual modeling (CM) based on the contextual neighbors approach and introduces four specific contextual neighbors methods. It compares these four contextual neighbors techniques to determine the best-performing alternative among them. It then compares this best-of-breed method with the contextual pre-filtering, post-filtering and un-contextual methods to determine how well the CM approach fares against other context-aware recommendation techniques.</p>
      </abstract>
      <kwd-group>
        <kwd>Recommender systems</kwd>
        <kwd>pre-filtering</kwd>
        <kwd>post-filtering</kwd>
        <kwd>contextual modeling</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>
        To incorporate contextual information into recommender systems
(RSes), a new subfield, called CARS (Context-Aware
Recommender Systems), has recently emerged. Several
approaches to incorporating contextual information into
recommender systems have previously been proposed in the
literature [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In particular, they are categorized into contextual
modeling, pre-filtering and post-filtering methods [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Although
the contextual pre- and post-filtering methods have been
studied before, e.g. in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], the contextual modeling
methods have been little explored. Among the few attempts, [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]
and [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] proposed two approaches to include context in the
recommendation engine. In this paper, we study the contextual
modeling (CM) methods and propose a specific type of CM that
we call contextual neighbors. We also propose four specific types
of contextual neighbors methods, called Mdl1, Mdl2, Mdl3 and
Mdl4. Context-aware RSes extend the classical utility function to
R: Users × Items × Context → Ratings,
where Context is a set of contextual variables, each such variable
K having a hierarchical structure defined by a set of q atomic
variables, i.e., K = (K1,…, Kq) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Further, the values taken by
variable Kq define finer (more granular) levels, while those of K1
define coarser (less granular) levels of contextual knowledge [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. For example,
Figure 1(a, b) presents the hierarchies for the contextual variables
“Season” and “Intent of the purchase”, respectively, that we use in
the study presented in Section 4.
      </p>
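      <p>As an illustration of such a hierarchy, the following minimal sketch (ours, not the authors' code) encodes a two-level contextual variable like “Season” from Figure 1(a): the K2 values are the fine-grained contexts, and each maps to its coarser K1 parent. The category labels follow the description in Section 4; everything else is illustrative.</p>

```python
# Minimal sketch of a two-level contextual variable ("Season"),
# with the fine level K2 and the coarse level K1 of Figure 1(a).
# Labels follow the paper's description; the encoding is ours.

# Map each fine-grained K2 value to its coarser K1 parent.
SEASON_HIERARCHY = {
    "Winter Holiday": "Winter",
    "Winter Not Holiday": "Winter",
    "Summer Holiday": "Summer",
    "Summer Not Holiday": "Summer",
}

def generalize(k2_value: str) -> str:
    """Move one level up the hierarchy (K2 -> K1)."""
    return SEASON_HIERARCHY[k2_value]

print(generalize("Winter Holiday"))  # Winter
```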
      <p>
        The function R can be of the following two types. In the
ratings-based RSes, users rate some of the items that they have seen in the
past by specifying how much they liked these items. Alternatively,
in the transaction-based RSes, function R defines the utility of an
item for a user and is usually specified either as (a) a Boolean
variable indicating if the user bought a particular item or not, (b)
as the purchasing frequency of an item, or (c) as a click-through
rate (CTR) of various Web objects (URLs, ads, etc.) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In this
paper we follow the transaction-based approach and measure the
utility of product j for user i with the purchasing frequency xij
specifying how often user i purchased product j.
As proposed in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and shown in Figure 2, this estimation can be
done using the following three types of methods, each of them
starting with data (on users, items, ratings and contextual
information) and producing contextual
recommendations:
1. Contextual pre-filtering (PreF): contextual information is
used to filter out irrelevant ratings before they are used for
computing recommendations using classical (2D) methods.
2. Contextual post-filtering (PoF): contextual information is
used after the classical (2D) recommendation methods are applied
to the standard (non-contextual) recommendation data.
3. Contextual modeling (CM): contextual information is used
inside the recommendation-generating algorithms.
      </p>
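      <p>The three approaches above can be outlined as follows. This is an illustrative sketch (not the paper's implementation): recommend_2d stands for an arbitrary classical 2D method, and the ratings dictionary keyed by (user, item, context) is a hypothetical data layout.</p>

```python
# Illustrative sketch of where context enters in PreF and PoF.
# Contextual modeling (CM), by contrast, would use the context
# inside the recommendation algorithm itself.
from collections import defaultdict

def recommend_2d(ratings):
    """Placeholder classical (2D) recommender: rank items by mean utility."""
    totals, counts = defaultdict(float), defaultdict(int)
    for (_user, item, _ctx), r in ratings.items():
        totals[item] += r
        counts[item] += 1
    return sorted(totals, key=lambda j: totals[j] / counts[j], reverse=True)

def pre_filtering(ratings, context):
    """PreF: keep only data matching the target context, then apply the 2D method."""
    relevant = {key: r for key, r in ratings.items() if key[2] == context}
    return recommend_2d(relevant)

def post_filtering(ratings, context, is_relevant):
    """PoF: run the 2D recommender on all data, then filter by contextual relevance."""
    return [j for j in recommend_2d(ratings) if is_relevant(j, context)]
```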
      <p>
        The work presented in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] helped researchers to understand
different aspects of using the contextual information in the
recommendation process. However, [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] did not examine which of
these methods are more effective for providing contextual
recommendations. To address this issue, Panniello et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
proposed certain contextual pre- and post-filtering approaches and
compared them among themselves and also with the un-contextual
(2D) approach to determine which one is better. More
specifically, [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] proposed the Weight and Filter post-filtering
approaches and the exact pre-filtering (EPF) method. Because of
space limits, we do not present the details, which can be found in
the original paper. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] shows that the comparison of the
un-contextual and the contextual RSes depends very significantly on
the type of the post-filtering method used. The results also
suggested that there is a big difference between good and bad
post-filtering approaches in terms of performance measures. For
example, the performance differences between the Filter and
Weight methods range between 37% and 90% for the F-measure
across different datasets and vary between 2.5 and 17 for the
MAE and between 1 and 3.5 for the RMSE metric.
      </p>
    </sec>
    <sec id="sec-2">
      <title>3. CONTEXTUAL MODELING</title>
      <p>In this section we present a new CM method, called the
contextual neighbors CM, and see how it compares against the pre- and the
post-filtering methods. This approach is based on user-based
collaborative filtering and works as follows. First, for each user i
and context k, we define the user profile in context k, i.e. the
contextual profile Prof(i, k). For example, if contextual variable k
has two values (e.g., Winter and Summer), then we have two
contextual profiles for each user, one for the Winter and the other
for the Summer.</p>
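      <p>Under our reading of the profiling technique described below, a contextual profile can be sketched as the vector of purchase frequencies of one user in one context. The transaction tuples and product list here are illustrative, not the paper's data.</p>

```python
# Sketch of Prof(i, k): the vector of purchase frequencies x_ijk of
# user i over the n products, restricted to context k.
from collections import Counter

def contextual_profile(transactions, user, context, products):
    """Prof(i, k) = (r_i1k, ..., r_ink), where r_ijk is the purchase frequency x_ijk."""
    freq = Counter(item for (u, item, k) in transactions
                   if u == user and k == context)
    return [freq[j] for j in products]

tx = [("u1", "tv", "Winter"), ("u1", "tv", "Winter"), ("u1", "camera", "Summer")]
print(contextual_profile(tx, "u1", "Winter", ["tv", "camera"]))  # [2, 0]
```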
      <p>
        Note that these contextual profiles can be defined in many
different ways, some of which are presented in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] [
        <xref ref-type="bibr" rid="ref11 ref6">6, 11</xref>
        ], and our
approach does not depend on any particular choice of a profiling
method. However, in the experimental study described in Section
4 we use the following specific contextual profiling technique. As
explained in Section 2, we follow the transaction-based approach
to RSes and measure the utility rijk of product j for user i in
context k with the purchasing frequency xijk specifying how often
user i purchased product j in context k. Then we use this measure
to define the contextual profile as Prof(i, k) = (ri1k, …, rink).
We use these profiles to define similarity among users and to
find the N nearest “neighbors” of user i in context k, where
“neighbors” are determined using contextual profiles Prof(i’, k’)
and similarity measures between the profiles. In order to focus on
the comparison among CARS, we used a popular CF approach as
the common method for all the CARS that we compare, although
much research has produced more sophisticated methods, and we
measured the distance using cosine similarity in our
experiments. What distinguishes the different contextual
neighbors approaches is the way context is used to form the
neighborhood. We find the N pairs (i’, k’) such that the similarity
between their profiles and Prof(i, k) is the largest among all the
candidate pairs, subject to the following constraints:
• Mdl1: There are no constraints on the set of (i’, k’) pairs, and
we select N pairs that are the most similar to (i, k).
• Mdl2: we select an equal proportion of pairs (i’, k’)
corresponding to each context k (e.g., if the contextual variable
has only two values, Winter and Summer respectively, and the
neighborhood size is 80, we select 40 neighbors from Winter and
40 from Summer).
• Mdl3: we select N pairs (i’, k’) that are the most similar to (i, k)
corresponding to each context k at the same level of the context of
interest (e.g., if the context of interest is “Winter Holiday” in Fig.
1(a), we select the neighborhood by using only profiles referred to
level K2 of that contextual variable).
• Mdl4: we select an equal proportion of pairs (i’, k’)
corresponding to each context k at the same level of the context of
interest (e.g., if the context of interest is “Winter Holiday” in Fig.
1(a) and the neighborhood size is 80, we define the neighborhood
by using 20 users from the context “Winter Holiday”, 20 users
from the context “Winter Not Holiday”, 20 users from the context
“Summer Holiday” and 20 users from the context “Summer Not
Holiday”).
      </p>
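      <p>Under our assumptions (not the authors' code), the neighborhood selection for Mdl1 and Mdl2 can be sketched as below; Mdl3 and Mdl4 restrict the candidate pool analogously by context level. The cosine similarity follows the text, while the data layout is illustrative.</p>

```python
# Sketch of neighbor selection: Mdl1 picks the n most similar
# (user, context) pairs without constraints; Mdl2 takes an equal
# share of neighbors from each context value.
import math

def cosine(p, q):
    """Cosine similarity between two profile vectors."""
    num = sum(a * b for a, b in zip(p, q))
    den = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return num / den if den else 0.0

def mdl1(target_profile, profiles, n):
    """Mdl1: the n pairs (i', k') most similar to the target, unconstrained."""
    ranked = sorted(profiles,
                    key=lambda pair: cosine(target_profile, profiles[pair]),
                    reverse=True)
    return ranked[:n]

def mdl2(target_profile, profiles, n, contexts):
    """Mdl2: an equal proportion of neighbors from each context value."""
    share = n // len(contexts)
    neighbors = []
    for k in contexts:
        pool = {pair: p for pair, p in profiles.items() if pair[1] == k}
        neighbors += mdl1(target_profile, pool, share)
    return neighbors
```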
      <p>After selecting the neighbors, we used their contextual profiles to
make the rating predictions. Having introduced the contextual
neighbors approach and its four implementations Mdl1, Mdl2,
Mdl3 and Mdl4, we next want to (a) compare them to determine
which one is the best among them, and (b) see how it compares
against the previously studied pre- and post-filtering methods.</p>
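      <p>The prediction step is not detailed here; a plausible sketch, assuming the standard user-based CF similarity-weighted average over the selected neighbors' contextual profiles, is the following (the function name and signature are ours).</p>

```python
# Assumed prediction step: estimate the utility of one item as a
# similarity-weighted mean of the neighbors' utilities for that item.
def predict(neighbor_profiles, sims, item_index):
    """Weighted average of the neighbors' utilities for one item."""
    num = sum(s * p[item_index] for s, p in zip(sims, neighbor_profiles))
    den = sum(abs(s) for s in sims)
    return num / den if den else 0.0

# Two neighbors with similarities 0.9 and 0.5 to the target profile:
print(predict([[2, 0], [1, 1]], [0.9, 0.5], 0))
```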
    </sec>
    <sec id="sec-3">
      <title>4. EXPERIMENTAL SETUP</title>
      <p>
        In this study, we compared the four types of CM methods,
contextual modeling vs. the un-contextual case, and pre- vs.
post-filtering vs. contextual modeling recommendations across a wide
range of experimental settings. First, we selected two different
data sets having contextual information. The first dataset (DB1)
comes from an e-commerce website commercially operating in a
certain European country which sells electronic products. For this
dataset, we selected the time of the year (or Season) as a
contextual variable (Fig. 1(a)). The classification into Summer or
Winter and Holiday or Not Holiday is based on the experience of
the CEO of the e-commerce website used in our study.
The second dataset (DB2) is taken from the study described in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
The key contextual information elicited from the students was the
intent of a purchase (IntentOfPurchase), and it was hierarchically
classified as in Fig. 1(b).
      </p>
      <p>
        In our study, we recommend product categories instead of
individual items because the e-commerce applications that we
consider have very large numbers of items (hundreds of thousands
or even millions). Therefore, if single items were used, the
conversion from implicit to explicit ratings would not work due to
the low amount of rated data (e.g., many of the products were not
purchased at all). We tried different item aggregation strategies
and found that the best results are for 14 categories for DB1 and
24 categories for DB2. In particular, we performed experiments
varying the number of categories and we found that each
recommender system reached the best performances with these
levels of aggregation. For our two datasets, we aggregated items
into categories of products according to the classification
provided by the Web site product catalogue. When using a
context-aware recommender system it is useful to recommend a
category instead of a product because users may not know what
categories to look for in a specific context (for example, one may
want to receive a recommendation about a category of
items in an unfamiliar context, such as a gift for a child).
The utility of items for the customers was measured by the
purchasing frequencies, as described in Section 2 for the
transaction-based RSes. Estimations of unknown utilities were
done by using a standard user-based collaborative filtering (CF)
method [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>The neighborhood size N was set to N = 80 users as follows. We
performed an experiment where we varied the neighborhood size,
moving from 30 to 200 users, and we computed the F-measure.
We performed this experiment for each dataset. In general, the
F-measure increased as we increased the number of neighbors.
However, these improvement gains stopped when we set the
neighborhood size around 80, and the performance decreased
when it went over 80 users. Therefore, we set N = 80 users as an
appropriate neighborhood size for our experiments.</p>
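      <p>The tuning procedure just described amounts to a simple sweep over candidate sizes; in this sketch, evaluate_f_measure is a hypothetical stand-in for training the recommender and scoring it on the validation set.</p>

```python
# Sketch of the neighborhood-size sweep: try each candidate N and
# keep the one with the highest validation F-measure.
def best_neighborhood_size(evaluate_f_measure, sizes=range(30, 201, 10)):
    """Return the neighborhood size with the best F-measure."""
    return max(sizes, key=evaluate_f_measure)

# Toy score that peaks near N = 80, mimicking the behavior reported above:
print(best_neighborhood_size(lambda n: -abs(n - 80)))  # 80
```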
      <p>When comparing the pre-, the post-filtering and the contextual
modeling methods, we used the two post-filtering approaches
(Weight and Filter), the exact pre-filtering (EPF) method and the
four contextual modeling methods Mdl1, Mdl2, Mdl3 and Mdl4
described above. Furthermore, we used the same user-based CF
method for estimating unknown ratings in the pre-filtering, the
post-filtering and the CM cases to make sure that we compare “apples
with apples”. Since our aim was to compare different contextual
approaches rather than to find the best contextual approach, we
used a well-known collaborative filtering method instead of a newer,
but less established, recommendation engine.</p>
      <p>
        Further, we have performed t-tests in order to determine if the
chosen contextual variables matter. The results of these tests
demonstrated that the contextual variables Season and
IntentOfPurchase matter (i.e., result in statistically significant
differences in ratings across the values of the contextual variable
at the 95% confidence level). We used Precision, Recall, F-Measure, Mean Absolute
Error (MAE) and Root Mean Square Error (RMSE) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] as
performance measures in our experiments. To this end, we divided
each dataset into a training and a validation set, the training
set containing 2/3 and the validation set 1/3 of the whole dataset.
For the DB1 dataset, the first two years were the training set and
the third year was the validation set. For the DB2 dataset, we
randomly split it into 2/3 for the training set and the remaining 1/3
for the validation set (in this case, it was impossible to make a
good temporal split because all the transactions were made within
a couple of months).
      </p>
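      <p>For reference, the top-N accuracy measures can be computed as below. This is the standard formulation, with Precision and Recall taken against the purchases observed in the validation set; the variable names are ours.</p>

```python
# Standard Precision, Recall and F-measure for a top-N list,
# evaluated against the items actually purchased in the validation set.
def precision_recall_f(recommended, relevant):
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f

p, r, f = precision_recall_f(["tv", "camera", "phone"], ["tv", "laptop"])
print(round(p, 2), round(r, 2), round(f, 2))  # 0.33 0.5 0.4
```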
    </sec>
    <sec id="sec-4">
      <title>5. RESULTS</title>
      <p>
        First of all we compared the four contextual modeling approaches
among themselves across each experimental setting. In particular,
Fig. 3 shows the comparison between the four CM approaches
(namely Mdl1, Mdl2, Mdl3 and Mdl4) for each dataset. Because of
space limits, we only show the graphs of F-measure, Recall and
RMSE for the two databases. The graphs of Precision and MAE
are very similar to those of F-measure and RMSE, respectively.
Fig. 3 demonstrates that the performances of the four CM
approaches are not remarkably different. The difference between
Mdl1 and Mdl2 for DB1 is 0.008, 0.13 and 0.02 in terms of the
F-measure, MAE and RMSE, respectively, and for DB2 it is 0.09,
0.17 and 0.18, respectively, which is not very significant. In
comparison, performance differences between various pre- and
post-filtering methods, as reported in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], are much more
pronounced (as an example, [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
reports that the post-filtering Filter PoF method outperformed exact
pre-filtering on DB1 by 0.21 and on DB2 by 0.4 in terms of the
F-measure). Also, Mdl1 slightly dominates the other Mdl methods in
some cases (see Fig. 3(b)) and is very close to Mdl2 in other cases
(see Fig. 3(a)). This makes sense because the N neighbors are
selected for Mdl1 in an unconstrained manner, whereas they are
selected according to various types of constraints for the other
three approaches. Since Mdl1 (slightly) outperforms the other Mdl
methods, we selected it as the “best-of-breed” and will use it for
the comparison with the pre- and the post-filtering methods in the
rest of the paper.
      </p>
      <p>We have also compared the contextual neighbors methods with
the un-contextual approach across various experimental
conditions. Table 1 reports all the accuracy gains (in terms of the
F-measure) across the recommender systems for DB1 and DB2
(negative values mean a performance reduction). For example, its
first row shows the performance gains (reductions), in terms of the
F-measure, for the un-contextual RS vis-à-vis the EPF, Filter PoF,
Weight PoF and Mdl1 methods. The matrix in Table 1 is
antisymmetric, as should be the case when two methods are compared
in terms of their relative performance. As Table 1 shows, the
contextual modeling approaches dominate the un-contextual case
across all the levels of context for the F-Measure, Precision, MAE
and RMSE. For example, if we consider Mdl1 and the F-measure,
for DB1, the difference between contextual and un-contextual
models is 22% on average and for DB2 it is 7% on average. Mdl1
clearly outperforms the un-contextual method. The fact that the
contextual modeling methods outperform the un-contextual one in
almost all of the cases is not surprising because the contextual
modeling method uses the same information as the un-contextual
one and also includes the contextual variable which brings
homogeneity in the data without causing the sparsity effect. We
next compare the CM, pre-filtering and post-filtering approaches
to determine the best among them.</p>
      <p>
        As explained in Section 2, the performance of the post-filtering
methods may significantly depend on the type of the post-filtering
approach being used [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Therefore, we decided to use two
post-filtering methods in our experiments, Weight PoF and Filter PoF,
to account for these differences. Fig. 4 presents the comparison
results among the two post-filtering methods Weight PoF and
Filter PoF, the exact pre-filtering (EPF) and the contextual
modeling method Mdl1 across each contextual level and each
dataset (DB1 and DB2). As Fig. 4 (and Table 1) demonstrates,
Filter PoF dominates the EPF approach across the considered
experimental settings. In particular, the difference between Filter
PoF and EPF in terms of F-measure is 19% on average for DB1
and 26% for DB2. In contrast, EPF dominates Weight PoF in our
experiments. In particular the difference between EPF and Weight
PoF models in terms of F-measure is 29% on average for DB1 and
16% for DB2. In addition, the CM method Mdl1 dominates the
Weight PoF and in some cases the EPF. In particular, the
difference between Mdl1 and Weight PoF models in terms of the
F-measure is 39% on average for DB1 and 18% for DB2, while the
difference between Mdl1 and EPF is 21% on average for DB1 and
5% for DB2. In contrast, the Filter PoF dominates the modeling
method. In particular the difference between Mdl1 and Filter PoF
models in terms of F-Measure is 3% on average for DB1 and 28%
for DB2.
      </p>
      <p>
        These results mean that the performance of the CM approach (as
represented by Mdl1) is very similar to that of the EPF method.
This also implies, among other things, that CM is better than the
un-contextual case and some of the weaker post-filtering methods,
such as Weight PoF. However, like EPF, it is inferior to the
best-performing post-filtering methods, such as Filter PoF. Nevertheless,
as argued in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], finding the best-performing post-filtering
methods can be a hard problem. Therefore, the CM approach, as
represented in this paper by the Mdl1, Mdl2, Mdl3 and Mdl4
methods, constitutes a stable, easy to implement and a reasonably
well-performing alternative that does not require expensive
identification procedures, unlike the post-filtering methods.
Therefore, considering our experimental settings, it has its niche
among the range of various CARS methods, as EPF does.
      </p>
    </sec>
    <sec id="sec-5">
      <title>6. CONCLUSIONS</title>
      <p>In this paper we proposed a new type of CM, that we called
contextual neighbors, and four specific types of contextual
neighbors methods, called Mdl1, Mdl2, Mdl3, Mdl4, each of them
selecting contextual neighborhoods in a different way. We also
compared the contextual neighbors methods Mdl1, Mdl2, Mdl3 and
Mdl4 to identify the best-performing one. Finally, we compared it to
other approaches to CARS. This is the first step of a broader
research in which we want to compare the relative performance of
different contextual modeling approaches, including ours.
Although Mdl1 slightly outperforms the others, we have shown
that there are no relevant performance differences among them.
This result is not surprising because different ways of selecting
contextual neighborhood do not fundamentally change
recommendation results. We have also compared Mdl1 with the
pre-, post-filtering and un-contextual methods developed in our
previous studies across various experimental settings, including
two datasets, different levels of item aggregation, different
neighborhood sizes, different contextual levels (K1 and K2) and
several performance measures (Precision, Recall, F-Measure,
MAE and RMSE). We have shown that Mdl1 dominates the
traditional un-contextual approach and is comparable to the
pre-filtering method (EPF). We have also shown that Mdl1 dominates
some of the less advanced post-filtering methods (such as Weight
PoF) but is inferior to the best post-filtering methods (such as
Filter PoF). Since identification and selection of the best
post-filtering methods is a laborious process (as argued in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]), this
means that contextual neighbors CM methods (such as Mdl1) have
their prominent place in the spectrum of various CARS
recommendation methods: they are easy to implement, reasonably
well-performing and do not require expensive identification
procedures, unlike the post-filtering methods. The main limitation
of these results is that we do not compare our contextual
modeling approach to existing ones, because we only present
the first step of this research.
      </p>
      <p>In future work, we will present a comparison of the contextual
neighbors approach to other CM approaches. In addition, we will use other
recommendation engines and other representations of the
contextual variables, different from the straightforward kNN and
the hierarchical representation of the context used in this paper.
We will use other performance metrics beyond the
accuracy-based ones, such as recommendation diversity, in order to better
understand the impact of the different contextual approaches on
customers’ behavior. In future research steps we will also measure
the effect of different CM approaches on customers’ trust and on
their actual purchases.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Adomavicius</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sankaranarayanan</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sen</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Tuzhilin</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>Incorporating contextual information in recommender systems using a multidimensional approach</article-title>
          . ACM T. Inform. Syst.
          <volume>23</volume>
          ,
          <issue>1</issue>
          (
          <year>2005</year>
          ),
          <fpage>103</fpage>
          -
          <lpage>145</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Adomavicius</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Tuzhilin</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2011</year>
          .
          <article-title>Recommender Systems Handbook</article-title>
          , Chapter 7, Springer.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Dourish</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <year>2004</year>
          .
          <article-title>What we talk about when we talk about context</article-title>
          .
          <source>Personal and Ubiquitous Computing</source>
          <volume>8</volume>
          ,
          <fpage>19</fpage>
          -
          <lpage>30</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Herlocker</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Konstan</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Terveen</surname>
            ,
            <given-names>L.G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>J.T.</given-names>
          </string-name>
          <year>2004</year>
          .
          <article-title>Evaluating collaborative filtering recommender systems</article-title>
          .
          <source>ACM Transaction on Information System 22</source>
          ,
          <fpage>5</fpage>
          -
          <lpage>53</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>Link prediction approach to collaborative filtering</article-title>
          ,
          <source>In Proc. of the 5th ACM/IEEE-CS joint conference on Digital libraries</source>
          ,
          <fpage>141</fpage>
          -
          <lpage>142</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Karatzoglou</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Amatriain</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baltrunas</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Oliver</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <year>2010</year>
          .
          <article-title>Multiverse recommendation: n-dimensional tensor factorization for context-aware collaborative filtering</article-title>
          ,
          <source>In Proc. of the fourth ACM conference on recommender Systems</source>
          ,
          <volume>79</volume>
          -
          <fpage>86</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Kwon</surname>
            ,
            <given-names>O</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2009</year>
          .
          <article-title>Concept lattices for visualizing and generating user profiles for context-aware service recommendations</article-title>
          .
          <source>Expert Systems with Applications 36</source>
          ,
          <fpage>1893</fpage>
          -
          <lpage>1902</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Palmisano</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tuzhilin</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Gorgoglione</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2008</year>
          .
          <article-title>Using Context to Improve Predictive Models of Customers in Personalization Applications</article-title>
          .
          <source>IEEE TKDE</source>
          ,
          <volume>20</volume>
          (
          <issue>11</issue>
          ),
          <fpage>1535</fpage>
          -
          <lpage>1549</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Panniello</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tuzhilin</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gorgoglione</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Palmisano</surname>
            <given-names>C.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Pedone</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2009</year>
          .
          <article-title>Experimental comparison of pre- vs. post-filtering approaches in context-aware recommender systems</article-title>
          .
          <source>In Proc. of RecSys '09</source>
          ,
          <fpage>265</fpage>
          -
          <lpage>268</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Resnick</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Iacovou</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suchak</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bergstrom</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>1994</year>
          .
          <article-title>GroupLens: an open architecture for collaborative filtering of netnews</article-title>
          .
          <source>In Proc. of Conference on Computer Supported Cooperative Work</source>
          ,
          <fpage>175</fpage>
          -
          <lpage>186</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Shi</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Larson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Hanjalic</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2010</year>
          .
          <article-title>Mining moodspecific movie similarity with matrix factorization for context-aware recommendation</article-title>
          .
          <source>In Proc. of the Workshop on Context-Aware Movie Recommendation</source>
          ,
          <fpage>34</fpage>
          -
          <lpage>40</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>