<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Aspect-based active learning for user preference elicitation in recommender systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>María Hernández-Rubio</string-name>
          <email>maria.hernandezr@estudiante.uam.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alejandro Bellogín</string-name>
          <email>alejandro.bellogin@uam.es</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Iván Cantador</string-name>
          <email>ivan.cantador@uam.es</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Escuela Politécnica Superior, Universidad Autónoma de Madrid</institution>
          ,
          <addr-line>Madrid</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Escuela Politécnica Superior, Universidad Autónoma de Madrid</institution>
          ,
          <addr-line>Madrid</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Escuela Politécnica Superior, Universidad Autónoma de Madrid</institution>
          ,
          <addr-line>Madrid</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Recommender systems require interactions from users to infer personal preferences about new items. Active learning techniques aim to identify those items that allow eliciting a target user's preferences more efficiently. Most of the existing techniques base their decisions on properties of the items themselves, for example according to their popularity or in terms of their influence on reducing information variance or entropy within the system. Differently from previous work, in this paper we explore a novel active learning approach focused on opinions about item aspects extracted from user reviews. We thus incorporate textual information so as to decide which items should be considered next in the user preference elicitation process. Experiments on a real-world dataset provide positive results with respect to competitive state-of-the-art methods.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>
        Recommender Systems (RS) and, in particular, Collaborative
Filtering (CF) techniques, are widely used tools that help users to find
relevant information according to their preferences [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Being one
of the most common and successful approaches to provide
personalized recommendations [
        <xref ref-type="bibr" rid="ref11 ref8">8, 11</xref>
        ], CF typically needs a user–item rating
matrix, where each rating reflects the preference of a certain user
towards a particular item. In this context, it is paramount to gather
as much information as possible from the users, in particular, in
the form of ratings.
      </p>
      <p>
        To overcome the lack of user information, Active Learning (AL)
strategies are used to elicit such preferences in an efficient way, so
that additional ratings are acquired from the users by optimizing
a certain goal [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. In general, these strategies only consider
properties related to the items, such as their popularity, how diverse the
received ratings are (as a proxy towards their level of controversy
or uncertainty), and their influence on reducing global information
variance or entropy within the system. Few and recent methods, in
contrast, exploit the content features of the items, such as domain
attributes and metadata [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], e.g., directors and actors of a movie,
genres of a music artist, and so on. These methods have
demonstrated good results in the so-called item cold-start problem, which
is closely related to the final goal of active learning, since in both
cases there is a need for increasing the number of known ratings.
      </p>
      <p>
        In this sense, opinions and sentiments expressed by users about
items in personal, textual reviews are valuable signals of user
preferences. However, their modeling and further identification is
challenging. Specific features or aspects (e.g., technical characteristics
and components) of the reviewed items –such as the price for
cameras and mobile phones, and the atmosphere for restaurants and
hotels– need to be extracted for properly modeling the users’
preferences. In fact, the research literature on aspect extraction is
extensive and has shown the positive value of aspect opinion mining
for user modeling and recommendation [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        Despite this, to the best of our knowledge, there is no AL strategy
that exploits aspect opinions, instead of numeric ratings or generic
non opinion-related attributes, to guide the preference elicitation
process. Addressing this research gap, in this paper we explore a
novel active learning approach that selects the next item to present
to a user by considering its aspect-based similarity with the items
the user previously interacted with. We report experiments on a real-world
dataset with Amazon reviews, showing that the proposed
aspect-based AL strategy is able to elicit a similar number of relevant
preferences with respect to existing AL strategies, but much earlier
in the process, which allows mitigating the cold-start problem better
and faster than item-based approaches. Moreover, we show that
a recommendation method exploiting the preferences elicited by
our AL strategy achieves better performance, not only in terms of
rating prediction metrics as measured in previous works, but also
in terms of ranking metrics, which are closer to the actual user
experience [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], an issue neglected in past work that evaluated AL
for recommender systems [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>BACKGROUND AND RELATED WORK</title>
      <p>
        As mentioned before, in this work we aim to exploit item aspects
for active learning. For this purpose, we first need to extract the
aspects from textual reviews provided by users. As presented in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ],
aspect extraction methods can be categorized as approaches that are
based on aspect vocabularies, word frequencies, syntactic relations,
and topic models. Representing a starting point of our research on
aspect-based AL, we focus on approaches based on aspect
vocabularies, where explicit mappings between terms and aspects are
specified. The use of other types of aspect extraction approaches is
left as future work.
      </p>
      <p>
        Regarding the active learning strategies used in recommender
systems, the literature is extensive, so we will follow the survey
and taxonomy presented in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. In that work, AL strategies are
classified as non-personalized and personalized depending on whether
they request all users for the same set of items or not. Moreover,
within such categories, the authors further classified the strategies
into single- and combined-heuristic methods, which correspond to
whether they implement a unique item selection rule or if several
rules are somehow combined. These heuristics aim to optimize
different criteria, such as reducing the uncertainty or the error in
rating prediction, focusing on the items that received the highest
attention from users, those more familiar to the user (and, hence,
more rateable), or those that would provide the most impact on the
system as a whole.
      </p>
      <p>In the experiments reported in this paper, we used a
representative set of strategies that cover distinct heuristics and
hypotheses regarding the optimal elicited items, mostly from the
non-personalized category, since they have shown a good trade-off
between performance and efficiency.</p>
    </sec>
    <sec id="sec-3">
      <title>A NOVEL ASPECT-BASED ACTIVE LEARNING METHOD</title>
      <p>As already discussed, active learning strategies have shown
promising results to elicit information about user preferences in different
domains. However, when dealing with situations where users
express their opinion on items by writing reviews, it becomes more
important to understand and process such textual content in
detail to better model the user preferences. An approach that has
recently brought positive results is exploiting the rich information
elements that can be extracted from the reviews, in particular, the
item aspects mentioned and the opinion or sentiment associated with
them.</p>
      <p>
        Following a recent work on this topic [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], we propose to extract
these aspects and use them to elicit those items that are most
similar to the ones previously assessed by the user. Our goal is to help
the user find items that share characteristics with previously
interacted items, and to support the system in gathering more
preferences and building better user models. Exploiting item aspects, instead
of other content or collaborative information, should alleviate the
cold-start problem [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], and could help the user express her
preferences more easily, as well as reduce the mistakes made
by the recommender system. We leave as future work testing this
hypothesis through the integration and evaluation of our method within a
conversational agent [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        The proposed method for AL selects those items with higher
similarities to the user’s previously rated items. More specifically,
we extend the item-to-item similarity matrix with the rating
information already available in the system by means of a hybrid
recommendation approach recently presented in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], where a latent
space is learnt based on collaborative information and side
similarity, in our case, an aspect-based item similarity. We use cosine
similarity over the item profiles, that is, p_i = {w_ik}, k = 1, ..., K, built on the
K aspect opinions extracted for each item, where w_ik is the weight
assigned to aspect k for item i:

sim(i, j) = ( Σ_k w_ik · w_jk ) / ( √(Σ_k w_ik²) · √(Σ_k w_jk²) )   (1)
      </p>
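As an illustration, the aspect-based cosine similarity described above can be sketched in a few lines; the aspect names, weights, and the dictionary representation of item profiles are hypothetical, chosen only for the example.

```python
import math

def aspect_cosine_sim(profile_i, profile_j):
    """Cosine similarity between two aspect profiles, each a dict
    mapping an aspect to the opinion weight extracted for the item."""
    dot = sum(w * profile_j[a] for a, w in profile_i.items() if a in profile_j)
    norm_i = math.sqrt(sum(w * w for w in profile_i.values()))
    norm_j = math.sqrt(sum(w * w for w in profile_j.values()))
    if norm_i == 0.0 or norm_j == 0.0:
        return 0.0  # an item without aspect opinions is similar to nothing
    return dot / (norm_i * norm_j)

# hypothetical aspect profiles for three movies
movie_a = {"plot": 0.9, "acting": 0.5}
movie_b = {"plot": 0.9, "acting": 0.5}
movie_c = {"soundtrack": 1.0}

print(aspect_cosine_sim(movie_a, movie_b))  # ≈ 1.0 (identical profiles)
print(aspect_cosine_sim(movie_a, movie_c))  # 0.0 (no shared aspects)
```

The zero-norm guard matters in practice, since not every item receives aspect annotations after extraction.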
      <p>
        According to the taxonomy presented in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], this method belongs
to the item-item category, and it is a personalized single-heuristic
strategy, where the item-to-item similarity is computed based on
the aspects extracted for each item in the system.
      </p>
      <p>
        Regarding the exploited aspects, in this work we use a
vocabulary-based aspect extraction method, in particular, the one called voc
in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. This method makes use of a vocabulary for item aspects on a
given domain, and analyzes syntactic relations between the words
of each sentence in user reviews to extract the personal opinions
about the aspects. We have chosen this method because it exhibits a
good trade-off between simplicity and positive results. Other aspect
extraction methods could be explored in the future.
      </p>
      <p>
        As done by other researchers, in our experiments we used the
popular Amazon product reviews dataset by McAuley [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], and more
specifically, we preliminarily focused the experiments on the
subset associated with the Movies &amp; TV domain. Initially, this dataset
includes 1,697,533 ratings by 123,960 users on 50,052 items. Once
we filtered out those items without aspects, we obtained 1,683,190
ratings on 48,074 items. Then, we decided to filter out users with
fewer than 20 ratings since, as we shall explain below, the
evaluation methodology is quite exhaustive, and we need to have
as many users with enough information as possible. Nonetheless,
in the future, we would like to extend this analysis to more formal
methodologies focused on the new user problem, such as the one
used in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. After the above process, 819,148 ratings by 14,010 users
on 47,506 items remained.
      </p>
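The filtering steps above can be sketched as follows; the function and variable names are illustrative, ratings are represented as simple (user, item, rating) triples, and the toy example uses a threshold of 2 instead of 20.

```python
from collections import Counter

def filter_dataset(ratings, items_with_aspects, min_user_ratings=20):
    """Keep only ratings on items with extracted aspects, then drop
    users having fewer than `min_user_ratings` remaining ratings."""
    kept = [(u, i, r) for (u, i, r) in ratings if i in items_with_aspects]
    per_user = Counter(u for (u, _, _) in kept)
    return [(u, i, r) for (u, i, r) in kept if per_user[u] >= min_user_ratings]

ratings = [("u1", "a", 5), ("u1", "b", 4), ("u2", "a", 3), ("u1", "c", 2)]
filtered = filter_dataset(ratings, items_with_aspects={"a", "b"},
                          min_user_ratings=2)
print(filtered)  # "c" lacks aspects; "u2" keeps too few ratings
```

Note that the user filter is applied after the aspect filter, matching the order described in the text.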
      <p>
        Regarding the aspect coverage, the voc method introduced
before initially provides 369,175 aspect annotations of 23 distinct
aspects on 48,074 items, which were reduced to 367,750 aspect
annotations of 23 distinct aspects on 47,506 items after the whole
filtering process.
      </p>
      <p>
        To study the performance of the AL strategies considered in our
experiments, we use the following simulation procedure, adapted
from [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Differently from previous works, where improvements were
typically measured only for the same user, our procedure is oriented
to evaluate the overall performance of the system.
      </p>
      <p>Specifically, we divided the full dataset into 3 splits: a
training set, with ratings known by the system; a candidate set, with
ratings known by the users but not by the system; and a test set, with
a portion of the ratings known by the users that is withheld from
the previous set. Then, a recommender system was trained using the
entire training set, and for each user, in each evaluation iteration,
10 items were elicited from a particular AL strategy (which should
return 10 or fewer items). All the elicited items that belong to the
user’s candidate set were added to the training set to be used in the
next iteration. This process was repeated 170 times (iterations). At
the end of each iteration, the evaluation metrics presented in the
next section were computed for the recommendations generated
from the updated training set.</p>
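The simulation procedure above can be sketched as follows; the data structures, the toy strategy, and the tiny iteration counts are hypothetical, and the recommender retraining and metric computation steps are elided.

```python
def simulate(strategy, train, candidates, users, iterations=3, batch=2):
    """Each iteration, a strategy proposes up to `batch` items per user;
    elicited items found in the user's candidate set (i.e., actually
    rated by the user) move into the training set for the next iteration."""
    for _ in range(iterations):
        for user in users:
            for item in strategy(user, train, batch):
                if item in candidates.get(user, {}):
                    train.setdefault(user, {})[item] = candidates[user].pop(item)
        # here the recommender would be retrained on `train`
        # and the evaluation metrics computed against the test set
    return train

# a trivial non-personalized strategy: always ask about the same items
fixed_strategy = lambda user, train, batch: ["i1", "i2"][:batch]

train = {"u1": {}}
candidates = {"u1": {"i1": 5, "i3": 4}}
simulate(fixed_strategy, train, candidates, ["u1"])
print(train)  # "i2" was never rated by u1; "i3" was never asked
```

The key property mirrored here is that a strategy only learns a rating when the user actually has one, which is exactly what the candidate set simulates.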
      <p>In the experiments, we started with 2% of the data for training,
68% for candidates, and the remainder for test. Due to the high
computational cost of the presented methodology, we sampled
1,500 users, and only used the above ratings from them, resulting
in 80K-90K ratings on around 27K items, on average (the dataset is
available at http://jmcauley.ucsd.edu/data/amazon/). Additionally,
we repeated this procedure 3 times (where the splitting was done
randomly) to report average metric values.</p>
    </sec>
    <sec id="sec-5">
      <title>Evaluation metrics</title>
      <p>
        In recent years, the performance evaluation of recommender systems
has shifted from measuring rating prediction errors to measuring
ranking quality [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. However, since most of the literature on
preference elicitation used error metrics, such as Mean Absolute Error
(MAE), we decided to report this metric, and thus better compare
our proposal against the most influential research works.
      </p>
      <p>
        We also report ranking-based metrics, such as Precision at
different cutoffs, by following the RelPlusN methodology described
in [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], where a ranking is created for every relevant item of each user:
that item, together with N other items (N = 100 in our
experiments) randomly selected, is ranked according to the scores
estimated by the recommender. In this procedure, we assume an item is
relevant for a user whenever its rating in the test set is 5.
      </p>
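A minimal sketch of the RelPlusN-style check, assuming a scoring function supplied by the recommender; the function names and the toy scorer are hypothetical.

```python
def relplusn_hit(score, relevant_item, sampled_items, k):
    """True if the relevant item appears in the top-k when ranked,
    by estimated score, against N randomly sampled items."""
    ranked = sorted(sampled_items + [relevant_item], key=score, reverse=True)
    return relevant_item in ranked[:k]

# toy scorer: the item id itself acts as the predicted score
score = lambda item: item
sampled = list(range(100))  # stands in for N = 100 randomly selected items

print(relplusn_hit(score, 200, sampled, k=5))  # True: 200 outranks everything
print(relplusn_hit(score, -1, sampled, k=5))   # False: -1 is ranked last
```

Precision at cutoff k then averages, over all such rankings, the fraction of relevant items found in the top k (at most one per ranking here, since each ranking contains a single relevant item).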
    </sec>
    <sec id="sec-6">
      <title>Baseline methods</title>
      <p>
        Regarding the AL strategies compared with our method presented
in Section 3, we consider the following baselines: random; variance,
which selects the items with the highest rating variance; popularity,
which selects the most popular items; entropy, which selects the
items with the highest dispersion in their ratings; and
log-pop-entropy, which finds a balance between the two previous
strategies. We also considered a wide array of non-personalized
methods because, in the original paper of the item-item
approach [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], its performance was not always superior to other
non-personalized strategies.
      </p>
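The single-heuristic scores behind these baselines can be sketched as follows; the exact weighting used by log-pop-entropy varies across the literature, so the combination below is illustrative rather than the one used in our experiments.

```python
import math
from collections import Counter

def popularity(ratings):
    """Number of ratings an item has received."""
    return len(ratings)

def entropy(ratings):
    """Shannon entropy (in bits) of an item's rating distribution."""
    counts = Counter(ratings)
    n = len(ratings)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

def log_pop_entropy(ratings):
    """Illustrative balance between attention and rating dispersion."""
    return math.log(len(ratings)) * entropy(ratings)

uniform = [1, 2, 3, 4, 5]   # maximally dispersed ratings
peaked = [5, 5, 5, 5, 5]    # unanimous ratings
print(entropy(uniform))  # ~2.32 bits (log2 of 5)
print(entropy(peaked))   # 0.0: no dispersion, nothing left to learn
```

Under these heuristics a unanimously rated item scores zero entropy regardless of its popularity, which is why log-pop-entropy is needed to trade the two criteria off.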
    </sec>
    <sec id="sec-7">
      <title>RESULTS</title>
      <p>Figure 1 shows the evolution of the number of ratings correctly
elicited by each strategy, that is, how many elicited items belong to
the candidate set in each iteration. We observe that most of the
strategies converge to the same number, except our proposal, which
has a limited item coverage since not all items have aspect opinions.
However, we want to emphasize an interesting phenomenon: in the
first 25 iterations, the strategies are divided into two groups: aspects
and random (which provide fewer correct items), and the remainder.
This observation is important when considered in combination
with the performance of the recommender.</p>
      <p>Indeed, Figure 2 shows the evolution of the error of the
different AL strategies. Again, we observe that, in the long run, ours is
the worst performing method (not shown due to space constraints).
However, in the first iterations, it is the strategy that reduces the
error of the system the most. This by itself is remarkable, since this is
the most useful scenario for a real user, who does not want to
spend 50 iterations giving feedback to the system. This result is
even more positive considering that the number of elicited
preferences is smaller than with other strategies. Hence, the proposed
method is able to reduce the error of the system during the first
iterations of the preference elicitation process, even though it is
not as competitive as other methods at finding the known items
that were hidden from the system, i.e., the items in the candidate set.</p>
      <p>In this paper, we have proposed a novel active learning approach
based on opinions about item aspects. We have preliminarily shown
its effect on user preference elicitation by experimenting with a
real-world dataset. In our empirical results, the developed method
outperformed state-of-the-art strategies in terms of both rating
prediction error and ranking precision metrics, computed after a
recommender system was trained with the user preferences elicited by
each of the active learning strategies.</p>
      <p>
        These results are very promising, even though we adopted
simple solutions to address some of the issues at hand. In this sense, in
the future we aim to conduct more exhaustive experiments testing
several recommender systems, more sophisticated aspect
extraction methods than the one used here, and datasets from several
domains with different characteristics [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Additionally, we plan to
formally analyze the behavior of our method in different cold-start
settings [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], together with an online evaluation with real users, for
instance, by integrating our method into a conversational agent or
chatbot, with which we will check whether the user preferences
are elicited faster or with higher quality, as we have observed in
the offline experiments herein presented.
      </p>
    </sec>
    <sec id="sec-8">
      <title>ACKNOWLEDGMENTS</title>
      <p>This work was supported by the Spanish Ministry of Science and
Innovation (PID2019-108965GB-I00).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Konstantina</given-names>
            <surname>Christakopoulou</surname>
          </string-name>
          , Filip Radlinski, and
          <string-name>
            <given-names>Katja</given-names>
            <surname>Hofmann</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Towards Conversational Recommender Systems</article-title>
          .
          <source>In KDD. ACM</source>
          ,
          <volume>815</volume>
          -
          <fpage>824</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Mehdi</given-names>
            <surname>Elahi</surname>
          </string-name>
          , Francesco Ricci, and
          <string-name>
            <given-names>Neil</given-names>
            <surname>Rubens</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Active learning strategies for rating elicitation in collaborative filtering: A system-wide perspective</article-title>
          .
          <source>ACM Trans. Intell. Syst. Technol. 5</source>
          ,
          <issue>1</issue>
          (
          <year>2013</year>
          ),
          <volume>13</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          :
          <fpage>33</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Mehdi</given-names>
            <surname>Elahi</surname>
          </string-name>
          , Francesco Ricci, and
          <string-name>
            <given-names>Neil</given-names>
            <surname>Rubens</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>A survey of active learning in collaborative filtering recommender systems</article-title>
          .
          <source>Comput. Sci. Rev</source>
          .
          <volume>20</volume>
          (
          <year>2016</year>
          ),
          <fpage>29</fpage>
          -
          <lpage>50</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Ignacio</given-names>
            <surname>Fernández-Tobías</surname>
          </string-name>
          , Iván Cantador, Paolo Tomeo, Vito Walter Anelli, and Tommaso Di Noia.
          <year>2019</year>
          .
          <article-title>Addressing the user cold start with cross-domain collaborative filtering: exploiting item metadata in matrix factorization</article-title>
          .
          <source>User Model. User-Adapt. Interact</source>
          .
          <volume>29</volume>
          ,
          <issue>2</issue>
          (
          <year>2019</year>
          ),
          <fpage>443</fpage>
          -
          <lpage>486</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Evgeny</given-names>
            <surname>Frolov</surname>
          </string-name>
          and
          <string-name>
            <given-names>Ivan V.</given-names>
            <surname>Oseledets</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>HybridSVD: when collaborative information is not enough</article-title>
          .
          <source>In RecSys. ACM</source>
          ,
          <volume>331</volume>
          -
          <fpage>339</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Asela</given-names>
            <surname>Gunawardana</surname>
          </string-name>
          and
          <string-name>
            <given-names>Guy</given-names>
            <surname>Shani</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Evaluating Recommender Systems</article-title>
          .
          <source>In Recommender Systems Handbook</source>
          . Springer,
          <fpage>265</fpage>
          -
          <lpage>308</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>María</given-names>
            <surname>Hernández-Rubio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Iván</given-names>
            <surname>Cantador</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Alejandro</given-names>
            <surname>Bellogín</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>A comparative analysis of recommender systems based on item aspect opinions extracted from user reviews</article-title>
          .
          <source>User Model. User-Adapt. Interact</source>
          .
          <volume>29</volume>
          ,
          <issue>2</issue>
          (
          <year>2019</year>
          ),
          <fpage>381</fpage>
          -
          <lpage>441</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Yehuda</given-names>
            <surname>Koren</surname>
          </string-name>
          and
          <string-name>
            <given-names>Robert M.</given-names>
            <surname>Bell</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Advances in Collaborative Filtering</article-title>
          .
          <source>In Recommender Systems Handbook</source>
          . Springer,
          <fpage>77</fpage>
          -
          <lpage>118</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Julian J.</given-names>
            <surname>McAuley</surname>
          </string-name>
          and
          <string-name>
            <given-names>Alex</given-names>
            <surname>Yang</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Addressing Complex and Subjective Product-Related Queries with Customer Reviews</article-title>
          .
          <source>In WWW. ACM</source>
          ,
          <volume>625</volume>
          -
          <fpage>635</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Sean M.</given-names>
            <surname>McNee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>John</given-names>
            <surname>Riedl</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Joseph A.</given-names>
            <surname>Konstan</surname>
          </string-name>
          .
          <year>2006</year>
          .
          <article-title>Being accurate is not enough: how accuracy metrics have hurt recommender systems</article-title>
          .
          <source>In CHI Extended Abstracts. ACM</source>
          ,
          <volume>1097</volume>
          -
          <fpage>1101</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Xia</given-names>
            <surname>Ning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Christian</given-names>
            <surname>Desrosiers</surname>
          </string-name>
          , and
          <string-name>
            <given-names>George</given-names>
            <surname>Karypis</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>A Comprehensive Survey of Neighborhood-Based Recommendation Methods</article-title>
          .
          <source>In Recommender Systems Handbook</source>
          . Springer,
          <fpage>37</fpage>
          -
          <lpage>76</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Al Mamunur</given-names>
            <surname>Rashid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>István</given-names>
            <surname>Albert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Dan</given-names>
            <surname>Cosley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Shyong K.</given-names>
            <surname>Lam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Sean M.</given-names>
            <surname>McNee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Joseph A.</given-names>
            <surname>Konstan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>John</given-names>
            <surname>Riedl</surname>
          </string-name>
          .
          <year>2002</year>
          .
          <article-title>Getting to know you: learning new user preferences in recommender systems</article-title>
          .
          <source>In IUI. ACM</source>
          ,
          <volume>127</volume>
          -
          <fpage>134</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Francesco</given-names>
            <surname>Ricci</surname>
          </string-name>
          , Lior Rokach, and
          <string-name>
            <given-names>Bracha</given-names>
            <surname>Shapira</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Recommender Systems: Introduction and Challenges</article-title>
          .
          <source>In Recommender Systems Handbook</source>
          . Springer,
          <fpage>1</fpage>
          -
          <lpage>34</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Alan</given-names>
            <surname>Said</surname>
          </string-name>
          and
          <string-name>
            <given-names>Alejandro</given-names>
            <surname>Bellogín</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>Comparative recommender system evaluation: benchmarking recommendation frameworks</article-title>
          .
          <source>In RecSys. ACM</source>
          ,
          <volume>129</volume>
          -
          <fpage>136</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Yu</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Jinghao</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Shibi</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Beidou</given-names>
            <surname>Wang</surname>
          </string-name>
          , Ziyu Guan, Haifeng Liu, and
          <string-name>
            <given-names>Deng</given-names>
            <surname>Cai</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>Addressing the Item Cold-Start Problem by Attribute-Driven Active Learning</article-title>
          .
          <source>IEEE Trans. Knowl. Data Eng</source>
          .
          <volume>32</volume>
          ,
          <issue>4</issue>
          (
          <year>2020</year>
          ),
          <fpage>631</fpage>
          -
          <lpage>644</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>