<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Explainable Entity-based Recommendations with Knowledge Graphs</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Rose Catherine</string-name>
          <email>rosecatherinek@cs.cmu.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maxine Eskenazi</string-name>
          <email>max@cs.cmu.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kathryn Mazaitis</string-name>
          <email>krivard@cs.cmu.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>William Cohen</string-name>
          <email>wcohen@cs.cmu.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Computer Science, Carnegie Mellon University</institution>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <abstract>
        <p>Explainable recommendation is an important task. Many methods have been proposed which generate explanations from the content and reviews written for items. When review text is unavailable, generating explanations is still a hard problem. In this paper, we illustrate how explanations can be generated in such a scenario by leveraging external knowledge in the form of knowledge graphs. Our method jointly ranks items and knowledge graph entities using a Personalized PageRank procedure to produce recommendations together with their explanations.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>
        Improving the accuracy of predictions in recommender systems is
an important research topic. An equally important task is explaining
the predictions to the user. Providing an explanation has been
shown to build the user’s trust in the recommender system [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        The focus of this paper is a system that generates explanations for
Knowledge Graph (KG)-based recommendation. Users and items
are typically associated with factual data, referred to as content. For
users, the content may include demographics and other profile data.
For items such as movies, it might include the actors, directors,
genre, and the like. The KG encodes the interconnections between
such facts, and leveraging these links has been shown to improve
recommender performance [
        <xref ref-type="bibr" rid="ref13 ref2 ref3">2, 3, 13</xref>
        ].
      </p>
      <p>Although a number of explanation schemes have been proposed
in the past (Section 2), there has been no work which produces
explanations for KG-based recommenders. In this paper, we present
a method to jointly rank items and entities in the KG such that the
entities can serve as an explanation for the recommendation.</p>
      <p>Our technique can be run without training, thereby allowing
faster deployment in new domains. Once enough data has been
collected, it can then be trained to yield better performance. The
proposed method can also be used in a dialog setting, where a user
interacts with the system to refine its suggestions.</p>
    </sec>
    <sec id="sec-2">
      <title>RELATED WORK</title>
      <p>
        Generating explanations for recommendations has been an active
area of research for more than a decade. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] was an early work that
assessed different ways of explaining recommendations in a
collaborative filtering (CF)-based recommender system. In content-based
recommenders, the explanations revolve around the content or
profile of the user and the item. The system of [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] simply displayed
keyword matches between the user’s profile and the books being
recommended. Similarly, [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] proposed a method called
‘Tagsplanations’, which showed the degree to which a tag is relevant to the
item, and the sentiment of the user towards the tag.
      </p>
      <p>
        With the advent of social networks, explanations that leverage
social connections have also gained attention. For example, [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]
produced explanations that showed whether a good friend of the
user has liked something, where friendship strength was computed
from their interactions on Facebook.
      </p>
      <p>
        More recent research has focused on providing explanations
extracted from user-written reviews for the items. [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] extracted
phrases and sentiments expressed in the reviews and used them to
generate explanations. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] used topics learned from the reviews as
aspects of the item, and used the topic distribution in the reviews
to find useful or representative reviews.
      </p>
      <p>
        Knowledge Graphs have been shown to improve the performance
of recommender systems in the past. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] proposed a meta-path
based method that learned paths consisting of node types in a
graph. Similarly, [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] used paths to find the top-N recommendations
in a learning-to-rank framework. A few methods such as [
        <xref ref-type="bibr" rid="ref3 ref6">3, 6</xref>
        ]
rank items using Personalized PageRank. In these methods, the
entities present in the text of an item are first mapped to entities in
a knowledge graph. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] proposed probabilistic logic programming
models for recommendation on knowledge graphs. None of the
above KG-based recommenders attempted to generate explanations.
      </p>
    </sec>
    <sec id="sec-3">
      <title>EXPLANATION METHOD</title>
      <p>
        In this section, we propose our method, which builds on the work
of [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] by using ProPPR [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] for learning to recommend. ProPPR
(Programming with Personalized Page Rank) is a first order logic
system. It takes as input a set of rules and a database of facts, and
uses these to generate an approximate local grounding of each query
in a small graph. Candidate answers to the query are the nodes in
the graph that satisfy the rules. The candidates are then ranked by
running a Personalized PageRank algorithm on the graph.
      </p>
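      <p>As a concrete illustration, the ranking step can be read as a standard Personalized PageRank power iteration. The sketch below runs on a toy graph with invented node names; it is not ProPPR's actual engine, which uses an approximate local grounding procedure rather than this dense iteration.</p>

```python
# Minimal Personalized PageRank via power iteration on a toy graph.
# Node names and edges are made up; ProPPR itself uses an approximate
# local grounding procedure rather than this dense iteration.
def personalized_pagerank(graph, start, alpha=0.15, iters=50):
    nodes = list(graph)
    scores = {n: 1.0 if n == start else 0.0 for n in nodes}
    for _ in range(iters):
        nxt = {n: 0.0 for n in nodes}
        nxt[start] += alpha                  # restart mass goes to the query node
        for n in nodes:
            out = graph[n]
            if out:                          # spread mass along outgoing edges
                share = (1 - alpha) * scores[n] / len(out)
                for m in out:
                    nxt[m] += share
            else:                            # dangling node: mass restarts
                nxt[start] += (1 - alpha) * scores[n]
        scores = nxt
    return scores

# Tiny proof-graph-like chain: query node -> rule expansions -> answer nodes.
toy = {"query": ["a", "b"], "a": ["m1"], "b": ["m2"], "m1": [], "m2": []}
pr = personalized_pagerank(toy, "query")
```

      <p>The restart mass always flows back to the start node, which is what personalizes the resulting ranking to the query.</p>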
      <p>Our technique proceeds in two main steps. First, it uses ProPPR
to jointly rank items and entities for a user. Second, it consolidates
the results into recommendations and explanations.</p>
      <p>
        To use ProPPR to rank items and entities, we first define a notion
of similarity between nodes in the graph, using the same similarity
rules as [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] (Figure 1). These rules state that two entities X
and E are similar if they are identical (Rule 1), or if there is a link
in the graph connecting X to another entity Z that is similar to E
(Rule 2). Note that this definition of similarity is recursive.
sim(X, X) ← true. (1)
sim(X, E) ← link(X, Z), sim(Z, E). (2)
      </p>
      <p>Figure 1: Similarity in a graph</p>
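      <p>The two sim rules can be viewed as a bounded recursive search over the graph. The following sketch is illustrative only: the link edges are invented, and a fixed recursion depth stands in for ProPPR's approximation, which keeps the (otherwise unbounded) recursion finite.</p>

```python
# Hypothetical rendering of the two sim/2 rules as bounded recursion.
# The link edges below are invented for illustration.
LINKS = {
    "tom_hanks": {"bridge_of_spies", "inferno"},
    "da_vinci_code": {"tom_hanks", "drama_thriller"},
}

def sim(x, e, depth=3):
    """sim(X, X) holds (Rule 1); otherwise try link(X, Z), sim(Z, E)
    (Rule 2), cut off at a fixed depth to keep the recursion finite."""
    if x == e:                            # Rule 1
        return True
    if depth == 0:
        return False
    return any(sim(z, e, depth - 1)       # Rule 2
               for z in LINKS.get(x, ()))
```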
      <p>Next, the model has two sets of rules for ranking: one set for
joint ranking of movies that the user would like, together with the
most likely reason (Figure 2), and a similar set for movies that the
user would not like. In Figure 2, Rule 3 states that a user U will like
an entity E and a movie M if the user likes the entity, and the entity
is related (sim) to the movie. The clause isMovie ensures that the
variable M is bound to a movie, since sim admits all types of entities.
Rule 3 invokes the predicate likes(U,E), which holds for an entity
E if the user has explicitly stated that they like it (Rule 4), or if they
have provided positive feedback (e.g. clicked, thumbs up, high star
rating) for a movie M containing (via link(M,E)) the entity (Rule
5). The method for finding movies and entities that the user will
dislike is similar to the above, except ‘like’ is replaced with ‘dislike’.
willLike(U, E, M) ← likes(U, E), sim(E, M), isMovie(M). (3)
likes(U, E) ← likesEntity(U, E). (4)
likes(U, E) ← likesMovie(U, M), link(M, E). (5)</p>
      <p>Figure 2: Predicting likes</p>
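      <p>Rules 4 and 5 amount to a simple union: a user's liked entities are those liked explicitly, plus every entity linked from a movie the user rated positively. A minimal sketch, with all data invented for illustration:</p>

```python
# Sketch of Rules 4 and 5: a user's liked entities are those they like
# explicitly, plus every entity linked from a movie they gave positive
# feedback on. All data below is invented for illustration.
def liked_entities(user, likes_entity, likes_movie, link):
    liked = set(likes_entity.get(user, set()))    # Rule 4
    for movie in likes_movie.get(user, set()):    # Rule 5
        liked |= link.get(movie, set())
    return liked

likes_entity = {"alice": {"tom_hanks"}}
likes_movie = {"alice": {"da_vinci_code"}}
link = {"da_vinci_code": {"tom_hanks", "drama_thriller"}}
```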
      <p>
        To jointly rank the items and entities, we use ProPPR to query
the willLike(U,E,M) predicate with the user specified and the
other two variables free. Then, the ProPPR engine will ground the
query into a proof graph by replacing each variable recursively
with literals that satisfy the rules from the KG [
        <xref ref-type="bibr" rid="ref12 ref2">2, 12</xref>
        ]. A sample
grounding when queried for a user alice who likes tom_hanks
and the movie da_vinci_code is shown in Figure 3.
      </p>
      <p>Figure 3: Proof graph for the query willLike(alice, E, M). Rule 4 binds E = tom_hanks, and Rules 2 and 1 then reach the movies bridge_of_spies and inferno; Rule 5 binds M1 = da_vinci_code, whose link to drama_thriller leads via Rules 2 and 1 to bridge_of_spies and snowden.</p>
      <p>After constructing the proof graph, ProPPR runs a Personalized
PageRank algorithm with willLike(alice, E, M) as the start
node. In this simple example, we will let the scores for (tom_hanks,
bridge_of_spies), (tom_hanks, inferno), (drama_thriller,
bridge_of_spies), and (drama_thriller, snowden), be 0.4, 0.4,
0.3 and 0.3 respectively.</p>
      <p>Now, let us suppose that alice has also specified that she dislikes
crime movies. If we follow the grounding procedure for dislikes and
rank the answers, we may obtain (crime, inferno) with score
0.2. Our system then proceeds to consolidate the recommendations
and the explanations by grouping by movie names, adding together
their ‘like’ scores and deducting their ‘dislike’ scores. For each
movie, the entities can be ranked according to their joint score. The
end result is a list of reasons which can be shown to the user:
(1) bridge_of_spies, score = 0.4 + 0.3 = 0.7, reasons =
{ tom_hanks, drama_thriller }
(2) snowden, score = 0.3, reasons = { drama_thriller }
(3) inferno, score = 0.4 - 0.2 = 0.2, reasons = { tom_hanks,
(-ve) crime }</p>
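      <p>The consolidation step above can be sketched directly: group the (entity, movie) scores by movie, add the 'like' scores, subtract the 'dislike' scores, and keep the contributing entities as reasons. The scores below reproduce the worked example; the function itself is an illustrative sketch, not the deployed implementation.</p>

```python
from collections import defaultdict

# Sketch of the consolidation step: group (entity, movie) scores by movie,
# add 'like' scores, subtract 'dislike' scores, and keep the contributing
# entities as reasons. Scores are the worked example from the text.
def consolidate(like_scores, dislike_scores):
    total, reasons = defaultdict(float), defaultdict(list)
    for (entity, movie), s in like_scores.items():
        total[movie] += s
        reasons[movie].append(entity)
    for (entity, movie), s in dislike_scores.items():
        total[movie] -= s
        reasons[movie].append("(-ve) " + entity)
    return sorted(((m, round(total[m], 6), reasons[m]) for m in total),
                  key=lambda t: -t[1])

like = {("tom_hanks", "bridge_of_spies"): 0.4,
        ("tom_hanks", "inferno"): 0.4,
        ("drama_thriller", "bridge_of_spies"): 0.3,
        ("drama_thriller", "snowden"): 0.3}
dislike = {("crime", "inferno"): 0.2}
ranked = consolidate(like, dislike)
```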
    </sec>
    <sec id="sec-4">
      <title>REAL WORLD DEPLOYMENT</title>
      <p>
        The proposed method currently serves as the backend of a personal
agent for recommending movies [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], which runs on mobile devices and is
undergoing beta testing. The knowledge graph used for recommendation
is constructed from the weekly dump files released by imdb.com.
The personal agent uses a dialog model of interaction with the user.
In this setting, users are actively involved in refining the
recommendations depending on what their mood might be. For example,
for a fun night out with friends, a user may want to watch an action
movie, whereas when spending time with her significant other, the
same user may be in the mood for a romantic comedy.
      </p>
    </sec>
    <sec id="sec-5">
      <title>CONCLUSIONS</title>
      <p>Knowledge graphs have been shown to improve recommender
system accuracy in the past. However, generating explanations to
help users make an informed choice in KG-based systems has not
been attempted before. In this paper, we proposed a method to
produce a ranked list of entities as explanations by jointly ranking
them with the corresponding movies.</p>
    </sec>
    <sec id="sec-6">
      <title>ACKNOWLEDGMENTS</title>
      <p>This research was supported in part by Yahoo! through the
CMUYahoo InMind project.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Mustafa</given-names>
            <surname>Bilgic</surname>
          </string-name>
          and
          <string-name>
            <given-names>Raymond J.</given-names>
            <surname>Mooney</surname>
          </string-name>
          .
          <year>2005</year>
          .
          <article-title>Explaining Recommendations: Satisfaction vs. Promotion</article-title>
          . In Beyond Personalization Workshop.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Catherine</surname>
          </string-name>
          and
          <string-name>
            <given-names>W.</given-names>
            <surname>Cohen</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Personalized Recommendations Using Knowledge Graphs: A Probabilistic Logic Programming Approach</article-title>
          .
          <source>In Proc. RecSys</source>
          '
          <volume>16</volume>
          .
          <fpage>325</fpage>
          -
          <lpage>332</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Chaudhari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Azaria</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Mitchell</surname>
          </string-name>
          .
          <article-title>An Entity Graph Based Recommender System</article-title>
          .
          <source>In RecSys '16 Posters.</source>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Herlocker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Konstan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Riedl</surname>
          </string-name>
          .
          <year>2000</year>
          .
          <article-title>Explaining collaborative filtering recommendations</article-title>
          .
          <source>In CSCW</source>
          .
          <fpage>241</fpage>
          -
          <lpage>250</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>McAuley</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Leskovec</surname>
          </string-name>
          .
          <article-title>Hidden Factors and Hidden Topics: Understanding Rating Dimensions with Review Text</article-title>
          . In RecSys '
          <volume>13</volume>
          .
          <fpage>165</fpage>
          -
          <lpage>172</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>C.</given-names>
            <surname>Musto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Basile</surname>
          </string-name>
          , M. de Gemmis, P. Lops, G. Semeraro, and
          <string-name>
            <given-names>S.</given-names>
            <surname>Rutigliano</surname>
          </string-name>
          .
          <article-title>Automatic Selection of Linked Open Data features in Graph-based Recommender Systems</article-title>
          . In CBRecSys
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>V.</given-names>
            <surname>Ostuni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. Di</given-names>
            <surname>Noia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. Di</given-names>
            <surname>Sciascio</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Mirizzi</surname>
          </string-name>
          .
          <article-title>Top-N Recommendations from Implicit Feedback Leveraging Linked Open Data</article-title>
          . In RecSys '
          <volume>13</volume>
          .
          <fpage>85</fpage>
          -
          <lpage>92</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>F.</given-names>
            <surname>Pecune</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Baumann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Matsuyama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Romero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Akoju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Catherine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cassell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Eskenazi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Black</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.</given-names>
            <surname>Cohen</surname>
          </string-name>
          .
          <article-title>InMind Movie Agent - A Platform for Research (In Preparation)</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>P.</given-names>
            <surname>Pu</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Chen</surname>
          </string-name>
          .
          <article-title>Trust Building with Explanation Interfaces</article-title>
          .
          <source>In IUI '06</source>
          .
          <fpage>93</fpage>
          -
          <lpage>100</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sharma</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Cosley</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Do Social Explanations Work?: Studying and Modeling the Effects of Social Explanations in Recommender Systems</article-title>
          . In WWW '
          <volume>13</volume>
          .
          <fpage>1133</fpage>
          -
          <lpage>1144</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Vig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Riedl</surname>
          </string-name>
          .
          <article-title>Tagsplanations: Explaining Recommendations Using Tags</article-title>
          . In IUI '
          <volume>09</volume>
          .
          <fpage>47</fpage>
          -
          <lpage>56</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>W.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Mazaitis</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.</given-names>
            <surname>Cohen</surname>
          </string-name>
          .
          <article-title>Programming with Personalized Pagerank: A Locally Groundable First-order Probabilistic Logic</article-title>
          .
          <source>In Proc. CIKM '13.</source>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>X.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Gu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sturt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Khandelwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Norick</surname>
          </string-name>
          , and J. Han.
          <article-title>Personalized Entity Recommendation: A Heterogeneous Information Network Approach</article-title>
          . In WSDM '
          <volume>14</volume>
          .
          <fpage>283</fpage>
          -
          <lpage>292</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , G. Lai,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , Y. Liu, and
          <string-name>
            <given-names>S.</given-names>
            <surname>Ma</surname>
          </string-name>
          .
          <article-title>Explicit Factor Models for Explainable Recommendation Based on Phrase-level Sentiment Analysis</article-title>
          .
          <source>In SIGIR '14</source>
          .
          <fpage>83</fpage>
          -
          <lpage>92</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>