<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Exploiting Neuro-Symbolic Graph Embeddings based on First-Order Logical Rules for Knowledge-aware Recommendations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Giuseppe Spillo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cataldo Musto</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pasquale Lops</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marco de Gemmis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giovanni Semeraro</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Bari Aldo Moro</institution>
          ,
          <addr-line>Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper, we discuss a knowledge-aware recommendation framework based on neuro-symbolic graph embeddings that encode first-order logical (FOL) rules. Our workflow starts from a knowledge graph (KG) encoding user preferences and item properties. Next, knowledge-aware recommendations are obtained through the combination of three modules: (i) a rule learner, that extracts FOL rules from the KG; (ii) a graph embedding module, that learns the embeddings of users and items based on the triples of the KG and the FOL rules previously extracted; (iii) a recommendation module, that uses the embeddings to feed a deep learning architecture. In the experimental session we evaluate the effectiveness of our strategy on two datasets. The results show that the combination of KG embeddings and FOL rules improves the predictive accuracy and the novelty of the recommendations.</p>
      </abstract>
      <kwd-group>
        <kwd>recommender systems</kwd>
        <kwd>graph embeddings</kwd>
        <kwd>symbolic reasoning</kwd>
        <kwd>neuro-symbolic systems</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Recommender Systems (RS) are playing an increasingly crucial role in decision-making processes [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
Unlike early RS approaches, which relied on simple user-item interactions and ignored
descriptive information, more recent work has shown that knowledge-aware recommender
systems (KARS) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] significantly improve the performance of RS [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4, 5, 6</xref>
        ] by exploiting descriptive
features available in knowledge graphs (KGs), such as DBpedia [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. In this research line, recent
evidence [
        <xref ref-type="bibr" rid="ref8">8, 9, 10</xref>
        ] shows how well KARS based on KG embeddings [11, 12] perform
in recommendation tasks; however, these methods are purely data-driven and non-symbolic.
Instead, the current wave of neuro-symbolic AI systems [13] has fostered the development of models
that combine data-driven approaches with purely symbolic methods, in order to take the best from
both worlds. For example, methods for the joint embedding of First-Order Logic (FOL) rules
and graphs have been proposed [14, 15]: in this setting, graphs provide explicit knowledge, while
FOL rules can be exploited to explicitly inject background knowledge that is likely to
improve the embedding process and the resulting representation. In this work, we follow these
intuitions and investigate whether RS can benefit from the integration of symbolic and
non-symbolic knowledge as well. To this end, we present a knowledge-aware recommendation
framework that relies on neuro-symbolic graph embeddings exploiting first-order logical rules.
Starting from a KG encoding information about users, ratings and descriptive properties of the
items, we design a modular framework based on three components: (i) a Rule Learner, that
extracts FOL rules based on the information encoded in the KG; (ii) a Graph Embedding module,
that jointly learns a vector-space representation of users and items based on both the explicit
information encoded in the KG and the background knowledge encoded in the rules previously
learned; (iii) a Recommendation Framework, that takes as input the embeddings and use them to
feed a deep architecture that predicts the top-k items for the user. In the experimental session, we
evaluate the efectiveness of our strategy on two datasets and results show that the combination
of KG embeddings and FOL rules led to an improvement of the predictive accuracy.
      </p>
      <p>The rest of the paper is organized as follows. In Section 2 we present related work in the area.
Next, the different modules that compose our framework are introduced in Section 3. Results of
the experiment are shown in Section 4. Finally, conclusions and future work are sketched in
Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Graph Embeddings for Recommender Systems. The aim of graph embedding techniques
is to represent entities and relations in a KG as dense vectors by projecting them in a vector
space, preserving the original structure of the graph [11]. Similarly to the approach proposed
by Palumbo et al. [16], we applied graph embeddings over a tripartite graph encoding both
collaborative and content-based information. Other works (such as [17, 18, 19, 12]) showed that
recommendation models exploiting graph embedding techniques outperform typical baselines.
Next, previous research [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] showed that translation models, such as TransE [20], obtain
competitive performance in recommendation tasks. Accordingly, this work exploits a translation
model, i.e., KALE [14], as graph embedding technique. KALE is an extension
of TransE that exploits first-order logic to merge logical rules and triples encoded in a
knowledge graph into a unified representation. To the best of our knowledge, the use of KALE in deep learning
architectures for recommendation tasks has never been investigated in the literature.
      </p>
      <p>
        Knowledge-aware Recommender Systems (KARS). KARS foster the idea of injecting
information encoded in KGs into RSs. While early works were based on similarity and Linked Open
Data [21, 22, 23], more recent methods followed the wave of deep learning. Knowledge-aware
Hybrid Factorization Machines (KaHFM) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] obtained very competitive results by extending
classic factorization machines with semantic information encoded in a KG. Another
interesting approach concerns the application of Graph Convolutional Neural Networks [24] to KGs.
Here, Wang et al. [25] present Knowledge Graph Convolutional Networks (KGCN) for RS, which
learn a representation based on user-item interactions (encoded into a matrix) and descriptive
properties (encoded in a KG). Similarly, KGAT (Knowledge Graph Attention Network) [25] uses
an attention mechanism to model the high-order connectivities in the KG. Most of these works were
considered as baselines in our experiments. In this case, the novelty of our work lies in the
integration of explicit knowledge encoded in the KG with background knowledge encoded as
logical rules in our knowledge-aware RS. To the best of our knowledge, this kind of hybridization
has received little attention so far.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Description of the Framework</title>
      <p>In this section, we introduce the modules composing our framework; it relies on a knowledge
graph (see Figure 2) encoding information about users, items and descriptive properties.</p>
      <p>Basics of First-Order Logic. The logical rules [26] we exploit are typically referred to
as Horn clauses (https://en.wikipedia.org/wiki/Horn_clause), since they are composed of several atoms connected by means of
logical connectives (e.g., ∨, ∧, ⇒), in which at most one atom is positive. Each atom is
composed of variables (entities) and predicates (relations). An example of a logical rule is:
∀x, y, z : likes(x, y) ∧ similarTo(y, z) ⇒ likes(x, z). A formula is called ground when
every variable is replaced by a suitable entity in the graph. An example of a ground rule based on the
graph in Figure 2 is: likes(Alice, Kill Bill) ∧ similarTo(Kill Bill, Pulp Fiction) ⇒
likes(Alice, Pulp Fiction).</p>
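As an illustrative sketch (not part of the original framework), the grounding of a Horn rule of the shape p(x, y) ∧ q(y, z) ⇒ r(x, z) can be checked against a toy KG as follows; all predicate and entity names here are hypothetical placeholders, not the paper's actual KG:

```python
# Toy KG as a set of (predicate, subject, object) triples.
# Names are illustrative assumptions.
kg = {
    ("likes", "alice", "kill_bill"),
    ("similarTo", "kill_bill", "pulp_fiction"),
    ("likes", "alice", "pulp_fiction"),
}

def ground_rule(kg, p, q, r):
    """For the rule p(x,y) ^ q(y,z) => r(x,z), return each body
    grounding (x, y, z) together with whether the head holds in the KG."""
    results = []
    for (pp, x, y) in kg:
        if pp != p:
            continue
        for (qq, y2, z) in kg:
            if qq != q or y2 != y:
                continue
            results.append(((x, y, z), (r, x, z) in kg))
    return results

groundings = ground_rule(kg, "likes", "similarTo", "likes")
```

Each returned pair is one ground instance of the rule body, flagged by whether the ground head is an observed triple.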
      <sec id="sec-3-1">
        <p>Mining First-Order Logical Rules. The first task carried out by our framework consists in
mining FOL rules that hold in a KG, to extract background knowledge concerning typical patterns
encoded in the graph. The process of rule mining returns a set of FOL rules in Horn form, each
with a confidence score expressing to what degree the rule holds; these scores can be used to
rank the rules and encode in the model only the most promising ones. It is worth pointing out
that any FOL rule mining method can be used at this step. An example of the rules extracted
at this step is provided in the previous paragraph.</p>
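The quality scores attached to mined rules can be sketched as follows; this is a minimal illustration of the standard support / confidence / head-coverage notions used by rule miners such as AMIE, with generic predicate names, and the actual AMIE internals may differ:

```python
def rule_scores(kg, p, q, r):
    """Score the rule p(x,y) ^ q(y,z) => r(x,z) over a set of
    (predicate, subject, object) triples."""
    # All (x, z) pairs for which the rule body holds.
    body = {(x, z) for (pp, x, y) in kg if pp == p
                   for (qq, y2, z) in kg if qq == q and y2 == y}
    # All (x, z) pairs for which the head predicate holds.
    head = {(x, z) for (rr, x, z) in kg if rr == r}
    support = len(body & head)
    confidence = support / len(body) if body else 0.0       # how often the body implies the head
    head_coverage = support / len(head) if head else 0.0    # how much of the head the rule explains
    return confidence, head_coverage
```

Rules can then be ranked by confidence (or coverage) and only the top ones injected into the embedding process.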
        <p>Learning Graph Embeddings. Once the rules are extracted, a joint learning based on triples
in the KG and FOL rules is carried out. In this work, we used KALE [14] as graph embedding
technique, which is inspired by TransE [20] and extends it by encoding symbolic knowledge
given by FOL rules.</p>
        <p>The neuro-symbolic nature of KALE lies in the fact that the embeddings for each entity in
the KG are learnt by exploiting: (i) explicit knowledge, expressed by triples encoded in KG;
(ii) background knowledge, expressed by FOL rules. Explicit and background knowledge are
represented in a unified framework that learns a comprehensive representation based on both
information sources. Based on work by Rocktäschel et al. [27, 28], joint training is possible
since triples from a KG can be seen as atoms in FOL (e.g., likes(Alice, Kill Bill)); given that
rules are also expressed in a logical form, first-order logic can be exploited as the common
framework that allows us to unify the representations and to carry out a joint learning. Due to
space constraints, we cannot provide further details about KALE.</p>
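A minimal sketch of KALE-style truth functions may help fix ideas: a triple's truth is derived from its TransE translation distance, and a grounded two-atom rule gets a fuzzy truth value via t-norm composition, following the formulas reported by Guo et al. [14]. The toy vectors below are assumptions:

```python
import math

def triple_truth(h, r, t):
    """Truth of a triple (h, r, t): 1 - ||h + r - t||_1 / (3 * sqrt(d)),
    i.e., the smaller the TransE translation error, the truer the triple."""
    d = len(h)
    l1 = sum(abs(h[i] + r[i] - t[i]) for i in range(d))
    return 1.0 - l1 / (3.0 * math.sqrt(d))

def rule_truth(i_body1, i_body2, i_head):
    """Truth of a ground rule f1 ^ f2 => f3 via t-norm composition:
    I = I(f1)*I(f2)*I(f3) - I(f1)*I(f2) + 1."""
    return i_body1 * i_body2 * i_head - i_body1 * i_body2 + 1.0
```

Training then pushes observed triples and grounded rules toward truth 1 and corrupted ones toward 0, so the learnt embeddings are compatible with both the KG and the rules.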
        <p>Recommendation Framework. The learnt embeddings are then used to feed a deep learning
architecture that provides users with recommendations. Given a set of users and a set of items,
our architecture aims at identifying suitable recommendations, by predicting the interest of a user
in an item and by ranking the items based on descending relevance score. As shown in Figure 3,
we designed a simple yet effective architecture based on the combination of concatenation layers
and dense layers, which is inspired by previous work in the area [29] that obtained competitive
results in the top-k recommendation task.</p>
        <p>In particular, the recommendation process starts with the embeddings obtained as
output of the graph embedding process. Each embedding is passed through three dense layers
and is then merged through a concatenation layer. After a second passage through three dense
layers, a sigmoid activation function is used to return a score between 0 and 1 which estimates
the probability that the item i is relevant for the user u. Before making predictions, the architecture
is trained by exploiting all the ratings in the form (u, i) available in the dataset and by
using binary cross-entropy as loss function. See the next section for more details on parameters.
Once the model is learned, it can predict to what extent user u would like unseen items i ∈ I.
After this step, items are ranked based on the predicted relevance score and the top-k are returned as
recommendations.</p>
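The forward pass described above can be sketched in plain Python; layer sizes, weight initialization and the untrained random weights are toy assumptions, not the paper's actual implementation:

```python
import math, random

random.seed(0)

def dense(x, out_dim, relu=True):
    """One dense layer with (random, untrained) weights for illustration."""
    w = [[random.uniform(-0.5, 0.5) for _ in x] for _ in range(out_dim)]
    y = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
    return [max(0.0, v) for v in y] if relu else y

def predict(user_emb, item_emb, hidden=8):
    """Sketch of the architecture: three dense layers per branch,
    a concatenation layer, three more dense layers, then a sigmoid."""
    u, v = user_emb, item_emb
    for _ in range(3):                       # per-embedding dense stack
        u, v = dense(u, hidden), dense(v, hidden)
    x = u + v                                # concatenation layer
    for _ in range(3):                       # post-merge dense stack
        x = dense(x, hidden)
    logit = dense(x, 1, relu=False)[0]
    return 1.0 / (1.0 + math.exp(-logit))    # relevance probability in (0, 1)
```

In practice the weights would be trained with binary cross-entropy over the observed (u, i) ratings; this sketch only shows the data flow.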
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental Evaluation</title>
      <p>In the experimental session we evaluated the effectiveness of our methodology in the task of
item recommendation, to answer the following research questions:
RQ1: What is the best strategy to select the FOL rules to be included in the embedding process? Is
there any difference in the accuracy, novelty and diversity when logical rules are encoded in the
model?
RQ2: How does our approach based on neuro-symbolic graph embeddings perform w.r.t.
competitive baselines?
4.1. Experimental Design
Datasets. Experiments were carried out in a movie recommendation and in a book
recommendation scenario. In the former case, MovieLens 1M (ML1M)3 was exploited as dataset, while
in the latter we used the DBbook dataset4. As KG, we exploited DBpedia. To extract information
from DBpedia and populate our KG, we exploited a mapping already available online5. Table 1
reports some statistics about the datasets.</p>
      <p>ML1M: 6,040 users; 3,883 items; 1,000,209 ratings (57.51% positive); sparsity 96.42%; 26,858 entities. DBbook: 6,181 users; 6,733 items; 72,372 ratings (45.85% positive); sparsity 99.83%; 17,505 entities.</p>
      <p>
Protocol. For both datasets, we used an 80%-20% training-test split. Data were split so as
to maintain the ratio between positive and negative ratings. As for MovieLens 1M, we considered
as positive only the ratings equal to 4 and 5 out of 5. As for DBbook, ratings were provided in a
binary format (positive/negative). The predictive accuracy of the algorithms was evaluated on
top-5 recommendation lists, calculated by following the TestRatings strategy [30].
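The ratio-preserving split described above is a stratified split, which can be sketched as follows; the (user, item, label) tuple format is an assumption for illustration:

```python
import random

def stratified_split(ratings, test_frac=0.2, seed=42):
    """Split (user, item, label) tuples 80%-20% while preserving the
    positive/negative ratio, by splitting each class separately."""
    rng = random.Random(seed)
    pos = [r for r in ratings if r[2] == 1]
    neg = [r for r in ratings if r[2] == 0]
    train, test = [], []
    for group in (pos, neg):
        g = group[:]
        rng.shuffle(g)
        cut = int(len(g) * test_frac)
        test.extend(g[:cut])
        train.extend(g[cut:])
    return train, test
```

Because each class is sampled independently, the test set keeps (up to rounding) the same share of positive ratings as the full dataset.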
Source Code and Parameters. Rule mining was carried out by exploiting the latest version
of AMIE (https://github.com/lajus/amie), while a recent Java implementation of KALE
(https://github.com/iieir-km/KALE) was used to learn graph embeddings.
Finally, the source code of our deep recommendation framework is available on GitHub
(https://github.com/giuspillo/RepoNeSyRecSys2022) and is inspired by the implementation
made available by the authors of [29] (https://github.com/swapUniba/Deep_CBRS_Amar). As
regards the parameters of the tools, for AMIE we set the maximum rule length to 4 atoms, while
for each rule the minimum confidence is set to 0.001 and the minimum coverage to 0.1.
As regards KALE, we learnt embeddings of size 512 and 768, using mini-batches of 100 for both
datasets; the margin separating positive and negative examples is set to 0.1, the entity learning
rate to 0.05, the relation learning rate to 0.05, and the number of iterations to 1,000.
All these parameters were set through a grid search. Finally, as regards our deep architecture,
our models were trained for 25 epochs, by setting the batch size to 512 for ML1M and to 1536
for DBbook, respectively. The optimizer decay parameter is set to 0.9 and the learning rate to 0.001. As
optimizer, we used ADAM for ML1M and RMSprop for DBbook. All the parameters were
tuned through a grid search.
3http://grouplens.org/datasets/movielens/
4http://challenges.2014.eswc-conferences.org/index.php/RecSys
5https://github.com/sisinflab/LODrecsys-datasets</p>
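The grid searches mentioned above enumerate the Cartesian product of candidate values; a minimal sketch follows, where the value grids echo some of the reported settings but are otherwise assumptions:

```python
import itertools

# Hypothetical hyper-parameter grid (values are illustrative).
grid = {
    "embedding_size": [512, 768],
    "learning_rate": [0.001, 0.005],
    "batch_size": [512, 1536],
}

def configurations(grid):
    """Yield every combination of hyper-parameter values in the grid."""
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(configurations(grid))  # 2 * 2 * 2 = 8 candidate settings
```

Each candidate configuration would then be trained and scored on a validation split, keeping the best-performing one.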
      <p>Configurations. Throughout the experimental protocol, we compared five different variants of
our framework: two basic configurations (which do not involve FOL rules) and three rule-based
configurations. In particular, we compared the embeddings learnt on the simple user-item and
user-item-properties graphs (i.e., a graph encoding only ratings and a graph also encoding descriptive
properties). Next, the rule-based configurations were run over the user-item-properties graph,
and we defined three different heuristics to rank the rules returned by AMIE: (i) like rules,
having a like relation in the head (e.g., likes(x, y) ∧ r(y, z) ⇒ likes(x, z)); (ii) top-k
like rules, i.e., like rules ranked by coverage; (iii) high
confidence rules, having a confidence score higher than 0.75. Of course, other strategies to
select FOL rules can be applied in this step.</p>
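The three selection heuristics can be sketched over a minimal rule representation; the (head_relation, confidence, coverage) tuple format is an assumption made for illustration:

```python
def like_rules(rules):
    """(i) keep only rules with a 'likes' relation in the head."""
    return [r for r in rules if r[0] == "likes"]

def topk_like_rules(rules, k):
    """(ii) like rules ranked by coverage, keeping the top k."""
    return sorted(like_rules(rules), key=lambda r: r[2], reverse=True)[:k]

def high_confidence_rules(rules, threshold=0.75):
    """(iii) keep rules whose confidence exceeds the threshold."""
    return [r for r in rules if r[1] > threshold]
```

Any of the three filters yields the subset of mined rules that is then passed to the joint embedding step.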
      <p>
        Baselines and Evaluation Metrics. To ensure complete reproducibility, we calculated
our evaluation metrics and selected our baselines by using the Elliot framework
(https://github.com/sisinflab/elliot) [31]. As regards the metrics, to assess the accuracy
of the recommendations we used the F1 score [32], Mean Average Precision (MAP), and
normalized Discounted Cumulative Gain (nDCG) [33, 34];
we also considered the diversity and novelty of the recommendations by calculating the Gini Index and
the Expected Popularity Complement (EPC) [35]. Finally, we also compared our methodology to
ten baselines available in Elliot: three matrix factorization techniques (SLIM [36], BPRMF [37]
and PureSVD), three methods based on deep learning models (MultiVAE [38], CFGAN [39] and
NGCF [40]), and four algorithms implementing knowledge-aware techniques (Item-KNN and
User-KNN [41], KaHFM [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and LightGCN [42]). All the algorithms were run with their optimal
parameters, selected by the Elliot framework through a grid search.
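One of the accuracy metrics above, nDCG with binary relevance, can be sketched as follows; Elliot's exact implementation may differ in normalization details:

```python
import math

def ndcg_at_k(ranked_relevance, k):
    """nDCG@k for a ranked list of binary relevance labels:
    DCG discounts each relevant hit by log2 of its rank, and the
    ideal DCG (relevant items ranked first) normalizes the score."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ranked_relevance[:k]))
    ideal = sorted(ranked_relevance, reverse=True)[:k]
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0
```

A perfect ranking scores 1.0; pushing relevant items down the list lowers the score toward 0.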
4.2. Results
(RQ1) Performance of the Framework and Selection of Logical Rules. For RQ1, we
compared the performance obtained by the basic configurations (user-item and user-item-properties),
treated as baselines, with those using FOL rules. As shown in Table 2, the baselines are almost
always outperformed by the other configurations. For ML1M, for size = 512, all rule-based
configurations overcome the baselines with a statistically significant gap; configurations based
on like rules got the best results for all accuracy and novelty metrics. For size = 768 we got good
results too, but with a less significant gap. For DBbook we got similar findings: the overall
      </p>
      <sec id="sec-4-1">
        <p>Table 3 compares our best configuration with the baselines (BPRMF, PureSVD, SLIM, MultiVAE, CFGAN, NGCF, AttributeItemKnn, AttributeUserKnn, KaHFM, LightGCN) in terms of MAP, nDCG, diversity (Gini) and novelty (EPC) on both datasets.</p>
        <p>
best results are obtained for size = 512, and the best-performing configuration is top-k like rules.
Moreover, our best-performing configuration improves over the baselines in terms of diversity and
novelty as well. As regards size = 768, the behavior we noted is in line with what we observed
for ML1M. We can also notice that user-item often overcomes user-item-properties: this means
that the descriptive properties are poorly informative; however, injecting background knowledge
encoded as FOL rules overcomes this issue and leads to the overall best results.
(RQ2) Comparison to Baselines. To answer RQ2, we compared our best-performing
configuration with some competitive baselines. Table 3 shows that our framework significantly
overcomes those baselines. For ML1M (Table 3), all metrics but diversity are outperformed
with a significant gap (p &lt; 0.01). It is worth noticing that KARS are outperformed by
methods for matrix factorization, differently from our expectations. Results on DBbook confirm
these findings: on all metrics, except for novelty, the baselines are overcome by our framework. The best baseline
in this case is KaHFM [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], and this should not be surprising, since the high sparsity of the dataset
emphasizes the importance of the descriptive features of the items. We can also note that our
approach obtained the best results for novelty and diversity on ML1M and DBbook, respectively,
so further analyses will be carried out to better understand this behavior.
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and Future Work</title>
      <p>In this paper we discussed a knowledge-aware recommendation framework based on
neuro-symbolic graph embeddings encoding first-order logical rules; it combines explicit knowledge
provided by triples in the graph and background knowledge provided by FOL rules. Our
experiments provided us with the following findings: (i) joint learning based on FOL rules and KGs
generates more precise embeddings and more accurate recommendations, in particular when using
rules with only the like relation in the head; (ii) comparisons with several baselines confirmed the
accuracy of our framework. A good impact is also noted in terms of novelty and diversity of the
recommendations. Such promising results represent a first step in the direction of developing
neuro-symbolic RSs exploiting neural and symbolic reasoning. However, given the novelty of
the current work, we are aware that several limitations exist. As an example, more accurate
methods for selecting the rules are needed, together with a more extensive analysis of the
informative power of different FOL rules. As future work, we will also strengthen the set of
baselines and datasets considered in our experiments, evaluate the system in different
domains (e.g., food recommendations [43]), and model user preferences
by exploiting different sets of features [44].
for item recommendation, in: European Semantic Web Conference, Springer, 2018, pp.
478–490.
[9] Z. Sun, J. Yang, J. Zhang, A. Bozzon, L.-K. Huang, C. Xu, Recurrent knowledge graph
embedding for effective recommendation, in: Proceedings of the 12th ACM Conference
on Recommender Systems, 2018, pp. 297–305.
[10] W. Song, Z. Duan, Z. Yang, H. Zhu, M. Zhang, J. Tang, Explainable knowledge graph-based
recommendation via deep reinforcement learning, arXiv preprint arXiv:1906.09506 (2019).
[11] H. Cai, V. W. Zheng, K. C.-C. Chang, A comprehensive survey of graph embedding:
Problems, techniques, and applications, IEEE Transactions on Knowledge and Data
Engineering 30 (2018) 1616–1637.
[12] C. Musto, P. Lops, M. de Gemmis, G. Semeraro, Context-aware graph-based
recommendations exploiting personalized pagerank, Knowledge-Based Systems 216 (2021) 106806.
[13] M. K. Sarker, L. Zhou, A. Eberhart, P. Hitzler, Neuro-symbolic artificial intelligence, AI</p>
      <p>Communications (2021) 1–13.
[14] S. Guo, Q. Wang, L. Wang, B. Wang, L. Guo, Jointly embedding knowledge graphs and
logical rules, in: Proceedings of the 2016 conference on empirical methods in natural
language processing, 2016, pp. 192–202.
[15] S. Guo, Q. Wang, L. Wang, B. Wang, L. Guo, Knowledge graph embedding with iterative
guidance from soft rules, in: Proceedings of the AAAI Conference on Artificial Intelligence,
volume 32, 2018.
[16] E. Palumbo, G. Rizzo, R. Troncy, Entity2rec: Learning user-item relatedness from
knowledge graphs for top-n item recommendation, in: Proceedings of the Eleventh ACM
Conference on Recommender Systems, ACM, 2017, pp. 32–36.
[17] C. Musto, P. Basile, G. Semeraro, Embedding knowledge graphs for semantics-aware
recommendations based on dbpedia, in: Adjunct Publication of the 27th Conference on
User Modeling, Adaptation and Personalization, 2019, pp. 27–31.
[18] S. Forouzandeh, K. Berahmand, M. Rostami, Presentation of a recommender system with
ensemble learning and graph embedding: a case on movielens, Multimedia Tools and
Applications (2020) 1–28.
[19] L. Grad-Gyenge, A. Kiss, P. Filzmoser, Graph embedding based recommendation techniques
on the knowledge graph, in: Adjunct Publication of the 25th Conference on User Modeling,
Adaptation and Personalization, 2017, pp. 354–359.
[20] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, O. Yakhnenko, Translating embeddings
for modeling multi-relational data, in: Neural Information Processing Systems (NIPS),
2013, pp. 1–9.
[21] C. Musto, G. Semeraro, P. Lops, M. d. Gemmis, F. Narducci, Leveraging social media sources
to generate personalized music playlists, in: International Conference on Electronic
Commerce and Web Technologies, Springer, 2012, pp. 112–123.
[22] C. Musto, F. Narducci, P. Lops, G. Semeraro, M. d. Gemmis, M. Barbieri, J. Korst, V. Pronk,
R. Clout, Enhanced semantic tv-show representation for personalized electronic program
guides, in: International Conference on User Modeling, Adaptation, and Personalization,
Springer, 2012, pp. 188–199.
[23] G. Piao, J. G. Breslin, Measuring semantic distance for linked open data-enabled
recommender systems, in: Proceedings of the 31st Annual ACM Symposium on Applied
Computing, 2016, pp. 315–320.
[24] S. Zhang, H. Tong, J. Xu, R. Maciejewski, Graph convolutional networks: a comprehensive
review, Computational Social Networks 6 (2019) 1–23.
[25] X. Wang, X. He, Y. Cao, M. Liu, T.-S. Chua, KGAT: Knowledge graph attention network
for recommendation, in: Proceedings of the 25th ACM SIGKDD International Conference
on Knowledge Discovery &amp; Data Mining, 2019, pp. 950–958.
[26] R. M. Smullyan, First-order logic, Courier Corporation, 1995.
[27] T. Rocktäschel, M. Bosnjak, S. Singh, S. Riedel, Low-dimensional embeddings of logic, in:</p>
      <p>Proceedings of the ACL 2014 workshop on semantic parsing, 2014, pp. 45–49.
[28] T. Rocktäschel, S. Singh, S. Riedel, Injecting logical background knowledge into embeddings
for relation extraction, in: Proceedings of the 2015 conference of the north American
Chapter of the Association for Computational Linguistics: Human Language Technologies,
2015, pp. 1119–1129.
[29] M. Polignano, C. Musto, M. de Gemmis, P. Lops, G. Semeraro, Together is better: Hybrid
recommendations combining graph embeddings and contextualized word representations,
in: Fifteenth ACM Conference on Recommender Systems, 2021, pp. 187–198.
[30] A. Bellogin, P. Castells, I. Cantador, Precision-oriented evaluation of recommender systems:
an algorithmic comparison, in: Proceedings of the fifth ACM conference on Recommender
systems, 2011, pp. 333–336.
[31] V. W. Anelli, A. Bellogín, A. Ferrara, D. Malitesta, F. A. Merra, C. Pomo, F. M. Donini, T. D.</p>
      <p>Noia, Elliot: a comprehensive and rigorous framework for reproducible recommender
systems evaluation, CoRR abs/2103.02590 (2021).
[32] F. H. Del Olmo, E. Gaudioso, Evaluation of recommender systems: A new approach, Expert</p>
      <p>Systems with Applications 35 (2008) 790–804.
[33] K. Järvelin, J. Kekäläinen, Ir evaluation methods for retrieving highly relevant documents,
in: ACM SIGIR Forum, volume 51, ACM New York, NY, USA, 2017, pp. 243–250.
[34] Y. Wang, L. Wang, Y. Li, D. He, W. Chen, T.-Y. Liu, A theoretical analysis of ndcg ranking
measures, in: Proceedings of the 26th annual conference on learning theory (COLT 2013),
volume 8, 2013, p. 6.
[35] S. Vargas, P. Castells, Rank and relevance in novelty and diversity metrics for recommender
systems, in: Proceedings of the fifth ACM conference on Recommender systems, 2011, pp.
109–116.
[36] X. Ning, G. Karypis, Slim: Sparse linear methods for top-n recommender systems, in: 2011</p>
      <p>IEEE 11th International Conference on Data Mining, IEEE, 2011, pp. 497–506.
[37] S. Rendle, C. Freudenthaler, Z. Gantner, L. Schmidt-Thieme, Bpr: Bayesian personalized
ranking from implicit feedback, arXiv preprint arXiv:1205.2618 (2012).
[38] D. Liang, R. G. Krishnan, M. D. Hoffman, T. Jebara, Variational autoencoders for
collaborative filtering, in: Proceedings of the 2018 world wide web conference, 2018, pp.
689–698.
[39] D.-K. Chae, J.-S. Kang, S.-W. Kim, J.-T. Lee, Cfgan: A generic collaborative filtering
framework based on generative adversarial networks, in: Proceedings of the 27th ACM
international conference on information and knowledge management, 2018, pp. 137–146.
[40] X. Wang, X. He, M. Wang, F. Feng, T.-S. Chua, Neural graph collaborative filtering, in:
Proceedings of the 42nd international ACM SIGIR conference on Research and development
in Information Retrieval, 2019, pp. 165–174.
[41] Z. Gantner, S. Rendle, C. Freudenthaler, L. Schmidt-Thieme, Mymedialite: A free
recommender system library, in: Proceedings of the fifth ACM conference on Recommender
systems, 2011, pp. 305–308.
[42] X. He, K. Deng, X. Wang, Y. Li, Y. Zhang, M. Wang, LightGCN: Simplifying and powering
graph convolution network for recommendation, in: Proceedings of the 43rd International
ACM SIGIR conference on research and development in Information Retrieval, 2020, pp.
639–648.
[43] C. Musto, C. Trattner, A. Starke, G. Semeraro, Towards a knowledge-aware food
recommender system exploiting holistic user models, in: Proceedings of the 28th ACM
conference on user modeling, adaptation and personalization, 2020, pp. 333–337.
[44] C. Musto, M. Polignano, G. Semeraro, M. de Gemmis, P. Lops, Myrror: a platform for
holistic user modeling, User Modeling and User-Adapted Interaction 30 (2020) 477–511.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G.</given-names>
            <surname>Spillo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Musto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>De Gemmis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lops</surname>
          </string-name>
          , G. Semeraro,
          <article-title>Knowledge-aware recommendations based on neuro-symbolic graph embeddings and first-order logical rules</article-title>
          ,
          <source>in: Proceedings of the 16th ACM Conference on Recommender Systems</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>616</fpage>
          -
          <lpage>621</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Jannach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zanker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Felfernig</surname>
          </string-name>
          , G. Friedrich,
          <source>Recommender systems: an introduction</source>
          , Cambridge University Press,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Qin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Xiong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <article-title>A survey on knowledge graphbased recommender systems</article-title>
          ,
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Musto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lops</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>de Gemmis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Semeraro</surname>
          </string-name>
          ,
          <article-title>Semantics-aware recommender systems exploiting linked open data and graph-based features</article-title>
          ,
          <source>Knowledge-Based Systems</source>
          <volume>136</volume>
          (
          <year>2017</year>
          )
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>V. W.</given-names>
            <surname>Anelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Di Noia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Di Sciascio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ragone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Trotta</surname>
          </string-name>
          ,
          <article-title>How to make latent factors interpretable by feeding factorization machines with knowledge graphs</article-title>
          , in: International Semantic Web Conference, Springer,
          <year>2019</year>
          , pp.
          <fpage>38</fpage>
          -
          <lpage>56</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>C.</given-names>
            <surname>Musto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>de Gemmis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lops</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Narducci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Semeraro</surname>
          </string-name>
          ,
          <article-title>Semantics and content-based recommendations</article-title>
          ,
          <source>in: Recommender systems handbook</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>251</fpage>
          -
          <lpage>298</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Auer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bizer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Kobilarov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cyganiak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ives</surname>
          </string-name>
          ,
          <article-title>DBpedia: A nucleus for a web of open data</article-title>
          ,
          <source>in: The semantic web</source>
          , Springer,
          <year>2007</year>
          , pp.
          <fpage>722</fpage>
          -
          <lpage>735</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>E.</given-names>
            <surname>Palumbo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Rizzo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Troncy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Baralis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Osella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Ferro</surname>
          </string-name>
          , Translational models
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>