<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Graph Augmentation with LLMs for Knowledge-Aware Recommender Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Giuseppe Spillo</string-name>
          <email>giuseppe.spillo@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cataldo Musto</string-name>
          <email>cataldo.musto@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Matteo Mannavola</string-name>
          <email>m.mannavola15@alumni.uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marco de Gemmis</string-name>
          <email>marco.degemmis@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pasquale Lops</string-name>
          <email>pasquale.lops@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giovanni Semeraro</string-name>
          <email>giovanni.semeraro@uniba.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Bari Aldo Moro, Dept. of Computer Science</institution>
          ,
          <addr-line>Via Edoardo Orabona, 4, 70125, Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>In this paper, we propose a recommendation model exploiting a graph augmentation technique based on Large Language Models (LLMs) to enrich the information in its underlying Knowledge Graph (KG). We assume that KG triples can be noisy or incomplete, leading to sub-optimal modeling of item characteristics and user preferences. Graph augmentation can thus improve data quality and yield high-quality recommendations. Accordingly, we propose a framework that starts from a KG and designs prompts for querying an LLM to augment the graph by incorporating: (a) further item features; (b) further nodes describing user preferences, obtained by reasoning over liked items. The augmented KG is then passed through a Knowledge Graph Encoder, which learns user and item embeddings. These embeddings are used to train a recommendation model providing personalized suggestions. Experiments show that LLM-based graph augmentation significantly improves our recommendation model's predictive accuracy, confirming its effectiveness and the validity of our intuitions.</p>
      </abstract>
      <kwd-group>
        <kwd>Recommender Systems</kwd>
        <kwd>Large Language Models</kwd>
        <kwd>Knowledge Graph Augmentation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Nowadays, Recommender Systems (RSs) effectively handle information overload and support user
decision-making [15]. Knowledge-Aware RSs (KARSs) exploit side information, often Knowledge Graphs
(KGs) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], to learn effective item representations and provide precise recommendations [
        <xref ref-type="bibr" rid="ref11 ref13">14, 13, 11</xref>
        ].
      </p>
      <p>
        Despite their effectiveness [
        <xref ref-type="bibr" rid="ref1">32, 1, 22, 24, 21, 25</xref>
        ], KGs have flaws: first, they may overlook descriptive
item features [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Second, KGs in KARS typically fail to encode user preferences toward item groups (e.g., fantasy
movies) or specific characteristics (e.g., a specific director).
      </p>
      <p>
        This paper [18] proposes a methodology using Large Language Models [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] (LLMs) to augment the
original KG. Our graph augmentation strategy aims to incorporate: (a) missing item features; (b) user
preferences toward item features. Our contributions are:
      </p>
      <sec id="sec-1-1">
        <p>1. We design a novel framework for LLM-based graph augmentation in KARS.</p>
        <p>2. We design prompts to extract new item feature and user preference KG triples from LLMs.</p>
        <p>3. We conduct extensive experiments, including ablation studies, and release the source code.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Related Works</title>
      <p>
        Lately, KGs have enhanced KARS performance. CKE [32] and CFKG [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] enriched Collaborative
Filtering (CF) data with item features learned from KGs using TransE [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Later methods leveraged
graph neural networks. KGCN [22] used Graph Convolutional Networks (GCNs) [34] to aggregate KG
information, capturing higher-order relationships. KGAT [24] employed Graph Attention Networks
to model high-order connections. None of them focus on encoding more detailed user preferences
inferred from liked items. KTUP [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] used TransH [26] for KG completion, transferring embeddings to
user modeling. KGNNLS [21] learned user-specific item embeddings by transforming the KG into a
user-specific graph.
      </p>
      <p>
        Most graph augmentation for recommendation uses contrastive learning by perturbing the user-item
graph [
        <xref ref-type="bibr" rid="ref9">9, 31, 29, 33</xref>
        ]. In contrast, our method leverages pre-trained LLMs for KG augmentation. While
LLMs have been explored for graphs [30, 35], their application in recommendation remains limited.
KAR [28] and LLMRec [27] are the only related works. KAR infers textual knowledge for sequential
recommendation, later encoded via BERT [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], making direct comparison difficult. LLMRec uses
LLM-based feature augmentation via OpenAI APIs. Unlike these, we use an open-weight LLM (Llama)
to generate new triples, guided by tailored prompts that enrich the KG with unseen relations (e.g., a
movie’s mood, a book’s style), enhancing recommendation quality.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Problem Formulation</title>
      <sec id="sec-3-1">
        <p>We define the data used and formalize the recommendation task.</p>
        <p>CF Data. Given a set of users 𝒰 and a set of items ℐ, user-item interactions are represented by a binary
matrix Y ∈ {0,1}^(|𝒰|×|ℐ|), where y_(u,i) = 1 if user u liked item i, and 0 otherwise. These data can also be
modeled as an interaction graph 𝒢 with nodes for users and items and edges labeled like/dislike.</p>
        <p>Knowledge Graph. Each item i ∈ ℐ is described by triples from a knowledge graph 𝒦 of the form (item,
relation, entity) (e.g., (Tender is the Night, author, Fitzgerald)).</p>
        <p>Graph Augmentation. We use LLMs to generate additional triples describing item features and user
preferences. For each item i, a prompt p_i is used to produce triples t_i = LLM(p_i) (e.g., (Nixon, theme,
political corruption)); their union forms a new KG 𝒦_ℐ. Similarly, for each user u, we prompt the LLM with their
liked items to generate a KG 𝒦_𝒰 describing their preferences (e.g., (user83, fav_setting, post-apocalyptic)).
We then build an augmented KG 𝒦_aug by merging 𝒢, 𝒦, and either or both of 𝒦_ℐ and 𝒦_𝒰.</p>
        <p>Representation Learning. We encode the augmented KG with a KG encoder to learn embeddings u⃗
and i⃗ for users and items, which are fed into a neural network for recommendation.</p>
        <p>Problem Definition. Given 𝒦_aug and model parameters Θ, we learn a function f̂(u, i | 𝒦_aug, Θ) that
predicts user u's interest in item i. We evaluate in a top-k recommendation setting based on predicted
scores.</p>
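        <p>The entities above can be sketched with a few toy data structures. The following Python fragment is purely illustrative (the names and sample data are ours, not the authors'): it shows how the augmented KG arises as a union of interaction edges, original triples, and LLM-generated triples.</p>
        <preformat>
```python
# Minimal sketch of the data model in Section 3 (all names are illustrative).

# CF data: y[u][i] = 1 if user u liked item i, 0 otherwise.
interactions = {"user83": {"Tender is the Night": 1, "Nixon": 0}}

# Interaction graph G: edges labeled like/dislike, stored as triples.
graph = {
    (u, "like" if r == 1 else "dislike", i)
    for u, items in interactions.items()
    for i, r in items.items()
}

# Original KG and LLM-generated item/user triples (K, K_I, K_U).
kg = {("Tender is the Night", "author", "Fitzgerald")}
kg_items = {("Nixon", "theme", "political corruption")}
kg_users = {("user83", "fav_setting", "post-apocalyptic")}

def build_augmented_kg(graph, kg, kg_items=frozenset(), kg_users=frozenset()):
    """Merge G, K, and either or both LLM-generated graphs into K_aug."""
    return set(graph) | set(kg) | set(kg_items) | set(kg_users)

kg_aug = build_augmented_kg(graph, kg, kg_items, kg_users)
```
        </preformat>
        <p>Dropping kg_items or kg_users from the call reproduces the partially augmented configurations compared in the experiments.</p>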
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Methodology</title>
      <p>The augmentation workflow is shown in Figure 1. Starting from the user-item graph 𝒢 (Section 3),
we use LLMs for Item Feature Inference and User Preference Inference, generating triples of the form
(user/item, relation, entity). Merging these outputs forms the augmented KG used in our KARS. A
Knowledge Graph Encoder then learns user/item embeddings, which are fed into the RecSys module to
generate recommendations. We detail each module below.</p>
      <p>
Item Feature Inference Module. To infer item features as KG triples, we use a zero-shot prompting
approach with an LLM. Our prompt takes an item’s name and returns descriptive triples. The prompt has
three parts: a System Prompt (instructing the LLM about the task), a User Prompt (providing task-specific
instance details), and the Model Output (the LLM’s response). In the System Prompt, we ask the LLM
to generate relevant item features for a given domain and specify the output format. In particular,
we request both common KG features (e.g., author, topic, genre) and novel ones (e.g., writing style,
mood) typically absent from KGs. This prompt allows the LLM both to complete missing knowledge in the
original KG, addressing data sparsity, and to incorporate new features drawn from its pre-training,
enriching the data model. The process is repeated for all items i ∈ ℐ, and the
combined triples form 𝒦_ℐ, representing all LLM-generated item features.</p>
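      <p>As an illustration of this prompting scheme, the following sketch assembles a three-part prompt and parses triples from a model response. The wording, function names, and sample output are our assumptions, not the authors' actual prompts.</p>
      <preformat>
```python
import re

# Hypothetical System Prompt for item feature inference (illustrative wording).
SYSTEM_PROMPT = (
    "You are a knowledge engineer for the {domain} domain. Given an item, "
    "return KG triples of the form (item, relation, entity), mixing common "
    "features (author, topic, genre) with novel ones (writing style, mood)."
)

def item_prompt(item_name, domain):
    """Combine the System Prompt with the task-specific User Prompt."""
    user = f"Item: {item_name}. List its features as triples."
    return SYSTEM_PROMPT.format(domain=domain) + "\n\n" + user

TRIPLE_RE = re.compile(r"\(([^,()]+),\s*([^,()]+),\s*([^()]+)\)")

def parse_triples(model_output):
    """Extract (head, relation, tail) triples from the Model Output text."""
    return [tuple(p.strip() for p in m.groups())
            for m in TRIPLE_RE.finditer(model_output)]

# Hand-written stand-in for an LLM response about the movie Nixon:
output = "(Nixon, theme, political corruption)\n(Nixon, mood, somber)"
triples = parse_triples(output)
```
      </preformat>
      <p>Repeating this for every item and taking the union of the parsed triples would yield the item-feature graph described above.</p>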
      <p>User Preference Inference Module. Next, a similar process is designed to infer user preferences in
the form of KG triples. To this end, we zero-shot prompt an LLM so that, given the list of
items the user likes, it returns a set of triples encoding their preferences. The structure of the
prompt is similar, but rather than incorporating further knowledge about the items, the goal of this
part of the augmentation process is to introduce new edges in the original KG. In our vision, these
edges may improve the quality of the underlying data model, thus improving the performance in a
downstream recommendation task as well. Regarding the choice of the elements included in the prompt,
we point out again that we mixed features encoded in the original KG (i.e., preferred genre or authors)
and more fine-grained characteristics, such as the mood of movies liked by the user. Also in this case,
the prompt is generated for all users u ∈ 𝒰 based on the items they like, and the triples returned by
the LLM are merged into the graph 𝒦_𝒰, which encodes all the user features obtained through the graph
augmentation process.</p>
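      <p>A corresponding prompt builder for user preference inference might look as follows; again, the wording and names are hypothetical stand-ins for the actual prompts.</p>
      <preformat>
```python
# Illustrative sketch of the User Preference Inference prompt: given the
# items a user likes, ask the LLM for preference triples.
def user_prompt(user_id, liked_items):
    system = (
        "Given the items a user likes, return triples (user, relation, entity) "
        "describing their preferences, mixing features from the original KG "
        "(preferred genres, authors) with fine-grained ones (e.g., mood)."
    )
    liked = "; ".join(liked_items)
    user = f"User {user_id} liked: {liked}. Infer preference triples."
    return system + "\n\n" + user

prompt = user_prompt("user83", ["The Road", "I Am Legend"])
```
      </preformat>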
      <p>
        Knowledge Graph Encoder and RecSys Modules. After prompting the LLMs, we build an
augmented graph 𝒦_aug that merges 𝒢 and 𝒦 with either 𝒦_ℐ or 𝒦_𝒰, or with both. Based
on this data model, a Knowledge Graph Encoder learns embeddings representing
users and items in the augmented graph. In this work, we used CompGCN [20] as the Graph Encoder.
This choice is justified by the competitive performance shown by other models exploiting GCNs for
recommendation tasks [
        <xref ref-type="bibr" rid="ref8">8, 16, 23, 17</xref>
        ]. More details about this encoder can be found in the original
papers [20, 18]. After the encoding, the resulting user and item embeddings are used to train a deep
recommendation architecture; in particular, this is trained on a subset of ratings from 𝒢 using binary
cross-entropy as the loss function. During inference, scores are computed for all test set items, ranked,
and the top-k items form the recommendation list.
      </p>
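      <p>The inference step of the RecSys module can be sketched as follows. The sigmoid dot-product scorer and the toy embeddings are our simplification of the deep architecture, not the actual trained model.</p>
      <preformat>
```python
import heapq
import math

def score(u_vec, i_vec):
    """Sigmoid of the dot product, standing in for the output probability of
    the network (which is trained with binary cross-entropy)."""
    dot = sum(a * b for a, b in zip(u_vec, i_vec))
    return 1.0 / (1.0 + math.exp(-dot))

def top_k(u_vec, item_vecs, k):
    """Score all test-set items for a user and return the top-k item ids."""
    scored = ((score(u_vec, vec), item) for item, vec in item_vecs.items())
    return [item for _, item in heapq.nlargest(k, scored)]

# Toy embeddings standing in for the Knowledge Graph Encoder's output.
user_vec = [0.5, 1.0]
items = {"a": [1.0, 1.0], "b": [-1.0, 0.0], "c": [0.0, 2.0]}
ranking = top_k(user_vec, items, 2)   # the 2 highest-scoring items
```
      </preformat>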
    </sec>
    <sec id="sec-5">
      <title>5. Experimental Evaluation</title>
      <p>Our experiments aimed to answer the following research question: how does each KG configuration contribute
to the overall performance of the model?</p>
      <sec id="sec-5-1">
        <title>5.1. Experimental Design</title>
        <p>
          Datasets and Knowledge Graphs. We considered DBbook and MovieLens1M (ML1M) for our
experiments. We used DBpedia [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] as the base KG, using publicly available mappings. We augmented
these KGs according to the methodology introduced above. More details are in our full paper [18].
Protocol. Graph augmentation relies on training data: positive ratings are used to infer user preferences
(𝒦_𝒰), and training items are used to infer item features (𝒦_ℐ). User embeddings are learned on the training
set, and top-k recommendations are generated by ranking predicted scores on the test set.
Implementation Details. We use Llama3-ChatQA-1.5-8B with 4-bit quantization via BitsAndBytes
for efficiency. Graph encoding is done with PyKEEN's CompGCN, and the recommendation model is
implemented in PyTorch; both are available in our repository.
Experimental Settings. The Graph Encoder is trained for 15 epochs, with embedding size d = 64 and
3 GCN layers. Recommendation models are trained for 30 epochs (batch size 512, learning rate 0.01,
β₁ = 0.9, Adam optimizer). Dropout is set to 0.2 for DBbook and 0.4 for ML1M.
        </p>
        <p>Ablation Studies. To evaluate our method’s effectiveness, we compare recommendation performance
across eight graph variants: (1) CF-only (𝒢), (2) original KG (𝒢 + 𝒦), (3) item-only LLM graph (𝒦_ℐ),
(4) user-only LLM graph (𝒦_𝒰), (5) full LLM-generated KG (𝒦_ℐ + 𝒦_𝒰), (6) item-augmented KG (𝒦
+ 𝒦_ℐ), (7) user-augmented KG (𝒦 + 𝒦_𝒰), and (8) fully augmented KG (𝒢 + 𝒦 + 𝒦_ℐ + 𝒦_𝒰). We
analyze and compare results across these setups.</p>
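        <p>The eight variants can be read as keeping the CF graph 𝒢 while adding every subset of the three side-information sources; a small sketch of this enumeration (illustrative, not the authors' code):</p>
        <preformat>
```python
from itertools import combinations

# The interaction graph G is always kept; every subset of the side-information
# sources {K, K_I, K_U} is added on top, giving 2**3 = 8 configurations.
SOURCES = ["K", "K_I", "K_U"]

def ablation_variants():
    variants = []
    for r in range(len(SOURCES) + 1):
        for combo in combinations(SOURCES, r):
            variants.append(("G",) + combo)
    return variants

configs = ablation_variants()   # 8 graph configurations
```
        </preformat>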
        <p>
          Evaluation Metrics. We evaluate with ClayRS [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] for reproducibility. Accuracy metrics include
Precision, Recall, F1, and nDCG; diversity and novelty are assessed with Gini Index, EPC [19], and
APLT. Paired t-tests assess statistical significance.
        </p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Discussion of the Results</title>
        <p>Ablation Study. Table 2 reports ablation results across combinations of 𝒦, 𝒦_ℐ, and 𝒦_𝒰, with ∅
indicating no side information. On DBbook, the user-augmented KG (𝒦 + 𝒦_𝒰) performed best,
highlighting the value of LLM-inferred user preferences in sparse settings. On ML1M, the item-augmented
KG (𝒦 + 𝒦_ℐ) led, suggesting item features are more beneficial in denser datasets, as also supported by
the strong performance of ∅. Combining all sources (𝒦 + 𝒦_ℐ + 𝒦_𝒰) sometimes hurt performance,
likely due to noise from excessive entities and relations. Graph augmentation also improved novelty,
especially on DBbook. Notably, the purely LLM-generated KGs (𝒦_ℐ, 𝒦_𝒰) performed on par with the
original KG, underscoring LLMs’ potential for knowledge creation. Overall, the results confirm that
graph augmentation improves performance, with optimal gains depending on dataset characteristics.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>We proposed a general graph augmentation methodology for KARSs using LLM prompting to infer
missing item features and user preferences. Focusing on DBpedia, our approach enriches KGs with
LLM-generated knowledge. Experiments, including ablation studies and baselines, validated its effectiveness.
While LLM limitations like hallucinations exist, carefully designed prompts helped mitigate them, as
supported by our results. Future work will explore adding richer context (e.g., item abstracts or plots)
to improve prompt quality.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This research is partially funded by the PNRR project FAIR—Future AI Research (PE00000013),
Spoke 6—Symbiotic AI, under the NRRP MUR program supported by NextGenerationEU (CUP
H97G22000210007), and the PHaSE project — Promoting Healthy and Sustainable Eating through
Interactive and Explainable AI Methods, funded by MUR under the PRIN 2022 program - Finanziato
dall’Unione europea - NextGeneration EU, Missione 4 Componente 1 (CUP H53D23003530006).</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <sec id="sec-8-1">
        <p>During the preparation of this work, the authors did not use any AI tool.</p>
        <p>[13, cont.] In Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization, pages
239–244, 2018.
[14] Cataldo Musto, Pasquale Lops, Marco de Gemmis, and Giovanni Semeraro. Context-aware
graph-based recommendations exploiting personalized pagerank. Knowledge-Based Systems, 216:106806,
2021.
[15] Paul Resnick and Hal R Varian. Recommender systems. Communications of the ACM, 40(3):56–58,
1997.
[16] Giuseppe Spillo, Cataldo Musto, Marco Polignano, Pasquale Lops, Marco de Gemmis, and Giovanni
Semeraro. Combining graph neural networks and sentence encoders for knowledge-aware
recommendations. In Proceedings of the 31st ACM Conference on User Modeling, Adaptation and
Personalization, UMAP ’23, page 1–12, New York, NY, USA, 2023. Association for Computing Machinery.</p>
        <p>ISBN 9781450399326. doi: 10.1145/3565472.3592965. URL https://doi.org/10.1145/3565472.3592965.
[17] Giuseppe Spillo, Francesco Bottalico, Cataldo Musto, Marco De Gemmis, Pasquale Lops, and
Giovanni Semeraro. Evaluating content-based pre-training strategies for a knowledge-aware
recommender system based on graph neural networks. In Proceedings of the 32nd ACM Conference
on User Modeling, Adaptation and Personalization, pages 165–171, 2024.
[18] Giuseppe Spillo, Cataldo Musto, Matteo Mannavola, Marco de Gemmis, Pasquale Lops, and
Giovanni Semeraro. Gal-kars: Exploiting llms for graph augmentation in knowledge-aware
recommender systems. In Proceedings of the 33rd ACM Conference on User Modeling, Adaptation
and Personalization, pages 73–82, 2025.
[19] Saúl Vargas and Pablo Castells. Rank and relevance in novelty and diversity metrics for
recommender systems. In Proceedings of the fifth ACM conference on Recommender systems, pages
109–116, 2011.
[20] Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha Talukdar. Composition-based
multi-relational graph convolutional networks. arXiv preprint arXiv:1911.03082, 2019.
[21] Hongwei Wang, Fuzheng Zhang, Mengdi Zhang, Jure Leskovec, Miao Zhao, Wenjie Li, and
Zhongyuan Wang. Knowledge-aware graph neural networks with label smoothness regularization
for recommender systems. In Proceedings of the 25th ACM SIGKDD International Conference
on Knowledge Discovery &amp; Data Mining, KDD ’19, page 968–977, New York, NY, USA, 2019.
Association for Computing Machinery. ISBN 9781450362016. doi: 10.1145/3292500.3330836. URL
https://doi.org/10.1145/3292500.3330836.
[22] Hongwei Wang, Miao Zhao, Xing Xie, Wenjie Li, and Minyi Guo. Knowledge graph convolutional
networks for recommender systems. In The world wide web conference, pages 3307–3313, 2019.
[23] Hongwei Wang, Miao Zhao, Xing Xie, Wenjie Li, and Minyi Guo. Knowledge graph convolutional
networks for recommender systems. In The world wide web conference, pages 3307–3313, 2019.
[24] Xiang Wang, Xiangnan He, Yixin Cao, Meng Liu, and Tat-Seng Chua. Kgat: Knowledge graph
attention network for recommendation. In Proceedings of the 25th ACM SIGKDD international
conference on knowledge discovery &amp; data mining, pages 950–958, 2019.
[25] Xiang Wang, Tinglin Huang, Dingxian Wang, Yancheng Yuan, Zhenguang Liu, Xiangnan He, and
Tat-Seng Chua. Learning intents behind interactions with knowledge graph for recommendation.</p>
        <p>In Proceedings of the web conference 2021, pages 878–887, 2021.
[26] Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. Knowledge graph embedding by
translating on hyperplanes. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 28,
2014.
[27] Wei Wei, Xubin Ren, Jiabin Tang, Qinyong Wang, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin,
and Chao Huang. Llmrec: Large language models with graph augmentation for recommendation.
In Proceedings of the 17th ACM International Conference on Web Search and Data Mining, pages
806–815, 2024.
[28] Yunjia Xi, Weiwen Liu, Jianghao Lin, Xiaoling Cai, Hong Zhu, Jieming Zhu, Bo Chen, Ruiming Tang,
Weinan Zhang, and Yong Yu. Towards open-world recommendation with knowledge augmentation
from large language models. In Proceedings of the 18th ACM Conference on Recommender Systems,
pages 12–22, 2024.
[29] Lixiang Xu, Yusheng Liu, Tong Xu, Enhong Chen, and Yuanyan Tang. Graph augmentation
empowered contrastive learning for recommendation. ACM Transactions on Information Systems,
2024.
[30] Liang Yao, Jiazhen Peng, Chengsheng Mao, and Yuan Luo. Exploring large language models for
knowledge graph completion. arXiv preprint arXiv:2308.13916, 2023.
[31] Junliang Yu, Hongzhi Yin, Xin Xia, Tong Chen, Lizhen Cui, and Quoc Viet Hung Nguyen. Are graph
augmentations necessary? simple graph contrastive learning for recommendation. In Proceedings
of the 45th international ACM SIGIR conference on research and development in information retrieval,
pages 1294–1303, 2022.
[32] Fuzheng Zhang, Nicholas Jing Yuan, Defu Lian, Xing Xie, and Wei-Ying Ma. Collaborative
knowledge base embedding for recommender systems. In Proceedings of the 22nd ACM SIGKDD
international conference on knowledge discovery and data mining, pages 353–362, 2016.
[33] Qianru Zhang, Lianghao Xia, Xuheng Cai, Siu-Ming Yiu, Chao Huang, and Christian S Jensen.</p>
        <p>Graph augmentation for recommendation. In 2024 IEEE 40th International Conference on Data
Engineering (ICDE), pages 557–569. IEEE, 2024.
[34] Si Zhang, Hanghang Tong, Jiejun Xu, and Ross Maciejewski. Graph convolutional networks: a
comprehensive review. Computational Social Networks, 6(1):1–23, 2019.
[35] Yichi Zhang, Zhuo Chen, Lingbing Guo, Yajing Xu, Wen Zhang, and Huajun Chen. Making large
language models perform better in knowledge graph completion. In Proceedings of the 32nd ACM
International Conference on Multimedia, pages 233–242, 2024.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Qingyao</given-names>
            <surname>Ai</surname>
          </string-name>
          , Vahid Azizi, Xu Chen, and
          <string-name>
            <given-names>Yongfeng</given-names>
            <surname>Zhang</surname>
          </string-name>
          .
          <article-title>Learning heterogeneous knowledge base embeddings for explainable recommendation</article-title>
          .
          <source>Algorithms</source>
          ,
          <volume>11</volume>
          (
          <issue>9</issue>
          ):
          <fpage>137</fpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Sören</given-names>
            <surname>Auer</surname>
          </string-name>
          , Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and
          <string-name>
            <given-names>Zachary</given-names>
            <surname>Ives</surname>
          </string-name>
          .
          <article-title>Dbpedia: A nucleus for a web of open data</article-title>
          .
          <source>In international semantic web conference</source>
          , pages
          <fpage>722</fpage>
          -
          <lpage>735</lpage>
          . Springer,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Antoine</given-names>
            <surname>Bordes</surname>
          </string-name>
          , Nicolas Usunier, Alberto Garcia-Duran,
          <string-name>
            <given-names>Jason</given-names>
            <surname>Weston</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Oksana</given-names>
            <surname>Yakhnenko</surname>
          </string-name>
          .
          <article-title>Translating embeddings for modeling multi-relational data</article-title>
          .
          <source>Advances in neural information processing systems</source>
          ,
          <volume>26</volume>
          :
          <fpage>2787</fpage>
          -
          <lpage>2795</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Yixin</given-names>
            <surname>Cao</surname>
          </string-name>
          , Xiang Wang, Xiangnan He,
          <string-name>
            <given-names>Zikun</given-names>
            <surname>Hu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Tat-Seng</given-names>
            <surname>Chua</surname>
          </string-name>
          .
          <article-title>Unifying knowledge graph learning and recommendation: Towards a better understanding of user preferences</article-title>
          .
          <source>In The world wide web conference</source>
          , pages
          <fpage>151</fpage>
          -
          <lpage>161</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Zefeng</given-names>
            <surname>Chen</surname>
          </string-name>
          , Wensheng Gan, Jiayang Wu, Kaixia Hu, and
          <string-name>
            <given-names>Hong</given-names>
            <surname>Lin</surname>
          </string-name>
          .
          <article-title>Data scarcity in recommendation systems: A survey</article-title>
          .
          <source>ACM Transactions on Recommender Systems</source>
          ,
          <volume>3</volume>
          (
          <issue>3</issue>
          ),
          <year>March 2025</year>
          . doi: 10.1145/3639063. URL https://doi.org/10.1145/3639063.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Janneth</given-names>
            <surname>Chicaiza</surname>
          </string-name>
          and
          <string-name>
            <given-names>Priscila</given-names>
            <surname>Valdiviezo-Diaz</surname>
          </string-name>
          .
          <article-title>A comprehensive survey of knowledge graph-based recommender systems: Technologies, development, and contributions</article-title>
          .
          <source>Information</source>
          ,
          <volume>12</volume>
          (
          <issue>6</issue>
          ):
          <fpage>232</fpage>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Jacob</given-names>
            <surname>Devlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ming-Wei</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Kenton</given-names>
            <surname>Lee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Kristina</given-names>
            <surname>Toutanova</surname>
          </string-name>
          .
          <article-title>Bert: Pre-training of deep bidirectional transformers for language understanding</article-title>
          .
          <source>arXiv preprint arXiv:1810.04805</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Xiangnan</given-names>
            <surname>He</surname>
          </string-name>
          , Kuan Deng, Xiang Wang,
          <string-name>
            <given-names>Yan</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Yongdong</given-names>
            <surname>Zhang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Meng</given-names>
            <surname>Wang</surname>
          </string-name>
          .
          <article-title>Lightgcn: Simplifying and powering graph convolution network for recommendation</article-title>
          .
          <source>In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval</source>
          , pages
          <fpage>639</fpage>
          -
          <lpage>648</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Prannay</given-names>
            <surname>Khosla</surname>
          </string-name>
          , Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and
          <string-name>
            <given-names>Dilip</given-names>
            <surname>Krishnan</surname>
          </string-name>
          .
          <article-title>Supervised contrastive learning</article-title>
          .
          <source>Advances in neural information processing systems</source>
          ,
          <volume>33</volume>
          :
          <fpage>18661</fpage>
          -
          <lpage>18673</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Pranjal</given-names>
            <surname>Kumar</surname>
          </string-name>
          .
          <article-title>Large language models (llms): survey, technical frameworks, and future challenges</article-title>
          .
          <source>Artificial Intelligence Review</source>
          ,
          <volume>57</volume>
          (
          <issue>10</issue>
          ):
          <fpage>260</fpage>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Pasquale</given-names>
            <surname>Lops</surname>
          </string-name>
          , Marco de Gemmis, Giovanni Semeraro, Cataldo Musto, Fedelucio Narducci, and
          <string-name>
            <given-names>Massimo</given-names>
            <surname>Bux</surname>
          </string-name>
          .
          <article-title>A semantic content-based recommender system integrating folksonomies for personalized access</article-title>
          .
          <source>Web Personalization in Intelligent Environments</source>
          , pages
          <fpage>27</fpage>
          -
          <lpage>47</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Pasquale</given-names>
            <surname>Lops</surname>
          </string-name>
          , Marco Polignano, Cataldo Musto, Antonio Silletti, and
          <string-name>
            <given-names>Giovanni</given-names>
            <surname>Semeraro</surname>
          </string-name>
          .
          <article-title>Clayrs: An end-to-end framework for reproducible knowledge-aware recommender systems</article-title>
          .
          <source>Information Systems</source>
          ,
          <volume>119</volume>
          :
          <fpage>102273</fpage>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Cataldo</given-names>
            <surname>Musto</surname>
          </string-name>
          , Tiziano Franza, Giovanni Semeraro, Marco De Gemmis, and
          <string-name>
            <given-names>Pasquale</given-names>
            <surname>Lops</surname>
          </string-name>
          .
          <article-title>Deep content-based recommender systems exploiting recurrent neural networks and linked open data</article-title>
          .
          <source>In Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization</source>
          , pages
          <fpage>239</fpage>
          -
          <lpage>244</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>