<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>The Role Evolution of KGs in Synthesizing with LLMs: From Background Knowledge to Joint Reasoning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Chuangtao Ma</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, Aalborg University</institution>
          ,
          <addr-line>Aalborg</addr-line>
          ,
          <country country="DK">Denmark</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Knowledge Graphs</institution>
          ,
          <addr-line>Large Language Models, KG-RAG, Knowledge Augmentation</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>02</volume>
      <issue>2025</issue>
      <abstract>
        <p>Knowledge Graphs (KGs), as graph-based structured knowledge, maintain the rich relationships among traceable and verifiable facts and evidence, and have been investigated to address the inherent limitations of large language models (LLMs), such as hallucinations, limited reasoning capabilities, and poor interpretability. Recent years have witnessed the role of KGs in synthesizing with LLMs evolving from background knowledge to joint reasoning. This work gives a brief introduction to recent works on augmenting LLMs with KGs and highlights the evolving role of KGs, i.e., from KGs serving as passive background knowledge to KGs actively getting involved in joint reasoning processes with LLMs. It summarizes the key techniques, strengths, limitations, and KG requirements of the approaches with different KG roles in augmenting LLMs with KGs, and their applications in several downstream tasks. It also discusses the open challenges and future directions for developing more efficient and trustworthy reasoning over LLMs and KGs.</p>
      </abstract>
      <kwd-group>
        <kwd>Knowledge Graphs</kwd>
        <kwd>Large Language Models</kwd>
        <kwd>KG-RAG</kwd>
        <kwd>Knowledge Augmentation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>CEUR
ceur-ws.org
Feature
Data Structure
Knowledge Type
Processing Style
Primary Use Case</p>
    </sec>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>Recent years have witnessed significant achievements and wide applications of large language models
(LLMs) in natural language understanding and generation, such as question answering (QA), content
generation, and text summarization. Knowledge Graphs (KGs), as graph-based structured knowledge,
maintain the rich relationships among factual entities, causality events, and other entities, which
provides factual and traceable knowledge. There are several differences between KGs and LLMs in
terms of data structure, knowledge type, processing style, and use case, as summarized in Table 1,
which shape their respective limitations and strengths. For instance, LLMs encounter the challenges
of hallucinations and poor explainability due to a lack of up-to-date domain knowledge, and struggle
with complex and multi-hop reasoning. In contrast, KGs offer reliable and factual knowledge from
commonsense and domain-specific sources, which enables hallucination mitigation and explainable
responses based on symbolic reasoning and graph traversal.</p>
      <table-wrap id="tbl1">
        <label>Table 1</label>
        <caption>
          <p>Comparison of LLMs and KGs.</p>
        </caption>
        <table>
          <thead>
            <tr><th>Feature</th><th>LLMs</th><th>KGs</th></tr>
          </thead>
          <tbody>
            <tr><td>Data Structure</td><td>Unstructured text-based, sequential tokens</td><td>Structured, graph-based (triples)</td></tr>
            <tr><td>Knowledge Type</td><td>Implicit, parametric, commonsense knowledge</td><td>Explicit, factual, domain-specific knowledge</td></tr>
            <tr><td>Processing Style</td><td>Intuitive, implicit, next token prediction</td><td>Logical reasoning, graph query, path traversal</td></tr>
            <tr><td>Primary Use Case</td><td>QA, content generation, text summarization</td><td>KGQA, recommendation, entity disambiguation</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <p>
        To take advantage of their strengths and mutually benefit from LLMs and KGs, a roadmap for
unifying LLMs and KGs was designed [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] in which LLMs and KGs bidirectionally enhance and augment each other.
Motivated by this roadmap, an increasing number of works in synthesizing LLMs and KGs have been
investigated to address the inherent limitations of LLMs [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The role of KGs in synthesizing with LLMs
has evolved from background knowledge for unidirectional enhancement of LLMs to joint reasoning with
LLMs for mutual collaboration. This extended abstract is based on the talk at the KG-STAR@ESWC2025
workshop, which aims to give a brief overview of KGs’ role evolution when synthesizing with LLMs
summarize the strengths, limitations, and KG requirements of the approaches with different KG
roles in augmenting LLMs with KGs, and their applications.
      </p>
    </sec>
    <sec id="sec-3">
      <title>2. Role Evolution of KGs in Synthesizing with LLMs</title>
      <p>The role of KGs in synthesizing with LLMs is evolving from passive background knowledge to active
involvement in reasoning with LLMs to augment their capabilities.</p>
      <sec id="sec-3-1">
        <title>2.1. Background Knowledge</title>
        <p>In the early synthesis paradigm, KGs serve as background knowledge for augmenting LLMs: the
factual knowledge retrieved from KGs is directly incorporated into LLMs via pre-training, fine-tuning,
and KG-based RAG for knowledge-intensive tasks.</p>
        <sec id="sec-3-1-1">
          <title>2.1.1. Pre-training and Fine-tuning</title>
          <p>
            The initial phase of synthesizing LLMs with KGs, where KGs serve as background knowledge, has
been realized in the paradigm of pretraining and fine-tuning LLMs with KGs. In the paradigm of
pretraining, text-KG pairs were retrieved and created based on entity linking, and then a cross-modal
encoder with the modality interaction token was introduced to bidirectionally fuse the text-KG pair for
joint learning and reasoning [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ]. Unlike the pretraining paradigm, where retraining is required when
updating knowledge, KG fine-tuning aims to fine-tune LLMs with domain-specific knowledge for a
specific knowledge-intensive task in a cost-effective manner [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ]. To integrate KGs with LLMs at the
parameter level, a parameter-efficient fine-tuning (PEFT) method was proposed [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ] by introducing a
KG adapter layer for bidirectional fusion and updating of token representations for joint reasoning.
          </p>
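          <p>To make the adapter idea concrete, the following is a minimal sketch of fusing token representations with linked KG entity embeddings through a small bottleneck adapter; the shapes, the residual design, and all names are illustrative assumptions rather than the published KG-Adapter architecture [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ].</p>
          <preformat>
# Minimal sketch of adapter-style KG fusion (illustrative assumption, not the
# published KG-Adapter architecture): hidden states from a frozen LLM layer
# are fused with aligned KG entity embeddings via a small trainable adapter.
import numpy as np

rng = np.random.default_rng(0)

d_model, d_kg, d_bottleneck = 64, 32, 16
W_down = rng.normal(scale=0.02, size=(d_model + d_kg, d_bottleneck))
W_up = rng.normal(scale=0.02, size=(d_bottleneck, d_model))

def kg_adapter(token_states, entity_embs):
    """Fuse per-token hidden states (n, d_model) with linked KG entity
    embeddings (n, d_kg); tokens without a linked entity get zero vectors."""
    fused = np.concatenate([token_states, entity_embs], axis=-1)
    delta = np.tanh(fused @ W_down) @ W_up   # bottleneck adapter
    return token_states + delta              # residual update of token states

tokens = rng.normal(size=(5, d_model))       # 5 token states from one layer
entities = rng.normal(size=(5, d_kg))        # entity-linked KG embeddings
print(kg_adapter(tokens, entities).shape)    # (5, 64)
          </preformat>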
        </sec>
        <sec id="sec-3-1-2">
          <title>2.1.2. KG-based RAG</title>
          <p>
            With the help of prompt engineering, well-crafted prompts can guide LLMs to effectively
utilize the external knowledge from KGs for hallucination mitigation [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ]. Thereby, KG-based retrieval-augmented
generation (KG-RAG) [
            <xref ref-type="bibr" rid="ref7 ref8 ref9">7, 8, 9</xref>
            ] is designed to initially retrieve the relevant knowledge
from KGs and then feed the retrieved knowledge to LLMs in the form of a prompt. For instance,
KG2RAG [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ] expands the textual chunks with the retrieved KG by leveraging breadth-first search (BFS) over KGs and
then incorporates the expanded chunks with the prompt for augmenting the generation. Similarly, a
KG-RAG-based fake news detection method [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ] was proposed to retrieve the evidence from constructed
KGs for augmenting LLMs in veracity prediction. To mitigate the hallucination of LLMs, KG-Infused
RAG [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ] augments RAG with KGs by integrating retrieved relevant triples based on query-based entity
retrieval and iterative triple expansion, and by providing reasoning paths for the generated response.
          </p>
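          <p>To make the retrieve-then-prompt pipeline concrete, the following minimal sketch, in the spirit of KG2RAG [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ], BFS-expands the query entities over a toy triple store and serializes the retrieved subgraph into the prompt; the triples, hop limit, and prompt template are illustrative assumptions.</p>
          <preformat>
# Minimal KG-RAG sketch (illustrative; toy triples and prompt template):
# BFS-expand query entities over a triple store, then serialize the
# retrieved subgraph into the prompt handed to the LLM.
from collections import deque

TRIPLES = [
    ("Aalborg", "located_in", "Denmark"),
    ("Denmark", "part_of", "Scandinavia"),
    ("Aalborg", "has_university", "Aalborg University"),
]

def bfs_expand(seeds, triples, max_hops=2):
    """Collect all triples reachable from the seed entities within max_hops."""
    frontier = deque((s, 0) for s in seeds)
    seen, hits = set(seeds), []
    while frontier:
        entity, hop = frontier.popleft()
        if hop == max_hops:
            continue
        for h, r, t in triples:
            if h == entity:
                hits.append((h, r, t))
                if t not in seen:
                    seen.add(t)
                    frontier.append((t, hop + 1))
    return hits

def build_prompt(question, seed_entities):
    facts = "\n".join(f"({h}, {r}, {t})"
                      for h, r, t in bfs_expand(seed_entities, TRIPLES))
    return f"Answer using these KG facts:\n{facts}\n\nQuestion: {question}"

print(build_prompt("Which country is Aalborg University in?", ["Aalborg"]))
          </preformat>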
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>2.2. From Background Knowledge to Joint Reasoning</title>
        <p>Although the hallucination of LLMs can be mitigated by synthesizing the relevant factual knowledge
from KGs with LLMs via the early synthesis paradigm of LLMs and KGs, i.e., joint learning, fine-tuning,
and KG-based RAG, this paradigm faces several challenges and limitations. The limitations of the early synthesis
paradigm, where KGs serve as background knowledge, lie in: (1) Re-training or fine-tuning is required
when KGs are updated. In the approaches of LLMs and KGs joint training and fine-tuning, KGs were
passively integrated with LLMs, as an external knowledge context where the factual knowledge from
KGs was injected into LLMs via modality interaction and fusion layers. (2) Failure to fully leverage the structural
knowledge of KGs. In the approaches of KG-RAG, the relevant subgraphs from KGs were retrieved and
incorporated into LLMs as parts of the prompt, but they treated retrieved KGs as flat textual knowledge
rather than as a graph structure. (3) KGs&#8217; passive involvement with LLMs for unidirectional integration.
The knowledge integration in KG-RAG-based approaches is unidirectional, where KGs are passively
involved with LLMs; this fails to support sophisticated reasoning that combines both implicit and
explicit knowledge from LLMs and KGs. To address the above challenges and limitations, the paradigms
of joint reasoning with LLMs and KGs have been exploited, primarily in rule-based reasoning,
agent-based reasoning, and collaborative reasoning, where KGs are actively involved in the reasoning
with LLMs via explicitly incorporating the graph structure from KGs into their reasoning process.</p>
        <sec id="sec-3-2-1">
          <title>2.2.1. Rule-based Reasoning</title>
          <p>Rule-based reasoning aims to guide LLMs to reason over KGs by mining logical rules [11]
from KGs or designing instruction-based prompting [12], where the reasoning of KGs and LLMs is
synthesized. For instance, ChatRule [11] ranks the high-quality logical rules that were mined from KGs
via LLMs and then incorporates the facts induced by logical rules from KGs to support the reasoning of
LLMs. Chain-of-Thought (CoT) prompting was designed to fully leverage the reasoning capabilities of
LLMs by employing instruction-based prompting to guide LLMs to think step-by-step and decompose
the complex task into multiple intermediate steps. Inspired by this, KG-CoT [12] was proposed,
generating responsible knowledge chains over the reasoning paths from KGs based on CoT-based
prompting with LLMs to enable knowledge-aware reasoning.</p>
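          <p>The following is a minimal sketch of the knowledge-chain idea behind KG-CoT [12]: a reasoning path retrieved from the KG is verbalized into explicit intermediate steps that are prepended to the question; the path, its format, and the prompt wording are assumptions for illustration.</p>
          <preformat>
# Illustrative sketch of the KG-CoT idea: verbalize a KG reasoning path into
# an explicit step-by-step knowledge chain prepended to the question.
def verbalize_path(path):
    """path: list of (head, relation, tail) triples forming a reasoning chain."""
    return "\n".join(f"Step {i + 1}: {h} --{r}--> {t}"
                     for i, (h, r, t) in enumerate(path))

path = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
]
question = "In which country was Marie Curie born?"
prompt = ("Think step by step along this knowledge chain:\n"
          f"{verbalize_path(path)}\n"
          f"Question: {question}")
print(prompt)
          </preformat>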
        </sec>
        <sec id="sec-3-2-2">
          <title>2.2.2. Agent-based Reasoning</title>
          <p>By introducing an AI agent to the synthesis of LLMs and KGs, KGs can actively interact with LLMs for
joint reasoning. For example, ToG [13] treats the LLM as an agent that interacts with KGs by iteratively
executing beam search on KGs to explore the related entities and relations, and performing reasoning
based on the retrieved knowledge. KG-Agent [14] integrates the reasoning capabilities of small LLMs
with an agent-based KG toolbox to autonomously execute the tool selection and knowledge updating
for solving complex reasoning tasks. ODA [15] observes and retrieves the relevant knowledge from
the KG environment based on the AI agent and then incorporates the retrieved observations into LLM
reasoning, synthesizing the reasoning of KGs and LLMs.</p>
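          <p>The following toy sketch illustrates ToG-style agent reasoning [13]: the top-k (beam) paths are iteratively expanded on the KG, with a simple word-overlap scorer standing in for the LLM that judges how promising each extended path is; the KG and the scoring heuristic are invented for illustration.</p>
          <preformat>
# Toy sketch of ToG-style agent reasoning (illustrative assumptions throughout):
# iteratively expand the top-k (beam) paths on the KG, with a word-overlap
# scorer standing in for the LLM that rates each extended path.
KG = {
    "Einstein": [("won", "Nobel Prize in Physics"), ("born_in", "Ulm")],
    "Ulm": [("located_in", "Germany")],
    "Nobel Prize in Physics": [("awarded_in", "1921")],
}

def llm_score(path, question):
    """Stand-in for the LLM judgment: count path words overlapping the question."""
    q = set(question.lower().replace("?", "").split())
    return sum(w in q
               for _, r, t in path
               for w in (r + " " + t).lower().replace("_", " ").split())

def beam_search(seed, question, beam=2, hops=2):
    paths = [[("", "start", seed)]]          # each path is a list of triples
    for _ in range(hops):
        expanded = [p + [(p[-1][2], r, t)]   # extend every path by one edge
                    for p in paths
                    for r, t in KG.get(p[-1][2], [])]
        if not expanded:
            break
        paths = sorted(expanded, key=lambda p: -llm_score(p, question))[:beam]
    return paths

for p in beam_search("Einstein", "Where was Einstein born?"):
    print(" -> ".join(t for _, _, t in p))   # best: Einstein -> Ulm -> Germany
          </preformat>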
        </sec>
        <sec id="sec-3-2-3">
          <title>2.2.3. Collaborative Reasoning</title>
          <p>Collaborative reasoning between LLMs and KGs has been investigated by introducing adaptive
knowledge retrieval [16, 17] and iterative path exploration [18, 19]. To bridge the structured knowledge in
KGs with unstructured knowledge in LLMs, graph-constrained reasoning (GCR) [16] was proposed
to incorporate the reasoning of LLMs and the reasoning over KG-Trie, a trie-based index encoding
the reasoning paths retrieved from KGs. CRF [17] presents a collaborative reasoning method that
applies reinforcement learning to a hierarchical agent that retrieves paths from KGs to
support reward-based reasoning. Rather than viewing KGs as a static repository of facts that can be
referenced by LLMs, PoG [19] treats KGs as a dynamic knowledge resource that can be explored and
updated during reasoning. It initially decomposes questions into several sub-objectives and then repeats
the process of adaptively exploring reasoning paths, updating memory, and reflecting on the need to
self-correct erroneous reasoning paths until arriving at the correct answer. ToG-2 [18] leverages KGs
as a navigational tool to guide knowledge retrieval and then iteratively utilizes LLMs to evaluate the
retrieved clues from KGs to ensure logical coherence and completeness of factual evidence.</p>
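          <p>The following minimal sketch illustrates the KG-Trie idea behind GCR [16]: reasoning paths retrieved from the KG are indexed in a prefix trie, and decoding may only continue along prefixes that exist in the trie; the path data and the lookup interface are assumptions.</p>
          <preformat>
# Minimal sketch of the KG-Trie idea behind GCR (path data and the lookup
# interface are assumptions): reasoning paths retrieved from the KG are
# indexed in a prefix trie, and generation may only continue along
# prefixes that exist in the trie.
class TrieNode:
    def __init__(self):
        self.children = {}

def build_kg_trie(paths):
    """Index relation-token paths (tuples of tokens) in a prefix trie."""
    root = TrieNode()
    for path in paths:
        node = root
        for token in path:
            node = node.children.setdefault(token, TrieNode())
    return root

def valid_next_tokens(root, prefix):
    """Tokens allowed after a partial path; an empty list means the prefix
    has left the KG and the decoder must backtrack or reject."""
    node = root
    for token in prefix:
        if token not in node.children:
            return []
        node = node.children[token]
    return sorted(node.children)

trie = build_kg_trie([
    ("born_in", "capital_of"),
    ("born_in", "located_in"),
    ("won", "awarded_in"),
])
print(valid_next_tokens(trie, ()))            # ['born_in', 'won']
print(valid_next_tokens(trie, ("born_in",)))  # ['capital_of', 'located_in']
          </preformat>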
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>3. Comparison and Application</title>
      <p>The various approaches to and applications of augmenting LLMs with KGs are compared and summarized below.</p>
      <sec id="sec-4-1">
        <title>3.1. Comparison of Different Approaches</title>
        <p>As previously discussed, the role evolution of KGs in synthesizing with LLMs demonstrates that the
structural information contained in KGs is not merely factual background knowledge, but rich,
structured knowledge that can actively guide and participate in complex reasoning for
knowledge-intensive task scenarios. Table 2 gives a summary and comparison of approaches along the
role evolution of KGs from background knowledge to joint reasoning.</p>
        <p>By demonstrating the benefits of incorporating factual knowledge into LLMs, the early approaches
established the foundation for collaborative reasoning that would iteratively explore the reasoning
paths from KGs to support multi-hop reasoning and reward-based reasoning for complex
knowledge-intensive tasks. Although the collaborative reasoning of LLMs and KGs shows advantages in complex
and multi-hop reasoning in comparison with the early approaches, it still faces several limitations
and challenges, including the incompleteness of KGs, path explosion, low reasoning efficiency, high
reasoning complexity and computing overhead, and unreliable reasoning results.</p>
      </sec>
      <sec id="sec-4-2">
        <title>3.2. Application</title>
        <p>
          The applications of augmenting LLMs with KGs have been widely investigated in question answering (QA) [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ],
while the other applications, such as personalized recommendation, healthcare copilot, system
diagnostics and detection, and data management, have been recently investigated.
        </p>
        <sec id="sec-4-2-1">
          <title>3.2.1. Recommendation</title>
          <p>The intention of recommendation differs from QA, as personalized and relevant recommendations
that align with users’ preferences are expected for the recommendation system, while precise and
reliable responses that answer the user’s question are expected for the QA system. To facilitate the
personalized recommendation, LLMRG [20] introduces an adaptive reasoning module with LLMs to
construct the personalized reasoning graphs from the semantic relationships of user behaviors and
profiles and further augments the final recommendations via the user’s next item prediction based on
reasoning over LLMs and the reasoning graph. To address the lack of up-to-date knowledge in
LLMs, K-RagRec [21] designs a popularity-selective and similarity-based ranking pipeline to retrieve
the relevant subgraphs from item KGs, and then leverages a GNN and a projector to align the retrieved
subgraphs into the semantic space of the LLM for knowledge-augmented recommendation.</p>
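          <p>As an illustration of such similarity-based subgraph ranking, the following sketch ranks candidate item subgraph embeddings (random stand-ins for GNN-encoded subgraphs) against a user-context embedding; the data and the top-k choice are invented for illustration and do not reproduce the K-RagRec [21] pipeline.</p>
          <preformat>
# Illustrative similarity-based subgraph ranking (random vectors stand in for
# GNN-encoded item subgraphs; not the actual K-RagRec pipeline).
import numpy as np

rng = np.random.default_rng(1)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

subgraph_embs = {f"item_{i}_subgraph": rng.normal(size=16) for i in range(5)}
user_context = rng.normal(size=16)           # stand-in for the user embedding

ranked = sorted(subgraph_embs.items(),
                key=lambda kv: -cosine(user_context, kv[1]))
top_k = [name for name, _ in ranked[:2]]     # subgraphs handed to the LLM
print(top_k)
          </preformat>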
        </sec>
        <sec id="sec-4-2-2">
          <title>3.2.2. Fake News Detection</title>
          <p>
            Current LLM-based fake news detection suffers from semantic ambiguity in understanding
news and from limited interpretability and explainability [22]. Therefore, KGs have been introduced to augment
LLM-based fake news detection [
            <xref ref-type="bibr" rid="ref8">23, 8</xref>
            ]. DKFND [23] evaluates the relevance and authenticity
of the given news with the help of the retrieved relevant knowledge and related news from KGs and
guides LLMs in detecting news veracity and providing explainable results. To mitigate the poor
explainability of existing evidence-based fake news detection methods, a KG-RAG-based fake news detection
method [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ] was proposed to retrieve the evidence from KGs for augmenting LLMs in veracity prediction.
          </p>
        </sec>
        <sec id="sec-4-2-3">
          <title>3.2.3. Healthcare Copilot</title>
          <p>To enhance the reliability and explainability of results generated by healthcare copilots,
applications have been investigated that augment clinical decision-making with the joint reasoning of
LLMs and medical KGs. To mitigate the hallucination of LLMs in knowledge-intensive tasks of
medical chatbots, CMedRAGBot [24] was designed to enhance LLMs in generating accurate answers for
the given medical questions with retrieved relevant knowledge from the medical KGs. MedRAG [25]
generates the final answers and follow-up questions for precise diagnosis based on the reasoning elicited
by the sub-KG retrieved from healthcare KGs via multi-level matching and upward traversal.</p>
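          <p>To illustrate the upward-traversal step described for MedRAG [25], the following toy sketch walks parent links in a small invented concept hierarchy to collect broader context for a matched leaf concept; the hierarchy is a stand-in, not a real medical ontology.</p>
          <preformat>
# Toy sketch of "upward traversal" in a hierarchical medical KG: from a
# matched leaf concept, walk parent links to collect broader context.
PARENT = {
    "viral pneumonia": "pneumonia",
    "pneumonia": "respiratory infection",
    "respiratory infection": "disease",
}

def upward_traversal(concept, levels=2):
    """Return the concept plus up to `levels` of its ancestors."""
    chain = [concept]
    for _ in range(levels):
        if concept not in PARENT:
            break
        concept = PARENT[concept]
        chain.append(concept)
    return chain

print(upward_traversal("viral pneumonia"))
# ['viral pneumonia', 'pneumonia', 'respiratory infection']
          </preformat>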
        </sec>
        <sec id="sec-4-2-4">
          <title>3.2.4. System Diagnostics and Detection</title>
          <p>The applications of augmenting LLMs with KGs in industrial system diagnostics have been studied in
nuclear power plants [26], cybersecurity compliance analysis [27], cybersecurity detection [28], etc.
KGDML [26] was proposed to enhance system diagnostics in high-reliability environments by integrating
KGs with LLMs based on model interaction and cause-effect reasoning. To improve the accuracy of IoT
security compliance analysis, KGs are introduced to the RAG pipeline based on vector-based document
retrieval and graph query-based KG retrieval [27]. Similarly, CyKG-RAG [28] was designed to enhance
the reliability of cybersecurity detection by retrieving the relevant knowledge from a cybersecurity KG.</p>
        </sec>
        <sec id="sec-4-2-5">
          <title>3.2.5. Data Management</title>
          <p>Several challenges of data management have been explored [29], such as schema matching [30],
column type annotation (CTA) [31], and vector search [32]. Given that the existing similarity-based and
LLMs-based schema matching methods are incapable of resolving semantic ambiguities and conflicts in
complex schema matching, KG-RAG4SM [30] investigated a KG-based RAG for schema matching and
heterogeneous data integration. To mitigate the challenges of existing LLM-based methods for CTA in
semantic label assignment, RACOON [31] augments LLMs with the retrieved subgraphs from external
KGs for CTA. TigerVector [32] integrates vector search and graph query based on the massively parallel
processing (MPP) index to accelerate the vector search for vector-based graph retrieval.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. Open Challenges and Future Directions</title>
      <p>The open challenges in synthesizing LLMs with KGs for explainable and reliable reasoning remain in
reasoning path explosion and dynamic knowledge interaction.</p>
      <sec id="sec-5-1">
        <title>4.1. Reasoning Path Explosion</title>
        <p>The number of paths can grow exponentially when using an LLM-based agent for reasoning path
exploration; the challenge is to optimize the generation of reasoning paths and iterative path
exploration over large and sparse KGs. To mitigate the challenges of path explosion and reasoning
complexity over large KGs in joint reasoning with LLMs, hybrid neuro-symbolic reasoning over LLMs
and KGs should be investigated for more efficient and trustworthy reasoning.</p>
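        <p>A back-of-envelope illustration of the explosion, and of how a fixed beam width caps it, is sketched below; the branching factor, depth, and beam width are arbitrary assumptions.</p>
        <preformat>
# Back-of-envelope illustration of reasoning-path explosion: with branching
# factor b and depth d, exhaustive exploration touches b**d candidate paths,
# while a beam of width k performs only about k*b scored expansions per hop.
b, d, k = 20, 4, 5
exhaustive = b ** d        # 160000 candidate paths
beam_limited = k * b * d   # 400 scored expansions in total
print(exhaustive, beam_limited)
        </preformat>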
      </sec>
      <sec id="sec-5-2">
        <title>4.2. Dynamic Knowledge Interaction</title>
        <p>Enabling flexible knowledge interaction between KGs and LLMs and incremental updates to KGs
is essential for ensuring the completeness of KGs in dynamic environments. Developing mechanisms
to seamlessly integrate new knowledge into KGs without disrupting existing structures or reasoning
presents significant challenges. To facilitate deep interactions between LLMs and KGs and
keep reasoning over up-to-date factual knowledge from KGs, building semantically rich,
schema-aware, dynamically adaptive KGs and developing a KG system that supports flexible
interaction with LLMs and incremental updates will also be crucial.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusions</title>
      <p>Recent years have witnessed the evolution of KGs&#8217; role in synthesizing with LLMs, from approaches
where KGs serve as background knowledge to approaches where KGs are actively involved in the reasoning
of LLMs. This work gives a brief overview of recent advances in augmenting LLMs with KGs, highlighting
the evolving role of KGs through a comparative summary of different approaches and applications. Despite
these advancements, future work should focus on developing a neuro-symbolic reasoning system for more
efficient reasoning and a KG system that enables flexible interaction with LLMs and incremental updates.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work was supported by the Novo Nordisk Foundation grant (NNF22OC0072415) and the GOBLIN
COST Action (CA23147). I would like to thank Prof. Arijit Khan for his inspiration and support.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author used Grammarly to check spelling and grammar. After
using the tool, the author reviewed and edited the content as needed and takes full responsibility for the
publication&#8217;s content.</p>
    </sec>
    <sec id="sec-9">
      <title>References</title>
      <p>[11] L. Luo, J. Ju, B. Xiong, Y.-F. Li, G. Haffari, S. Pan, ChatRule: Mining logical rules with large language
models for knowledge graph reasoning, in: Proceedings of the 29th Pacific-Asia Conference on
Knowledge Discovery and Data Mining, 2025.</p>
      <p>[12] R. Zhao, F. Zhao, L. Wang, X. Wang, G. Xu, KG-CoT: Chain-of-thought prompting of large language
models over knowledge graphs for knowledge-aware question answering, in: Proceedings of the
Thirty-Third International Joint Conference on Artificial Intelligence, 2024, pp. 6642&#8211;6650.</p>
      <p>[13] J. Sun, C. Xu, L. Tang, S. Wang, C. Lin, Y. Gong, L. Ni, H.-Y. Shum, J. Guo, Think-on-Graph: Deep
and responsible reasoning of large language model with knowledge graph, in: Proceedings of the
Twelfth International Conference on Learning Representations, 2024.</p>
      <p>[14] J. Jiang, K. Zhou, W. X. Zhao, Y. Song, C. Zhu, H. Zhu, J.-R. Wen, KG-Agent: An efficient autonomous
agent framework for complex reasoning over knowledge graph, arXiv preprint arXiv:2402.11163 (2024).</p>
      <p>[15] L. Sun, Z. Tao, Y. Li, H. Arakawa, ODA: Observation-driven agent for integrating LLMs and
knowledge graphs, in: Findings of the Association for Computational Linguistics: ACL 2024, 2024,
pp. 7417&#8211;7431.</p>
      <p>[16] L. Luo, Z. Zhao, G. Haffari, Y.-F. Li, C. Gong, S. Pan, Graph-constrained reasoning: Faithful
reasoning on knowledge graphs with large language models, in: Proceedings of the 42nd International
Conference on Machine Learning, 2025.</p>
      <p>[17] Z. Zhang, W. Zhao, A collaborative reasoning framework powered by reinforcement learning and
large language models for complex question answering over knowledge graph, in: Proceedings
of the 31st International Conference on Computational Linguistics, 2025, pp. 10672&#8211;10684.</p>
      <p>[18] S. Ma, C. Xu, X. Jiang, M. Li, H. Qu, J. Guo, Think-on-Graph 2.0: Deep and interpretable large
language model reasoning with knowledge graph-guided retrieval, in: Proceedings of the Thirteenth
International Conference on Learning Representations, 2025.</p>
      <p>[19] X. Tan, X. Wang, Q. Liu, X. Xu, X. Yuan, W. Zhang, Paths-over-Graph: Knowledge graph empowered
large language model reasoning, in: Proceedings of the ACM on Web Conference, 2025, pp.
3505&#8211;3522.</p>
      <p>[20] Y. Wang, Z. Chu, X. Ouyang, S. Wang, H. Hao, Y. Shen, J. Gu, S. Xue, J. Y. Zhang, Q. Cui, et al.,
LLMRG: Improving recommendations through large language model reasoning graphs, in:
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024, pp. 19189&#8211;19196.</p>
      <p>[21] S. Wang, W. Fan, Y. Feng, X. Ma, S. Wang, D. Yin, Knowledge graph retrieval-augmented generation
for LLM-based recommendation, in: Proceedings of the 63rd Annual Meeting of the Association
for Computational Linguistics, 2025.</p>
      <p>[22] J. Yi, Z. Xu, T. Huang, P. Yu, Challenges and innovations in LLM-powered fake news detection:
A synthesis of approaches and future directions, in: Proceedings of the 2025 2nd International
Conference on Generative Artificial Intelligence and Information Security, 2025, pp. 87&#8211;93.</p>
      <p>[23] Y. Liu, J. Zhu, K. Zhang, H. Tang, Y. Zhang, X. Liu, Q. Liu, E. Chen, Detect, investigate, judge
and determine: A novel LLM-based framework for few-shot fake news detection, arXiv preprint
arXiv:2407.08952 (2024).</p>
      <p>[24] D. Zhang, H. Du, X. Wang, M. Zhu, X. Pang, D. Wei, X. Wang, CMedRAGBot: A Chinese medical
chatbot based on graph RAG and large language models, Interdisciplinary Sciences: Computational
Life Sciences (2025) 1&#8211;16.</p>
      <p>[25] X. Zhao, S. Liu, S.-Y. Yang, C. Miao, MedRAG: Enhancing retrieval-augmented generation with
knowledge graph-elicited reasoning for healthcare copilot, in: Proceedings of the ACM on Web
Conference 2025, 2025, pp. 4442&#8211;4457.</p>
      <p>[26] S. Marandi, Y.-S. Hu, M. Modarres, Complex system diagnostics using a knowledge graph-informed
and large language model-enhanced framework, arXiv preprint arXiv:2505.21291 (2025).</p>
      <p>[27] M. Islam, L. Elluri, K. P. Joshi, et al., Integrating knowledge graphs with retrieval-augmented
generation to automate IoT device security compliance, in: 2025 IEEE International Conference on
Intelligence and Security Informatics, 2025.</p>
      <p>[28] K. Kurniawan, E. Kiesling, A. Ekelhart, CyKG-RAG: Towards knowledge-graph enhanced retrieval
augmented generation for cybersecurity, in: Proceedings of the International Workshop on
Retrieval-Augmented Generation Enabled by Knowledge Graphs, 2024.</p>
      <p>[29] A. Khan, T. Wu, X. Chen, Data management opportunities in unifying large language
models+knowledge graphs, in: Proceedings of the International Workshop on Data Management
Opportunities in Unifying Large Language Models+Knowledge Graphs, 2024.</p>
      <p>[30] C. Ma, S. Chakrabarti, A. Khan, B. Moln&#225;r, Knowledge graph-based retrieval-augmented generation
for schema matching, arXiv preprint arXiv:2501.08686 (2025).</p>
      <p>[31] L. Wei, G. Xiao, M. Balazinska, RACOON: An LLM-based framework for retrieval-augmented
column type annotation with a knowledge graph, in: Proceedings of the Third Table Representation
Learning Workshop, 2024.</p>
      <p>[32] S. Liu, Z. Zeng, L. Chen, A. Ainihaer, A. Ramasami, S. Chen, Y. Xu, M. Wu, J. Wang, TigerVector:
Supporting vector search in graph databases for advanced RAGs, arXiv preprint arXiv:2501.11216
(2025).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <article-title>Unifying large language models and knowledge graphs: A roadmap</article-title>
          ,
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          <volume>36</volume>
          (
          <year>2024</year>
          )
          <fpage>3580</fpage>
          -
          <lpage>3599</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>C.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Large language models meet knowledge graphs for question answering: Synthesis and opportunities</article-title>
          ,
          <source>arXiv preprint arXiv:2505.20099</source>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Yasunaga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bosselut</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Manning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. S.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Leskovec</surname>
          </string-name>
          ,
          <article-title>Deep bidirectional language-knowledge graph pretraining</article-title>
          ,
          <source>in: Proceedings of the 36th Conference on Neural Information Processing Systems</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>37309</fpage>
          -
          <lpage>37323</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>P.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bhatia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <article-title>KG-FIT: Knowledge graph fine-tuning upon open-world knowledge</article-title>
          ,
          <source>in: Proceedings of the 38th Conference on Neural Information Processing Systems</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>136220</fpage>
          -
          <lpage>136258</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>KG-Adapter: Enabling knowledge graph integration in large language models through parameter-efficient fine-tuning</article-title>
          ,
          <source>in: Findings of the Association for Computational Linguistics: ACL 2024</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>3813</fpage>
          -
          <lpage>3828</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>K.</given-names>
            <surname>Soman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. W.</given-names>
            <surname>Rose</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Morris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. E.</given-names>
            <surname>Akbas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Peetoom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Villouta-Reyes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Cerono</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rizk-Jackson</surname>
          </string-name>
          , et al.,
          <article-title>Biomedical knowledge graph-optimized prompt generation for large language models</article-title>
          ,
          <source>Bioinformatics</source>
          <volume>40</volume>
          (
          <year>2024</year>
          )
          <fpage>btae560</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Hao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Knowledge graph-based entity alignment with unified representation for auditing</article-title>
          ,
          <source>Complex &amp; Intelligent Systems</source>
          <volume>11</volume>
          (
          <year>2025</year>
          )
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Thomas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.-C.</given-names>
            <surname>Mihailescu</surname>
          </string-name>
          ,
          <article-title>An explainable KG-RAG-based approach to evidence-based fake news detection using LLMs</article-title>
          ,
          <source>in: Proceedings of the AAAI Symposium Series</source>
          ,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>KG-Infused RAG: Augmenting corpus-based RAG with external knowledge graphs</article-title>
          ,
          <source>arXiv preprint arXiv:2506.09542</source>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <article-title>Knowledge graph-guided retrieval augmented generation</article-title>
          ,
          <source>arXiv preprint arXiv:2502.06864</source>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>