<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards Transparent Knowledge Graphs: A Position on Explainability in Link Prediction</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vidhya Kamakshi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chandramani Chaudhary</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science &amp; Engineering, National Institute of Technology Calicut</institution>
          ,
          <addr-line>Kerala - 673601</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Knowledge Graphs (KGs) have improved structured knowledge representation by encoding real-world entities and their relationships, enabling multi-hop reasoning for answering complex queries. However, state-of-the-art deep learning models applied to KGs lack interpretability, creating a challenge in understanding their decision-making processes. This paper presents an idea to integrate Explainable AI (XAI) techniques with knowledge graph embeddings to enhance transparency in link prediction models. We employ SHAP (SHapley Additive exPlanations), a game-theoretic approach, to quantify the influence of individual entities in predictions. Furthermore, we introduce an explanation-driven training framework that aligns model predictions with the underlying KG structure. By incorporating an explainability-aware loss function, our approach may provide high-quality link predictions and human-understandable explanations. This research contributes to developing more transparent AI systems, fostering trust in real-world applications where interpretability is crucial.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainable AI</kwd>
        <kwd>Natural Language Processing</kwd>
        <kwd>Knowledge Graphs</kwd>
        <kwd>Interpretable AI</kwd>
        <kwd>Trustworthy AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Natural Language Processing (NLP) is a sub-domain of Artificial Intelligence that deals with encoding
world knowledge that is often expressed in Natural Languages like English, French, Hindi, etc., into a
vector representation that can be processed by the models. Adequate representation is essential to enable
the model to respond appropriately to the queries of human users. The advent of deep models that
base their predictions on the transformation of sequences into semantics-preserving vector spaces
has enhanced their capabilities in processing natural language human queries. Google introduced an
intermediate representation called Knowledge Graph (KG) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] that structured the semantic information
available in the web. The graph has a set of real-world entities that are the nodes, and the relationships
between these entities are encoded in its directed edges. This gave a novel perspective to processing
natural language queries through a multi-hop traversal on the knowledge graph to extract related triples
of the form (head entity, relationship, tail entity) that enables the model to respond to natural language
queries. For instance, the query "Where is the captain of the Indian Cricket team born?" is successfully
answered following multiple hops, retrieving triples of the form (X, Captain, Indian Cricket Team) and
(X, Birth Place, Y), where X stands for the captain entity and Y for the birth-place entity.
The deep models that offer state-of-the-art performances bring in a novel problem of opacity, rendering
the working mechanism of these underlying models uninterpretable to the end users [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        The need for interpretability is increasing following the mandates from legal frameworks [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] that
enable users to know the rationale behind an AI model's decision concerning them. Eliciting
explanations is necessary to identify biases [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], thereby assessing the suitability of deploying an AI
model for real-world applications. Explanations can help spot the erroneous facts employed by the AI
model, thereby guiding ways to correct these errors [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ] to inculcate the right rationale into the model.
      </p>
      <p>In this paper, we explore the integration of explainable AI (XAI) techniques with knowledge graphs,
addressing the need for transparency in link prediction models. Our approach leverages knowledge
graph embeddings [
        <xref ref-type="bibr" rid="ref10 ref7 ref8 ref9">7, 8, 9, 10</xref>
        ] to learn structured representations of entities and relations. Additionally,
we incorporate SHAP (SHapley Additive exPlanations) [11], a game-theoretic method, to quantify the
influence of individual entities in the prediction process. By introducing explanation-driven training,
we enforce that our model efficiently leverages the underlying KG structure. The proposed framework
encourages optimal traversal, thus exhibiting increased interpretability.</p>
      <p>1st International Workshop on Explainable AI and Knowledge Graphs (XAI+KG), June 1-5, 2025, co-located with ESWC 2025,
Portoroz, Slovenia.
†The authors contributed equally.
$ vidhyakamakshi@nitc.ac.in (V. Kamakshi); chandramanic@nitc.ac.in (C. Chaudhary)
0000-0001-7588-6318 (V. Kamakshi); 0009-0006-3497-1309 (C. Chaudhary)</p>
      <p>© 2025 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Knowledge Graphs (KGs) provide structured semantic representations and are central to many AI
applications. Open KGs such as Freebase [12], DBpedia [13], and YAGO [14] have spurred research
on KG embeddings for link prediction. Early methods, including translational models and semantic
matching approaches such as tensor decomposition, project entities into vector spaces to infer missing
links [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Recent deep learning approaches, such as Graph Convolutional Networks (GCN) [15], Graph
Auto-Encoder Attention Networks, and Relational GCNs, integrate KG structure directly into
end-to-end models [
        <xref ref-type="bibr" rid="ref1">1</xref>
          ]. Despite the state-of-the-art successes exhibited by deep NLP models, their opacity,
which inhibits the understanding of their rationale, may prove detrimental if they are blindly employed in safety-critical
applications [16]. This calls for developing tools and techniques to open up these accurate black boxes
and investigate their working mechanisms.
      </p>
      <p>
Explainable AI (XAI) aims to demystify black box models. These techniques can be broadly classified
into antehoc, or explainable-by-design, approaches and posthoc approaches. Antehoc techniques inculcate
the ability to explain the actions a model takes from the design phase of the model. They are applied
when a model is yet to be constructed and faithfulness is of utmost concern [17]. On the other hand,
posthoc techniques construct a simpler explainer that mimics the working mechanism of a black box
model while leaving it undisturbed. When a model is already deployed, posthoc techniques [18] are usually the
desired mode of incorporating explanations into the model pipeline. Domain-specific
and model-specific [19] techniques have been proposed to extract explanations from deployed
models in a posthoc manner. Alternately, the XAI community has also proposed model-agnostic techniques
[11, 20] that can be leveraged for any data modality and model. These techniques have been applied to
various NLP tasks [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Transformer architectures [21], which leverage the self-attention mechanism, are
designed to handle long-range dependencies. There have been attempts to leverage these attention
maps [22, 23] as explanations of the model's working mechanism, which has sparked debates in the
research community [24] concerning their suitability to faithfully explain the black boxes.
      </p>
      <p>An alternate way to incorporate explainability into the NLP models is to relate the rationale of
the black box model with the knowledge encoded in the knowledge graph representations. While
prominent works in the community [25, 26] explore the direction of leveraging and aligning NLP models
with the knowledge encoded in the knowledge graph, this paper proposes leveraging XAI
techniques to trace the path traversed by the model and reinforce the model to traverse optimal
paths in the knowledge graph while performing link prediction. Rossi et al. [27], whose intent is close
to ours, propose generating a local explanation by identifying the necessary and sufficient entities that
determine the prediction. Our proposed approach relies on Shapley values [11], with a strong
game-theoretic backing, to globally rank the entities based on their influence on the prediction.</p>
    </sec>
    <sec id="sec-3">
      <title>3. An Idea to Optimize Knowledge Graph Traversal using XAI</title>
      <sec id="sec-3-1">
        <title>3.1. Knowledge Graph Representation</title>
        <p>The knowledge graph (KG) is modeled as a labeled directed graph G = (E, R), where E represents
entities manifesting as nodes of the graph and R represents relations manifesting as the directed edges
between the entities. The graph structure is used to learn node embeddings that capture semantic
relationships between entities. Typically, KGs are represented with triplets (h, r, t), where h is the
head entity, t is the tail entity, and r is the relation between them.</p>
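To make the triple representation concrete, here is a minimal, illustrative sketch (not from the paper; the entity names are placeholders) of a KG stored as (head, relation, tail) triples, together with the two-hop traversal behind the introduction's example query:

```python
# Toy KG as (head, relation, tail) triples; entity names are placeholders.
triples = [
    ("player_1", "Captain", "Indian Cricket Team"),
    ("player_1", "Birth Place", "city_1"),
    ("player_2", "Member", "Indian Cricket Team"),
]

def heads(kg, relation, tail):
    """All head entities h with a triple (h, relation, tail) in the KG."""
    return [h for (h, r, t) in kg if r == relation and t == tail]

def tails(kg, head, relation):
    """All tail entities t with a triple (head, relation, t) in the KG."""
    return [t for (h, r, t) in kg if h == head and r == relation]

# Two-hop traversal: find the captain, then the captain's birth place.
captains = heads(triples, "Captain", "Indian Cricket Team")
birthplaces = [t for c in captains for t in tails(triples, c, "Birth Place")]
```

Multi-hop answering then amounts to chaining such lookups along the relations of the query.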
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Knowledge Graph Embedding Model</title>
        <p>
          The proposal is flexible to accommodate any knowledge graph embedding models, such as ComplEx
[
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], TransE [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], DistMult [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], or RotatE [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. For learning the KG embeddings, a margin-based ranking
loss that refines the embeddings may be adopted, whose formulation is as follows:
        </p>
        <p>ℒ_emb = Σ_{(h,r,t)∈S} Σ_{(h′,r,t′)∈S′} max(0, f(h′, r, t′) − f(h, r, t) + γ)   (1)</p>
        <p>Here, f is the score function given by the embedding model, S is the set of positive triplets, S′
denotes the set of negative triplets, and γ denotes the tolerable margin that controls the separation between
the triplets of opposing polarity. The learned entity embeddings are then refined through non-linear transformations:
e′ = σ(W₁ · e),   e″ = W₂ · σ(e′)
where W₁ and W₂ are learnable weight matrices, and σ is a non-linear activation function. The refined
embeddings are projected onto the relation space (the projection is characterized by W₃), followed by
the application of a softmax function to predict the most likely relation type for a given entity pair:
r̂ = softmax(W₃ e″)</p>
        <p>A categorical cross-entropy loss may be applied to ensure alignment between the predicted relation (r̂)
and the ground-truth relation (r); its formulation, denoted ℒ_pred, is presented with the link prediction
model in Section 3.3.</p>
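As an illustrative numpy sketch (array inputs are assumed, not specified in the paper), the margin-based ranking loss above can be computed over paired positive/negative triple scores:

```python
import numpy as np

def margin_ranking_loss(f_pos, f_neg, margin=1.0):
    """Sum of max(0, f(h', r, t') - f(h, r, t) + margin) over paired
    positive/negative triple scores; higher scores mean more plausible."""
    return float(np.sum(np.maximum(0.0, f_neg - f_pos + margin)))
```

The loss is zero once every positive triple outscores its negative counterpart by at least the margin.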
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Link Prediction Model</title>
        <p>A Graph Convolutional Network (GCN) [15] can be leveraged to predict the missing links in a KG
by processing the learned entity and relation embeddings as a composition of non-linear activations
applied on linearly combined features. The relation prediction is trained with the categorical
cross-entropy loss, which can be mathematically expressed as follows:
ℒ_pred = − Σ_{(h,r,t)∈S} Σ_{i=1}^{|R|} yᵢ log(ŷᵢ)   (2)
where |R| is the total number of relation types, yᵢ is the one-hot encoded ground-truth relation, and ŷᵢ
is the predicted probability for relation i.</p>
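A minimal numpy sketch of the categorical cross-entropy above for a single triple (illustrative; a small epsilon, an implementation choice rather than part of the formulation, guards the logarithm):

```python
import numpy as np

def relation_cross_entropy(y_true, y_pred, eps=1e-12):
    """- sum_i y_i * log(yhat_i) over the |R| relation types, y one-hot."""
    return float(-np.sum(y_true * np.log(y_pred + eps)))
```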
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Calculation of SHAP Values</title>
        <p>SHAP values [11] originate in game theory, where the importance of each player in a
game is quantified through simulations: a player is removed from the team (set), and the effective
score of the remaining team (subset) after the deletion is used to estimate the contribution of that player to the
game. The translation of this phenomenon into the KG lingua is discussed in this section.</p>
        <sec id="sec-3-4-1">
          <title>3.4.1. Model Input Representation</title>
          <p>The SHAP explainer takes as input the node embeddings learned through the knowledge graph
embedding model, which effectively capture the semantic relationships between entities. For a given entity
pair (h, t), the embeddings corresponding to the head entity h and the tail entity t are extracted. For
each entity pair (h, t), we select a subgraph that captures the local structural context of the KG. This
subgraph is determined by extracting nodes within a predefined radius of both h and t. In cases where
multiple shortest paths exist between h and t, our approach adopts one of the following strategies:
• Aggregation: Compute SHAP values for each shortest path separately and then aggregate (e.g.,
via averaging) the contributions.
• Selection: Use a heuristic (e.g., the path with the highest cumulative link prediction score) to
select the most representative path.</p>
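The radius-based subgraph selection can be sketched as a breadth-first search over an undirected view of the KG edges (a minimal, library-free illustration, not the paper's implementation):

```python
from collections import deque

def nodes_within_radius(edges, sources, radius):
    """Return all nodes within `radius` hops of any node in `sources`,
    treating the directed (h, r, t) edges as undirected for locality."""
    adj = {}
    for h, _, t in edges:
        adj.setdefault(h, set()).add(t)
        adj.setdefault(t, set()).add(h)
    dist = {s: 0 for s in sources}
    q = deque(sources)
    while q:
        u = q.popleft()
        if dist[u] == radius:
            continue  # do not expand beyond the predefined radius
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return set(dist)
```

Calling it with sources = [h, t] yields the local context subgraph around the entity pair.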
        </sec>
        <sec id="sec-3-4-2">
          <title>3.4.2. Perturbation-Based Feature Importance</title>
          <p>SHAP employs a perturbation-based approach to determine feature importance by systematically
modifying input features, specifically the node embeddings, and analyzing their effect on the link
prediction model. This is achieved by masking or removing different subsets of nodes within the
selected subgraph to observe how these alterations influence the model's predictions. The trained link
predictor is then used to recompute predictions for each perturbed version of the input, allowing for
the quantification of the contribution of individual nodes to the final prediction. This process helps in
understanding how different nodes in the knowledge graph influence the model's decision-making.</p>
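A sketch of the single-node perturbation step, assuming a black-box `predict` callable that scores a set of retained nodes (a hypothetical interface, used here only for illustration):

```python
def perturbation_importance(predict, nodes, full_subgraph):
    """Score each node by the drop in the link predictor's output when
    that node is masked out of the subgraph. `predict` maps a frozenset
    of retained nodes to a plausibility score."""
    base = predict(frozenset(full_subgraph))
    return {n: base - predict(frozenset(full_subgraph) - {n}) for n in nodes}
```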
        </sec>
        <sec id="sec-3-4-3">
          <title>3.4.3. Shapley Value Estimation</title>
          <p>The SHAP framework approximates Shapley values, which quantify the contribution of each node to
the final link prediction decision. Let N denote the set of all nodes in the knowledge graph, and let
S ⊆ N ∖ {i} represent a subset of nodes excluding node i. The contribution of each node i to the link
prediction task is computed using the Shapley value formula:
φᵢ(v) = Σ_{S ⊆ N∖{i}} [|S|!(|N| − |S| − 1)! / |N|!] (v(S ∪ {i}) − v(S))</p>
          <p>where v(S) denotes the link predictor's score when only the nodes in subset S are included, and
v(S ∪ {i}) denotes the link predictor's score when node i is added to the subset. The term v(S ∪ {i}) − v(S) captures
the marginal impact of adding node i to subset S. The weighting factor |S|!(|N| − |S| − 1)!/|N|! ensures a fair
distribution of contributions across all possible subsets. By systematically evaluating the marginal
contribution of each node across different subsets, this method provides a robust measure of the
importance of individual nodes in influencing the link prediction outcomes.</p>
          <p>Since computing exact Shapley values is computationally expensive [28], we approximate them
using Kernel SHAP or Deep SHAP, which efficiently estimate contributions using a smaller subset of
perturbations [29]. The Shapley values, signifying the extent of influence of a node, are normalized to
facilitate comparability across different entity pairs.</p>
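For intuition, the Shapley formula can be evaluated exactly on toy-sized node sets (the cost is exponential in the number of nodes, which is why Kernel SHAP and Deep SHAP are used in practice); a minimal sketch:

```python
from itertools import combinations
from math import factorial

def shapley_values(value, players):
    """Exact Shapley values for a coalition value function `value(frozenset)`.
    Exponential in len(players); approximation methods replace this at scale."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                # weighting factor |S|!(|N|-|S|-1)!/|N|!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (value(S | {i}) - value(S))
        phi[i] = total
    return phi
```

For an additive value function, each node's Shapley value reduces to its own contribution, a useful sanity check.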
        </sec>
      </sec>
      <sec id="sec-3-5">
        <title>3.5. Explainability-Driven Training Framework</title>
        <p>To leverage the explanations for iterative model improvement, a score that assesses the explanations
(i.e., the contribution scores of each node) with respect to a shortest path P between head entity h and tail
entity t in the ground-truth KG is formulated as follows:
E = (1/|P|) Σ_{i∈P} φᵢ(v)   (3)
A lower score indicates poor alignment between predictions and the actual KG structure.</p>
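A one-line sketch of the path-alignment score in equation (3), assuming a mapping `phi` of normalized SHAP values per node (hypothetical inputs for illustration):

```python
def explanation_score(phi, path_nodes):
    """Mean normalized SHAP value over the nodes on a ground-truth shortest
    path; low values flag poor alignment with the KG structure."""
    return sum(phi[n] for n in path_nodes) / len(path_nodes)
```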
      </sec>
      <sec id="sec-3-6">
        <title>3.6. Loss Function</title>
        <p>A composite loss function that balances classification accuracy, explainability, and embedding
optimization may be formulated as:</p>
        <p>ℒ = λ₁ · ℒ_emb + λ₂ · ℒ_pred + λ₃ · ℒ_expl
where ℒ_emb, as formulated in equation 1, ensures learning high-quality KG embeddings, ℒ_pred is the
cross-entropy loss for relation prediction (softmax output) as formulated in equation 2, ℒ_expl = 1 − E
(with E formulated in equation 3) penalizes traversing sub-optimal paths, and λ₁, λ₂, and λ₃ control the trade-off
between embedding optimization, accuracy, and interpretability.</p>
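The composite objective can be sketched as a weighted sum (the λ weights below are illustrative placeholders, not values from the paper):

```python
def composite_loss(l_emb, l_pred, expl_score, lam=(1.0, 1.0, 0.5)):
    """Weighted sum of the embedding loss, prediction loss, and the
    explainability penalty (1 - E); lam holds the three trade-off weights."""
    lam1, lam2, lam3 = lam
    return lam1 * l_emb + lam2 * l_pred + lam3 * (1.0 - expl_score)
```

A higher explanation score E lowers the penalty, rewarding predictions aligned with the KG structure.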
        <p>By integrating explainability into the learning process, the model not only predicts links accurately
but also provides interpretable insights into its decisions. This approach ensures that the learned
embeddings and model predictions remain aligned with the intrinsic structure of the knowledge graph.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Summary</title>
      <p>The paper reviews the scientific literature and identifies a symbiotic relationship between Knowledge
Graphs and Explainable AI research communities. A framework to incorporate explainability techniques
as a guiding mechanism towards steering the NLP model to faithfully traverse through optimal paths in
the knowledge graph is suggested. An illustration using commonly used knowledge graph embedding
and link prediction model with their corresponding mathematical formulations has been presented to
encourage the research community to investigate this incorporation. Combining these techniques
with other state-of-the-art algorithms is an open arena that may yield novel insights.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>Thanks to the developers of ACM consolidated LaTeX styles https://github.com/borisveytsman/acmart
and to the developers of Elsevier updated LATEX templates https://www.ctan.org/tex-archive/macros/
latex/contrib/els-cas-templates. The authors express their heartfelt gratitude for the resources provided by
the institute with which they are affiliated and their department, which is sponsored by the prestigious DST-FIST
Initiative of the Government of India. We sincerely thank the reviewers whose constructive feedback
has helped shape the camera ready draft of the paper.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT-3.5 for grammar and spelling checks.
After using this tool/service, the authors reviewed and edited the content as needed and take full
responsibility for the publication's content.</p>
      <p>[11] S. M. Lundberg, S.-I. Lee, A unified approach to interpreting model predictions, Advances in
Neural Information Processing Systems 30 (2017).
[12] K. Bollacker, C. Evans, P. Paritosh, T. Sturge, J. Taylor, Freebase: a collaboratively created graph
database for structuring human knowledge, in: Proceedings of the ACM SIGMOD International
Conference on Management of Data, 2008, pp. 1247–1250.
[13] D. Ringler, H. Paulheim, One knowledge graph to rule them all? Analyzing the differences between
dbpedia, yago, wikidata &amp; co., in: KI 2017: Advances in Artificial Intelligence: 40th Annual German
Conference on AI, Dortmund, Germany, September 25–29, 2017, Proceedings 40, Springer, 2017,
pp. 366–372.
[14] F. M. Suchanek, G. Kasneci, G. Weikum, Yago: a core of semantic knowledge, in: Proceedings of
the 16th International Conference on World Wide Web, 2007, pp. 697–706.
[15] H. Zhang, G. Lu, M. Zhan, B. Zhang, Semi-supervised classification of graph convolutional
networks with laplacian rank constraints, Neural Processing Letters (2022) 1–12.
[16] B. Koopman, G. Zuccon, Dr chatgpt tell me what i want to hear: How diferent prompts impact
health answer correctness, in: Proceedings of the Conference on Empirical Methods in Natural
Language Processing, 2023, pp. 15012–15022.
[17] C. Rudin, Stop explaining black box machine learning models for high stakes decisions and use
interpretable models instead, Nature Machine Intelligence 1 (2019) 206–215.
[18] V. Kamakshi, N. C Krishnan, Sce: Shared concept extractor to explain a cnn’s classification
dynamics, in: Proceedings of the 7th Joint International Conference on Data Science &amp; Management
of Data (11th ACM IKDD CODS and 29th COMAD), 2024, pp. 109–117.
[19] N. Mylonas, I. Mollas, G. Tsoumakas, An attention matrix for every decision: Faithfulness-based
arbitration among multiple attention-based interpretations of transformers in text classification,
Data Mining and Knowledge Discovery 38 (2024) 128–153.
[20] M. T. Ribeiro, S. Singh, C. Guestrin, "Why should I trust you?" Explaining the predictions of any
classifier, in: Proceedings of the 22nd ACM SIGKDD international conference on knowledge
discovery and data mining, 2016, pp. 1135–1144.
[21] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, I. Polosukhin,
Attention is all you need, Advances in Neural Information Processing Systems 30 (2017).
[22] E. A. Shams, J. Carson-Berndsen, Attention to phonetics: A visually informed explanation of
[22] E. A. Shams, J. Carson-Berndsen, Attention to phonetics: A visually informed explanation of
speech transformers, in: International Conference on Text, Speech, and Dialogue, Springer, 2024,
pp. 81–93.
[23] E. A. Shams, I. Gessinger, J. Carson-Berndsen, Uncovering syllable constituents in the
self-attention-based speech representations of whisper, in: Proceedings of the 7th BlackboxNLP Workshop:
Analyzing and Interpreting Neural Networks for NLP, 2024, pp. 238–247.
[24] A. Bibal, R. Cardon, D. Alfter, R. Wilkens, X. Wang, T. François, P. Watrin, Is attention explanation?
an introduction to the debate, in: Proceedings of the 60th Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers), 2022, pp. 3889–3900.
[25] A. Füßl, V. Nissen, Interpretability of knowledge graph-based explainable process analysis, in:
2022 IEEE Fifth International Conference on Artificial Intelligence and Knowledge Engineering
(AIKE), IEEE, 2022, pp. 9–17.
[26] R. Dwivedi, D. Dave, H. Naik, S. Singhal, R. Omer, P. Patel, B. Qian, Z. Wen, T. Shah, G. Morgan,
et al., Explainable ai (xai): Core ideas, techniques, and solutions, ACM Computing Surveys 55
(2023) 1–33.
[27] A. Rossi, D. Firmani, P. Merialdo, T. Teofili, Explaining link prediction systems based on knowledge
graph embeddings, in: Proceedings of the International Conference on Management of Data, 2022,
pp. 2062–2075.
[28] M. Arenas, P. Barceló, L. Bertossi, M. Monet, The tractability of shap-score-based explanations for
classification over deterministic and decomposable boolean circuits, in: Proceedings of the AAAI
Conference on Artificial Intelligence, volume 35, 2021, pp. 6670–6678.
[29] S. Akkas, A. Azad, Gnnshap: Scalable and accurate gnn explanation using shapley values, in:
Proceedings of the ACM Web Conference, 2024, pp. 827–838.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Qiu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>A survey on knowledge graph embeddings for link prediction</article-title>
          ,
          <source>Symmetry</source>
          <volume>13</volume>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Gurrapu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kulkarni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Lourentzou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Batarseh</surname>
          </string-name>
          ,
          <article-title>Rationalization for explainable nlp: a survey</article-title>
          ,
          <source>Frontiers in Artificial Intelligence</source>
          <volume>6</volume>
          (
          <year>2023</year>
          )
          <fpage>1225093</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] Council of European Union,
          <article-title>2018 reform of EU data protection rules</article-title>
          ,
          <year>2018</year>
          . https://ec.europa.eu/commission/sites/beta-political/files/data-protection-factsheet-changes_en.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>H.</given-names>
            <surname>Ahsan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Amir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. C.</given-names>
            <surname>Wallace</surname>
          </string-name>
          ,
          <article-title>Elucidating mechanisms of demographic bias in llms for healthcare</article-title>
          ,
          <source>arXiv preprint arXiv:2502.13319</source>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Cheng</surname>
          </string-name>
          , N. Zhang,
          <string-name>
            <given-names>B.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>Editing language model-based knowledge graph embeddings</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          ,
          <volume>16</volume>
          ,
          <year>2024</year>
          , pp.
          <fpage>17835</fpage>
          -
          <lpage>17843</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>H.</given-names>
            <surname>Gu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Zhou</surname>
          </string-name>
          , X. Han, N. Liu,
          <string-name>
            <given-names>R.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          , Pokemqa:
          <article-title>Programmable knowledge editing for multi-hop question answering</article-title>
          ,
          <source>in: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>8069</fpage>
          -
          <lpage>8083</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Trouillon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Welbl</surname>
          </string-name>
          , S. Riedel, É. Gaussier, G. Bouchard,
          <article-title>Complex embeddings for simple link prediction</article-title>
          ,
          <source>in: International Conference on Machine Learning, PMLR</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>2071</fpage>
          -
          <lpage>2080</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Weston</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bordes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Yakhnenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Usunier</surname>
          </string-name>
          ,
          <article-title>Connecting language and knowledge bases with embedding models for relation extraction</article-title>
          ,
          <source>in: Proceedings of the Conference on Empirical Methods in Natural Language Processing</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>1366</fpage>
          -
          <lpage>1371</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>B.</given-names>
            <surname>Yang</surname>
          </string-name>
          , S. W.-t. Yih,
          <string-name>
            <given-names>X.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <article-title>Embedding entities and relations for learning and inference in knowledge bases</article-title>
          ,
          <source>in: International Conference on Learning Representations</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.-H.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-Y.</given-names>
            <surname>Nie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <article-title>Rotate: Knowledge graph embedding by relational rotation in complex space</article-title>
          ,
          <source>in: International Conference on Learning Representations</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>