<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Exploring LLM To Extract Knowledge Graph From Academic Abstracts</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Victor Eiti Yamamoto</string-name>
          <email>eitiyamamoto@nii.ac.jp</email>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Othmane Kabal</string-name>
          <email>othmane.kabal@univ-nantes.fr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lakshan Karunathilake</string-name>
          <email>lakshan@nii.ac.jp</email>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kotaro Nishigori</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vicente Lermanda</string-name>
          <email>vlermanda@uc.cl</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shixiong Zhao</string-name>
          <email>shixiong@nii.ac.jp</email>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hiroki Uematsu</string-name>
          <email>uematsu@alchembright.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yanming He</string-name>
          <email>yanming@nii.ac.jp</email>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hideaki Takeda</string-name>
          <email>takeda@nii.ac.jp</email>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>Nantes University</institution>
          ,
          <addr-line>LS2N, Nantes 44300</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>National Institute of Informatics</institution>
          ,
          <addr-line>2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Pontifical Catholic University of Chile</institution>
          ,
          <addr-line>Avenida Libertador General Bernardo O'Higgins 340, Santiago, Santiago Metropolitan Region</addr-line>
          ,
          <country country="CL">Chile</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>The Graduate University for Advanced Studies</institution>
          ,
          <addr-line>SOKENDAI, Shonan Village, Hayama, Kanagawa 240-0193</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Knowledge graphs (KGs) are a powerful tool for representing semantic information. Existing construction methods depend on human annotation or on semi-automated pipelines built from basic metadata. However, academic papers and their abstracts remain the primary carriers of academic information. The development of large language models (LLMs) has produced new tools for semantically demanding problems, so LLMs can help to create KGs from text automatically. This study comparatively evaluated LLMGraphTransformer, KGGen, and GT2KG, three LLM-based KG construction methods, using three computer science abstracts. We assessed performance via precision, recall, and F1-score against a gold standard and analyzed differences in knowledge representation. Our findings revealed a trade-off between precision and recall in the extracted triples. Furthermore, GT2KG extracted hierarchical and definitional triples, whereas LLMGraphTransformer and KGGen Pro identified causal and functional relationships. Divergent predicate structures (simple in the gold standard vs. complex in some LLM outputs) suggest varied KG objectives, from traditional knowledge sharing to Retrieval-Augmented Generation (RAG) context capture. These results indicate that LLM-based KG construction is promising but requires further research to enhance accuracy and robustness, emphasizing that the choice of methodology should align with the intended application.</p>
      </abstract>
      <kwd-group>
        <kwd>Knowledge Graph extraction</kwd>
        <kwd>Knowledge Graph construction</kwd>
        <kwd>Large language model</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Knowledge graphs (KGs) are essential for structuring information to support complex problem-solving
in intelligent systems [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Within the academic domain, however, existing KGs are often constructed
from basic metadata like authors and institutions, neglecting the rich scientific discourse—including
methods, findings, and hypotheses—embedded within the full text of articles. This limitation hinders
the development of systems that can truly comprehend and reason about scientific contributions.
      </p>
      <p>The recent advancements in Large Language Models (LLMs) offer a powerful solution for extracting
granular entities and relationships directly from unstructured text. This paper presents a comparative
evaluation of state-of-the-art LLM-based methods for automatically constructing KGs from scientific
papers, bridging the gap between metadata-level graphs and deep content understanding.</p>
      <p>
        We assessed three distinct KG construction methods: LLMGraphTransformer [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], KGGen [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and
GT2KG [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. To evaluate their performance, we adopted the framework proposed by Kabal et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
Following this, we manually aligned the triples generated by each method against a gold standard
dataset and computed their precision, recall, and F1-score. Furthermore, we conducted a comparative
analysis of the generated triples. We specifically analyzed the resulting KG to elucidate the fundamental
differences in how each tool defines and structures knowledge, revealing their distinct representational
approaches.
      </p>
      <p>Our evaluation reveals a clear trade-off between precision and recall across the tested methods.
While GT2KG achieved the highest precision, LLMGraphTransformer demonstrated superior recall.
Ultimately, LLMGraphTransformer (with Gemini Flash) and KGGen (with Gemini Pro) obtained the
highest F1-scores, representing the most effective balance of precision and recall.</p>
      <p>The comparable performance across all methods indicates that while LLM-based KG construction is
promising, significant challenges remain. This highlights a critical need for further research to improve
the accuracy and robustness of automated knowledge extraction from complex scholarly texts.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Works</title>
      <p>
        Various approaches have been proposed to convert natural language into knowledge graphs. Early
work by Hearst, developed in the early 1990s, introduced rule-based and mechanical transformation
methods for information extraction [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. This line of research focused on deterministic patterns to
identify relations in natural language, providing the foundation for later approaches. In the late 2000s,
Mintz et al. expanded on this direction by applying syntactic parsing combined with machine learning
techniques, such as distant supervision, to automatically generate training data for relation extraction
tasks [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. This significantly reduced the need for manual annotation and enabled scalable learning.
By the mid-2010s, Zeng et al. introduced deep learning-based models for relation classification. Their
use of convolutional neural networks (CNNs) demonstrated that automatically learned features could
outperform traditional hand-crafted ones, marking a shift towards end-to-end neural methods [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        Pretrained language models (PLMs) have been employed for knowledge graph construction, as
demonstrated in works such as [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In addition, generative models have been utilized for
knowledge graph completion (KGC), as explored in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], and [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ). GraphRAG [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] constructs
knowledge graphs from unstructured text to curate and summarize information, enabling more accurate
and semantically grounded retrieval.
      </p>
      <p>Alongside construction, a parallel line of work asks how to evaluate the resulting KGs.
Benchmark-based evaluations rely on curated datasets: CaRB provides a crowdsourced OpenIE benchmark for
triple extraction quality [14]; WebNLG measures graph–text fidelity via RDF-to-text generation [15];
DocRED [16] and TACRED [17] target the relation extraction task.</p>
      <p>Beyond static benchmarks, other evaluation approaches have been proposed. Differential testing [18]
trains multiple KG embedding models, runs head-prediction, and computes a differential score from
the proximity of model outputs. Another, downstream-utility evaluation [19], judges a KG by how
much it improves fixed tasks (e.g., classification, clustering, recommendation). Finally, an LLM-as-judge
approach has been proposed, where GraphJudge [20] first filters noise with an entity-centric strategy
and then uses a fine-tuned LLM to assess the correctness and consistency of triples and entities.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Approach</title>
      <p>
        We evaluated several LLM-based tools for knowledge graph construction from text using three randomly
selected abstracts (Document IDs 23, 438, and 519) from the G-T2KG Computer Science benchmark [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
This benchmark comprises 12 curated abstracts (108 sentences) selected from diverse topics to ensure a
variety of terminology and writing styles. For each abstract, we ran the tools to generate triples (subject,
predicate, object) and compared the output against the gold-standard triples from the benchmark.
      </p>
      <p>The evaluation was conducted manually, focusing on the semantic equivalence of the generated
triples. Specifically, a predicted triple was considered correct if it semantically matched a gold-standard
triple, regardless of differences in wording (e.g., synonyms), structure (e.g., active/passive voice), or
morphology (e.g., plural/singular forms).</p>
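      <p>Concretely, once a predicted triple set has been manually aligned with the gold standard, the reported scores reduce to simple counts. The following is a minimal Python sketch of this computation; the example counts are illustrative, not taken from the paper's experiments:</p>
      <preformat>
```python
def prf(num_matched, num_predicted, num_gold):
    """Precision, recall, and F1 from manually aligned triple counts.

    num_matched:   predicted triples judged semantically equivalent
                   to some gold-standard triple
    num_predicted: total triples the tool produced
    num_gold:      total triples in the gold standard
    """
    precision = num_matched / num_predicted if num_predicted else 0.0
    recall = num_matched / num_gold if num_gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Illustrative: 6 of 11 predicted triples match, against 12 gold triples
p, r, f = prf(6, 11, 12)
```
      </preformat>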
      <p>
        We selected three methods to compare: LLMGraphTransformer, KGGen, and G-T2KG.
LLMGraphTransformer from LangChain transforms text into a KG by employing a pipeline of predefined
prompts to extract entities as nodes and their corresponding relationships as edges [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. These tools were selected
based on three criteria: (1) their foundation in LLMs; (2) the replicability of their tests in our
environment; and (3) the structural alignment of the generated KG with the gold standard. Specifically,
both LLMGraphTransformer and KGGen employ a triple structure in which each element consists of a few
words rather than a full phrase. KGGen also
leverages an LLM for KG extraction but introduces a clustering step to produce a denser graph [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Its
process involves predicting triples with an LLM-based extractor, clustering these triples for refinement,
and performing entity resolution to merge nodes that refer to the same concept (e.g., handling plurals
and capitalization). Similarly, G-T2KG combines the OpenIE framework [21] with noun phrase-based
cleaning and LLM-based validation to reduce irrelevant triples and mitigate LLM hallucinations [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
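      <p>KGGen's clustering and entity resolution are themselves LLM-driven; as a rough, self-contained illustration of the kind of surface-level merging this step performs (plurals, capitalization, whitespace), one might normalize triple elements as in the sketch below. This is a simplified heuristic for intuition, not KGGen's actual implementation:</p>
      <preformat>
```python
def normalize(term):
    """Crude surface normalization: collapse whitespace, lowercase,
    and strip a trailing plural 's' (illustrative heuristic only)."""
    t = " ".join(term.lower().split())
    if t.endswith("s") and not t.endswith("ss"):
        t = t[:-1]
    return t

def merge_triples(triples):
    """Merge (subject, predicate, object) triples whose elements
    normalize to the same surface form."""
    return {(normalize(s), normalize(p), normalize(o)) for s, p, o in triples}

triples = [
    ("Knowledge Graphs", "uses", "LLMs"),
    ("knowledge graph", "use", "LLM"),   # duplicate after normalization
]
merged = merge_triples(triples)          # collapses to a single triple
```
      </preformat>
      <p>A real resolver must also handle synonyms and coreference, which is why KGGen delegates this step to an LLM rather than to rules like these.</p>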
      <p>Our experimental setup involved the following models: Gemini 2.5 Flash was used for
LLMGraphTransformer, while KGGen was tested with both Gemini 2.5 Flash and Gemini 2.5 Pro. For a baseline
comparison, the results for the G-T2KG method, which employed the GPT-4 model, were taken
from its source publication.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <table-wrap id="tbl1">
        <label>Table 1</label>
        <caption>
          <p>Precision, recall, and F-measure of each method on the three evaluated abstracts.</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>Document</th>
              <th>Model</th>
              <th>Precision</th>
              <th>Recall</th>
              <th>F-measure</th>
            </tr>
          </thead>
          <tbody>
            <tr><td>26</td><td>LLMGraphTransformer</td><td>0.60</td><td>0.55</td><td>0.57</td></tr>
            <tr><td>26</td><td>KG-Gen Flash</td><td>0.15</td><td>0.18</td><td>0.17</td></tr>
            <tr><td>26</td><td>KG-Gen Pro</td><td>0.30</td><td>0.55</td><td>0.39</td></tr>
            <tr><td>26</td><td>GT2KG</td><td>0.14</td><td>0.09</td><td>0.11</td></tr>
            <tr><td>438</td><td>LLMGraphTransformer</td><td>0.47</td><td>0.53</td><td>0.50</td></tr>
            <tr><td>438</td><td>KG-Gen Flash</td><td>0.40</td><td>0.40</td><td>0.40</td></tr>
            <tr><td>438</td><td>KG-Gen Pro</td><td>0.54</td><td>0.47</td><td>0.50</td></tr>
            <tr><td>438</td><td>GT2KG</td><td>0.57</td><td>0.27</td><td>0.36</td></tr>
            <tr><td>519</td><td>LLMGraphTransformer</td><td>0.48</td><td>0.43</td><td>0.45</td></tr>
            <tr><td>519</td><td>KG-Gen Flash</td><td>0.37</td><td>0.48</td><td>0.42</td></tr>
            <tr><td>519</td><td>KG-Gen Pro</td><td>0.41</td><td>0.74</td><td>0.53</td></tr>
            <tr><td>519</td><td>GT2KG</td><td>0.82</td><td>0.39</td><td>0.53</td></tr>
          </tbody>
        </table>
      </table-wrap>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>As shown in Table 1, LLMGraphTransformer and KG-Gen Pro demonstrate the best balance between
precision and recall, achieving the highest recall in most cases while maintaining comparable precision.
Conversely, G-T2KG exhibits the highest overall precision but suffers from low recall in most instances.</p>
      <p>A comparison between KGGen Flash and KGGen Pro suggests that larger LLMs can lead to
improved performance. However, LLMGraphTransformer, which also utilizes Gemini Flash, achieved
similar results, indicating that the choice of methodology is as crucial as the model size.</p>
      <p>G-T2KG, relying on an initial extraction phase with OpenIE, offers the key advantage of strictly
adhering to information explicitly stated in the text, thereby effectively preventing hallucinations.
However, it struggles with the accurate identification of entities and predicates, often leading to
truncated or overly extended entity spans. This limitation persists even after cleaning algorithms are
applied, particularly in complex sentences involving conjunctions or coreference. In contrast, LLMs,
leveraging their deep semantic understanding, are generally more effective at capturing relevant entities
in diverse and intricate contexts. However, this capability can come at the cost of lower precision and
potential hallucinations.</p>
      <p>
        A manual comparison of the generated triples against the gold standard is necessary, as the
terminology adopted by each system can vary significantly. For instance, the gold standard almost exclusively
employs predicates consisting of a single verb. In contrast, systems such as KGGen and G-T2KG
generate more complex, compound-verb predicates. These are challenging to normalize and unify
for downstream applications, including knowledge graph-based search systems. This divergence in
predicate structure likely originates from the fundamental purpose for which KGs are created. As
discussed by Hogan et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], KGs were traditionally created to facilitate knowledge sharing within
specific organizations or communities. Conversely, recent LLM-based approaches often generate KGs
with the primary goal of capturing contextual information for applications like Retrieval-Augmented
Generation (RAG) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Therefore, the intended application should guide the selection of a KG creation
methodology, taking this fundamental difference in purpose into account.
      </p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>In this research, we investigated three different models that leverage LLMs to generate KGs from text.
We evaluated their performance using abstracts from scientific papers to determine their capability
to capture rich semantic context. Our findings indicate that these models show considerable promise,
though there are clear areas for improvement. Furthermore, we discovered that different triple extraction
methods generate distinct yet complementary sets of triples. This suggests that LLM-based approaches
can serve a different purpose in KG creation compared to conventional methods. For future work, we
plan to extend our evaluation by incorporating a wider range of methods and LLMs, testing on abstracts
from diverse scientific domains, and extending the analysis to multiple languages.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used Gemini and Grammarly in order to: paraphrase
and reword, improve writing style, and check grammar and spelling. After using these tools, the
authors reviewed and edited the content as needed and take full responsibility for the publication’s
content.</p>
      <p>[14] S. Bhardwaj, S. Aggarwal, et al., CaRB: A crowdsourced benchmark for open IE, in:
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the
9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp.
6262–6267.
[15] C. Gardent, A. Shimorina, S. Narayan, L. Perez-Beltrachini, The WebNLG challenge: Generating
text from RDF data, in: Proceedings of the 10th International Conference on Natural Language
Generation, ACL Anthology, 2017, pp. 124–133.
[16] Y. Yao, D. Ye, P. Li, X. Han, Y. Lin, Z. Liu, Z. Liu, L. Huang, J. Zhou, M. Sun, DocRED: A large-scale
document-level relation extraction dataset, arXiv preprint arXiv:1906.06127 (2019).
[17] Y. Zhang, V. Zhong, D. Chen, G. Angeli, C. D. Manning, Position-aware attention and supervised
data improve slot filling, in: Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, 2017.
[18] J. Tan, D. Wang, J. Sun, Z. Liu, X. Li, Y. Feng, Towards assessing the quality of knowledge graphs
via differential testing, Information and Software Technology 174 (2024) 107521.
[19] N. Heist, S. Hertling, H. Paulheim, KGrEaT: A framework to evaluate knowledge graphs via
downstream tasks, in: Proceedings of the 32nd ACM International Conference on Information
and Knowledge Management, 2023, pp. 3938–3942.
[20] H. Huang, C. Chen, C. He, Y. Li, J. Jiang, W. Zhang, Can LLMs be good graph judger for knowledge
graph construction?, arXiv preprint arXiv:2411.17388 (2024).
[21] J. L. Martinez-Rodriguez, I. López-Arévalo, A. B. Rios-Alvarado, OpenIE-based approach for
knowledge graph construction from text, Expert Systems with Applications 113 (2018) 339–355.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>X.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Academic paper knowledge graph, the construction and application</article-title>
          .,
          <source>in: ICBASE</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>15</fpage>
          -
          <lpage>27</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <article-title>GitHub - langchain-ai/langchain: Build context-aware reasoning applications</article-title>
          , github.com, https://github.com/langchain-ai/langchain,
          <year>2025</year>
          . [Accessed 28-07-2025].
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B.</given-names>
            <surname>Mo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kazdan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mpala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cundy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kanatsoulis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Koyejo</surname>
          </string-name>
          , KGGen:
          <article-title>Extracting knowledge graphs from plain text with language models</article-title>
          ,
          <source>arXiv preprint arXiv:2502.09956</source>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>O.</given-names>
            <surname>Kabal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Harzallah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Guillet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ichise</surname>
          </string-name>
          ,
          <article-title>Enhancing domain-independent knowledge graph construction through openie cleaning and llms validation</article-title>
          ,
          <source>Procedia Computer Science</source>
          <volume>246</volume>
          (
          <year>2024</year>
          )
          <fpage>2617</fpage>
          -
          <lpage>2626</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Hearst</surname>
          </string-name>
          ,
          <article-title>Automatic acquisition of hyponyms from large text corpora</article-title>
          ,
          <source>in: COLING 1992 Volume 2: The 14th International Conference on Computational Linguistics</source>
          ,
          <year>1992</year>
          . URL: https://aclanthology.org/C92-2082/.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mintz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bills</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Snow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Jurafsky</surname>
          </string-name>
          ,
          <article-title>Distant supervision for relation extraction without labeled data</article-title>
          , in: K.-Y. Su, J. Su, J. Wiebe, H. Li (Eds.),
          <source>Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP</source>
          , Association for Computational Linguistics, Suntec, Singapore,
          <year>2009</year>
          , pp.
          <fpage>1003</fpage>
          -
          <lpage>1011</lpage>
          . URL: https://aclanthology.org/P09-1113/.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Zeng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <article-title>Relation classification via convolutional deep neural network</article-title>
          , in: J.
          <string-name>
            <surname>Tsujii</surname>
          </string-name>
          , J. Hajic (Eds.),
          <source>Proceedings of COLING</source>
          <year>2014</year>
          ,
          <source>the 25th International Conference on Computational Linguistics: Technical Papers</source>
          , Dublin City University and Association for Computational Linguistics, Dublin, Ireland,
          <year>2014</year>
          , pp.
          <fpage>2335</fpage>
          -
          <lpage>2344</lpage>
          . URL: https://aclanthology.org/C14-1220/.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. P.</given-names>
            <surname>Xing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Hu</surname>
          </string-name>
          , BertNet:
          <article-title>Harvesting knowledge graphs from pretrained language models</article-title>
          ,
          <source>arXiv preprint arXiv:2206.14268</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>V.</given-names>
            <surname>Swamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Romanou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jaggi</surname>
          </string-name>
          ,
          <article-title>Interpreting language models through knowledge graph extraction</article-title>
          ,
          <source>arXiv preprint arXiv:2111.08546</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Andrus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Nasiri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Cui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Cullen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Fulda</surname>
          </string-name>
          ,
          <article-title>Enhanced story comprehension for large language models through dynamic document-based knowledge graphs</article-title>
          ,
          <source>in: Proceedings of the AAAI conference on artificial intelligence</source>
          , volume
          <volume>36</volume>
          ,
          <year>2022</year>
          , pp.
          <fpage>10436</fpage>
          -
          <lpage>10444</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Lv</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ye</surname>
          </string-name>
          , SAC-KG:
          <article-title>Exploiting large language models as skilled automatic constructors for domain knowledge graphs</article-title>
          ,
          <source>arXiv preprint arXiv:2410.02811</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lairgi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Moncla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cazabet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Benabdeslem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cléau</surname>
          </string-name>
          ,
          <article-title>itext2kg: Incremental knowledge graphs construction using large language models</article-title>
          ,
          <source>in: International Conference on Web Information Systems Engineering</source>
          , Springer,
          <year>2024</year>
          , pp.
          <fpage>214</fpage>
          -
          <lpage>229</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Hogan</surname>
          </string-name>
          , E. Blomqvist,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cochez</surname>
          </string-name>
          , C. d'Amato,
          <string-name>
            <given-names>G. D.</given-names>
            <surname>Melo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gutierrez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kirrane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. E. L.</given-names>
            <surname>Gayo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Navigli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Neumaier</surname>
          </string-name>
          , et al.,
          <article-title>Knowledge graphs</article-title>
          ,
          <source>ACM Computing Surveys (CSUR) 54</source>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>37</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>