<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Knowledge Base-enhanced Multilingual Relation Extraction with Large Language Models</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Tong</forename><surname>Chen</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">School of AI and Advanced Computing</orgName>
								<orgName type="institution">Xi&apos;an Jiaotong-Liverpool University</orgName>
								<address>
									<postCode>215123</postCode>
									<settlement>Suzhou</settlement>
									<country key="CN">China</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Liverpool</orgName>
								<address>
									<postCode>L69 3BX</postCode>
									<settlement>Liverpool</settlement>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Procheta</forename><surname>Sen</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Liverpool</orgName>
								<address>
									<postCode>L69 3BX</postCode>
									<settlement>Liverpool</settlement>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Zimu</forename><surname>Wang</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Liverpool</orgName>
								<address>
									<postCode>L69 3BX</postCode>
									<settlement>Liverpool</settlement>
									<country key="GB">UK</country>
								</address>
							</affiliation>
							<affiliation key="aff2">
								<orgName type="department">School of Advanced Technology</orgName>
								<orgName type="institution">Xi&apos;an Jiaotong-Liverpool University</orgName>
								<address>
									<postCode>215123</postCode>
									<settlement>Suzhou</settlement>
									<country key="CN">China</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Zhengyong</forename><surname>Jiang</surname></persName>
							<email>jiang02@xjtlu.edu.cn</email>
							<affiliation key="aff0">
								<orgName type="department">School of AI and Advanced Computing</orgName>
								<orgName type="institution">Xi&apos;an Jiaotong-Liverpool University</orgName>
								<address>
									<postCode>215123</postCode>
									<settlement>Suzhou</settlement>
									<country key="CN">China</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Jionglong</forename><surname>Su</surname></persName>
							<email>jionglong.su@xjtlu.edu.cn</email>
							<affiliation key="aff0">
								<orgName type="department">School of AI and Advanced Computing</orgName>
								<orgName type="institution">Xi&apos;an Jiaotong-Liverpool University</orgName>
								<address>
									<postCode>215123</postCode>
									<settlement>Suzhou</settlement>
									<country key="CN">China</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Knowledge Base-enhanced Multilingual Relation Extraction with Large Language Models</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">FF25E735FB0D76C2EEF48B6C273F3C47</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:28+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Multilingual</term>
					<term>Relation Extraction</term>
					<term>Knowledge Bases</term>
					<term>Large Language Models</term>
					<term>Natural Language Processing</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Relation Extraction (RE) is an essential task that involves comprehending relational facts between entities in natural language texts. However, existing research on RE, particularly that based on large language models (LLMs), has been shown to fall short in this task due to context unawareness (a lack of fine-grained understanding), schema misalignment (misalignment with human-defined schemas), and world knowledge ignorance (reliance solely on internal parametric knowledge). In this paper, we propose a novel framework to address the aforementioned challenges. The framework consists of two stages, 1) entity linking and 2) relation inference, fully leveraging the efficacy of external knowledge bases (KBs) and LLMs in this task. We conduct extensive experiments in a multilingual setting and achieve state-of-the-art performance on the experimented datasets. LLMs with external knowledge typically outperform those without it by a significant margin, indicating the effectiveness of our proposed framework.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Relation Extraction (RE) is an essential task in information extraction (IE) that aims to comprehend relational facts between entities in natural language texts <ref type="bibr" target="#b0">[1]</ref>. For the first example in Table <ref type="table">1</ref>, given an original input and an entity pair of interest (Apple Inc., iPhone), an RE model should be able to predict the relationship between them, i.e., Apple Inc. −(product produced)→ iPhone. The structured knowledge obtained from RE models can support a variety of downstream applications, such as knowledge graph construction or completion <ref type="bibr" target="#b0">[1]</ref>, question answering <ref type="bibr" target="#b1">[2]</ref>, and dialogue systems <ref type="bibr" target="#b2">[3]</ref>.</p><p>Previous research usually formulates RE as a pairwise classification task with pre-trained language models (PLMs), for which novel methods have been proposed <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref>. Recently, large language models (LLMs) have demonstrated promising performance on a variety of downstream tasks <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref> across several paradigms, such as in-context learning (ICL) <ref type="bibr" target="#b7">[8]</ref>, chain-of-thought (CoT) prompting <ref type="bibr" target="#b8">[9]</ref>, and fine-tuning. However, they fall short on multiple specification-heavy tasks, including RE, where their performance, particularly under ICL, lags far behind state-of-the-art PLM-based methods <ref type="bibr" target="#b9">[10]</ref>. Table <ref type="table">1</ref> gives some examples of entity relationships mispredicted by LLMs. Overall, the reasons why LLMs cannot perform well in RE include their context unawareness, schema misalignment, and world knowledge ignorance:</p><p>1. Context Unawareness. Completing RE requires a thorough, fine-grained comprehension of the information in the given context. However, LLMs with ICL usually lack fine-grained context awareness, which results in disregarded or erroneous relation predictions <ref type="bibr" target="#b9">[10]</ref>. In the first example in Table <ref type="table">1</ref>, LLMs must first thoroughly grasp the context and the connection between "Apple Inc.", "device", and "iPhone"; otherwise, they are unable to determine the relationship between "Apple Inc." and "iPhone".</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>Examples of mispredicted relationships by large language models (LLMs), consisting of three categories: context unawareness, schema misalignment, and world knowledge ignorance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>2. Schema Misalignment. RE models are required to predict the relationships between entities from a human-labeled, pre-defined schema. However, the list of candidate relationships is typically lengthy, and some relation types are misaligned between LLMs and human expectations <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref>. In the second example in Table <ref type="table">1</ref>, LLMs may confuse the two relation types, "work for" and "part of", and make incorrect predictions about the relationship between "Armstrong" and "NASA Astronaut Corps". 3. World Knowledge Ignorance. World knowledge usually plays a vital role in RE, particularly in understanding implicit relationships <ref type="bibr" target="#b11">[12]</ref> and domain-specific knowledge <ref type="bibr" target="#b12">[13]</ref>. However, LLMs struggle in tasks that require rich world knowledge <ref type="bibr" target="#b13">[14]</ref> and rely solely on their internal parametric knowledge <ref type="bibr" target="#b9">[10]</ref>. In the third example in Table <ref type="table">1</ref>, LLMs may predict the relationship as "inventor" rather than "discoverer" without thoroughly understanding the knowledge of "Albert Einstein" and the "theory of relativity".</p><p>Knowledge bases (KBs) have been extensively employed in previous RE research. For example, researchers have leveraged the relationships obtained from Freebase <ref type="bibr" target="#b14">[15]</ref> and Wikipedia infoboxes <ref type="bibr" target="#b15">[16]</ref> to classify the relationships between entities in texts. However, such relationships are typically noisy and are not faithful to what is described in the given contexts <ref type="bibr" target="#b16">[17]</ref>. 
Subsequent research has focused on denoising and learning context-dependent relationships, such as utilizing natural language inference (NLI) with entailment prediction <ref type="bibr" target="#b17">[18]</ref>. Nevertheless, as LLMs have demonstrated their abilities in NLI <ref type="bibr" target="#b18">[19]</ref> and natural language reasoning <ref type="bibr" target="#b19">[20]</ref>, the combination of KBs and LLMs requires further exploration in order to design contextual, aligned, and knowledgeable RE models. Moreover, previous research on knowledge-enhanced RE primarily focuses on English corpora, which limits the adaptability of RE models to different linguistic contexts. This shortcoming hinders the development of comprehensive IE systems in the multilingual setting.</p><p>In this paper, we propose a novel framework for RE that addresses the aforementioned challenges by making the process contextually aware, schema-aligned, and world knowledge-considered. The framework consists of two stages, entity linking and relation inference, that fully leverage the efficacy of KBs and LLMs in this task. As shown in Figure <ref type="figure" target="#fig_0">1</ref>, given an original document and two entities of interest, we first link the entities to Wikidata <ref type="bibr" target="#b20">[21]</ref>, a large-scale multilingual KB, to ascertain the relationship between the entities according to world knowledge and regard it as the candidate relationship in the document. 
Subsequently, in the second stage, we use the ICL strategy on LLMs to determine whether the candidate relationship actually holds in the given context.</p><p>We conduct extensive experiments in a multilingual setting using three widely used RE datasets: DocRED <ref type="bibr" target="#b21">[22]</ref>, REBEL <ref type="bibr" target="#b17">[18]</ref>, and REDFM <ref type="bibr" target="#b22">[23]</ref>, with three LLMs: GPT-3.5, Llama 2 <ref type="bibr" target="#b23">[24]</ref>, and Flan-T5-XL <ref type="bibr" target="#b24">[25]</ref>. Experimental results demonstrate the effectiveness of our framework on all datasets, where zero-shot RE with external knowledge outperforms the same models without knowledge by a significant margin. Additionally, our framework achieves state-of-the-art performance on all three datasets and outperforms fine-tuned PLM-based methods, validating the efficacy of our proposed framework. We also conduct additional analysis on the effectiveness of knowledge, the impact of scaling up model parameters, and the coverage of knowledge in multilingualism to further demonstrate the effectiveness and generalizability of our proposed method.</p><p>The key contributions of this work are summarized as follows:</p><p>• We thoroughly review the key literature on LLM-based RE, and we argue that well-behaved RE models should be contextually aware, schema-aligned, and world knowledge-considered. • We propose a novel framework for RE, consisting of two stages: entity linking and relation inference, to fully leverage the efficacy of KBs and LLMs in the RE task. • Experimental results under a multilingual setting demonstrate the effectiveness and generalizability of our method across diverse linguistic contexts with substantial improvements over state-of-the-art baselines.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Relation Extraction</head><p>RE has been extensively studied over the years due to its potency in various downstream applications.</p><p>Early research in RE focuses on sentence-level RE <ref type="bibr" target="#b25">[26,</ref><ref type="bibr" target="#b26">27]</ref>, while some later approaches shift to the document level, aiming to comprehend the relationships between entities across multiple sentences <ref type="bibr" target="#b21">[22]</ref>. The most commonly used methods for RE are sequence-based techniques, which essentially rely on LSTM- or Transformer-based architectures <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref>, modeling complicated interactions between entities while implicitly capturing long-distance relationships. Furthermore, graph neural networks (GNNs) are also employed in RE due to their efficacy in representing and interacting with structured data. In this process, researchers construct relevant graphs using words, mentions, entities, or sentences as nodes and predict relationships by reasoning on the graph <ref type="bibr" target="#b27">[28,</ref><ref type="bibr" target="#b28">29]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Knowledge-enhanced RE</head><p>Knowledge-enhanced RE incorporates external knowledge to comprehensively understand the relations between entities. Some existing work utilizes external knowledge bases such as Freebase and Wikidata to improve representations by using entity and relation information. Liu et al. <ref type="bibr" target="#b29">[30]</ref> inject triples from knowledge graphs into texts, transforming sentences into knowledge-enhanced sentence trees. Chen et al. <ref type="bibr" target="#b30">[31]</ref> propose a knowledge-aware prompt-tuning approach with synergistic optimization that incorporates knowledge from relation labels into RE. External knowledge can also bridge the gap between general-domain and domain-specific data when general-domain RE methods are applied to specific domains. Roy and Pan <ref type="bibr" target="#b31">[32]</ref> use an entity-level knowledge graph in pre-trained BERT for clinical RE, integrating medical information.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">LLM-based RE</head><p>LLM-based RE has also been studied by researchers motivated by the generalized intelligence of LLMs in various downstream tasks, such as information extraction <ref type="bibr" target="#b32">[33]</ref>, machine translation <ref type="bibr" target="#b6">[7]</ref>, and adversarial attacks <ref type="bibr" target="#b5">[6]</ref>. However, previous research concludes that LLMs typically fall short in the RE task, with performance far behind PLM-based approaches <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b33">34,</ref><ref type="bibr" target="#b34">35]</ref>. To overcome this, Zhang et al. <ref type="bibr" target="#b35">[36]</ref> propose QA4RE, a framework that improves the performance of LLMs by aligning RE with question answering (QA) tasks. Wan et al. <ref type="bibr" target="#b36">[37]</ref> propose GPT-RE, which utilizes task-aware representations and reasoning logic to improve entity-relationship relevance and the capability of explaining input-label mappings. Li et al. <ref type="bibr" target="#b37">[38]</ref> suggest integrating LLMs with an NLI module to construct relation triples, in response to the abundance of pre-defined relation types and the uncontrollability of LLMs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methodology</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Problem Formulation</head><p>We define our RE task as follows: Given a document 𝐷 consisting of 𝑁 sentences {𝑠 1 , 𝑠 2 , ..., 𝑠 𝑁 } (𝑁 is the number of sentences within the document, and 𝑁 = 1 indicates sentence-level RE) and an entity pair of interest (𝑒 ℎ , 𝑒 𝑡 ), in which 𝑒 ℎ represents the head entity and 𝑒 𝑡 the tail entity, the RE model aims to determine the potential relationship between 𝑒 ℎ and 𝑒 𝑡 from a pre-defined schema. In our task, a KB 𝒦 provides world knowledge, and an LLM is utilized to identify the existence of the relationship 𝑟 𝒦 retrieved from 𝒦 between 𝑒 ℎ and 𝑒 𝑡 in the given document.</p></div>
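The formulation above can be sketched as a minimal two-stage pipeline skeleton. This is an illustrative sketch, not the authors' implementation: the names REInstance and extract_relation and the callback signatures are assumptions for exposition.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class REInstance:
    sentences: List[str]   # the document D = {s_1, ..., s_N}
    head: str              # head entity e_h
    tail: str              # tail entity e_t


def extract_relation(inst: REInstance,
                     kb_lookup: Callable[[str, str], Optional[str]],
                     llm_verify: Callable[[str, str, str, str], bool],
                     classify: Callable[[str, str, str], str]) -> str:
    """Two-stage RE sketch: (1) query the KB for a candidate relation r_K;
    (2) ask the LLM whether r_K actually holds in the document context."""
    doc = " ".join(inst.sentences)
    candidate = kb_lookup(inst.head, inst.tail)        # stage 1: entity linking + query
    if candidate is not None and llm_verify(doc, inst.head, inst.tail, candidate):
        return candidate                               # stage 2: relation verified by the LLM
    return classify(doc, inst.head, inst.tail)         # fall back to direct classification
```

The fallback branch corresponds to entity pairs that cannot be linked to the KB, for which the LLM classifies the relation directly from the schema.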
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Entity Linking and Querying</head><p>In the first stage of our proposed framework, we conduct entity linking and querying to obtain the candidate relationship between the entities of interest, which is regarded as world-knowledge supervision for the given entity pair. Entity linking is the process of linking recognized entity mentions to entities in a KB, which is a crucial first step in extracting structured information from unstructured text <ref type="bibr" target="#b38">[39]</ref>. In our framework, we link the labeled entity mentions to Wikidata <ref type="bibr" target="#b20">[21]</ref>, a large-scale multilingual KB. Once the entities are linked, we issue a query based on SPARQL<ref type="foot" target="#foot_0">1</ref> to retrieve the relationship between the linked entities and regard it as the candidate relationship between them. For the datasets whose entities are annotated with coreference chains, we iterate over the head and tail entity mentions until a pair of entities can be linked to Wikidata.</p></div>
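For illustration, the relation lookup in this stage can be expressed with a standard Wikidata SPARQL pattern that retrieves the direct properties linking one entity to another. The exact query used in the paper is not shown, so the following builder is an assumed sketch; the QIDs used in any example are placeholders.

```python
def build_relation_query(head_qid: str, tail_qid: str, lang: str = "en") -> str:
    """Build a SPARQL query returning the Wikidata properties (with labels)
    that directly link the head entity to the tail entity."""
    return f"""SELECT ?rel ?relLabel WHERE {{
  wd:{head_qid} ?p wd:{tail_qid} .      # any direct statement head -> tail
  ?rel wikibase:directClaim ?p .        # map the truthy claim back to its property
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "{lang}" . }}
}}"""
```

The query would typically be sent to the public Wikidata Query Service endpoint; each returned `?relLabel` (e.g., "part of") then serves as a candidate relationship for the second stage.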
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Relation Inference using LLMs</head><p>After obtaining the candidate relationship between the entity pair of interest, in the second stage of our proposed framework, we adopt LLMs to identify whether the relationship actually occurs in the given context. Specifically, we leverage the ICL strategy <ref type="bibr" target="#b7">[8]</ref> that conditions LLMs on a natural language instruction, and we formulate the task as a QA task due to the capacity of LLMs to answer natural language questions.</p><p>In accordance with the entity linking results of the first stage, we design separate prompts for entity pairs for which a candidate relationship has or has not been found; the prompts, with separate examples, are illustrated in Tables <ref type="table" target="#tab_3">2 and 3</ref>. For entity pairs with a candidate relationship found in the KB, we ask the LLMs to determine whether it actually holds in the given context. Otherwise, we ask the LLMs to classify the relationship between the entities directly from the schema. This framework enables us to carry out a contextual, aligned, and knowledgeable RE process: it regards the knowledge in KBs as supervision, and the inference with LLMs makes predictions with respect to the given contexts. Furthermore, since KBs are human-constructed world knowledge, their candidate relationships also conform to human-defined schemas.</p></div>
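The two prompt variants can be sketched as follows. The verification wording follows the question shown in Table 2; the classification wording and the function name build_prompt are our own illustrative assumptions, not the exact prompt from the paper.

```python
from typing import List, Optional


def build_prompt(context: str, head: str, tail: str,
                 candidate: Optional[str], schema: List[str]) -> str:
    """Verification prompt when the KB supplies a candidate relation
    (wording mirrors Table 2); classification prompt otherwise."""
    if candidate is not None:
        return (f"Given information: {context}\n"
                f'Is there such a relationship "{candidate}" '
                f'between "{head}" and "{tail}"? Answer Yes or No.')
    options = "; ".join(schema)
    return (f"Given information: {context}\n"
            f'Which of the following relationships holds between '
            f'"{head}" and "{tail}"? Options: {options}.')
```

Framing verification as a yes/no question keeps the LLM's output controllable, whereas the fallback classification prompt must expose the full (often lengthy) relation schema.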
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiments and Analysis</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Datasets</head><p>We conduct our experiments on the following three datasets, whose statistics are organized in Table <ref type="table" target="#tab_5">4</ref>: • DocRED <ref type="bibr" target="#b21">[22]</ref> is a document-level, human-annotated RE dataset constructed from Wikipedia and Wikidata. Since at least 40.7% of the relational facts in DocRED can only be extracted from multiple sentences, it requires models to comprehensively model the whole document to determine the relationships between entities. • REBEL <ref type="bibr" target="#b17">[18]</ref> is a distantly supervised RE dataset built by aligning Wikipedia hyperlinks with Wikidata. It employs an NLI model to filter noise, discarding through entailment prediction the relations that are not entailed by the Wikipedia text.</p><p>• REDFM <ref type="bibr" target="#b22">[23]</ref> is constructed for multilingual RE and covers seven languages. Different from the REBEL dataset, REDFM not only applies NLI to filter noise but also conducts manual filtering to ensure annotation quality. We select the English (EN), Spanish (ES), and German (DE) subsets to validate the performance of our framework in a multilingual setting.</p><p>Following previous work on LLM-based RE <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b33">34]</ref>, we sample a subset from the validation set of DocRED and the test sets of REBEL and REDFM to validate the performance of our method against the baselines. We evaluate the performance of the experimented models using the micro F1-score.</p></div>
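The micro F1-score over extracted (head, relation, tail) triples can be computed as below; this is the standard formulation, not code from the paper.

```python
def micro_f1(gold: set, pred: set) -> float:
    """Micro F1 over sets of extracted (head, relation, tail) triples."""
    tp = len(gold & pred)   # triples predicted correctly
    fp = len(pred - gold)   # predicted but not in gold
    fn = len(gold - pred)   # gold but never predicted
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Micro-averaging pools true/false positives across all relation types, so frequent relations dominate the score, which matches the evaluation convention in prior LLM-based RE work.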
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Baselines</head><p>We compare the performance of our proposed method on RE against the following baselines:</p><p>• KD-DocRE <ref type="bibr" target="#b3">[4]</ref> is a semi-supervised framework for document-level RE that incorporates axial attention, adaptive focal loss, and knowledge distillation to capture the interdependency among entity pairs. It addresses the class imbalance problem and the differences between human-annotated and distantly supervised data in document-level RE. • DREEAM <ref type="bibr" target="#b4">[5]</ref> is a memory-efficient approach that improves document-level RE by incorporating evidence and offering a self-training strategy, addressing the high memory consumption and limited availability of annotated data in document-level RE.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Experimental Setup</head><p>We conduct our experiments on three commonly used multilingual LLMs: GPT-3.5, Llama 2 <ref type="bibr" target="#b23">[24]</ref>, and Flan-T5 <ref type="bibr" target="#b24">[25]</ref>, and we access the models with different approaches and settings. For GPT-3.5, we call the OpenAI API<ref type="foot" target="#foot_1">2</ref> and select the gpt-3.5-turbo-instruct checkpoint due to its ability to interpret and execute human instructions seamlessly. For Llama 2 (Llama-2-7b-chat-hf) and Flan-T5 (flan-t5-xl), the models are retrieved from the HuggingFace repository<ref type="foot" target="#foot_2">3</ref>. To introduce mild randomness while producing relatively stable outputs, we set the temperature of GPT-3.5 and Llama 2 to 0.2.</p><p>All experiments are conducted on a single NVIDIA GeForce RTX 4090 graphics card.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4.">Main Results</head><p>The experimental results of our proposed framework under different LLMs, with and without external knowledge, are given in Table <ref type="table" target="#tab_6">5</ref>. From the table, we make the following observations: First, without the incorporation of external knowledge, LLMs fall short in the RE task, and their performance lags far behind that of the state-of-the-art baseline models. These results are consistent with previous work <ref type="bibr" target="#b9">[10]</ref>, indicating the correctness of our implementation. Among the three LLMs, Flan-T5 achieves the best performance and is remarkably close to the carefully designed baseline models, indicating its excellent document-level understanding and relation reasoning ability. Llama 2 achieves the worst performance, with results close to zero. We sample 50 outputs of Llama 2 and compare them with the ground truths, and we attribute this phenomenon to the excessively uncontrollable and flexible nature of its output compared to the other models.</p><p>Second, after incorporating external knowledge, the LLMs exhibit remarkable performance across all datasets, with average improvements of 52.90, 45.90, and 11.82 for GPT-3.5, Llama 2, and Flan-T5, respectively. Notably, the performance of Flan-T5 under the zero-shot setting achieves state-of-the-art results on all datasets, better even than the carefully designed, fine-tuned PLM-based methods. GPT-3.5 improves the most among the models, but a gap remains between its performance and that of the PLM-based methods. These results demonstrate that the performance of LLMs with external knowledge in a zero-shot setting can be comparable to or even surpass that of fine-tuned PLM-based methods on the RE task. 
They also underscore the effectiveness of our approach in multilingual settings, which is not limited to the English context.</p><p>Finally, the performance of the experimented models is consistent regardless of the language and the presence of external knowledge. Flan-T5 consistently achieves the best performance across all datasets, while Llama 2 exhibits comparatively lower performance. This indicates that Flan-T5 generalizes robustly on the RE task and can be regarded as an ideal model for real-world applications, whereas Llama 2 requires additional improvements to reach higher performance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.5.">Additional Analysis</head><p>Effectiveness of External Knowledge First, we analyze the effectiveness of the external KB in our proposed method. Since not all entity pairs can be linked to Wikidata, we calculate the percentage of correct predictions of LLMs with and without the incorporation of external knowledge, denoted as 𝑃 𝑤/𝑘𝑛𝑜𝑤 and 𝑃 𝑤/𝑜𝑘𝑛𝑜𝑤 and calculated as: </p><formula xml:id="formula_0">𝑃 𝑤/𝑘𝑛𝑜𝑤 = # of Correct Prediction # of Entity Pairs Linked to Wikidata ,<label>(1)</label></formula><formula xml:id="formula_1">𝑃 𝑤/𝑜𝑘𝑛𝑜𝑤 = # of Correct Prediction # of Entity Pairs Not Linked to Wikidata .<label>(2)</label></formula><p>We visualize the two quantities in Figures <ref type="figure">2 and 3</ref>, respectively. Our findings show a significant difference in performance with and without incorporating knowledge across all datasets and LLMs. Specifically, the proportion of correct predictions by LLMs with external knowledge exceeds 80%, while the results without knowledge are inferior, among which only Flan-T5 can exceed 40%. Because of the flexible and uncontrollable nature of Llama 2, its correct predictions without external knowledge are nearly zero, while after incorporating knowledge, its results improve to over 80%. This difference indicates that a performance gap exists across models: while all models can achieve similar performance with external knowledge, their results without it are dominated by the relation classification step, suggesting that LLMs are good inferencers but poor classifiers for entity relationships. Moreover, although the performance of LLMs on the REBEL dataset is better with the incorporation of external knowledge, it becomes worse for the models without knowledge due to the large relation schema of the dataset. Designing better methods to handle the entity pairs that cannot be linked to the KB remains a challenge for future research.</p></div>
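The two proportions reduce to simple ratio arithmetic over the linked and unlinked entity pairs; the function name and example counts below are illustrative, not figures from the paper.

```python
def prediction_accuracy(correct_linked: int, total_linked: int,
                        correct_unlinked: int, total_unlinked: int):
    """Return (P_w/know, P_w/o know) as percentages: correct predictions
    over entity pairs that could / could not be linked to Wikidata."""
    p_with = 100.0 * correct_linked / total_linked
    p_without = 100.0 * correct_unlinked / total_unlinked
    return p_with, p_without
```

Splitting the denominators this way keeps the two measures comparable even though each model sees a different mix of linked and unlinked pairs.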
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Scaling Law</head><p>We also analyze whether the performance of LLM-based RE benefits from scaling up the model parameters. Specifically, we select the Flan-T5 series models in four sizes: Flan-T5-Small (80M), Flan-T5-Base (250M), Flan-T5-Large (780M), and Flan-T5-XL (3B), and evaluate the performance of the models with and without external knowledge. As shown in Figure <ref type="figure">4</ref>, a clear positive scaling effect exists in LLM-based RE, i.e., larger models achieve better performance in the RE task. We can also observe the role of external knowledge: after incorporating external knowledge into the LLM, increasing the number of model parameters has a smaller impact on the results. Moreover, with external knowledge, Flan-T5-Small can surpass Flan-T5-Large without knowledge, and Flan-T5-Base can exceed the performance of Flan-T5-XL without knowledge. This validates the effectiveness of both LLMs and external knowledge when handling the RE task.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Coverage of Knowledge in Multilingualism</head><p>Given the multilingual support of the chosen LLMs, we extend our investigation to include multilingual RE experiments using the REDFM dataset. The experimental outcomes, as summarized in Table <ref type="table" target="#tab_6">5</ref>, reveal subpar performance when the LLMs attempt multilingual RE tasks directly. However, integrating external knowledge significantly enhances performance, prompting us to explore the coverage of Wikidata for the selected multilingual dataset. To this end, we conduct supplementary experiments on REDFM-EN, REDFM-DE, and REDFM-ES to assess the percentage of samples that could be linked to Wikidata for external knowledge, as illustrated in Figure <ref type="figure" target="#fig_2">5</ref>. The results indicate that a relatively high proportion of samples across the three languages could be covered by Wikidata, with coverages nearing or exceeding 85%, specifically 85.5%, 84.5%, and 84.1% for REDFM-EN, REDFM-DE, and REDFM-ES, respectively. The remaining 14.5%, 15.5%, and 15.9% are attributed to entries not indexed by Wikidata, with a small fraction being inaccessible due to unstable network connections.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion and Future Work</head><p>In this paper, we propose a novel framework to address the current challenges of LLMs falling short in RE tasks because of their context unawareness, schema misalignment, and world knowledge ignorance. It consists of two stages, entity linking and relation inference, fully leveraging the efficacy of KBs and LLMs in this task. We conduct experiments in a multilingual setting using three datasets and three LLMs to validate the effectiveness of our framework: zero-shot RE with world knowledge outperforms RE without it by a significant margin and achieves state-of-the-art performance on all experimented datasets, even surpassing fine-tuned PLM-based methods. We also conduct additional analysis on the effectiveness of knowledge, the impact of scaling up model parameters, and the coverage of knowledge in multilingualism to further demonstrate the effectiveness and generalizability of our proposed method. In the future, we will conduct a more detailed analysis of other related tasks, such as event relation extraction, to further validate the effectiveness and generalizability of our proposed method.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1:</head><label>1</label><figDesc>Figure 1: Overall framework of the proposed method, consisting of two stages: (1) entity linking and (2) relation inference using large language models (LLMs).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :Figure 3 :Figure 4 :</head><label>234</label><figDesc>Figure 2: Percentage of correct relation prediction with and without external knowledge on the DocRED dataset.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Knowledge coverage in the portion of the dataset we chose for the three languages.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Examples of LLM failures in relation extraction: context unawareness, schema misalignment, and knowledge ignorance.</figDesc><table><row><cell>Context Unawareness</cell><cell>Input: [...] Devices include the iPhone, iPad, Mac, [...]. Entities: Apple Inc., iPhone Prediction: N/A Label: product produced</cell></row><row><cell>Schema Misalignment</cell><cell>Input: Armstrong joined the NASA Astronaut Corps in the second group, which was selected in 1962. Entities: Armstrong, NASA Astronaut Corps Prediction: work for Label: part of</cell></row><row><cell>Knowledge Ignorance</cell><cell>Input: The theory of relativity usually encompasses two interrelated physics theories by Albert Einstein. Entities: theory of relativity, Albert Einstein Prediction: inventor Label: discoverer</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2</head><label>2</label><figDesc>Instruction and an example for relation inference for the entity pair with world knowledge retrieved from Wikidata.</figDesc><table><row><cell>Instruction:</cell></row><row><cell>Given information: {source_text}</cell></row><row><cell>Is there such a relationship {relationship} between {head_entity} and {tail_entity}?</cell></row><row><cell>Example:</cell></row><row><cell>Coburg Peak is the rocky peak rising to 783m in Erul Heights on Trinity Peninsula in Graham Land, Antarctica.</cell></row><row><cell>Head Entity: Trinity Peninsula</cell></row><row><cell>Tail Entity: Graham Land</cell></row><row><cell>Relationship: part of</cell></row><row><cell>Output:</cell></row><row><cell>Yes.</cell></row><row><cell>Answer:</cell></row><row><cell>(Trinity Peninsula, part of, Graham Land)</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3</head><label>3</label><figDesc>Instruction and an example for relation inference for the entity pair without world knowledge retrieved from Wikidata.</figDesc><table><row><cell>Head Entity: Trakiya Heights</cell></row><row><cell>Tail Entity: Antarctica</cell></row><row><cell>Output:</cell></row><row><cell>continent</cell></row><row><cell>Answer:</cell></row><row><cell>(Trakiya Heights, continent, Antarctica)</cell></row></table><note>Instruction: Given information: {source_text} Options of relations: {relation_list} Which relationship between {head_entity} and {tail_entity} can be inferred from given options? (Please answer in English and only output the option) Example: Source Text: Utus Peak is the rocky peak rising to 1217m in Trakiya Heights on Trinity Peninsula in Graham Land, Antarctica. The peak is named after the ancient Roman town of Utus in Northern Bulgaria. Relation List: head of government, country, place of death, sibling, [...]</note></figure>
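The knowledge-free prompt in Table 3 is a direct template fill; a minimal sketch of its construction follows. The placeholder names mirror the table's template, while the helper function itself is an assumption, not the authors' code.

```python
# Template for the knowledge-free case, with placeholders as in Table 3.
PROMPT_TEMPLATE = (
    "Given information: {source_text}\n"
    "Options of relations: {relation_list}\n"
    "Which relationship between {head_entity} and {tail_entity} can be "
    "inferred from given options? "
    "(Please answer in English and only output the option)"
)

def build_prompt(source_text: str, relations: list, head: str, tail: str) -> str:
    """Fill the relation-inference template used when no Wikidata knowledge
    is retrieved for the entity pair."""
    return PROMPT_TEMPLATE.format(
        source_text=source_text,
        relation_list=", ".join(relations),
        head_entity=head,
        tail_entity=tail,
    )
```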
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 4</head><label>4</label><figDesc>Dataset statistics.</figDesc><table><row><cell>Dataset</cell><cell>#Types</cell><cell>Train</cell><cell>Val</cell><cell>Test</cell></row><row><cell>DocRED</cell><cell>96</cell><cell>3,053</cell><cell>1,000</cell><cell>1,000</cell></row><row><cell>REBEL</cell><cell>1,146</cell><cell>3.13M</cell><cell>173K</cell><cell>174K</cell></row><row><cell>REDFM-EN</cell><cell>32</cell><cell>1.88K</cell><cell>449</cell><cell>446</cell></row><row><cell>REDFM-ES</cell><cell>32</cell><cell>1.87K</cell><cell>228</cell><cell>281</cell></row><row><cell>REDFM-DE</cell><cell>32</cell><cell>2.07K</cell><cell>252</cell><cell>285</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_6"><head>Table 5</head><label>5</label><figDesc>Experimental results on F1-score of our proposed method under different large language models (LLMs) with and without external knowledge against baselines, in which the best and the second-best results are highlighted in bold and underlined, respectively.</figDesc><table><row><cell>Model</cell><cell cols="5">DocRED REBEL REDFM-EN REDFM-ES REDFM-DE</cell></row><row><cell>KD-DocRE</cell><cell>68.79</cell><cell>-</cell><cell>-</cell><cell>-</cell><cell>-</cell></row><row><cell>DREEAM</cell><cell>69.55</cell><cell>-</cell><cell>-</cell><cell>-</cell><cell>-</cell></row><row><cell>GPT-3.5</cell><cell>22.45</cell><cell>23.65</cell><cell>19.22</cell><cell>9.88</cell><cell>10.03</cell></row><row><cell>w/ Knowledge</cell><cell>62.52</cell><cell>56.68</cell><cell>68.17</cell><cell>60.27</cell><cell>69.39</cell></row><row><cell>LLaMA 2</cell><cell>0.00</cell><cell>0.72</cell><cell>0.00</cell><cell>0.00</cell><cell>0.00</cell></row><row><cell>w/ Knowledge</cell><cell>27.24</cell><cell>54.51</cell><cell>52.83</cell><cell>33.33</cell><cell>51.53</cell></row><row><cell>Flan-T5</cell><cell>62.79</cell><cell>60.65</cell><cell>70.76</cell><cell>61.05</cell><cell>67.86</cell></row><row><cell>w/ Knowledge</cell><cell>73.90</cell><cell>70.40</cell><cell>79.32</cell><cell>73.84</cell><cell>81.97</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://www.w3.org/TR/sparql12-query/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://platform.openai.com/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">https://huggingface.co/</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Acknowledgments</head><p>This research is funded by the Postgraduate Research Scholarship (PGRS) at Xi'an Jiaotong-Liverpool University, contract number FOSSP221001.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Learning from Context or Names? An Empirical Study on Neural Relation Extraction</title>
		<author>
			<persName><forename type="first">H</forename><surname>Peng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhou</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2020.emnlp-main.298</idno>
		<ptr target="https://aclanthology.org/2020.emnlp-main.298" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of EMNLP</title>
				<meeting>EMNLP</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="3661" to="3672" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Entity-relation extraction as multi-turn question answering</title>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Yin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Yuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Chai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/P19-1129</idno>
		<ptr target="https://aclanthology.org/P19-1129" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of ACL</title>
				<meeting>ACL</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1340" to="1350" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems</title>
		<author>
			<persName><forename type="first">A</forename><surname>Madotto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-S</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Fung</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/P18-1136</idno>
		<ptr target="https://aclanthology.org/P18-1136" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of ACL</title>
				<meeting>ACL</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1468" to="1478" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">Q</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Bing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">T</forename><surname>Ng</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2022.findings-acl.132</idno>
		<ptr target="https://aclanthology.org/2022.findings-acl.132" />
		<title level="m">Document-level relation extraction with adaptive focal loss and knowledge distillation</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="1672" to="1681" />
		</imprint>
	</monogr>
	<note>Findings of ACL</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">DREEAM: Guiding attention with evidence for improving document-level relation extraction</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Okazaki</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2023.eacl-main.145</idno>
		<ptr target="https://aclanthology.org/2023.eacl-main.145" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of EACL</title>
				<meeting>EACL</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1971" to="1983" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Nguyen</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2311.11861</idno>
		<title level="m">Generating valid and natural adversarial examples with large language models</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Na</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Maimaiti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Chen</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2402.10699</idno>
		<title level="m">Rethinking human-like translation strategy: Integrating drift-diffusion model with large language models for machine translation</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Language models are few-shot learners</title>
		<author>
			<persName><forename type="first">T</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ryder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Subbiah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Kaplan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dhariwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Neelakantan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Shyam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sastry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Askell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Agarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Herbert-Voss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Krueger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Henighan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Child</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ramesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ziegler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Winter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hesse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Sigler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Litwin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chess</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Berner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mccandlish</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Radford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sutskever</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Amodei</surname></persName>
		</author>
		<ptr target="https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of NeurIPS</title>
				<meeting>NeurIPS</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="1877" to="1901" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Chain-of-thought prompting elicits reasoning in large language models</title>
		<author>
			<persName><forename type="first">J</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Schuurmans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bosma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ichter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Xia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Chi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">V</forename><surname>Le</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zhou</surname></persName>
		</author>
		<ptr target="https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of NeurIPS</title>
				<meeting>NeurIPS</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="24824" to="24837" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Peng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Qi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Hou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2311.08993</idno>
		<title level="m">When does in-context learning fall short and why? a study on specification-heavy tasks</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Measuring inductive biases of in-context learning with underspecified demonstrations</title>
		<author>
			<persName><forename type="first">C</forename><surname>Si</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Friedman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Joshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Feng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>He</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2023.acl-long.632</idno>
		<ptr target="https://aclanthology.org/2023.acl-long.632" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of ACL</title>
				<meeting>ACL</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="11289" to="11310" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Knowledge-enriched event causality identification via latent structure induction networks</title>
		<author>
			<persName><forename type="first">P</forename><surname>Cao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zuo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Peng</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2021.acl-long.376</idno>
		<ptr target="https://aclanthology.org/2021.acl-long.376" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of ACL-IJCNLP</title>
				<meeting>ACL-IJCNLP</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="4862" to="4872" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Joint biomedical entity and relation extraction with knowledge-enhanced collective inference</title>
		<author>
			<persName><forename type="first">T</forename><surname>Lai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Zhai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">H</forename><surname>Tran</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2021.acl-long.488</idno>
		<ptr target="https://aclanthology.org/2021.acl-long.488" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of ACL-IJCNLP</title>
				<meeting>ACL-IJCNLP</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="6248" to="6260" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">When not to trust language models: Investigating effectiveness of parametric and non-parametric memories</title>
		<author>
			<persName><forename type="first">A</forename><surname>Mallen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Asai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Zhong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Das</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Khashabi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hajishirzi</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2023.acl-long.546</idno>
		<ptr target="https://aclanthology.org/2023.acl-long.546" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of ACL</title>
				<meeting>ACL</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="9802" to="9822" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Distant supervision for relation extraction without labeled data</title>
		<author>
			<persName><forename type="first">M</forename><surname>Mintz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bills</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Snow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Jurafsky</surname></persName>
		</author>
		<ptr target="https://aclanthology.org/P09-1113" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of ACL-AFNLP</title>
				<meeting>ACL-AFNLP</meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="1003" to="1011" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Learning 5000 relational extractors</title>
		<author>
			<persName><forename type="first">R</forename><surname>Hoffmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">S</forename><surname>Weld</surname></persName>
		</author>
		<ptr target="https://aclanthology.org/P10-1030" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of ACL</title>
				<meeting>ACL</meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="286" to="295" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">New frontiers of information extraction</title>
		<author>
			<persName><forename type="first">M</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Roth</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2022.naacl-tutorials.3</idno>
		<ptr target="https://aclanthology.org/2022.naacl-tutorials.3" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of NAACL-HLT (Tutorials)</title>
				<meeting>NAACL-HLT (Tutorials)</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="14" to="25" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">REBEL: Relation extraction by end-to-end language generation</title>
		<author>
			<persName><forename type="first">P.-L</forename><surname>Huguet Cabot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Navigli</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2021.findings-emnlp.204</idno>
		<ptr target="https://aclanthology.org/2021.findings-emnlp.204" />
	</analytic>
	<monogr>
		<title level="m">Findings of EMNLP</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="2370" to="2381" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">HW-TSC at SemEval-2023 task 7: Exploring the natural language inference capabilities of ChatGPT and pre-trained language model for clinical trial</title>
		<author>
			<persName><forename type="first">X</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Su</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Qiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Ma</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2023.semeval-1.221</idno>
		<ptr target="https://aclanthology.org/2023.semeval-1.221" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of SemEval-2023</title>
				<meeting>SemEval-2023</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1603" to="1608" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Logic-LM: Empowering large language models with symbolic solvers for faithful logical reasoning</title>
		<author>
			<persName><forename type="first">L</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Albalak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wang</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2023.findings-emnlp.248</idno>
		<ptr target="https://aclanthology.org/2023.findings-emnlp.248" />
	</analytic>
	<monogr>
		<title level="m">Findings of EMNLP</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="3806" to="3824" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Wikidata: a free collaborative knowledgebase</title>
		<author>
			<persName><forename type="first">D</forename><surname>Vrandečić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Krötzsch</surname></persName>
		</author>
		<idno type="DOI">10.1145/2629489</idno>
		<ptr target="https://doi.org/10.1145/2629489" />
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">57</biblScope>
			<biblScope unit="page" from="78" to="85" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">DocRED: A large-scale document-level relation extraction dataset</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Yao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ye</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sun</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/P19-1074</idno>
		<ptr target="https://aclanthology.org/P19-1074" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of ACL</title>
				<meeting>ACL</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="764" to="777" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">REDFM: a filtered and multilingual relation extraction dataset</title>
		<author>
			<persName><forename type="first">L</forename><surname>Huguet Cabot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tedeschi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A.-C</forename><surname>Ngonga Ngomo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Navigli</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2023.acl-long.237</idno>
		<ptr target="https://aclanthology.org/2023.acl-long.237" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of ACL</title>
				<meeting>ACL</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="4326" to="4343" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Touvron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Martin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Stone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Albert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Almahairi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Babaei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Bashlykov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Batra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bhargava</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bhosale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bikel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Blecher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">C</forename><surname>Ferrer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Cucurull</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Esiobu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fernandes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Fuller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Goswami</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hartshorn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hosseini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Hou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Inan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kardas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kerkez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Khabsa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kloumann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Korenev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">S</forename><surname>Koura</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-A</forename><surname>Lachaux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lavril</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Liskovich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Mao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Martinet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mihaylov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Mishra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Molybog</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Nie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Poulton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Reizenstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Rungta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Saladi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Schelten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Silva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">M</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Subramanian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><forename type="middle">E</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Taylor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Williams</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">X</forename><surname>Kuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Zarov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Fan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kambadur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Narang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rodriguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Stojnic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Edunov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Scialom</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2307.09288</idno>
		<title level="m">Llama 2: Open foundation and fine-tuned chat models</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">W</forename><surname>Chung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Hou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Longpre</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Zoph</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Tay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Fedus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Dehghani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Brahma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Webson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Gu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Suzgun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chowdhery</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Castro-Ros</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pellat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Robinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Valter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Narang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Mishra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Petrov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">H</forename><surname>Chi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dean</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Roberts</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">V</forename><surname>Le</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wei</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2210.11416</idno>
		<title level="m">Scaling instruction-finetuned language models</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Simultaneously self-attending to all mentions for full-abstract biological relation extraction</title>
		<author>
			<persName><forename type="first">P</forename><surname>Verga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Strubell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mccallum</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/N18-1080</idno>
		<ptr target="https://aclanthology.org/N18-1080" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of NAACL-HLT</title>
				<meeting>NAACL-HLT</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="872" to="884" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Matching the blanks: Distributional similarity for relation learning</title>
		<author>
			<persName><forename type="first">L</forename><surname>Baldini Soares</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Fitzgerald</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ling</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kwiatkowski</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/P19-1279</idno>
		<ptr target="https://aclanthology.org/P19-1279" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of ACL</title>
				<meeting>ACL</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="2895" to="2905" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Double graph based reasoning for document-level relation extraction</title>
		<author>
			<persName><forename type="first">S</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Li</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2020.emnlp-main.127</idno>
		<ptr target="https://aclanthology.org/2020.emnlp-main.127" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of EMNLP</title>
				<meeting>EMNLP</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1630" to="1640" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">SIRE: Separate intra- and inter-sentential reasoning for document-level relation extraction</title>
		<author>
			<persName><forename type="first">S</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chang</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2021.findings-acl.47</idno>
		<ptr target="https://aclanthology.org/2021.findings-acl.47" />
	</analytic>
	<monogr>
		<title level="m">Findings of ACL-IJCNLP</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="524" to="534" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">K-BERT: Enabling language representation with knowledge graph</title>
		<author>
			<persName><forename type="first">W</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Ju</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Wang</surname></persName>
		</author>
		<idno type="DOI">10.1609/aaai.v34i03.5681</idno>
		<ptr target="https://ojs.aaai.org/index.php/AAAI/article/view/5681" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of AAAI</title>
				<meeting>AAAI</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="page" from="2901" to="2908" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction</title>
		<author>
			<persName><forename type="first">X</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Si</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Chen</surname></persName>
		</author>
		<idno type="DOI">10.1145/3485447.3511998</idno>
		<ptr target="https://doi.org/10.1145/3485447.3511998" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of WWW</title>
				<meeting>WWW</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="2778" to="2788" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Incorporating medical knowledge in BERT for clinical relation extraction</title>
		<author>
			<persName><forename type="first">A</forename><surname>Roy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Pan</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2021.emnlp-main.435</idno>
		<ptr target="https://aclanthology.org/2021.emnlp-main.435" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of EMNLP</title>
				<meeting>EMNLP</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="5357" to="5366" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">OmniEvent: A comprehensive, fair, and easy-to-use toolkit for event understanding</title>
		<author>
			<persName><forename type="first">H</forename><surname>Peng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Yao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Hou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2023.emnlp-demo.46</idno>
		<ptr target="https://aclanthology.org/2023.emnlp-demo.46" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of EMNLP (Demo)</title>
				<meeting>EMNLP (Demo)</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="508" to="517" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Peng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wan</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2305.14450</idno>
		<title level="m">Is information extraction solved by ChatGPT? An analysis of performance, evaluation criteria, robustness and errors</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Fang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Ye</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhang</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2304.11633</idno>
		<title level="m">Evaluating ChatGPT&apos;s information extraction capabilities: An assessment of performance, explainability, calibration, and faithfulness</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Aligning instruction tasks unlocks large language models as zero-shot relation extractors</title>
		<author>
			<persName><forename type="first">K</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Jimenez Gutierrez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Su</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2023.findings-acl.50</idno>
		<ptr target="https://aclanthology.org/2023.findings-acl.50" />
	</analytic>
	<monogr>
		<title level="m">Findings of ACL</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="794" to="812" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">GPT-RE: In-context learning for relation extraction using large language models</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Wan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Mao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kurohashi</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2023.emnlp-main.214</idno>
		<ptr target="https://aclanthology.org/2023.emnlp-main.214" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of EMNLP</title>
				<meeting>EMNLP</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="3534" to="3547" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">Semi-automatic data enhancement for document-level relation extraction with distant supervision from large language models</title>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Jia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zheng</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2023.emnlp-main.334</idno>
		<ptr target="https://aclanthology.org/2023.emnlp-main.334" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of EMNLP</title>
				<meeting>EMNLP</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="5495" to="5505" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Entity linking with a knowledge base: Issues, techniques, and solutions</title>
		<author>
			<persName><forename type="first">W</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Han</surname></persName>
		</author>
		<idno type="DOI">10.1109/TKDE.2014.2327028</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Knowledge and Data Engineering</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="page" from="443" to="460" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
